Fiddler Auditor: Open-source tool evaluates the robustness of large language models

Fiddler Auditor is an open-source tool designed to evaluate the robustness of Large Language Models (LLMs) and Natural Language Processing (NLP) models.


LLMs can produce unwarranted content, generate hostile responses, and disclose confidential information from their training data, even when not explicitly prompted to do so.

The tool uses adversarial examples, out-of-distribution inputs, and linguistic variations to help developers and researchers identify potential weaknesses and improve the performance of their LLMs and NLP solutions.
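The core idea behind prompt perturbation is easy to see in miniature. The sketch below is not Fiddler Auditor's API; it is a minimal, hand-rolled illustration of linguistic-variation testing, assuming a placeholder `call_llm` function (wire it to your model of choice) and the `sentence-transformers` library for semantic similarity scoring.

```python
from sentence_transformers import SentenceTransformer, util

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (OpenAI, a LangChain LLM, etc.);
    # returns a canned answer here so the example runs end to end.
    return "France won the 2018 FIFA World Cup."

original = "Which country won the FIFA World Cup in 2018?"
# Hand-written linguistic variations; the Auditor generates perturbations like these.
perturbations = [
    "Which nation was the 2018 FIFA World Cup winner?",
    "In 2018, which country took home the FIFA World Cup?",
    "Who were the champions of the 2018 FIFA World Cup?",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
reference = call_llm(original)
ref_embedding = embedder.encode(reference, convert_to_tensor=True)

THRESHOLD = 0.75  # minimum cosine similarity to count as a consistent answer
for prompt in perturbations:
    response = call_llm(prompt)
    score = util.cos_sim(
        ref_embedding, embedder.encode(response, convert_to_tensor=True)
    ).item()
    status = "PASS" if score >= THRESHOLD else "FAIL"
    print(f"[{status}] similarity={score:.2f}  prompt={prompt!r}")
```

A perturbed prompt whose response drifts below the similarity threshold is flagged, pointing to inputs where the model's behavior is not robust.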

Fiddler Auditor supports:

  • Red-teaming LLMs for your use-case with prompt perturbation
  • Integration with LangChain
  • Custom evaluation metrics (see the sketch after this list)
  • Generative and Discriminative NLP models
  • Comparison of LLMs
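To give a flavour of what a custom evaluation metric can look like, here is a hedged sketch: the `Metric` alias and the two checks below are hypothetical names, not part of Fiddler Auditor's interface, but they show the general shape of a pass/fail check applied to each (prompt, response) pair.

```python
import re
from typing import Callable

# A metric here is any callable mapping (prompt, response) to pass/fail.
Metric = Callable[[str, str], bool]

def no_email_leak(prompt: str, response: str) -> bool:
    """Fail any response that leaks something shaped like an email address."""
    return re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", response) is None

def max_words(limit: int) -> Metric:
    """Build a metric that fails responses exceeding a word budget."""
    return lambda prompt, response: len(response.split()) <= limit

prompt = "Summarise our refund policy."
response = "Refunds are processed in 5 days. Contact jane.doe@example.com."

checks = {"no_email_leak": no_email_leak, "max_words_100": max_words(100)}
results = {name: check(prompt, response) for name, check in checks.items()}
print(results)  # {'no_email_leak': False, 'max_words_100': True}
```

Consult the project's documentation on GitHub for the library's actual extension points.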

Here’s an example report generated by Fiddler Auditor:

[Image: example robustness report generated by Fiddler Auditor]

The software is available for download on GitHub.
