Infosys-Responsible-AI-Toolkit

The Infosys Responsible AI toolkit provides a set of APIs to integrate safety, security, privacy, explainability, fairness, and hallucination detection into AI solutions, ensuring trustworthiness and transparency.

Repositories and Installation Instructions

The following table lists the modules of the Infosys Responsible AI Toolkit. Installation instructions for each module can be found in the corresponding README file within the module's directory.

| # | Module | Functionalities | Repository name(s) |
|---|--------|-----------------|--------------------|
| 1 | ModerationLayer APIs (comprehensive suite of Safety, Privacy, Explainability, Fairness, and Hallucination tenets) | Regulate the content of prompts and responses generated by LLMs | responsible-ai-moderationlayer, responsible-ai-moderationModel |
| 2 | Explainability APIs** | Explain LLM responses; global and local explainability for regression, classification, and time-series models | responsible-ai-llm-explain, responsible-ai-explainability, Model Details, Reporting |
| 3 | Fairness & Bias API | Check fairness of and detect biases in LLM prompts and responses, as well as in traditional ML models | responsible-ai-fairness |
| 4 | Hallucination API | Detect and quantify hallucination in LLM responses under RAG scenarios | responsible-ai-hallucination |
| 5 | Privacy API | Detect and anonymize, encrypt, or highlight PII in LLM prompts and responses | responsible-ai-privacy |
| 6 | Safety API | Detect and anonymize toxic and profane text associated with LLMs | responsible-ai-safety |
| 7 | Security API | Simulate security attacks and defenses on tabular and image data; run prompt injection and jailbreak checks | responsible-ai-security |

** Endpoints for explainability are located in both the explainability and moderation layer repositories. Refer to the README files in these repositories for more details on specific features.

Please refer to the Features and Endpoints document for more details on endpoints and their usage.
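
As a rough illustration of how these modules are consumed, the sketch below posts a prompt to a locally deployed ModerationLayer service. The host, route, and payload field names here are illustrative assumptions only; the actual request and response contract is defined in the responsible-ai-moderationlayer README.

```python
# Hypothetical call to a locally deployed ModerationLayer endpoint.
# The URL, route, and field names below are illustrative assumptions.
import requests

MODERATION_URL = "http://localhost:8000/rai/v1/moderations"  # assumed deployment

payload = {
    "Prompt": "Summarize our Q3 earnings call.",  # text to screen before it reaches the LLM
}

resp = requests.post(MODERATION_URL, json=payload, timeout=30)
resp.raise_for_status()
# The real service returns per-tenet results (prompt injection, toxicity, privacy, ...).
print(resp.json())
```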

Modules for the Responsible AI Toolkit Interface

The Responsible AI toolkit provides a user-friendly interface for seamless experimentation and alignment with various Responsible AI principles. The following APIs are required to activate and utilize the toolkit's UI. To access the toolkit through the interface, refer to the README files associated with the listed repositories.

| # | Module | Functionalities | Repository name(s) |
|---|--------|-----------------|--------------------|
| 1 | MFE | An Angular micro frontend app that serves as the user interface, letting users interact with and consume the various backend endpoints through independently developed, modular components. | responsible-ai-mfe |
| 2 | SHELL | The shell application in the micro frontend architecture acts as the central hub, orchestrating and loading the independent frontend modules and presenting them as a unified user interface over the backend endpoints. | responsible-ai-shell |
| 3 | Backend | A Python backend module for registration and authentication that handles user account management, including user registration, login, password validation, and session management. | responsible-ai-backend |
| 4 | Telemetry | A Python backend module that defines the data structures for each tenet and ingests the APIs' data into Elasticsearch indexes. It provides customizable input validation and inserts the incoming tenet data into Elasticsearch, where it can be visualized with Kibana (see the ingestion sketch after this table). | responsible-ai-telemetry |
| 5 | File Storage | A Python module that provides versatile APIs for integration across microservices, enabling efficient file management with Azure Blob Storage. It supports key operations such as file upload, retrieval, and updates. | responsible-ai-file-storage |
| 6 | Benchmarking | Displays statistics from benchmarking large language models (LLMs) across categories such as fairness, privacy, truthfulness, and ethics, helping evaluate and compare LLM performance in these critical areas. | responsible-ai-llm-benchmarking |
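
To make the Telemetry module's role concrete, here is a minimal sketch of the kind of Elasticsearch ingestion it performs, written with the official elasticsearch-py client. The index name and document shape are assumptions, not the module's actual schema.

```python
# Minimal sketch of telemetry ingestion into Elasticsearch.
# Index name and document fields are assumptions for illustration.
from datetime import datetime, timezone

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

telemetry_record = {
    "tenet": "privacy",                       # which Responsible AI tenet produced the data
    "endpoint": "/rai/v1/privacy/anonymize",  # illustrative endpoint name
    "score": 0.93,
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Index the record; Kibana can then visualize documents in this index.
es.index(index="rai-telemetry", document=telemetry_record)
```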

For technical details and usage instructions on the Infosys Responsible AI toolkit's features, please refer to the documentation.

Toolkit features at a glance

Generative AI Models

Safety, Security & Privacy
* Prompt Injection Score
* Jailbreak Score
* Privacy check
* Profanity check
* Restricted Topic check
* Toxicity check
* Refusal check

Model Transparency
* Sentiment check
* Fairness & Bias check
* Hallucination Score
* Explainability Methods (see the CoVe sketch after these lists):
  - Thread of Thoughts (ThoT)
  - Chain of Thoughts (CoT)
  - Graph of Thoughts (GoT)
  - Chain of Verification (CoVe)
* Token Importance

Text Quality
* Invisible Text, Gibberish checks
* Ban Code check
* Completeness check
* Conciseness check
* Text Quality check
* Text Relevance check

Linguistic Quality
* Uncertainty Score
* Coherence Score
* Language Critique Coherence
* Language Critique Fluency
* Language Critique Grammar
* Language Critique Politeness
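
Of the explainability methods listed above, Chain of Verification (CoVe) is the easiest to sketch: draft an answer, plan verification questions, answer them independently, then revise. The prompt wording below is an assumption and `ask_llm` is a placeholder for any chat-completion call; this is not the toolkit's actual implementation.

```python
# Illustrative sketch of the Chain-of-Verification (CoVe) pattern.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM client")  # placeholder

def chain_of_verification(question: str) -> str:
    draft = ask_llm(question)  # 1. draft an initial answer
    plan = ask_llm(            # 2. plan verification questions for the draft
        f"List fact-checking questions for this answer.\nQ: {question}\nA: {draft}"
    )
    checks = ask_llm(          # 3. answer the verification questions independently
        f"Answer each question concisely:\n{plan}"
    )
    return ask_llm(            # 4. revise the draft in light of the checks
        f"Rewrite the answer, fixing anything the checks contradict.\n"
        f"Q: {question}\nDraft: {draft}\nChecks: {checks}"
    )
```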

Machine Learning Models

Security
* Simulate Adversarial Attacks
* Recommend Defense Mechanisms

Fairness
* Bias Detection Methods (see the sketch after these lists):
  - Statistical Parity Difference
  - Disparate Impact Ratio
  - Four-Fifths Rule
  - Cohen's d
* Mitigation Methods:
  - Equalized Odds
  - Re-weighing

Explainability
* Global Explainability using SHAP
* Local Explainability using LIME
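
The bias detection metrics above reduce to simple arithmetic over per-group favorable-outcome rates. The sketch below computes Statistical Parity Difference and Disparate Impact Ratio from two assumed selection rates and applies the four-fifths rule; real usage would derive these rates from model predictions on a labeled dataset.

```python
# Toy sketch of two of the bias metrics named above, computed from scratch.
def statistical_parity_difference(rate_priv: float, rate_unpriv: float) -> float:
    # SPD = P(favorable | unprivileged) - P(favorable | privileged); 0 means parity.
    return rate_unpriv - rate_priv

def disparate_impact_ratio(rate_priv: float, rate_unpriv: float) -> float:
    # DIR = P(favorable | unprivileged) / P(favorable | privileged); 1 means parity.
    return rate_unpriv / rate_priv

rate_privileged, rate_unprivileged = 0.60, 0.45  # assumed selection rates

spd = statistical_parity_difference(rate_privileged, rate_unprivileged)  # -0.15
dir_ = disparate_impact_ratio(rate_privileged, rate_unprivileged)        # 0.75
# The four-fifths rule flags DIR < 0.8 as potential adverse impact:
print(f"SPD={spd:.2f}, DIR={dir_:.2f}, four-fifths violation={dir_ < 0.8}")
```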

Upcoming Features

  • Logic of Thoughts (LoT) for enhanced explainability
  • Fairness auditing for continuous monitoring and mitigation of biases
  • Red teaming to identify and mitigate AI model security threats
  • Multilingual support for prompt injection and jailbreak checks in the moderation models
  • Multilingual support for the privacy & safety modules

Note: These API-based guardrails are optimized for Azure OpenAI. Users employing alternative LLMs should make the necessary client configuration adjustments, as sketched below. For an Azure OpenAI API subscription, follow the instructions provided on the Microsoft Azure website.
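
For reference, here is a minimal sketch of configuring an Azure OpenAI client with the openai Python SDK (v1+). The endpoint, deployment name, API version, and environment variable are placeholders for your own subscription values; users of other providers would swap in their own client at this point.

```python
# Minimal sketch of an Azure OpenAI client configuration (openai SDK v1+).
# Endpoint, deployment, API version, and env-var names are assumptions.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],            # from your Azure subscription
    api_version="2024-02-01",                              # assumed API version
    azure_endpoint="https://<resource>.openai.azure.com",  # your resource endpoint
)

response = client.chat.completions.create(
    model="gpt-4o",  # the name of *your* Azure deployment, not the base model
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```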

Please check out the contribution page to share your feedback or suggest improvements.

Have more questions? Connect with us at [email protected].
