`xai-compare` is an open-source library that provides a suite of tools to systematically compare and evaluate the quality of explanations generated by different Explainable AI (XAI) methods. This package facilitates the development of new XAI methods and promotes transparent evaluations of such methods.
`xai-compare` includes a variety of XAI techniques such as SHAP, LIME, and Permutation Feature Importance, and introduces advanced comparison techniques such as consistency measurement and feature selection analysis. It is designed to be flexible, easy to integrate, and well suited for enhancing model transparency and interpretability across various applications.
You can find our ReadTheDocs (RTD) documentation here.
The package can be installed from PyPI using pip:

    pip install xai-compare
`xai-compare` supports three popular model-agnostic XAI methods, sketched in the code example after this list:
- SHAP values provide global interpretations of a model's output by attributing each feature's contribution to the predicted outcome.
- Depending on the model type, the script initializes an appropriate explainer, such as `shap.TreeExplainer` for tree-based models, `shap.LinearExplainer` for linear models, or `shap.KernelExplainer` for more general models. It then uses SHAP to analyze and explain the behavior of the model.
- LIME provides local interpretations of individual predictions by approximating the model's behavior around specific data points.
- The script initializes a `LimeTabularExplainer` and uses LIME to explain the model's local predictions.
- Permutation Feature Importance assesses the impact of each feature on a model’s prediction by measuring the decrease in the model’s performance when the values of a feature are randomly shuffled.
- The script measures this dependency by calculating the decrease in model performance after permuting each feature, averaged over multiple permutations.
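The snippet below is a minimal, self-contained sketch of these three methods using their original libraries (`shap`, `lime`, and scikit-learn) on a toy dataset; `xai-compare` wraps these techniques behind a common interface, so its own class names and signatures may differ.

```python
# A minimal sketch of the three methods via their original libraries
# (shap, lime, scikit-learn), not the xai-compare wrappers themselves.
from lime.lime_tabular import LimeTabularExplainer
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# SHAP: tree-specific explainer for a tree-based model
shap_values = shap.TreeExplainer(model).shap_values(X_test)

# LIME: local explanation of a single prediction
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=list(data.feature_names), mode="classification")
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba)

# Permutation Feature Importance: average performance drop after
# shuffling each feature, repeated n_repeats times
perm = permutation_importance(model, X_test, y_test,
                              n_repeats=10, random_state=0)
print(perm.importances_mean)
```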
The `FeatureSelection` class in `xai-compare` is a robust tool for optimizing machine learning models by identifying and prioritizing the most influential features. This class leverages a variety of explainers, including SHAP, LIME, and Permutation Importance, to evaluate feature relevance systematically. It facilitates the iterative removal of less significant features, allowing users to understand the impact of each feature on model performance. This approach not only improves model efficiency but also enhances interpretability, making it easier to understand and justify model decisions.
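The exact `FeatureSelection` API is documented in the repository; the following is only a conceptual sketch of the underlying loop, implemented with plain scikit-learn (using permutation importance as the explainer) rather than `xai-compare` itself:

```python
# Conceptual sketch of explainer-driven feature elimination, not the
# xai-compare API itself: rank features with permutation importance,
# drop the weakest, retrain, and track how performance changes.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
active = list(range(X.shape[1]))  # indices of features still in play

for _ in range(5):  # iteratively remove the 5 least important features
    model = RandomForestClassifier(random_state=0)
    model.fit(X_train[:, active], y_train)
    score = model.score(X_test[:, active], y_test)
    imp = permutation_importance(model, X_test[:, active], y_test,
                                 n_repeats=5, random_state=0)
    weakest = active[int(np.argmin(imp.importances_mean))]
    print(f"{len(active)} features, accuracy={score:.3f}, "
          f"dropping feature {weakest}")
    active.remove(weakest)
```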
The `Consistency` class assesses the stability and reliability of explanations provided by various explainers across different splits of data. This class is crucial for determining whether the insights provided by model explainers are consistent regardless of data variances.
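Again as a conceptual sketch rather than the `Consistency` API itself, one way to quantify this idea is to train the same model on several resampled splits, compute global importances on a fixed evaluation set, and measure how much those importances vary:

```python
# Conceptual sketch of a consistency check, not the xai-compare API itself:
# train on several resampled splits, compute mean |SHAP| importances on a
# fixed evaluation set, and measure how much they vary across splits.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_rest, X_eval, y_rest, _ = train_test_split(X, y, test_size=0.2,
                                             random_state=0)

importances = []
for seed in range(3):  # three different training splits
    X_tr, _, y_tr, _ = train_test_split(X_rest, y_rest, random_state=seed)
    model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
    sv = shap.TreeExplainer(model).shap_values(X_eval)
    importances.append(np.abs(sv).mean(axis=0))  # global importance per feature

importances = np.array(importances)
# Lower standard deviation across splits means more consistent explanations
print("Per-feature std across splits:", importances.std(axis=0).round(4))
```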
The notebooks below demonstrate different use cases for the `xai-compare` package. For hands-on experience and to explore the notebooks in detail, visit the notebooks directory in the repository.
Feature Selection Comparison Notebook
Consistency Comparison Notebook
We're seeking contributors with expertise in machine learning, preferably in explainable artificial intelligence (XAI), and proficiency in Python programming. If you have a background in these areas and are passionate about enhancing machine learning model transparency, we welcome your contributions. Join us in shaping the future of interpretable AI.
This project is licensed under the MIT License - see the LICENSE file for details.
- The SHAP and LIME libraries are used for model interpretability.