InterpretML is an open-source Python package for training interpretable models and explaining blackbox systems. Interpretability is essential for:
- Model debugging - Why did my model make this mistake?
- Detecting bias - Does my model discriminate?
- Human-AI cooperation - How can I understand and trust the model's decisions?
- Regulatory compliance - Does my model satisfy legal requirements?
- High-risk applications - Healthcare, finance, judicial, ...
Historically, the most intelligible models were not very accurate, and the most accurate models were not intelligible. Microsoft Research has developed an algorithm called the Explainable Boosting Machine (EBM)* which has both high accuracy and intelligibility. EBM uses modern machine learning techniques like bagging and boosting to breathe new life into traditional GAMs (Generalized Additive Models). This makes them as accurate as random forests and gradient boosted trees, and also enhances their intelligibility and editability.
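For context, a GAM models the (link-transformed) response as a sum of learned single-feature functions, and GA2M adds a small set of pairwise interaction terms; a sketch of the standard formulation:

$$g(\mathbb{E}[y]) = \beta_0 + \sum_j f_j(x_j) + \sum_{(i,j)} f_{ij}(x_i, x_j)$$

Since every term depends on at most two features, each $f_j$ or $f_{ij}$ can be plotted directly, which is where the intelligibility comes from.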
Notebook for reproducing table
| Dataset / AUROC | Domain | Logistic Regression | Random Forest | XGBoost | Explainable Boosting Machine |
|---|---|---|---|---|---|
| Adult Income | Finance | .907±.003 | .903±.002 | .922±.002 | .928±.002 |
| Heart Disease | Medical | .895±.030 | .890±.008 | .870±.014 | .916±.010 |
| Breast Cancer | Medical | .995±.005 | .992±.009 | .995±.006 | .995±.006 |
| Telecom Churn | Business | .804±.015 | .824±.002 | .850±.006 | .851±.005 |
| Credit Fraud | Security | .979±.002 | .950±.007 | .981±.003 | .975±.005 |
In addition to EBM, InterpretML supports methods like LIME, SHAP, linear models, partial dependence, decision trees, and rule lists. The package makes it easy to compare and contrast models to find the best one for your needs.
* EBM is a fast implementation of GA2M. Details on the algorithm can be found here.
Python 3.5+ | Linux, Mac OS X, Windows

```sh
pip install -U interpret
```
Let's fit an Explainable Boosting Machine:

```python
from interpret.glassbox import ExplainableBoostingClassifier

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# EBM supports pandas dataframes, numpy arrays, and handles "string" data natively.
```
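The snippet above assumes `X_train` and `y_train` already exist. A minimal sketch of producing them, using the UCI Adult dataset from the benchmark table (the URL and the last-column-is-label layout are assumptions about that file):

```python
# Sketch: load the UCI Adult dataset and split it.
# The URL and last-column-is-label layout are assumptions about this file.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv(
    "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
    header=None,
)
X = df.iloc[:, :-1]  # mixed numeric and string feature columns
y = df.iloc[:, -1]   # income bracket label ("<=50K" / ">50K")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)
```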
Understand the model:

```python
from interpret import show

ebm_global = ebm.explain_global()
show(ebm_global)
```
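Explanations can also be inspected programmatically rather than through the dashboard; a minimal sketch using the explanation object's `data()` accessor (the exact structure of the returned dictionaries varies between interpret versions):

```python
# Sketch: pull the raw explanation data instead of plotting it.
# data() is part of interpret's explanation interface; the contents of
# the returned dictionaries are version-dependent (assumption).
overall = ebm_global.data()   # overall term importances
term0 = ebm_global.data(0)    # shape-function data for the first term
print(type(overall), type(term0))
```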
Understand individual predictions:

```python
ebm_local = ebm.explain_local(X_test, y_test)
show(ebm_local)
```
And if you have multiple models, compare them:

```python
# show() also accepts a list of explanation objects
show([logistic_regression, decision_tree])
```
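The `logistic_regression` and `decision_tree` arguments above are explanation objects; a sketch of producing them with InterpretML's other glassbox models (reusing the earlier training split, and assuming its features are already numeric):

```python
# Sketch: train two more glassbox models and explain them globally,
# so their explanations can be passed to show() side by side.
# Unlike EBM, these wrappers may require pre-encoded numeric features
# (assumption).
from interpret.glassbox import LogisticRegression, ClassificationTree

lr = LogisticRegression()
lr.fit(X_train, y_train)
dt = ClassificationTree()
dt.fit(X_train, y_train)

logistic_regression = lr.explain_global()
decision_tree = dt.explain_global()
show([logistic_regression, decision_tree])
```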
InterpretML currently supports:
- Interpretable models for binary classification
- Interpretable models for regression
- Blackbox interpretability for binary classification (see the sketch after this list)
- Blackbox interpretability for regression
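For the blackbox techniques above, the workflow mirrors the glassbox one: wrap a fitted opaque model, then explain and show. A minimal sketch with the `LimeTabular` wrapper (the constructor arguments follow interpret's early API and may differ in your version; `blackbox_model` stands in for any fitted classifier with `predict_proba`):

```python
# Sketch: explain an opaque model's predictions with InterpretML's LIME wrapper.
# Constructor arguments follow interpret's early API and may have changed in
# later releases (assumption); blackbox_model is any fitted classifier.
from interpret.blackbox import LimeTabular
from interpret import show

lime = LimeTabular(predict_fn=blackbox_model.predict_proba,
                   data=X_train, random_state=1)
lime_local = lime.explain_local(X_test[:5], y_test[:5])
show(lime_local)
```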
Currently we're working on:
- Multiclass Classification Support
- Missing Values Support
- Improved Categorical Encoding
...and lots more! Get in touch to find out more.
If you are interested in contributing directly to the code base, please see CONTRIBUTING.md.
InterpretML was originally created by (equal contributions): Samuel Jenkins & Harsha Nori & Paul Koch & Rich Caruana
Many people have supported us along the way. Check out ACKNOWLEDGEMENTS.md!
We also build on top of many great packages. Please check them out!
plotly | dash | scikit-learn | lime | shap | salib | skope-rules | gevent | joblib | pytest | jupyter
There are multiple ways to get in touch:
- Email us at [email protected]
- Or, feel free to raise a GitHub issue
Security issues and bugs should be reported privately, via email, to the Microsoft Security Response Center (MSRC) at [email protected]. You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Further information, including the MSRC PGP key, can be found in the Security TechCenter.