Explainable AI (XAI) for Bioinformatics

Code and supplementary materials for our paper "Explainable AI for Bioinformatics: Importance, Methods, Tools, and Applications", submitted to the Briefings in Bioinformatics journal. This repo will be updated periodically.

Notebooks

We provide several interactive Jupyter notebooks showing how interpretable ML techniques can be used to improve interpretability in bioinformatics research use cases. Please note that some notebooks are not accompanied by their datasets, mainly due to NDA agreements.
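
To give a flavor of the workflow these notebooks cover, below is a minimal sketch (not taken from the repo's notebooks) that trains a random forest on synthetic gene-expression-like data and explains it with SHAP; the data and feature names are illustrative assumptions.

import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a gene-expression matrix: 200 samples x 50 "genes".
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy label driven by two genes

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which "genes" drive the model's predictions overall.
shap.summary_plot(shap_values, X, feature_names=[f"gene_{i}" for i in range(50)])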

Papers and books on interpretable ML methods

We categorize the papers and books by interpretable ML method.

Books

  • Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Molnar 2019 pdf

Surveys (papers)

  • A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI. Tjoa et al. 2020 pdf
  • Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey. Das et al. 2020 pdf
  • Interpretable machine learning: definitions, methods, and applications. Murdoch et al. 2019 pdf
  • A brief survey of visualization methods for deep learning models from the perspective of Explainable AI. Chalkiadakis 2018 pdf
  • A Survey Of Methods For Explaining Black Box Models. Guidotti et al. 2018 pdf
  • Explaining Explanations: An Overview of Interpretability of Machine Learning. Gilpin et al. 2019 pdf
  • Explainable Artificial Intelligence: a Systematic Review. Vilone et al. 2020 pdf

Attribution maps and gradient-based (papers)

  • DTCAV: Automating Interpretability: Discovering and Testing Visual Concepts Learned by Neural Networks. Ghorbani et al. 2019 pdf
  • AM: Visualizing higher-layer features of a deep network. Erhan et al. 2009 pdf
  • DeepVis: Understanding Neural Networks through Deep Visualization. Yosinski et al. ICML workshop 2015 pdf
  • Visualizing and Understanding Recurrent Networks. Karpathy et al. 2015 pdf
  • Feature Removal Is A Unifying Principle For Model Explanation Methods. Covert et al. 2020 pdf
  • Gradient: Deep inside convolutional networks: Visualising image classification models and saliency maps. Simonyan et al. 2013 pdf (a minimal saliency sketch follows this list)
  • Guided-backprop: Striving for simplicity: The all convolutional net. Springenberg et al. 2015 pdf
  • SmoothGrad: removing noise by adding noise. Smilkov et al. 2017 pdf
  • DeepLIFT: Learning important features through propagating activation differences. Shrikumar et al. 2017 pdf
  • IG: Axiomatic Attribution for Deep Networks. Sundararajan et al. 2017 pdf
  • EG: Learning Explainable Models Using Attribution Priors. Erion et al. 2019 pdf
  • LRP: Beyond saliency: understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation pdf
  • DTD: Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition. Montavon et al. 2017 pdf
  • CAM: Learning Deep Features for Discriminative Localization. Zhou et al. 2016 link
  • Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. Selvaraju et al. 2017 pdf
  • Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks. Chattopadhyay et al. 2017 pdf
  • Smooth Grad-CAM++: An Enhanced Inference Level Visualization Technique for Deep Convolutional Neural Network Models. Omeiza et al. 2019 pdf
  • NormGrad: There and Back Again: Revisiting Backpropagation Saliency Methods. Rebuffi et al. CVPR 2020 pdf
  • Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks. Wang et al. CVPR 2020 workshop pdf
  • Relevance-CAM: Your Model Already Knows Where to Look. Lee et al. CVPR 2021 pdf
  • LIFT-CAM: Towards Better Explanations of Class Activation Mapping. Jung & Oh. ICCV 2021 pdf
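
As a concrete example of the simplest method above, here is a minimal sketch of vanilla gradient saliency (Simonyan et al. 2013) in PyTorch; the pretrained ResNet and the random placeholder input are illustrative assumptions.

import torch
import torchvision.models as models

# Pretrained classifier; any differentiable model works.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Placeholder input standing in for a real, preprocessed image.
x = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(x)
score = logits[0, logits.argmax()]  # score of the top predicted class
score.backward()                    # gradient of the score w.r.t. the input

# Saliency map: per-pixel sensitivity, max over the RGB channels.
saliency = x.grad.abs().max(dim=1)[0]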

Sensitivity and perturbation-based (papers)

  • Generative causal explanations of black-box classifiers. O’Shaughnessy et al. 2020 pdf
  • Removing input features via a generative model to explain their attributions to classifier's decisions. Agarwal et al. 2019 pdf
  • Challenging common interpretability assumptions in feature attribution explanations. Dinu et al. NeurIPS workshop 2020 pdf
  • The effectiveness of feature attribution methods and its correlation with automatic evaluation scores. Nguyen, Kim, Nguyen 2021 pdf
  • Deletion & Insertion: Randomized Input Sampling for Explanation of Black-box Models. Petsiuk et al. BMVC 2018 pdf
  • DiffROAR: Do Input Gradients Highlight Discriminative Features? Shah et al. NeurIPS 2021 pdf
  • RISE: Randomized Input Sampling for Explanation of Black-box Models. Petsiuk et al. BMVC 2018 pdf
  • LIME: Why Should I Trust You?: Explaining the Predictions of Any Classifier. Ribeiro et al. 2016 pdf
  • LIME-G: Removing input features via a generative model to explain their attributions to classifier's decisions. Agarwal & Nguyen. ACCV 2020 pdf
  • SHAP: A Unified Approach to Interpreting Model Predictions. Lundberg et al. 2017 pdf
  • IM: Interpretation of NLP models through input marginalization. Kim et al. EMNLP 2020 pdf (a toy occlusion sketch follows this list)
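
A toy illustration of the perturbation idea shared by these methods: occlude one feature at a time (here, by replacing it with its mean) and record how much the prediction changes. The model and data are illustrative assumptions; methods like RISE and input marginalization perturb far more carefully.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 3] - X[:, 7] > 0).astype(int)  # only features 3 and 7 matter
model = LogisticRegression().fit(X, y)

x = X[:1]                                # instance to explain
base = model.predict_proba(x)[0, 1]      # unperturbed prediction

attributions = []
for j in range(X.shape[1]):
    x_pert = x.copy()
    x_pert[0, j] = X[:, j].mean()        # "remove" feature j
    attributions.append(base - model.predict_proba(x_pert)[0, 1])

print(np.round(attributions, 3))         # features 3 and 7 should dominate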

Rule- and counterfactual explanations (papers)

  • Local Rule-based Explanations of Black Box Decision Systems. Guidotti et al. 2021 pdf
  • FIDO: Explaining image classifiers by counterfactual generation. Chang et al. ICLR 2019 pdf
  • CEM: Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives. Dhurandhar & Chen et al. NeurIPS 2018 pdf
  • Counterfactual Explanations for Machine Learning: A Review. Verma et al. 2020 pdf (a toy counterfactual-search sketch follows this list)
  • Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections. Zhang et al. 2018 pdf
  • Counterfactual Visual Explanations. Goyal et al. 2019 pdf
  • Generative Counterfactual Introspection for Explainable Deep Learning. Liu et al. 2019 pdf
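
A toy sketch of the counterfactual idea these papers formalize: search for a small perturbation delta that flips a differentiable classifier's prediction while staying close to the input. The model, loss weighting, and optimizer settings are illustrative assumptions, not any single paper's method.

import torch

torch.manual_seed(0)
model = torch.nn.Linear(5, 1)              # stand-in binary classifier
x = torch.randn(1, 5)                      # instance to explain
target = torch.ones(1, 1)                  # desired (flipped) class

delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    pred = torch.sigmoid(model(x + delta))
    # Push the prediction toward the target class while keeping the
    # perturbation small and sparse (L1 penalty).
    loss = torch.nn.functional.binary_cross_entropy(pred, target) + 0.1 * delta.norm(p=1)
    loss.backward()
    opt.step()

print("counterfactual:", (x + delta).detach())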

Knowledge-based (papers)

  • ReasonChainQA: Text-based Complex Question Answering with Explainable Evidence Chains. Zhu et al. 2022 pdf
  • Knowledge-graph-based explainable AI: A systematic review. Rajabi et al. 2022 link
  • Knowledge-based XAI through CBR: There is more to explanations than models can tell. Weber et al. 2021 pdf
  • The Role of Human Knowledge in Explainable AI. Tocchetti et al. 2022 link

XAI with focus on HCI (papers)

  • Question-Driven Design Process for Explainable AI User Experiences. Liao 2021 pdf
  • Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? Hase & Bansal ACL 2020 pdf
  • Teach Me to Explain: A Review of Datasets for Explainable NLP. Wiegreffe & Marasović 2021 pdf
  • Explainable Artificial Intelligence via Bayesian Teaching. Yang & Shafto. NIPS 2017 pdf
  • Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation. Zhu et al. 2018 pdf
  • ICADx: Interpretable computer aided diagnosis of breast masses. Kim et al. 2018 pdf
  • Neural Network Interpretation via Fine Grained Textual Summarization. Guo et al. 2018 pdf
  • LS-Tree: Model Interpretation When the Data Are Linguistic. Chen et al. 2019 pdf

Distilling DNNs into more interpretable models (papers)

  • Interpreting CNNs via Decision Trees. Zhang et al. 2019 pdf
  • Distilling a Neural Network Into a Soft Decision Tree. Frosst & Hinton 2017 pdf
  • Improving the Interpretability of Deep Neural Networks with Knowledge Distillation. Liu et al. 2018 pdf (a minimal distillation sketch follows this list)
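
A minimal sketch of the distillation idea these papers develop: fit an interpretable student (here, a shallow decision tree) on the teacher network's soft predictions rather than the raw labels. Data and hyperparameters are purely illustrative.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # XOR-like task

# Teacher: a small neural network.
teacher = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
soft_labels = teacher.predict_proba(X)[:, 1]  # teacher's soft targets

# Student: a depth-3 tree regressed onto the teacher's probabilities.
student = DecisionTreeRegressor(max_depth=3).fit(X, soft_labels)
print(export_text(student, feature_names=[f"x{i}" for i in range(4)]))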

Application areas

Computer Vision

  • Multimodal explanations: Justifying decisions and pointing to the evidence. Park et al. CVPR 2018 pdf
  • IA-RED2: Interpretability-Aware Redundancy Reduction for Vision Transformers. Pan et al. NeurIPS 2021 pdf
  • Transformer Interpretability Beyond Attention Visualization. Chefer et al. CVPR 2021 pdf

NLP

  • Deletion_BERT: Double Trouble: How to not explain a text classifier’s decisions using counterfactuals synthesized by masked language models. Pham et al. 2022 pdf
  • Considering Likelihood in NLP Classification Explanations with Occlusion and Language Modeling. Harbecke et al. 2020 pdf

Interpretable ML tools and libraries

GUI tools

  • DeepVis: Deep Visualization Toolbox. Yosinski et al. ICML 2015 code
  • SWAP: Generate adversarial poses of objects in a 3D space. Alcorn et al. CVPR 2019 code
  • AllenNLP: Query online NLP models with user-provided inputs and observe explanations (Gradient, Integrated Gradient, SmoothGrad). Last accessed 03/2020 demo
  • 3DB: A framework for analyzing computer vision models with simulated data code

Libraries
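
Widely used open-source libraries for the methods above include SHAP, LIME, Captum, InterpretML, and Alibi. As a minimal sketch (assuming PyTorch and Captum; the toy model and input are placeholders), Integrated Gradients can be applied to any differentiable model:

import torch
from captum.attr import IntegratedGradients

model = torch.nn.Linear(10, 2).eval()     # placeholder model
x = torch.randn(1, 10)                    # placeholder input

ig = IntegratedGradients(model)
attributions = ig.attribute(x, target=1)  # attributions w.r.t. class 1
print(attributions)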

Citation request

If you use the code of this repository in your research, please consider citing the following paper:

@article{karim_xai_bio_2022,
      title={Explainable AI for Bioinformatics: Methods, Tools, and Applications},
      author={Karim, Md Rezaul and Beyan, Oya and Zappa, Achille and Costa, Ivan G and Rebholz-Schuhmann, Dietrich and Cochez, Michael and Decker, Stefan},
      journal={Briefings in Bioinformatics},
      volume={XXXX},
      number={XXXX},
      pages={XXXX},
      year={2023},
      publisher={Oxford University Press}
      }

Contributing

If you find related work that is not listed here, please create a PR or suggest it by filing an issue. Your contribution will be highly appreciated. For any questions, feel free to open an issue or contact [email protected].
