AIC XAI Problem 1, Team 2 Repository
attribution_method
ㄴ Paper
ㄴ Implementation
Evaluation
ㄴ Paper
ㄴ Implementation
- Zeiler, Matthew D., and Rob Fergus. "Visualizing and understanding convolutional networks." ECCV. 2014.
- Smilkov, Daniel, et al. "SmoothGrad: removing noise by adding noise." ICML Workshop. 2017.
- Fong, Ruth C., and Andrea Vedaldi. "Interpretable explanations of black boxes by meaningful perturbation." ICCV. 2017.
- Selvaraju, Ramprasaath R., et al. "Grad-CAM: Visual explanations from deep networks via gradient-based localization." ICCV. 2017.
- Zhang, Quanshi, Ying Nian Wu, and Song-Chun Zhu. "Interpretable convolutional neural networks." CVPR. 2018.
- Wagner, Jörg, et al. "Interpretable and fine-grained visual explanations for convolutional neural networks." CVPR. 2019.
- Ancona, Marco, et al. "Towards better understanding of gradient-based attribution methods for deep neural networks." ICLR. 2018.
- Hooker, Sara, et al. "Evaluating feature importance estimates." ICML Workshop. 2018.
- Nie, Weili, Yang Zhang, and Ankit Patel. "A theoretical explanation for perplexing behaviors of backpropagation-based visualizations." ICML. 2018.
- Adebayo, Julius, et al. "Sanity checks for saliency maps." NeurIPS. 2018.
- Yang, Mengjiao, and Been Kim. "BIM: Towards quantitative evaluation of interpretability methods with ground truth." arXiv preprint arXiv:1907.09701. 2019.