# Discovery of functional motifs by association to clinical features using Graph Neural Networks
```shell
python train_test_controller.py --aggregators 'max' --bs 16 --dropout 0.0 --en my_experiment --epoch 200 --factor 0.8 --fcl 256 --gcn_h 64 --lr 0.001 --min_lr 0.0001 --model PNAConv --num_of_ff_layers 1 --num_of_gcn_layers 2 --patience 5 --scalers 'identity' --weight_decay 1e-05

python train_test_controller.py --aggregators None --bs 16 --dropout 0.0 --en my_experiment --epoch 200 --factor 0.2 --fcl 128 --gcn_h 64 --lr 0.001 --min_lr 2e-05 --model GATConv --num_of_ff_layers 1 --num_of_gcn_layers 3 --patience 20 --scalers None --weight_decay 0
```
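The `--aggregators` and `--scalers` flags above configure PNA-style message passing (`'max'` aggregator with the `'identity'` scaler in the first run). As a rough, framework-free illustration of what such an aggregation step computes per node (a NumPy sketch, not the repository's implementation):

```python
import numpy as np

def pna_aggregate(x, edge_index, aggregators=("max",), scalers=("identity",), avg_deg=1.0):
    """PNA-style message aggregation: for each node, combine neighbour
    features with every aggregator, rescale with every scaler, and
    concatenate the results."""
    num_nodes, feat = x.shape
    out = []
    for node in range(num_nodes):
        # Gather features of in-neighbours (edges j -> node).
        nbrs = x[edge_index[0][edge_index[1] == node]]
        deg = max(len(nbrs), 1)
        parts = []
        for agg in aggregators:
            if agg == "max":
                h = nbrs.max(axis=0) if len(nbrs) else np.zeros(feat)
            elif agg == "mean":
                h = nbrs.mean(axis=0) if len(nbrs) else np.zeros(feat)
            elif agg == "sum":
                h = nbrs.sum(axis=0)
            for scaler in scalers:
                if scaler == "identity":
                    parts.append(h)
                elif scaler == "amplification":
                    parts.append(h * np.log(deg + 1) / np.log(avg_deg + 1))
        out.append(np.concatenate(parts))
    return np.stack(out)

# Tiny graph: edges 0->2 and 1->2, so node 2 aggregates nodes 0 and 1.
x = np.array([[1.0, 2.0], [3.0, 0.0], [0.0, 0.0]])
edge_index = np.array([[0, 1], [2, 2]])
print(pna_aggregate(x, edge_index))  # row 2 is the element-wise max [3., 2.]
```

In the real model this aggregated vector is passed through learned linear layers; combining several aggregator/scaler pairs (the lists the flags accept) is what distinguishes PNA from single-aggregator convolutions.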
```shell
python gnnexplainer.py --aggregators 'max' --bs 16 --dropout 0.0 --fcl 256 --gcn_h 64 --model PNAConv --num_of_ff_layers 1 --num_of_gcn_layers 2 --scalers 'identity' --idx 10

python lime.py --aggregators 'max' --bs 16 --dropout 0.0 --fcl 256 --gcn_h 64 --model PNAConv --num_of_ff_layers 1 --num_of_gcn_layers 2 --scalers 'identity' --idx 10

python shap.py --aggregators 'max' --bs 16 --dropout 0.0 --fcl 256 --gcn_h 64 --model PNAConv --num_of_ff_layers 1 --num_of_gcn_layers 2 --scalers 'identity' --idx 10
```
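Each explainer script above attributes a single prediction (`--idx` selects the sample). As a refresher on the LIME idea behind `lime.py`, the core recipe is: perturb the input, query the black-box model, and fit a locally weighted linear surrogate whose coefficients serve as feature importances. A minimal NumPy sketch with a made-up black box (not the repository's model or its exact LIME variant):

```python
import numpy as np

def lime_weights(predict, x, num_samples=500, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around instance x.
    Returns one importance weight per feature."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Perturb by randomly switching features off (binary masks).
    masks = rng.integers(0, 2, size=(num_samples, d))
    masks[0] = 1                      # keep the original instance
    samples = masks * x
    preds = np.array([predict(s) for s in samples])
    # Proximity kernel: perturbations closer to x get more weight.
    dist = 1.0 - masks.mean(axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Weighted least squares: solve (A^T W A) beta = A^T W y.
    A = np.hstack([masks, np.ones((num_samples, 1))])   # add intercept column
    AW = A * w[:, None]
    beta = np.linalg.solve(AW.T @ A, AW.T @ preds)
    return beta[:-1]                  # drop the intercept term

# Hypothetical black box that only uses feature 0; LIME ranks it highest.
f = lambda v: 3.0 * v[0]
imp = lime_weights(f, np.array([1.0, 1.0, 1.0]))
print(imp)  # feature 0 dominates
```

GNNExplainer follows a different route (it learns a soft edge/feature mask by gradient descent on the trained GNN itself), but the output is analogous: a per-edge or per-feature importance score for the chosen sample.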
Example output: the original graph shown side by side with the explanation subgraph (images omitted).
- Awesome graph explainability papers
- Towards Explainable Graph Neural Networks
- Parameterized Explainer for Graph Neural Networks (Github-PyTorch)
- Parameterized Explainer for Graph Neural Networks (Github-TensorFlow)
- GNNExplainer - DGL
- GNNExplainer on MUTAG (Graph Classification) - Colab
- GNNExplainer on MUTAG (Graph Classification) - Colab 2
- Captum - Paper
- Captum - Website
- Explainability in Graph Neural Networks
- Distributed Computation of Attributions using Captum
- Workshop on GNNExplainer with Graph Classification (MUTAG)
- Extending GNNExplainer for graph classification - PyG
- Explainability Techniques for Graph Convolutional Networks [Code]
- SHAP-Library
- GraphSVX: Shapley Value Explanations for Graph Neural Networks
- The Shapley Value in Machine Learning
- GRAPHSHAP: Motif-based Explanations for Black-box Graph Classifiers
- GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks
- CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks
- Explaining Graph Neural Networks with Structure-Aware Cooperative Games
- Reliable Graph Neural Network Explanations Through Adversarial Training
- Explainable AI Video Series
- awesome-machine-learning-interpretability
- PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks
- An Explainable AI Library for Benchmarking Graph Explainers - GXAI [Code]
- On Explainability of Graph Neural Networks via Subgraph Explorations [Code]
- Data Representing Ground-Truth Explanations to Evaluate XAI Methods
- Towards Ground Truth Explainability on Tabular Data
- Interpretable Machine Learning
- “Why Should I Trust You?” Explaining the Predictions of Any Classifier
- Quantus
- Papers and code of Explainable AI, esp. w.r.t. image classification
- OpenXAI
- How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations
- histocartography
- Towards Explainable Graph Representations in Digital Pathology
- A Causal XAI Diagnostic Model for Breast Cancer Based on Mammography Reports
- Predicting Cell Type and Extracting Key Genes using Single Cell Multi-Omics Data and Graph Neural Networks
- Graph Representation Learning in Biomedicine
- Predicting the Survival of Cancer Patients With Multimodal Graph Neural Network
- scDeepSort: a pre-trained cell-type annotation method for single-cell transcriptomics using deep learning with a weighted graph neural network [Code]
- A survey on graph-based deep learning for computational histopathology
- Explaining decisions of graph convolutional neural networks: patient-specific molecular subnetworks responsible for metastasis prediction in breast cancer
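Several of the resources above (GraphSVX, GRAPHSHAP, the SHAP library) build on Shapley values. As a pure-Python refresher of the underlying definition, here is the exact coalition-weight formula, which is only feasible for a handful of players; the toy value function is a made-up example:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: each player's weighted average marginal
    contribution over all coalitions of the other players."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for size in range(n):
            for coalition in combinations(others, size):
                s = set(coalition)
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += weight * (value(s | {p}) - value(s))
        phi[p] = total
    return phi

# Toy game: a coalition's value is the sum of its members' base values,
# plus a bonus of 1 when players 'a' and 'b' cooperate.
base = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda s: sum(base[p] for p in s) + (1.0 if {"a", "b"} <= s else 0.0)
print(shapley_values(["a", "b", "c"], v))  # bonus split evenly: a=1.5, b=2.5, c=3.0
```

Because exact enumeration is exponential in the number of players, practical explainers (SHAP, GraphSVX) rely on sampling or model-specific approximations of these same values.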