
This is a repository for reproducibility purposes. In this research, a homicide crime prediction model was developed and different Explainable Artificial Intelligence measures were applied to it.

Repository: josesousaribeiro/Pred2Town-and-XAI


Paper:

Black Box Model Explanations and the Human Interpretability Expectations - An Analysis in the Context of Homicide Prediction

This repository contains all supplementary information from the article, for reproducibility purposes.

Authors:

José Ribeiro - site: https://sites.google.com/view/jose-sousa-ribeiro

Níkolas Carneiro - site: https://br.linkedin.com/in/nikolas-carneiro-62b6568

Ronnie Alves (Leader) - site: https://sites.google.com/site/alvesrco

Abstract:

Strategies based on Explainable Artificial Intelligence (XAI) have promoted better human interpretability of the results of black box machine learning models. The XAI measures currently in use (CIU, Dalex, ELI5, LOFO, SHAP, and Skater) provide various forms of explanation, including global ranks of attribute relevance. Current research points to the need for further studies on how these explanations meet the interpretability expectations of human experts, and on how they can be used to make the model even more transparent while taking into account specific complexities of the model and dataset being analyzed, as well as important human factors of sensitive real-world contexts/problems. Intending to shed light on the explanations generated by XAI measures and their interpretability, this research addresses a real-world classification problem related to homicide prediction, duly endorsed by the scientific community; it replicates the proposed black box model, uses 6 different XAI measures to generate explanations, and consults 6 different human experts to generate what this research refers to as Interpretability Expectations (IE). The results were computed through comparative analysis and identification of relationships among all the attribute ranks produced: ~49% concordance was found among attributes indicated by both XAI measures and human experts, ~41% of attributes were indicated exclusively by XAI measures, and ~10% exclusively by human experts. The results allow for answering, in the context of homicide prediction: "Do the different XAI measures generate similar explanations for the proposed problem?", "Are the interpretability expectations generated among different human experts similar?", "Do the explanations generated by XAI measures meet the interpretability expectations of human experts?", and "Can Interpretability Explanations and Expectations work together?"
The article also proposes a tool, called ConeXi, for combining different ranks, with or without loss of elements, enabling the combination of human expert ranks with ranks generated by XAI measures.
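The concordance figures above come from comparing the sets of attributes indicated by XAI measures and by human experts. The idea can be illustrated with plain set operations; the attribute names below are hypothetical placeholders, not the actual Pred2Town attributes, and the resulting percentages are illustrative only:

```python
# Hedged sketch: measuring concordance between attributes indicated by
# XAI measures and by human experts. Attribute names are hypothetical.
xai_attrs = {"age", "weapon", "location", "time", "priors", "gender", "district"}
expert_attrs = {"age", "weapon", "location", "time", "priors", "motive"}

union = xai_attrs | expert_attrs
both = xai_attrs & expert_attrs          # indicated by both sources
xai_only = xai_attrs - expert_attrs      # indicated only by XAI measures
expert_only = expert_attrs - xai_attrs   # indicated only by experts

for label, group in [("both", both), ("XAI only", xai_only),
                     ("experts only", expert_only)]:
    print(f"{label}: {len(group) / len(union):.0%}")
```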

Link: https://arxiv.org/abs/2210.10849

Description for execution:

All data regarding the reproducibility of this work can be found in this repository.

data_input: the Pred2Town dataset used;

data_output: the train/test split of the Pred2Town dataset;

model: the Random Forest (RF) model, already trained;

df_models_info.csv: key performance information about the analyzed RF model;

XAI_Pred2Town.ipynb: all the source code used to run the experiments presented in this research. The notebook is commented, documented, and separated into sections for easier understanding when executing it.
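Each XAI measure applied in the notebook ultimately produces a global rank of attribute relevance for the RF model. As an illustration of that idea (not the notebook's actual code), here is a minimal permutation-importance ranking over a hypothetical toy model, using only the standard library:

```python
import random

# Hedged sketch of a global attribute-relevance rank, the kind of output
# the XAI measures produce. The model, data, and features are hypothetical.
random.seed(0)

FEATURES = ["f0", "f1", "f2"]

def toy_model(row):
    # Toy "model": f0 weighs twice as much as f1; f2 is ignored entirely.
    return 1 if 2 * row["f0"] + row["f1"] > 1.5 else 0

data = [{f: random.random() for f in FEATURES} for _ in range(200)]
labels = [toy_model(r) for r in data]  # model is perfect on its own labels

def accuracy(rows):
    return sum(toy_model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(feature):
    # Shuffle one column and measure the resulting drop in accuracy.
    shuffled = [r[feature] for r in data]
    random.shuffle(shuffled)
    perturbed = [{**r, feature: v} for r, v in zip(data, shuffled)]
    return accuracy(data) - accuracy(perturbed)

rank = sorted(FEATURES, key=permutation_importance, reverse=True)
print("global relevance rank:", rank)
```

Shuffling f0 hurts accuracy the most, so it tops the rank; f2 never affects the prediction, so its importance is zero and it lands last.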

Note 1: To run the notebook XAI_Pred2Town.ipynb, we suggest using Google Colab for a better and faster execution.

Note 2: To run XAI_Pred2Town.ipynb, you will need the file 'Pred2Town_Pre-processed_by_Orange_binary_class_with_metadata_clean.csv' in the 'data loading' section.

Note 3: If you prefer, you can access the ConeXi library at: https://github.com/josesousaribeiro/ConeXi.
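The idea behind combining expert ranks with XAI ranks can be sketched with a simple Borda-style aggregation. This is an illustration only, not ConeXi's actual algorithm, and the attribute names are hypothetical:

```python
from collections import defaultdict

# Hedged sketch of rank combination via Borda-style scoring; illustrative
# only, not the actual ConeXi algorithm. Attribute names are hypothetical.
def combine_ranks(*ranks):
    scores = defaultdict(float)
    for rank in ranks:
        n = len(rank)
        for position, attr in enumerate(rank):
            scores[attr] += n - position  # earlier position -> more points
    # Sort by total score, breaking ties alphabetically for determinism.
    return sorted(scores, key=lambda a: (-scores[a], a))

xai_rank = ["weapon", "age", "location", "time"]
expert_rank = ["age", "weapon", "motive"]  # ranks may differ in length

print(combine_ranks(xai_rank, expert_rank))
# -> ['age', 'weapon', 'location', 'motive', 'time']
```

Because each rank contributes points only for the elements it contains, ranks of different lengths (with or without loss of elements) can still be merged into a single combined order.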

Expert consultation:

Link to the survey applied to specialists in the area of criminal data:

https://sites.google.com/view/survey-peridico-pred2town/in%C3%ADcio

Cite this work:

@article{ribeiro2022black,
  title={Black Box Model Explanations and the Human Interpretability Expectations--An Analysis in the Context of Homicide Prediction},
  author={Ribeiro, Jos{\'e} and Carneiro, N{\'\i}kolas and Alves, Ronnie},
  journal={arXiv preprint arXiv:2210.10849},
  year={2022}
}
