
Interpretability in machine learning models with Local Interpretable Model-Agnostic Explanations (LIME)

Team members: Bingquan Cai, Chuwei Chen, Xiaowei Ge


Abstract

Machine learning models are widely adopted in many fields, and specialists make important decisions based on these models' predictions. Such decisions can be critical in areas such as medicine and autonomous driving. Understanding the reasons behind a prediction is therefore necessary to make the prediction trustworthy.

In this project, we applied LIME, an explanation technique that can explain any classifier's predictions in terms of human-interpretable features. We applied it to three machine learning models, an SVM, a random forest, and a CNN, which take either 1D textual data or 2D image data as input. We then evaluated LIME's explanations for each of the three models. The explanations we obtained show how these models arrive at their predictions and why a model sometimes predicts incorrectly. We also compared the LIME explanations across models to understand the differences in their underlying decision logic.
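As a rough illustration of the workflow described above, the sketch below shows how a LIME explanation can be produced for a text classifier with the `lime` package. The dataset, the TF-IDF + SVM pipeline, and the parameters are illustrative assumptions, not necessarily the exact setup used in this project (see the report for the actual models and data).

```python
# Minimal sketch: explaining a text classifier's prediction with LIME.
# Assumes the `lime` and `scikit-learn` packages; the dataset and model
# below are placeholders, not the project's actual pipeline.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from lime.lime_text import LimeTextExplainer

# Train a simple SVM text classifier (probability=True so LIME can
# query class probabilities for its perturbed samples).
categories = ["rec.autos", "sci.med"]
train = fetch_20newsgroups(subset="train", categories=categories)
pipeline = make_pipeline(TfidfVectorizer(), SVC(probability=True))
pipeline.fit(train.data, train.target)

# Explain one prediction: LIME perturbs the text by removing words,
# fits a local linear model, and reports the most influential words.
explainer = LimeTextExplainer(class_names=list(train.target_names))
exp = explainer.explain_instance(train.data[0],
                                 pipeline.predict_proba,
                                 num_features=6)
print(exp.as_list())  # [(word, weight), ...] for the explained class
```

For image inputs (e.g., a CNN), the analogous `lime.lime_image.LimeImageExplainer` perturbs superpixels instead of words, but the overall procedure is the same.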

For details, see the report CS542_report_LIME.pdf.

About

Boston University CS542 Final Project, Summer 2022
