shapicant

shapicant is a feature selection package based on SHAP [LUN] and target permutation, for pandas and Spark.

It is inspired by PIMP [ALT], with some differences:

  • PIMP fits a probability distribution to the population of null importances or, alternatively, uses a non-parametric estimation of the PIMP p-values. Instead, shapicant only implements the non-parametric estimation.
  • For the non-parametric estimation, PIMP computes the fraction of null importances that are more extreme than the true importance (i.e. r/n). Instead, shapicant computes it as (r+1)/(n+1) [NOR], which avoids reporting p-values of exactly zero (see the sketch after this list).
  • PIMP uses the Gini importance of Random Forest models or the Mutual Information criterion. Instead, shapicant uses SHAP values.
  • While feature importance measures such as the Gini importance show an absolute feature importance, SHAP provides both positive and negative impacts. Instead of taking the mean absolute value of the SHAP values for each feature as its importance, shapicant takes the mean of the positive and of the negative SHAP values separately (also illustrated in the sketch below). The true importance needs to be consistently higher than the null importances for both positive and negative impacts. For multi-class classification, the true importance needs to be higher for at least one of the classes.
  • While feature importance measures such as the Gini importance of Random Forest models are computed on the training set, SHAP values can be computed out-of-sample. Therefore, shapicant allows computing them on a distinct validation set. To decide whether to compute them on the training set or on a validation set, you can refer to the "Training vs. Test Data" discussion (it concerns PFI [BRE], which is a different algorithm, but the general idea still applies).
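To make the signed importances and the (r+1)/(n+1) estimate concrete, here is a minimal NumPy sketch. The array shapes, the placeholder data and the clipping-based definition of the signed means are illustrative assumptions, not shapicant's exact internals:

import numpy as np

rng = np.random.default_rng(0)
true_shap = rng.normal(size=(200, 5))       # placeholder: (n_samples, n_features) SHAP values, true labels
null_shap = rng.normal(size=(100, 200, 5))  # placeholder: (n_iter, n_samples, n_features), permuted labels

def signed_means(shap_values):
    # Mean positive and mean negative SHAP value per feature, computed separately
    # (one possible reading of the description above).
    pos = np.clip(shap_values, 0, None).mean(axis=-2)
    neg = np.clip(shap_values, None, 0).mean(axis=-2)
    return pos, neg

def empirical_p(true_imp, null_imps):
    # Non-parametric p-value (r + 1) / (n + 1), where r counts null importances
    # at least as extreme as the true one and n is the number of permutations [NOR].
    r = (null_imps >= true_imp).sum(axis=0)
    return (r + 1) / (null_imps.shape[0] + 1)

pos_true, neg_true = signed_means(true_shap)
pos_null, neg_null = signed_means(null_shap)
p_pos = empirical_p(pos_true, pos_null)      # positive impacts
p_neg = empirical_p(-neg_true, -neg_null)    # signs flipped so "larger = more extreme"
selected = np.maximum(p_pos, p_neg) <= 0.05  # keep features significant on both sides

Note how the +1 terms guarantee that no p-value can be exactly zero, however few permutations are run.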

Permuting the response vector instead of permuting the features has some advantages (the overall loop is sketched after this list):

  • The dependence between predictor variables remains unchanged.
  • The number of permutations can be much smaller than the number of predictor variables for high-dimensional datasets (unlike PFI [BRE]), and there is no need to add shadow features (unlike Boruta [KUR]).
  • Since the feature set does not change across iterations, a distributed implementation is more straightforward.
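For reference, the target-permutation loop itself fits in a few lines. This is a simplified single-machine sketch with arbitrary dataset, model and explainer choices, not shapicant's actual implementation:

import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
rng = np.random.default_rng(42)

# SHAP values from the model fit on the true labels.
model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)
true_shap = shap.TreeExplainer(model).shap_values(X)

# Null distribution: refit on permuted labels, leaving the features untouched.
null_shap = []
for _ in range(100):
    y_perm = rng.permutation(y)  # permute the response vector only
    model_null = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y_perm)
    null_shap.append(shap.TreeExplainer(model_null).shap_values(X))

The exact shape returned by shap_values depends on the shap version and the task; the per-feature importances and p-values would then be computed from true_shap and null_shap as in the previous sketch.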

Installation

Dependencies

shapicant requires:

  • Python (>= 3.6)
  • shap (>= 0.36.0)
  • numpy
  • pandas
  • scikit-learn
  • tqdm

For Spark, we also need:

  • pyspark (>= 3.0)
  • pyarrow

User installation

The easiest way to install shapicant is using pip

pip install shapicant

or conda

conda install -c conda-forge shapicant

Documentation

Installation instructions, the API reference and examples can be found in the documentation.
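As a quick orientation, a minimal usage sketch with the package's PandasSelector follows; the parameter names are based on the documentation, so check it for the exact signatures:

import shap
from shapicant import PandasSelector
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

X, y = load_wine(return_X_y=True, as_frame=True)

# Pass the (unfitted) estimator and the SHAP explainer class;
# shapicant runs the permutation loop internally.
selector = PandasSelector(RandomForestClassifier(random_state=42), shap.TreeExplainer, n_iter=100)
selector.fit(X, y)
selected_features = selector.get_features(alpha=0.05)  # features whose p-value is at most 0.05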

References

[LUN] Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems (pp. 4765–4774).
[ALT] Altmann, A., Toloşi, L., Sander, O., & Lengauer, T. (2010). Permutation importance: a corrected feature importance measure. Bioinformatics, 26(10), 1340–1347.
[NOR] North, B. V., Curtis, D., & Sham, P. C. (2002). A note on the calculation of empirical P values from Monte Carlo procedures. American Journal of Human Genetics, 71(2), 439–441.
[BRE] Breiman, L. (2001). Random Forests. Machine Learning, 45(1), 5–32.
[KUR] Kursa, M., & Rudnicki, W. (2010). Feature Selection with the Boruta Package. Journal of Statistical Software, 36, 1–13.
