DESCRIPTION
Package: lime
Type: Package
Title: Local Interpretable Model-Agnostic Explanations
Version: 0.5.0.9000
Date: 2019-06-13
Authors@R:
c(person(given = "Thomas Lin",
family = "Pedersen",
role = c("cre", "aut"),
email = "[email protected]",
comment = c(ORCID = "0000-0002-5147-4711")),
person(given = "Michaël",
family = "Benesty",
role = c("aut"),
email = "[email protected]"))
Maintainer: Thomas Lin Pedersen <[email protected]>
Description: When building complex models, it is often difficult to explain why
the model should be trusted. While global measures such as accuracy are
useful, they cannot be used for explaining why a model made a specific
prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for
explaining the outcome of black box models by fitting a local model around
the point in question and perturbations of this point. The approach is
described in more detail in the article by Ribeiro et al. (2016)
<arXiv:1602.04938>.
License: MIT + file LICENSE
URL: https://lime.data-imaginist.com
BugReports: https://github.com/thomasp85/lime/issues
Encoding: UTF-8
LazyData: true
RoxygenNote: 6.1.1
Roxygen: list(markdown = TRUE)
VignetteBuilder: knitr
Imports: glmnet,
stats,
ggplot2,
tools,
stringi,
Matrix,
Rcpp,
assertthat,
htmlwidgets,
shiny,
shinythemes,
methods,
grDevices,
gower
Suggests: xgboost,
testthat,
mlr,
h2o,
text2vec,
MASS,
covr,
knitr,
rmarkdown,
sessioninfo,
magick,
keras,
ranger
LinkingTo: Rcpp,
RcppEigen
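
The Description field summarizes the core LIME idea: explain a single prediction of a black-box model by fitting an interpretable local model on weighted perturbations of the point in question. As a hedged illustration of that idea only (not the package's actual implementation, which lives in the R sources above), here is a minimal NumPy sketch. The stand-in `black_box` function, the Gaussian perturbation scale, and the kernel width are all illustrative choices, not values taken from the package.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque model: a nonlinear function of two features.
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

def explain_locally(predict, x0, n_samples=5000, scale=0.5, kernel_width=0.3):
    """Fit a weighted linear surrogate around x0 (the core LIME idea)."""
    # 1. Perturb the point of interest.
    X = x0 + rng.normal(0.0, scale, size=(n_samples, x0.size))
    y = predict(X)
    # 2. Weight perturbations by proximity to x0 (Gaussian kernel).
    d = np.linalg.norm(X - x0, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 3. Weighted least squares: intercept plus one coefficient per feature.
    A = np.column_stack([np.ones(n_samples), X - x0])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # local feature effects, the "explanation"

x0 = np.array([1.0, 0.0])
effects = explain_locally(black_box, x0)
# Near (1, 0) the black box behaves like 2*x0 + 3*x1, so the surrogate's
# coefficients should come out close to (2, 3).
```

The surrogate's coefficients approximate the model's local behavior at `x0`, which is exactly the kind of per-prediction explanation the Description contrasts with global measures such as accuracy. The package itself adds the pieces this sketch omits: feature selection via 'glmnet', support for text and image inputs, and interpretable feature representations.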