
wonky plot from check_model() on a glmmTMB example (#680)
strengejacke committed Feb 12, 2024
1 parent 082f9e8 commit b46c7d1
Showing 8 changed files with 82 additions and 47 deletions.
2 changes: 1 addition & 1 deletion DESCRIPTION
@@ -1,7 +1,7 @@
Type: Package
Package: performance
Title: Assessment of Regression Models Performance
-Version: 0.10.8.12
+Version: 0.10.8.13
Authors@R:
c(person(given = "Daniel",
family = "Lüdecke",
10 changes: 10 additions & 0 deletions NEWS.md
@@ -15,6 +15,16 @@
* Clarification in the documentation of the `estimator` argument for
`performance_aic()`.

+* Improved plots for overdispersion checks for negative-binomial models from
+  package *glmmTMB* (affects `check_overdispersion()` and `check_model()`).

+* For models of class `glmmTMB`, deviance residuals are now used in the
+  `check_model()` plot.

+* Improved (easier to understand) error messages for `check_model()`,
+  `check_collinearity()` and `check_outliers()` for models with non-numeric
+  response variables.

## Bug fixes

* Fixed issue in `binned_residuals()` for models with binary outcome, where
5 changes: 5 additions & 0 deletions R/check_collinearity.R
@@ -487,6 +487,11 @@ check_collinearity.zerocount <- function(x,
result <- vector("numeric")
na_terms <- vector("numeric")

+# sanity check - models with offset(?) may contain too many term assignments
+if (length(term_assign) > ncol(v)) {
+term_assign <- term_assign[seq_len(ncol(v))]
+}

for (term in 1:n.terms) {
subs <- which(term_assign == term)
if (length(subs)) {
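
The new guard is easy to check in isolation. A minimal standalone sketch (hypothetical values, not package code) of what the truncation does:

```r
# a term-assignment vector with one spurious trailing entry (5 assignments,
# but the stand-in covariance matrix only has 4 columns)
term_assign <- c(1, 2, 2, 3, 3)
v <- diag(4)
if (length(term_assign) > ncol(v)) {
  # drop assignments beyond the columns actually present in v
  term_assign <- term_assign[seq_len(ncol(v))]
}
term_assign
#> [1] 1 2 2 3
```
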
45 changes: 23 additions & 22 deletions R/check_model.R
@@ -183,7 +183,7 @@ check_model.default <- function(x,

minfo <- insight::model_info(x, verbose = FALSE)

-ca <- tryCatch(
+assumptions_data <- tryCatch(
if (minfo$is_bayesian) {
suppressWarnings(.check_assumptions_stan(x, ...))
} else if (minfo$is_linear) {
@@ -196,10 +196,10 @@
}
)

-if (inherits(ca, c("error", "simpleError"))) {
+if (inherits(assumptions_data, c("error", "simpleError"))) {
pattern <- "(\n|\\s{2,})"
replacement <- " "
-cleaned_string <- gsub(pattern, replacement, ca$message)
+cleaned_string <- gsub(pattern, replacement, assumptions_data$message)
insight::format_error(
paste("`check_model()` returned following error:", cleaned_string),
paste0("\nIf the error message does not help identifying your problem, another reason why `check_model()` failed might be that models of class `", class(x)[1], "` are not yet supported.") # nolint
@@ -218,21 +218,22 @@
show_dots <- is.null(n) || n <= 1e5
}

-attr(ca, "panel") <- panel
-attr(ca, "dot_size") <- dot_size
-attr(ca, "line_size") <- line_size
-attr(ca, "check") <- check
-attr(ca, "alpha") <- alpha
-attr(ca, "dot_alpha") <- dot_alpha
-attr(ca, "show_dots") <- isTRUE(show_dots)
-attr(ca, "detrend") <- detrend
-attr(ca, "colors") <- colors
-attr(ca, "theme") <- theme
-attr(ca, "model_info") <- minfo
-attr(ca, "overdisp_type") <- list(...)$plot_type
-attr(ca, "bandwidth") <- bandwidth
-attr(ca, "type") <- type
-ca
+attr(assumptions_data, "panel") <- panel
+attr(assumptions_data, "dot_size") <- dot_size
+attr(assumptions_data, "line_size") <- line_size
+attr(assumptions_data, "check") <- check
+attr(assumptions_data, "alpha") <- alpha
+attr(assumptions_data, "dot_alpha") <- dot_alpha
+attr(assumptions_data, "show_dots") <- isTRUE(show_dots)
+attr(assumptions_data, "detrend") <- detrend
+attr(assumptions_data, "colors") <- colors
+attr(assumptions_data, "theme") <- theme
+attr(assumptions_data, "model_info") <- minfo
+attr(assumptions_data, "overdisp_type") <- list(...)$plot_type
+attr(assumptions_data, "bandwidth") <- bandwidth
+attr(assumptions_data, "type") <- type
+attr(assumptions_data, "model_class") <- class(x)[1]
+assumptions_data
}


@@ -267,7 +268,7 @@ check_model.stanreg <- function(x,
dot_alpha = 0.8,
colors = c("#3aaf85", "#1b6ca8", "#cd201f"),
theme = "see::theme_lucid",
-detrend = FALSE,
+detrend = TRUE,
show_dots = NULL,
bandwidth = "nrd",
type = "density",
@@ -306,7 +307,7 @@ check_model.model_fit <- function(x,
dot_alpha = 0.8,
colors = c("#3aaf85", "#1b6ca8", "#cd201f"),
theme = "see::theme_lucid",
-detrend = FALSE,
+detrend = TRUE,
show_dots = NULL,
bandwidth = "nrd",
type = "density",
@@ -339,7 +340,7 @@ check_model.model_fit <- function(x,
dat <- list()

dat$VIF <- .diag_vif(model, verbose = verbose)
-dat$QQ <- .diag_qq(model, verbose = verbose)
+dat$QQ <- .diag_qq(model, model_info = model_info, verbose = verbose)
dat$REQQ <- .diag_reqq(model, level = 0.95, model_info = model_info, verbose = verbose)
dat$NORM <- .diag_norm(model, verbose = verbose)
dat$NCV <- .diag_ncv(model, verbose = verbose)
@@ -366,7 +367,7 @@ check_model.model_fit <- function(x,
dat <- list()

dat$VIF <- .diag_vif(model, verbose = verbose)
-dat$QQ <- .diag_qq(model, verbose = verbose)
+dat$QQ <- .diag_qq(model, model_info = model_info, verbose = verbose)
dat$HOMOGENEITY <- .diag_homogeneity(model, verbose = verbose)
dat$REQQ <- .diag_reqq(model, level = 0.95, model_info = model_info, verbose = verbose)
dat$OUTLIERS <- check_outliers(model, method = "cook")
33 changes: 25 additions & 8 deletions R/check_model_diagnostics.R
@@ -35,11 +35,13 @@

# prepare data for QQ plot ----------------------------------

-.diag_qq <- function(model, verbose = TRUE) {
-if (inherits(model, c("lme", "lmerMod", "merMod", "glmmTMB", "gam"))) {
+.diag_qq <- function(model, model_info = NULL, verbose = TRUE) {
+if (inherits(model, c("lme", "lmerMod", "merMod", "gam"))) {
res_ <- stats::residuals(model)
} else if (inherits(model, "geeglm")) {
res_ <- stats::residuals(model, type = "pearson")
+} else if (inherits(model, "glmmTMB")) {
+res_ <- stats::residuals(model, type = "deviance")
} else if (inherits(model, "glm")) {
res_ <- .safe(abs(stats::rstandard(model, type = "deviance")))
} else {
@@ -61,7 +63,7 @@
return(NULL)
}

-if (inherits(model, "glm")) {
+if (inherits(model, c("glm", "glmerMod")) || (inherits(model, "glmmTMB") && isFALSE(model_info$is_linear))) {
fitted_ <- stats::qnorm((stats::ppoints(length(res_)) + 1) / 2)
} else {
fitted_ <- stats::fitted(model)
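
A minimal sketch of the half-normal reference quantiles produced by the `qnorm((ppoints(n) + 1) / 2)` construction above, using simulated values as stand-ins for absolute residuals:

```r
set.seed(123)
res_ <- abs(rnorm(100))                           # stand-ins for absolute residuals
fitted_ <- qnorm((ppoints(length(res_)) + 1) / 2) # half-normal reference quantiles
plot(fitted_, sort(res_))                         # points close to a straight line = OK
```
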
@@ -294,11 +296,26 @@

# data for negative binomial models
if (faminfo$is_negbin && !faminfo$is_zero_inflated) {
-d <- data.frame(Predicted = stats::predict(model, type = "response"))
-d$Residuals <- insight::get_response(model) - as.vector(d$Predicted)
-d$Res2 <- d$Residuals^2
-d$V <- d$Predicted * (1 + d$Predicted / insight::get_sigma(model))
-d$StdRes <- insight::get_residuals(model, type = "pearson")
+if (inherits(model, "glmmTMB")) {
+d <- data.frame(Predicted = stats::predict(model, type = "response"))
+d$Residuals <- insight::get_residuals(model, type = "pearson")
+d$Res2 <- d$Residuals^2
+d$StdRes <- insight::get_residuals(model, type = "pearson")
+if (faminfo$family == "nbinom1") {
+# for nbinom1, we can use "sigma()"
+d$V <- insight::get_sigma(model)^2 * stats::family(model)$variance(d$Predicted)
+} else {
+# for nbinom2, "sigma()" has "inverse meaning" (see #654)
+d$V <- (1 / insight::get_sigma(model)^2) * stats::family(model)$variance(d$Predicted)
+}
+} else {
+## FIXME: this is not correct for glm.nb models?
+d <- data.frame(Predicted = stats::predict(model, type = "response"))
+d$Residuals <- insight::get_response(model) - as.vector(d$Predicted)
+d$Res2 <- d$Residuals^2
+d$V <- d$Predicted * (1 + d$Predicted / insight::get_sigma(model))
+d$StdRes <- insight::get_residuals(model, type = "pearson")
+}
}
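
For context on the `nbinom1`/`nbinom2` branch above: these are the two standard negative-binomial parameterizations (textbook forms, not quoted from the package docs), which differ in how the variance grows with the mean:

$$\operatorname{Var}_{\text{nbinom1}}(y) = \mu\,(1 + \alpha), \qquad \operatorname{Var}_{\text{nbinom2}}(y) = \mu + \frac{\mu^2}{\theta}$$

The variance is linear in the mean for `nbinom1` but quadratic for `nbinom2`, which is why the dispersion returned by `sigma()` enters the two `d$V` formulas differently (see #654).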

# data for zero-inflated poisson models
2 changes: 1 addition & 1 deletion tests/testthat/test-model_performance.bayesian.R
@@ -112,7 +112,7 @@ test_that("model_performance.BFBayesFactor", {
})
expect_null(p)


+skip_on_os("linux")
mod <- BayesFactor::regressionBF(mpg ~ cyl, mtcars, progress = FALSE)
modF <- lm(mpg ~ cyl, mtcars)
p <- model_performance(mod)
2 changes: 2 additions & 0 deletions tests/testthat/test-r2_bayes.R
@@ -1,3 +1,5 @@
+skip_on_os("linux")

test_that("r2_BayesFactor", {
skip_if_not_installed("BayesFactor")
set.seed(1)
30 changes: 15 additions & 15 deletions vignettes/check_model.Rmd
@@ -57,7 +57,7 @@ Most plots seen here can also be generated by their dedicated functions, e.g.:
- Binned residuals: `binned_residuals()`
- Check for overdispersion: `check_overdispersion()`

-## Linear models: Are all assumptions for linear models met?
+# Linear models: Are all assumptions for linear models met?

We start with a simple example for a linear model.

@@ -87,9 +87,9 @@ Now let's take a closer look for each plot. To do so, we ask `check_model()` to
diagnostic_plots <- plot(check_model(m1, panel = FALSE))
```

-### Posterior predictive checks
+## Posterior predictive checks

-The first plot is based on `check_predictions()`. Posterior predictive checks can be used to look for systematic discrepancies between real and simulated data. It helps to see whether the type of model (distributional family) fits well to the data (_Gelman and Hill, 2007, p. 158_). Posterior predictive checks can be used to "look for systematic discrepancies between real and simulated data" (_Gelman et al. 2014, p. 169_).
+The first plot is based on `check_predictions()`. Posterior predictive checks can be used to "look for systematic discrepancies between real and simulated data" (_Gelman et al. 2014, p. 169_). It helps to see whether the type of model (distributional family) fits well to the data (_Gelman and Hill, 2007, p. 158_).

```{r eval=all(successfully_loaded[c("see", "ggplot2")])}
# posterior predictive checks
@@ -113,11 +113,11 @@ plot(out)

As you can see, the green line in this plot deviates visibly from the blue lines. This may indicate that our linear model is not appropriate, since it does not capture the distributional nature of the response variable properly.
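
A minimal, self-contained way to reproduce this kind of check (illustrative model on `mtcars`, not the vignette's `m1`; plotting requires the *see* package):

```r
library(performance)
m <- lm(mpg ~ disp, data = mtcars) # illustrative linear model
out <- check_predictions(m)
plot(out) # overlays simulated response distributions on the observed one
```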

-#### How to fix this?
+### How to fix this?

The best way, if there are serious concerns that the model does not fit well to the data, is to use a different type (family) of regression model. In our example, it is obvious that a Poisson regression would be the better choice.

-#### Plots for discrete outcomes
+### Plots for discrete outcomes

For discrete or integer outcomes (like in logistic or Poisson regression), density plots are not always the best choice, as they look somewhat "wiggly" around the actual values of the dependent variables. In this case, use the `type` argument of the `plot()` method to change the plot-style. Available options are `type = "discrete_dots"` (dots for observed and replicated outcomes), `type = "discrete_interval"` (dots for observed, error bars for replicated outcomes) or `type = "discrete_both"` (both dots and error bars).

@@ -134,7 +134,7 @@ out <- check_predictions(m3)
plot(out, type = "discrete_both")
```

-### Linearity
+## Linearity

This plot helps to check the assumption of a linear relationship. It shows whether predictors may have a non-linear relationship with the outcome, in which case the reference line may roughly indicate that relationship. A straight and horizontal line indicates that the model specification seems to be ok.

@@ -160,7 +160,7 @@ out <- plot(check_model(m, panel = FALSE))
out[[2]]
```

-#### How to fix this?
+### How to fix this?

If the green reference line is not roughly flat and horizontal, but rather - like in our example - U-shaped, this may indicate that some of the predictors should be modeled with a quadratic term. Transforming the response variable might be another solution when linearity assumptions are not met.
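
A minimal sketch of the first remedy, assuming (for illustration) that `disp` is the predictor with the U-shaped pattern:

```r
library(performance)
m_quad <- lm(mpg ~ poly(disp, 2), data = mtcars) # quadratic term for disp
out <- plot(check_model(m_quad, panel = FALSE))
out[[2]] # re-inspect the linearity plot
```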

@@ -175,7 +175,7 @@ out[[2]]

**Some caution is needed** when interpreting these plots. Although these plots are helpful to check model assumptions, they do not necessarily indicate so-called "lack of fit", e.g. missed non-linear relationships or interactions. Thus, it is always recommended to also look at [effect plots, including partial residuals](https://strengejacke.github.io/ggeffects/articles/introduction_partial_residuals.html).

-### Homogeneity of variance - detecting heteroscedasticity
+## Homogeneity of variance - detecting heteroscedasticity

This plot helps to check the assumption of equal (or constant) variance, i.e. homoscedasticity. When this assumption is met, the variance of the residuals is similar across different values of the predictors and does not notably increase or decrease. Hence, the desired pattern is that dots spread equally above and below a roughly straight, horizontal line and show no apparent deviation.

@@ -202,7 +202,7 @@ But why does the diagnostic plot used in `check_model()` look different? `check_
diagnostic_plots[[3]]
```

-#### How to fix this?
+### How to fix this?

There are several ways to address heteroscedasticity.

@@ -212,7 +212,7 @@ There are several ways to address heteroscedasticity.

3. Transforming the response variable, for instance, taking the `log()`, may also help to avoid issues with heteroscedasticity.
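
As a minimal sketch of remedy 3 (illustrative model; `check_heteroscedasticity()` provides a complementary statistical test):

```r
library(performance)
m_log <- lm(log(mpg) ~ disp, data = mtcars) # log-transformed response
check_heteroscedasticity(m_log) # non-significant result = no evidence of heteroscedasticity
```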

-### Influential observations - outliers
+## Influential observations - outliers

Outliers can be defined as particularly influential observations, and this plot helps to detect them. Cook's distance (_Cook 1977_, _Cook & Weisberg 1982_) is used to define outliers, i.e. any point in this plot that falls outside of Cook's distance (the dashed lines) is considered an influential observation.

Expand All @@ -223,11 +223,11 @@ diagnostic_plots[[4]]

In our example, everything looks well.

-#### How to fix this?
+### How to fix this?

Dealing with outliers is not straightforward, as it is not recommended to automatically discard any observation that has been marked as "an outlier". Rather, your _domain knowledge_ must be involved in the decision whether to keep or omit influential observations. A helpful heuristic is to distinguish between error outliers, interesting outliers, and random outliers (_Leys et al. 2019_). _Error outliers_ are likely due to human error and should be corrected before data analysis. _Interesting outliers_ are not due to technical error and may be of theoretical interest; it might thus be relevant to investigate them further even though they should be removed from the current analysis. _Random outliers_ are assumed to be due to chance alone and to belong to the correct distribution and, therefore, should be retained.
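
The same check is also available programmatically, using the same `method = "cook"` that `check_model()` relies on (illustrative model):

```r
library(performance)
m <- lm(mpg ~ disp + hp, data = mtcars)
check_outliers(m, method = "cook") # flags observations beyond the Cook's distance threshold
```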

-### Multicollinearity
+## Multicollinearity

This plot checks for potential collinearity among predictors. In a nutshell, multicollinearity means that once you know the effect of one predictor, the value of knowing the other predictor is rather low. Multicollinearity might arise when a third, unobserved variable has a causal effect on each of the two predictors that are associated with the outcome. In such cases, the actual relationship that matters would be the association between the unobserved variable and the outcome.

@@ -244,11 +244,11 @@ The variance inflation factor (VIF) indicates the magnitude of multicollinearity

Our model clearly suffers from multicollinearity, as all predictors have high VIF values.

-#### How to fix this?
+### How to fix this?

Usually, predictors with (very) high VIF values should be removed from the model to fix multicollinearity. Some caution is needed for interaction terms. If interaction terms are included in a model, high VIF values are expected. This portion of multicollinearity among the component terms of an interaction is also called "inessential ill-conditioning", which leads to inflated VIF values that are typically seen for models with interaction terms _(Francoeur 2013)_. In such cases, re-fit your model without interaction terms and check this model for collinearity among predictors.
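
VIF values can also be inspected directly with `check_collinearity()`; a minimal sketch with deliberately correlated predictors (illustrative):

```r
library(performance)
m <- lm(mpg ~ disp + cyl + hp, data = mtcars) # disp, cyl and hp are highly correlated
check_collinearity(m)
```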

-### Normality of residuals
+## Normality of residuals

In linear regression, residuals should be normally distributed. This can be checked using so-called Q-Q plots (quantile-quantile plot) to compare the shapes of distributions. This plot shows the quantiles of the studentized residuals versus fitted values.

@@ -261,7 +261,7 @@

In our example, we see that most data points are ok, except some observations at the tails. Whether any action is needed to fix this or not can also depend on the results of the remaining diagnostic plots. If all other plots indicate no violation of assumptions, some deviation of normality, particularly at the tails, can be less critical.
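
A complementary numeric check is `check_normality()`, which tests the residuals for normality (illustrative model):

```r
library(performance)
m <- lm(mpg ~ disp, data = mtcars)
check_normality(m) # non-significant result = normality not rejected
```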

-#### How to fix this?
+### How to fix this?

Here are some remedies to fix non-normality of residuals, according to _Pek et al. 2018_.

