Trying to bridge the emmeans gap #1232
Isn't this sufficient?

```r
avg_predictions(A, df = insight::get_df(A))
```
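For context on what that suggestion does: `predictions()` and friends accept a `df` argument, and a finite value switches the reported tests and intervals from Wald z to Student t. A small sketch (the `lm` example model here is mine, for illustration only):

```r
library(marginaleffects)

# Illustrative model (not from this thread); n = 32 rows, 2 coefficients,
# so the residual degrees of freedom are 30.
mod <- lm(mpg ~ hp, data = mtcars)

avg_predictions(mod)                             # z-based inference by default
avg_predictions(mod, df = insight::get_df(mod))  # t-based inference instead
```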
Unfortunately, the joint test features are currently very limited. I will look into this in the future, and will probably delegate some of that computation to another package.

One tip: whenever possible, avoid that kind of workaround.

For your interaction, I would say that this requires adopting a radically different point of view. IMHO, the only people who can reasonably be expected to make sense of your code are extremely advanced `emmeans` users. In contrast, the `marginaleffects` approach spells out each step explicitly. The code below is obviously more verbose than the `emmeans` equivalent, but I guess what I'm saying is that the verbosity buys transparency. Of course, YMMV!

```r
library(afex)
library(emmeans)
library(marginaleffects)

data(md_12.1)
A <- aov_ez("id", "rt", md_12.1, within = c("angle", "noise"))

# emmeans
ems <- emmeans(A, ~ angle + noise)
w <- data.frame(
  "None vs Some" = c(-2, 1, 1) / 2,
  "High vs Med" = c(0, 1, -1)
)
contrast(ems, interaction = list(angle = w, noise = "pairwise"))
>  angle_custom noise_pairwise   estimate   SE df t.ratio p.value
>  None.vs.Some absent - present     -162 16.2  9  -9.970  <.0001
>  High.vs.Med  absent - present       84 24.0  9   3.500  0.0067

# marginaleffects
hyp <- function(x) {
  dplyr::reframe(x,
    term = c("None vs Some", "High vs Med"),
    estimate = c(
      estimate[angle == "X0"] - mean(estimate[angle != "X0"]),
      estimate[angle == "X8"] - estimate[angle == "X4"]
    ),
    .by = "noise"
  )[, 2:3] |>
    dplyr::summarize(estimate = diff(estimate), .by = "term")
}

predictions(A,
  newdata = datagrid(angle = unique, noise = unique),
  hypothesis = hyp
)
>
>          Term Estimate Std. Error     z Pr(>|z|)    S 2.5 % 97.5 %
>  None vs Some     -162       16.2 -9.97   <0.001 75.4  -194   -130
>  High vs Med        84       24.0  3.50   <0.001 11.1    37    131
>
> Type: response
> Columns: term, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high
```
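As an aside on the arithmetic: an interaction contrast like the one above is the Kronecker product of the two weight vectors, applied to the vector of cell means. A base-R sketch; the cell means below are made-up numbers, not values from `md_12.1`:

```r
# Interaction contrast = Kronecker product of the factor weight vectors,
# applied to the vector of cell means.
w_angle <- c(-2, 1, 1) / 2   # None vs Some
w_noise <- c(1, -1)          # absent - present (pairwise)
w_cells <- as.numeric(kronecker(w_noise, w_angle))
w_cells
# -1.0  0.5  0.5  1.0 -0.5 -0.5

# Cell means ordered absent.(X0, X4, X8) then present.(X0, X4, X8);
# these values are invented for illustration.
m <- c(462, 515, 610, 492, 617, 825)
sum(w_cells * m)
```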
Yes, for fixed-effects models. rmANOVA or mixed-effects models require something other than the residual df.
I didn't see an option for joint tests outside of that.

Re: the interface discussion, thanks 😊 Also thanks for the example. I think I might be able to build a function-generating function that can bridge the `emmeans` way and the `marginaleffects` way to some extent. Would that be something you would consider adding?
That's really intriguing! Yes, in principle, that's something that I'd consider. But fair warning: I'm annoyingly picky about user interface. One thing I don't want to do is replicate the `emmeans` interface wholesale; any addition should fit the spirit of the package. What does "spirit" mean? Not sure... I guess what I'm saying is that it would be important to discuss the ultimate user interface before writing any code, because there's a real chance I won't merge if we can't find an interface that makes sense and is transparent.
I'll start super ambitious and we can work our way back. Broadly, define a contrast as any comparison between groups of estimates, possibly conditional on other variables. So we would need a function that can define the groups to be compared, how to compare them, and which other variables to condition on.

```r
library(marginaleffects)

data("mtcars")
mtcars$cyl <- factor(mtcars$cyl)
mod <- lm(mpg ~ cyl * am, data = mtcars)

contr <- function(..., comparison = "difference", by = NULL,
                  transform = identity, aggregate = mean) {
  # magic goes here
}

# generates a function that can be fed to `hypothesis=`
hyp <- contr(
  c(4, 8) %vs% 6,
  4 %vs% 8,
  comparison = "ratio",
  by = "am"
)

avg_predictions(mod, variables = c("cyl", "am"),
                hypothesis = hyp)
```

The `%vs%` operator would define which groups of estimates get compared.

This definition excludes other types of contrasts, such as polynomial contrasts. Those could be defined by passing a vector/matrix of contrast weights instead (in which case the group-definition machinery would not apply).
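For what it's worth, the proposed `%vs%` operator could start life as a tiny spec constructor that `contr()` later translates into a `hypothesis` function. The sketch below is purely hypothetical interface code, not part of `marginaleffects`:

```r
# Hypothetical sketch: `%vs%` just records which groups of levels to compare.
`%vs%` <- function(lhs, rhs) {
  structure(list(lhs = lhs, rhs = rhs), class = "vs_spec")
}

# A format method so specs print readably.
format.vs_spec <- function(x, ...) {
  paste(paste(x$lhs, collapse = "+"), "vs", paste(x$rhs, collapse = "+"))
}

spec <- c(4, 8) %vs% 6
format(spec)
# "4+8 vs 6"
```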
Interesting. I think it’s useful to spell something out explicitly. In your examples, there are two very different operations: (1) aggregating estimates into groups, and (2) comparing estimates to one another.

Philosophically, `marginaleffects` keeps these operations separate: aggregation goes through the `by` argument, and comparison goes through the `hypothesis` argument.

Consider a simple example. First, we fit a model and make predictions for each row of the original data (no aggregation). Since `mtcars` has 32 rows, we get 32 estimates:

```r
library(marginaleffects)
library(tibble)

mod <- lm(mpg ~ factor(cyl) * am, data = mtcars)
predictions(mod) |> nrow()
> [1] 32
```

Let’s aggregate/average the unit-level predictions for each value of `cyl`:

```r
avg_predictions(mod, by = "cyl")
>
>  cyl Estimate Std. Error    z Pr(>|z|)     S 2.5 % 97.5 %
>    6     19.7      1.146 17.2   <0.001 218.5  17.5   22.0
>    4     26.7      0.914 29.2   <0.001 618.7  24.9   28.5
>    8     15.1      0.810 18.6   <0.001 255.0  13.5   16.7
>
> Type: response
> Columns: cyl, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high
```

Now, say you want to “collapse” the 4/6 and 8 cylinder categories. We can pass a data frame to `by`:

```r
bydf = tribble(
  ~ cyl, ~ by,
  4, "small",
  6, "small",
  8, "large"
)
avg_predictions(mod, by = bydf)
>
>     By Estimate Std. Error    z Pr(>|z|)     S 2.5 % 97.5 %
>  small     24.0      0.715 33.5   <0.001 816.9  22.6   25.4
>  large     15.1      0.810 18.6   <0.001 255.0  13.5   16.7
>
> Type: response
> Columns: estimate, std.error, statistic, p.value, s.value, conf.low, conf.high, by
```

As noted above, the philosophy is to do contrasts using the `hypothesis` argument:

```r
avg_predictions(mod, by = bydf, hypothesis = ~ sequential)
>
>         Hypothesis Estimate Std. Error     z Pr(>|z|)    S 2.5 % 97.5 %
>  (large) - (small)    -8.87       1.08 -8.21   <0.001 52.0   -11  -6.75
>
> Type: response
> Columns: hypothesis, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high
```

Finally, it is true that the two arguments overlap somewhat. But I think it’s useful to think about the operations separately, because any implementation of a new contrasting interface that goes solely through one of them would be limited.

What I want to avoid is having two completely parallel ways to approach the same two-step problem.
This is interesting! For a single contrast, that really gets us 90% of the way there; we would just need a way to add interaction contrasts and conditional contrasts. But contrasts often appear as families of contrasts, e.g., pairwise, consecutive, orthogonal polynomial... which the current interface does not generate automatically.
This feature would be trivial to add. For example, this one-liner PR passes almost all existing tests, and allows this:

```r
mod <- lm(mpg ~ factor(cyl) * am, data = mtcars)
bydf = tribble(
  ~ cyl, ~ by,
  4, "a",
  6, "a",
  8, "b",
  4, "x",
  6, "y",
  8, "y"
)
avg_predictions(mod, by = bydf)
>  By Estimate Std. Error    z Pr(>|z|)     S 2.5 % 97.5 %
>   a     24.0      0.715 33.5   <0.001 816.9  22.6   25.4
>   x     26.7      0.914 29.2   <0.001 618.7  24.9   28.5
>   y     16.6      0.662 25.2   <0.001 461.6  15.4   17.9
>   b     15.1      0.810 18.6   <0.001 255.0  13.5   16.7
>
> Type: response
> Columns: estimate, std.error, statistic, p.value, s.value, conf.low, conf.high, by
```
Yes, but that would require typing out that whole data frame every time. The docs do say that more complex aggregations are supported, so I'm not sure that allowing for this overlap is inconsistent.
I'd certainly be interested in more ergonomic ways to specify that `by` data frame.
BTW, I’m not sure if you were aware that we can use matrices in the `hypothesis` argument. I could easily imagine someone creating a utility function to leverage that. Of course, one may need access to the data frame itself to know the correct indices, so perhaps you’re right and that has to happen inside a custom function.

```r
library(afex)
library(emmeans)
library(marginaleffects)

data(md_12.1)
A <- aov_ez("id", "rt", md_12.1, within = c("angle", "noise"))

h = matrix(c(
    -1, 1, .5, -.5, .5, -.5,
    -1, 2, .1, -.5, .8, -.5),
  ncol = 2
)
colnames(h) <- c("None vs. Some", "Something else")

predictions(A,
  hypothesis = h,
  newdata = datagrid(angle = unique, noise = unique)
)
>
>            Term Estimate Std. Error     z Pr(>|z|)    S 2.5 % 97.5 %
>   None vs. Some     -162       16.2 -9.97   <0.001 75.4  -194   -130
>  Something else      284       38.1  7.47   <0.001 43.5   210    359
>
> Type: response
> Columns: term, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high
```
Here are some more thoughts. Really, there is some overlap between the two packages here: we can convert `marginaleffects` output into an `emmeans` object and let `emmeans` handle the contrast machinery.

```r
library(marginaleffects)
library(emmeans)
#> Welcome to emmeans.
#> Caution: You lose important information if you filter this package's results.
#> See '? untidy'

mtcars$cyl <- factor(mtcars$cyl)
mtcars$am <- factor(mtcars$am)
mod <- lm(mpg ~ cyl * am, mtcars)

(p <- avg_predictions(mod, variables = c("cyl", "am")))
#>
#>  am cyl Estimate Std. Error     z Pr(>|z|)     S 2.5 % 97.5 %
#>   1   6     20.6      1.751 11.75   <0.001 103.4  17.1   24.0
#>   1   4     28.1      1.072 26.19   <0.001 499.7  26.0   30.2
#>   1   8     15.4      2.144  7.18   <0.001  40.4  11.2   19.6
#>   0   6     19.1      1.516 12.61   <0.001 118.8  16.2   22.1
#>   0   4     22.9      1.751 13.08   <0.001 127.5  19.5   26.3
#>   0   8     15.1      0.875 17.19   <0.001 217.7  13.3   16.8
#>
#> Type: response
#> Columns: cyl, am, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high

emmobj(coef(p), vcov(p), lapply(insight::get_datagrid(p), unique))
#>  cyl am estimate    SE df asymp.LCL asymp.UCL
#>  6   1      20.6 1.751 NA      17.1      24.0
#>  4   1      28.1 1.072 NA      26.0      30.2
#>  8   1      15.4 2.144 NA      11.2      19.6
#>  6   0      19.1 1.516 NA      16.2      22.1
#>  4   0      22.9 1.751 NA      19.5      26.3
#>  8   0      15.1 0.875 NA      13.3      16.8
#>
#> Confidence level used: 0.95
```

(This can be expanded to also support estimates created from Bayesian models.)

This could be a feature for power users. You would then have a few tiers of contrast complexity.
That’s an interesting idea, but I don't think it is possible to implement without fundamental (and enormous) changes to the current code architecture.

My guess is that your mental model is that `marginaleffects` computes linear combinations of a vector of predictions. But behind the scenes, that's not at all what is going on. What actually happens looks like this:

```r
library(marginaleffects)

mod <- lm(mpg ~ hp + am, mtcars)
d1 <- data.frame(hp = 100, am = 0)
d2 <- data.frame(hp = 101, am = 1)
predict(mod, d2) - predict(mod, d1)
>        1
> 5.218198

comparisons(mod,
  variables = c("hp", "am"),
  cross = TRUE,
  newdata = datagrid(hp = 100, am = 0))
>
>  C: am C: hp  hp am Estimate Std. Error    z Pr(>|z|)    S 2.5 % 97.5 %
>  1 - 0    +1 100  0     5.22       1.08 4.83   <0.001 19.4   3.1   7.34
>
> Type: response
> Columns: rowid, term, contrast_am, contrast_hp, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high, hp, am, predicted_lo, predicted_hi, predicted, mpg
```

We are always doing a “counterfactual comparison” between two specific values for each focal predictor: two specific values of `hp`, and two specific values of `am`. We have to fill actual data frames with those specific values and feed them to `predict()`.
This is super cool! I had no idea. But what is the value proposition here? If someone is deep enough into contrasts to know those more advanced techniques, what do they gain over using `emmeans` directly?
My mental model for contrasts is a weighted combination of predictions, like this:

```r
mtcars$cyl <- factor(mtcars$cyl)
mod <- glm(am ~ cyl + hp,
  family = binomial(),
  data = mtcars)

d1 <- data.frame(hp = 100:101, cyl = "4")
d2 <- data.frame(hp = 100:101, cyl = "6")
d3 <- data.frame(hp = 100:101, cyl = "8")

w <- c(-2, 1, 1) / 2
cbind(predict(mod, d1, type = "response"),
      predict(mod, d2, type = "response"),
      predict(mod, d3, type = "response")) %*% w
#>         [,1]
#> 1 -0.7293757
#> 2 -0.7304380
```

Well, {marginaleffects} supports more models (afaik), and produces the estimates (adjusted predictions, AMEs...) using a completely different approach.
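Whichever mental model one prefers, once you have a vector of estimates and its covariance matrix, a weight vector yields both the contrast estimate and its delta-method standard error. A base-R sketch with made-up numbers (the `lincom` helper and all values are illustrative, not `marginaleffects` internals):

```r
# For weights w, estimates b, covariance V:
#   estimate = w'b,  standard error = sqrt(w' V w)
lincom <- function(b, V, w) {
  c(estimate = sum(w * b),
    std.error = sqrt(drop(t(w) %*% V %*% w)))
}

b <- c(0.2, 0.5, 0.9)              # made-up predictions for three groups
V <- diag(c(0.010, 0.020, 0.015))  # made-up covariance matrix
lincom(b, V, c(-2, 1, 1) / 2)      # estimate 0.5, SE sqrt(0.01875)
```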
Aaah, I see, thanks for clarifying! To summarize, I've gone through the proposals in this thread and noted my current assessment of each.
To follow up on the simple effects issue, we can use code like this:
or
which both yield the same output.

Note the exact arguments you have to use here: the reason you need two calls is that the first quantity should always be the same as the second.

I agree.
I'm once again trying to switch to {marginaleffects}, but have found two cases that are buggy or still very hard to execute (sorry for mixing a bug report with a "discussion"):
1. Simple effects (conditional joint tests)
2. Interaction contrasts
Created on 2024-10-14 with reprex v2.1.1
3. Automatic dfs?
Currently all tests are, by default, Wald z-tests, which can be anti-conservative (especially in small samples). Are there any plans for automatic df extraction from the model?
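To illustrate why the default matters: with the same test statistic, a z reference distribution gives a smaller p-value than a t distribution with few degrees of freedom, so Wald z tests understate uncertainty in small samples. A base-R sketch:

```r
# Same statistic, two reference distributions.
stat <- 2.1
2 * pnorm(-stat)       # z-based two-sided p-value
2 * pt(-stat, df = 9)  # t-based p-value with 9 df (larger, more conservative)
```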