# Statistical Rethinking (McElreath) discussion group: proposed outline
(See this overview discussion, which I should probably move into the current wiki.)
Discuss organization, installing relevant software packages, goals and interests
Supplement with McElreath's video on 'what standard statistical tests actually do' vs. what people think they are doing.
Garden of forking data (I suggest we skip the coding that builds that diagram)
- Let's really discuss our intuitive understanding of Bayesian updating, the examples, and the cool things about it
Building a model (the globe-tossing story)
- Do the estimating and plotting with the Kurz implementation, and incorporate a succinct version of this into our own methods book/notes
Components of the model
- Watching/coding the cool sequential updating (a minimal sketch follows below)
- Consider the implications for 'what can we learn from small amounts of data' (and the value-of-information/VOI stuff)
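As a concrete anchor for that discussion, here is a minimal base-R sketch of sequential updating on a grid; the toss sequence and grid size are illustrative choices, not taken from any particular exercise.

```r
# Sequential Bayesian updating of the globe-tossing posterior on a grid.
# `tosses` is an illustrative water/land sequence (1 = water, 0 = land).
tosses <- c(1, 0, 1, 1, 1, 0, 1, 0, 1)

p_grid    <- seq(from = 0, to = 1, length.out = 100)  # candidate values of p
posterior <- rep(1, length(p_grid))                   # flat prior, unnormalised

for (i in seq_along(tosses)) {
  likelihood <- dbinom(tosses[i], size = 1, prob = p_grid)  # likelihood of this single toss
  posterior  <- posterior * likelihood                      # Bayesian update
  posterior  <- posterior / sum(posterior)                  # renormalise
  plot(p_grid, posterior, type = "l",
       main = paste("Posterior after", i, "tosses"),
       xlab = "proportion of water (p)", ylab = "posterior probability")
}
```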
Let's dig into grid approximation, quadratic approximation, and (a bit of) MCMC: how these work and how to code them (see the sketch after this block)
A. From a grid-approximate posterior
B. To simulate prediction
This might take 1 or 2 weeks; I'm not sure
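For reference, a minimal base-R sketch of grid approximation for the canonical globe-tossing data (6 water in 9 tosses), then sampling from that posterior and simulating predictions; the grid resolution, sample size, and 89% interval are arbitrary choices.

```r
# Grid approximation of the globe-tossing posterior (6 water in 9 tosses),
# then sampling from that posterior and simulating new data.
p_grid     <- seq(from = 0, to = 1, length.out = 1000)  # grid of candidate p values
prior      <- rep(1, length(p_grid))                    # flat prior
likelihood <- dbinom(6, size = 9, prob = p_grid)        # likelihood at each grid point
posterior  <- likelihood * prior
posterior  <- posterior / sum(posterior)                # normalise to sum to 1

# A. Sample from the grid-approximate posterior
samples <- sample(p_grid, size = 1e4, replace = TRUE, prob = posterior)
quantile(samples, probs = c(0.055, 0.945))              # an 89% percentile interval

# B. Simulate prediction: water counts in 9 new tosses,
#    propagating posterior uncertainty in p
w_sim <- rbinom(1e4, size = 9, prob = samples)
table(w_sim) / length(w_sim)                            # posterior predictive distribution
```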
Normal distributions
A language for describing models, the globe-tossing model
Gaussian model of height (first part)
Linear prediction (see the sketch after this list):
- This adds a predictor variable to the above model of height
- Implements quadratic approximation and covers interpretation of the results
Curves from lines: Polynomial models and Splines
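A minimal sketch of the height-on-weight linear model along the lines of Chapter 4, assuming we use McElreath's rethinking package and the Howell1 data it ships with (the Kurz brms translation would be the alternative route); it also doubles as an example of the model-description notation above.

```r
# Linear prediction: adult height regressed on (centred) weight,
# fit by quadratic approximation with rethinking::quap.
library(rethinking)   # assumes the rethinking package is installed

data(Howell1)
d2   <- Howell1[Howell1$age >= 18, ]   # adults only
xbar <- mean(d2$weight)                # centre the predictor

m_height <- quap(
  alist(
    height ~ dnorm(mu, sigma),         # likelihood
    mu <- a + b * (weight - xbar),     # linear model for the mean
    a ~ dnorm(178, 20),                # prior: average height around 178 cm
    b ~ dlnorm(0, 1),                  # log-normal prior keeps the slope positive
    sigma ~ dunif(0, 50)
  ),
  data = d2
)

precis(m_height)   # quadratic-approximate posterior summary
```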
DR: This material is interesting, but I think there are better resources on causal inference that we might want to supplement it with. Still, given the importance of the issue and how he returns to it, I think we should probably devote 1-2 weeks to causal inference.
Or could this material be skipped and deferred to a separate causal inference reading group?
Lots of material here; I'm not sure how we should split it up, but I think it needs at least 2 sessions
1-2 sessions?
(DR: perhaps not to be confused with mediation, which is very hard to measure.)
0-1 sessions
DR: Let's just skim this and mainly treat it as a black box (as Nik suggested)
This gets rather mathematically technical, but let's try not to be scared.
1 session/week for this one?
Nik: I don't know how useful the maxent stuff would be. (You can use it to "justify" things like priors, but I think a better response to detractors is just to ask what values they think are plausible a priori and assess sensitivity to prior choice that way, or do an "expert elicitation" type thing.)
Also seems mathy but worth it. 1 week, not covered as comprehensively.
Also seems mathy but worth it. 1 week?
DR: This is ~the chapter I'm most interested in. Any way we can get here earlier?
This seems very important to our work, so maybe give it 2 weeks.
This seems to go in a bunch of fancy directions. Probably needs at least 2 weeks.
- Advanced varying slopes
- Instruments and causal designs (what's the connection to the varying slopes thing?)
# 15: Missing Data and Other Opportunities
A useful application. Hopefully, if we get this far, we'll be smart enough to do it in 1 week?
This doesn't really need a lot of work, but we could use it as an opportunity for a recap discussion