Implement fit in Stan; fully Bayesian (including sigma); different smoothing. #34
base: master
Conversation
Oh, one more difference (and this is probably the true source of the larger uncertainty and R_t values): I treat the time series of exponential growth as the mean of a Poisson distribution describing each day's new cases. This is similar to Bettencourt & Ribeiro (2008), where Delta-T is a stochastic process and the observations are drawn from it, though there is no really good reason to assume the draw is Poisson, I suppose. Your model, by contrast, uses the prior day's observed case number to predict the next, and applies the stochastic process only in the prior. I can imagine that my approach leads to a lot more uncertainty in the time-series R_t, since in my model it is truly a stochastic process, observed only through the Poisson draws, while you assume that the prior day's observations (which are perfectly known) grow exponentially to produce R_{t+1}.
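To make the contrast concrete, here is a hypothetical numpy sketch of the two observation models; the growth-factor form `exp((R_t - 1) / serial_interval)` and all parameter values are illustrative assumptions, not the PR's actual Stan code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration (not the PR's Stan code): contrast the two
# observation models discussed above under a fixed R_t.
serial_interval = 7.0          # assumed mean serial interval, in days
n_days = 60
r_t = 1.3 * np.ones(n_days)    # constant R_t for simplicity

# Model A (this PR): latent *expected* counts follow the exponential-growth
# process; each day's observation is a Poisson draw around that mean, so
# observation noise never feeds back into the growth process.
mu = np.empty(n_days)
mu[0] = 10.0
for t in range(1, n_days):
    mu[t] = mu[t - 1] * np.exp((r_t[t] - 1.0) / serial_interval)
obs_a = rng.poisson(mu)

# Model B (rt.live-style): tomorrow's expected count is predicted from
# yesterday's *observed* count, so each day's Poisson noise propagates
# into the next day's prediction.
obs_b = np.empty(n_days, dtype=int)
obs_b[0] = 10
for t in range(1, n_days):
    obs_b[t] = rng.poisson(obs_b[t - 1] * np.exp((r_t[t] - 1.0) / serial_interval))
```

In Model A the latent trajectory is only ever seen through noisy draws, which is why the fitted R_t carries more uncertainty; in Model B the previous day's count is treated as perfectly known.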
1. Re-parameterized the model to efficiently fit states like NY where there is lots of data (i.e. the posterior is likelihood-dominated). It goes slower on the low-data states, but with fewer divergences / bad fits overall.
2. More discussion / description of the model *including differences to rt.live* in the notebook, particularly the choice to have expected counts depend on yesterday's *expected* rather than *observed* counts.
3. Marginalized over the serial interval, based on work from https://epiforecasts.io/covid/
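Point 1 likely refers to the usual centered/non-centered trade-off for random-walk priors in Stan: the centered form tends to sample better when the likelihood dominates (lots of data), the non-centered form when the prior dominates. A minimal numpy sketch of the two equivalent constructions (hypothetical; the PR's actual Stan code is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n = 0.05, 30

# Centered parameterization: each latent state is drawn directly around
# the previous one, so the walk values themselves are the parameters.
log_rt_c = np.zeros(n)
for t in range(1, n):
    log_rt_c[t] = rng.normal(log_rt_c[t - 1], sigma)

# Non-centered parameterization: sample standard-normal innovations z,
# then scale and accumulate.  Same prior distribution over log R_t, but
# a different sampling geometry, often easier when the data are weak.
z = rng.standard_normal(n - 1)
log_rt_nc = np.concatenate([[0.0], sigma * np.cumsum(z)])
```

Both constructions define the same Gaussian random walk; which one HMC handles better depends on whether the posterior is prior- or likelihood-dominated, matching the NY-vs-low-data behavior described above.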
See the commit message for today's major update; it should still be mergeable automatically, but the model has gotten a bit more sophisticated, and there is a lot more explanatory text in the notebook about how it works.
…t of days where the AR(1) prior dominates and days where the observational likelihood dominates.
Choose smoothing length scale to suppress power on scales of f > 1/(7 d) by an order of magnitude.
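For a Gaussian smoothing kernel this target pins down the width directly; a hypothetical back-of-the-envelope check (the kernel shape is an assumption, as the commit's actual smoothing code is not shown):

```python
import numpy as np

# For a Gaussian kernel of width tau (days), the Fourier amplitude transfer
# is H(f) = exp(-2 pi^2 f^2 tau^2), so the *power* suppression is
# |H(f)|^2 = exp(-4 pi^2 f^2 tau^2).  Solving |H(f)|^2 = 0.1 at
# f = 1/(7 d) gives the minimal kernel width:
f = 1.0 / 7.0                                      # cycles per day
tau = np.sqrt(np.log(10.0)) / (2.0 * np.pi * f)    # about 1.69 days
power_at_f = np.exp(-4.0 * np.pi**2 * f**2 * tau**2)
```

So an order-of-magnitude power suppression at weekly frequencies needs a Gaussian width of roughly 1.7 days, under this assumed kernel.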
…ause most states' total test data is crap.
…ed---most states above 1.
…ies and *then* add.
This may be a work in progress; I'm finding some differences in the analysis between my model and yours (in general, mine prefers larger values of R_t, particularly for the states with small values in your model). I think my model is also more conservative about the range of R_t, probably because I'm marginalizing over sigma, and sigma may not be particularly well determined (so sometimes the "smoothing" from small random increments in the prior for R_t is not as strong as yours).
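The sigma point can be illustrated with a toy Monte Carlo (names, priors, and sizes here are assumptions for illustration, not the PR's model): marginalizing over the random-walk scale turns the Gaussian walk into a scale mixture with heavier tails, i.e. a wider range of plausible R_t trajectories even when the average step size is the same.

```python
import numpy as np

rng = np.random.default_rng(2)
n_draws, n_days = 5000, 30

# Fixed sigma: a Gaussian random walk with known step size.
sigma0 = 0.05
walk_fixed = sigma0 * rng.standard_normal((n_draws, n_days)).cumsum(axis=1)

# Marginalized sigma: each draw uses its own sigma ~ HalfNormal(0.05).
# Since E[sigma^2] = 0.05^2 for this half-normal, the endpoint *variance*
# matches the fixed case; the extra width shows up as heavier tails.
sigmas = np.abs(rng.normal(0.0, 0.05, size=(n_draws, 1)))
walk_marg = sigmas * rng.standard_normal((n_draws, n_days)).cumsum(axis=1)

# Compare endpoint kurtosis (ratio of fourth moment to squared variance):
end_f, end_m = walk_fixed[:, -1], walk_marg[:, -1]
kurt_fixed = np.mean(end_f**4) / np.mean(end_f**2) ** 2   # about 3 (Gaussian)
kurt_marg = np.mean(end_m**4) / np.mean(end_m**2) ** 2    # noticeably larger
```

Heavier tails in the prior over trajectories translate into wider credible intervals on R_t, consistent with the more conservative ranges described above.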
Nevertheless, this should be safe to merge. Here is what I did:
I don't have the web skills to make the plots dynamic as your team does, but I did try to reproduce the "feel" of your plots for comparison. Two key plots appear at the end of the notebook, and are reproduced here: