Speeding Up Sampling for an RLDDM #617
Replies: 2 comments 1 reply
-
Adjusting the likelihood function to compute the Q-values for each session/subject in parallel within the scan function massively sped things up. I'm still curious about adjusting the priors, but the model is now converging properly with enough tuning steps.
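Roughly, the change looks like this; a minimal sketch with illustrative names (a two-option task, with arrays stacked as trials × subjects), not the exact code from my model:

```python
import pytensor
import pytensor.tensor as pt

def q_step(action_t, reward_t, q_prev, alpha):
    # q_prev: (n_subjects, 2) Q-values; one scan step updates every
    # subject's chosen option at once instead of looping per subject
    rows = pt.arange(q_prev.shape[0])
    chosen_q = q_prev[rows, action_t]      # (n_subjects,)
    pe = reward_t - chosen_q               # prediction error per subject
    return pt.set_subtensor(q_prev[rows, action_t], chosen_q + alpha * pe)

actions = pt.imatrix("actions")  # (n_trials, n_subjects), choices in {0, 1}
rewards = pt.matrix("rewards")   # (n_trials, n_subjects)
alpha = pt.vector("alpha")       # (n_subjects,) learning rates
q0 = pt.zeros((actions.shape[1], 2))

# a single scan over trials; the subject dimension is fully vectorized
q_traj, _ = pytensor.scan(
    fn=q_step,
    sequences=[actions, rewards],
    outputs_info=[q0],
    non_sequences=[alpha],
)
```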
-
@theonlydvr great to hear. A few comments on your original statements:

Another suggestion: possibly try

Are you interested in helping us include RLDDM functionality properly into HSSM?
-
I've been able to get an RLDDM working with HSSM; however, I've been having substantial issues with slow sampling, especially during tuning. The slowdown seems to come from an excessive number of steps at each iteration (the sampler frequently hits the max tree depth). This is especially problematic for the RLDDM because it needs the PyMC scan function, which makes each function evaluation relatively slow.

From my understanding, this kind of behavior normally occurs when the priors are somewhat misspecified, so I was wondering whether there is currently any way in HSSM to use the priors that were previously implemented in HDDM. The original HDDM priors used non-Gaussian distributions for the DDM parameters, and I can't see any way to specify non-Gaussian, hierarchical regressions for parameters in the HSSM framework (I know Bambi can definitely handle them). Is there any way to use the original HDDM priors in HSSM?
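For concreteness, the kind of specification I mean is straightforward in plain Bambi; a minimal sketch with made-up column names, and prior values only loosely inspired by HDDM's Gamma prior on boundary separation:

```python
import numpy as np
import pandas as pd
import bambi as bmb

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "a": rng.gamma(shape=2.0, scale=0.5, size=200),  # toy outcome
    "subject": np.repeat(np.arange(10), 20),
})

priors = {
    # non-Gaussian prior on the population-level intercept
    "Intercept": bmb.Prior("Gamma", alpha=1.5, beta=0.75),
    # hierarchical subject offsets whose group scale is itself non-Gaussian
    "1|subject": bmb.Prior("Normal", mu=0,
                           sigma=bmb.Prior("HalfNormal", sigma=0.5)),
}

model = bmb.Model("a ~ 1 + (1|subject)", data=df, priors=priors)
```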
Alternatively, does anyone have suggestions for improving the sampling speed or reducing the number of steps per iteration? I can share any code that might be helpful!
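For reference, here is the kind of knob-turning I could try (a sketch, assuming hssm.HSSM.sample forwards these keyword arguments through to pm.sample and PyMC's NUTS; `rlddm_model` and all values are illustrative):

```python
idata = rlddm_model.sample(
    tune=3000,           # longer adaptation before drawing
    draws=1000,
    target_accept=0.95,  # smaller steps; often fewer divergences
    max_treedepth=8,     # cap leapfrog doublings so bad iterations stay cheap
)
```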