Cache built kibblesets for later reuse #229
I had a different concept of how this might work come to mind. As described to @WValenti in Skype (with some edits):
(It'd probably also make deployment less of a headache for ISI.)
As discussed in person, kibble is subject to extensive development for the addition of longitudinal information. This is going to be a long-term project, as it requires significant noodling first, and will involve UI changes as well.
Also, I believe in a cacheless society. Gimme credit for my experience - cache causes problems. ;-)
I mean, I would prefer not to have "cache tables" myself, but until MariaDB/MySQL give us native support for materialized views, the performance penalties are way too severe. :(
The most frequent slowdown we run into in regular operations is when a kibble set has to be built for a given variable. We're already running into situations where we have to "cache" these results (witness the proliferation of All-EXISTS cohorts for FHVs in particular), so it might be worthwhile to just go the next step and formalize this caching process.
This could potentially speed up creation of cohorts, custom variables, and downloads (for example, previewing a potential cohort would populate the cache and thus make the ensuing creation - and subsequent previews - much faster), simplify the process of envaluing a custom variable, and keep cohortInds from becoming an untenably slow monster. It also cleanly separates an existing de facto system caching function from vital user data. We'd just have to remember to purge the kibble cache any time variable data changes (not often), make sure the resulting cache table(s) is/are supremely well indexed, and we're golden.
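The build-once / reuse / purge-on-change flow described above could be sketched roughly like this. This is just a toy illustration using SQLite from Python as a stand-in for MariaDB/MySQL; the table names (`kibble_cache`, `variable_data`) and the "expensive" kibble-set query are hypothetical placeholders, not the real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Stand-in for the real variable data (hypothetical schema).
    CREATE TABLE variable_data (var_id INTEGER, subject_id INTEGER, value TEXT);
    -- Cache of built kibble sets, keyed by variable; well indexed so that
    -- cohort previews and creation can reuse it instead of rebuilding.
    CREATE TABLE kibble_cache (var_id INTEGER, subject_id INTEGER);
    CREATE INDEX idx_kibble_cache ON kibble_cache (var_id, subject_id);
""")

def get_kibble_set(var_id):
    """Return the kibble set for a variable, building the cache on a miss."""
    cached = conn.execute(
        "SELECT subject_id FROM kibble_cache WHERE var_id = ?", (var_id,)
    ).fetchall()
    if cached:
        return [row[0] for row in cached]
    # Cache miss: run the (expensive) build once, then store the result.
    built = conn.execute(
        "SELECT DISTINCT subject_id FROM variable_data WHERE var_id = ?",
        (var_id,),
    ).fetchall()
    conn.executemany(
        "INSERT INTO kibble_cache (var_id, subject_id) VALUES (?, ?)",
        [(var_id, row[0]) for row in built],
    )
    return [row[0] for row in built]

def purge_kibble_cache(var_id):
    """Invalidate the cache whenever the variable's underlying data changes."""
    conn.execute("DELETE FROM kibble_cache WHERE var_id = ?", (var_id,))

# Demo: populate data, build the cache once, reuse it, then purge on change.
conn.executemany(
    "INSERT INTO variable_data VALUES (?, ?, ?)",
    [(1, 101, "a"), (1, 102, "b"), (2, 101, "c")],
)
get_kibble_set(1)       # builds and populates kibble_cache
get_kibble_set(1)       # served straight from kibble_cache
purge_kibble_cache(1)   # variable data changed: drop the stale entries
```

The key design point is the same one raised above: since there's no native materialized-view support, the cache is only as trustworthy as the purge discipline, so every write path that touches variable data has to call the purge.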
Something to give some serious thought IMO.