Type
Description of the changes
I'm back! In some capacity at least...
I decided to whip up a cache for the Postgres driver. In theory this cache is generic and could be its own layer above all drivers, but I didn't want to change anything about the JSON driver, and for now Postgres is the only other one :)
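To illustrate the "layer above all drivers" idea, here's a minimal sketch of a driver-agnostic read-through/write-through cache wrapper. The class names and the `get`/`set` interface are assumptions for the sketch, not the PR's actual driver API, and it uses a plain dict where the PR uses an LRU cache:

```python
class DictDriver:
    """Hypothetical stand-in backend so the sketch is runnable."""

    def __init__(self):
        self._store = {}
        self.reads = 0  # counts backend hits, to show the cache working

    def get(self, key):
        self.reads += 1
        return self._store[key]

    def set(self, key, value):
        self._store[key] = value


class CachedDriver:
    """Wraps any driver; checks the cache before hitting storage."""

    def __init__(self, inner, max_size=128):
        self._inner = inner
        self._cache = {}  # plain dict here; the real thing would be an LRU
        self._max_size = max_size

    def get(self, key):
        if key in self._cache:
            return self._cache[key]  # cache hit: no backend round-trip
        value = self._inner.get(key)
        self._remember(key, value)
        return value

    def set(self, key, value):
        self._inner.set(key, value)  # write through to the backend
        self._remember(key, value)

    def _remember(self, key, value):
        if len(self._cache) >= self._max_size:
            # crude eviction: drop the oldest-inserted entry
            self._cache.pop(next(iter(self._cache)))
        self._cache[key] = value
```

Because the wrapper only relies on `get`/`set`, the same layer could sit above the JSON and Postgres drivers alike without either knowing about it.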
The cache is an LRU cache, which scales its maximum size up and down based on the number of cogs loaded (read: drivers instantiated). Its per-cog size limit is admittedly somewhat arbitrary, considering the values stored in the cache could range from single bools to massive dicts. However, I think it's better to ship something working and process-efficient than to get stuck on cache sizing, and this is still bound to be far more memory-efficient than the JSON driver.
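A minimal sketch of that behaviour, assuming an `OrderedDict`-based LRU whose capacity is a per-cog budget times the number of registered cogs; the class name, `ENTRIES_PER_COG` constant, and register/unregister hooks are all hypothetical, not the PR's actual implementation:

```python
from collections import OrderedDict


class LRUDriverCache:
    """LRU cache whose capacity grows and shrinks with the number of cogs."""

    ENTRIES_PER_COG = 64  # arbitrary per-cog budget, as noted above

    def __init__(self):
        self._data = OrderedDict()  # insertion order doubles as recency order
        self._num_cogs = 0

    @property
    def max_size(self):
        return self._num_cogs * self.ENTRIES_PER_COG

    def register_cog(self):
        # Called when a driver is instantiated for a cog; grows the budget.
        self._num_cogs += 1

    def unregister_cog(self):
        # Shrinks the budget and evicts down to the new limit.
        self._num_cogs = max(0, self._num_cogs - 1)
        self._evict()

    def get(self, key, default=None):
        try:
            self._data.move_to_end(key)  # mark as most recently used
            return self._data[key]
        except KeyError:
            return default

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        self._evict()

    def _evict(self):
        # Drop least-recently-used entries until we fit the current budget.
        while len(self._data) > self.max_size:
            self._data.popitem(last=False)
```

With zero cogs registered the budget is zero, so the cache holds nothing; each additional cog adds another `ENTRIES_PER_COG` slots, and unloading a cog evicts back down.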
I'm interested to hear about people's experiences with this, especially on large bots. I haven't actually done any performance testing, but anecdotally it seems a lot faster ;)