Caches persisting issues when flushing with cache manager #17
Comments
I'll have a bit of a think about this. It is definitely a locking issue: we need to lock fetches until the transaction is complete, but Doctrine Cache doesn't provide that ability. I think using the Redis list type means the issue won't be solved for any provider other than Redis.
BTW, thanks heaps for tracking down and reporting this issue. It is definitely a major one for high-concurrency websites.
Sounds likely to me @stecman
@camspiers Ah, of course, I totally forgot the other caches! I had locked my thinking to Redis only. The fix should of course cover the other cache providers too, so I'll have to revise the fix I had in mind. At least we now know the reason for the issues that have been bugging us for ages :).

I tested this with two concurrent requests on our local Vagrant box. It mostly happens on lists that contain more than 30 keys, and we only use Redis, so before anything else I'd of course like to hear whether you can repro the issue too :).

One hotfix that would have helped here is to modify the cache manager to delete keys with a wildcard, though I haven't checked whether that is commonly supported across cache providers. That way, if an overly persistent key goes missing from the list, you could at least flush all the keys for the project. Also, I haven't checked whether the list is used in other ways that could cause saves not to flush correctly; I presume that is the case, e.g. it only deletes keys found in the list? Here are some thoughts on the matter, some of them crazy, and not ordered from feasible to infeasible:
I think the last one would be the easiest to implement. The lock file should have a TTL set so it is flushed very soon; that way, if the request runs into any errors, the lock is removed automatically as well. It also shouldn't block writing all keys, just the update to the set.
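A minimal sketch of that lock-with-TTL idea, assuming the phpredis extension; the lock key name and the 10-second TTL are made up for illustration:

```php
<?php
// Sketch only: guard updates to the shared key list with a lock that
// expires on its own, so a crashed request cannot wedge the cache.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$lockKey = 'cacheinclude.list.lock'; // hypothetical key name

// SET with NX (set only if absent) and EX (TTL in seconds) acquires the
// lock atomically, so only one request can hold it at a time.
if ($redis->set($lockKey, '1', ['nx', 'ex' => 10])) {
    try {
        // ... read the key list, append the new key, write it back ...
    } finally {
        $redis->del($lockKey); // release early; the TTL is only a safety net
    }
} else {
    // Another request is updating the list. Per the suggestion above, the
    // cache entry itself can still be written; only the list update is skipped.
}
```

(try/finally needs PHP 5.5 or newer; on older versions the TTL alone would have to clean up after a failed request.)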
@camspiers Have you had a chance to give this any thought yet? If some of my suggestions sound reasonable, I can allocate time to produce a fix as a pull request. Testing the other cache providers might be out of scope for us, but I'd test it fully with Redis at least.
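As an aside, the wildcard-delete hotfix floated a couple of comments up could look roughly like the following with phpredis; the key pattern is an assumption, and SCAN (Redis 2.8+) is used instead of the blocking KEYS command:

```php
<?php
// Sketch only: delete every key matching a pattern, as an emergency flush.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// SCAN_RETRY makes scan() keep calling SCAN until it returns keys or the
// cursor is exhausted, which simplifies the loop below.
$redis->setOption(Redis::OPT_SCAN, Redis::SCAN_RETRY);

$it = null;
while ($keys = $redis->scan($it, 'cacheinclude.*')) { // hypothetical pattern
    foreach ($keys as $key) {
        $redis->del($key);
    }
}
```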
I have tidied up the commit and it seems to work. I've tested with a local Vagrant box and Redis at 60 concurrent requests, and it no longer leaves any keys lying around.
@stecman, can you have a look at this? I don't have the bandwidth for more work at the moment.
2 years late to the party, but we've just encountered this issue too on a high(er)-traffic website.
We also made another tweak for this: Sohova@c878669
Description
We have had issues similar to the ones that pull request #13 tries to prevent.
If you flush the cache with the cache manager, some orphaned cache keys are left behind in the system.
The culprit seems to be in the CacheInclude class, in the function that keeps a list of the cache keys and the URLs they belong to. The problem comes when you have lots of keys and concurrent requests to the server: the list gets out of sync.
And since the cache manager's delete function only deletes keys that are found in that list, you end up in a situation where you have to manually flush the site's Redis cache, because the frontend just keeps serving the orphaned key without ever checking it against the "list".
This needs to be changed to use Redis's own list type to keep things in sync correctly.
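To illustrate the difference, here is a rough sketch of what that could look like with phpredis; the key names are invented, and this is the general idea rather than the module's actual code:

```php
<?php
// Sketch only: appends go through Redis's native list type, so two
// concurrent requests can no longer overwrite each other's additions.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$listKey  = 'cacheinclude.keys';                   // hypothetical key list
$cacheKey = 'cacheinclude.' . md5('/some/url/');   // hypothetical key scheme

// RPUSH appends atomically on the server, unlike the racy
// "read array, append locally, write array back" pattern.
$redis->rPush($listKey, $cacheKey);

// Flushing then walks the list; because every append went through Redis
// itself, the list can no longer drift out of sync and orphan keys.
foreach ($redis->lRange($listKey, 0, -1) as $key) {
    $redis->del($key);
}
$redis->del($listKey);
```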
Hopefully I'll have a tweak ready for testing in the next couple of days.
Steps to repro
@stecman I added you here as you might be interested in this :)