Refinery::Resource not removed from cache on destroy #3520
Hi @evenreven Unfortunately I don't know how to help you with this issue. But if you say your site is slow, you can use the Nginx cache for serving Dragonfly resources. That means resources will be served from the Nginx cache (by the Nginx process) instead of by a Ruby process. I have attached my nginx config. Check the `proxy_cache_path` directive.
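A minimal sketch of that kind of setup (the paths, zone name, cache times, and upstream port below are illustrative assumptions, not the attached config):

```nginx
# Cache Dragonfly-served files at the Nginx layer (illustrative values).
proxy_cache_path /var/cache/nginx/dragonfly levels=1:2
                 keys_zone=dragonfly:10m max_size=1g inactive=30d;

server {
  listen 80;
  server_name example.com;

  # Only /system/... (Dragonfly) responses are cached here.
  location /system/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_cache dragonfly;
    proxy_cache_valid 200 30d;
    add_header X-Cache-Status $upstream_cache_status;
  }

  location / {
    proxy_pass http://127.0.0.1:3000;
  }
}
```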
Thanks, that's an interesting config for Dragonfly. But caching is not the part of my site that is slow. The main problem is N+1 queries, but with a warm cache the site is actually quite fast. The problem is that orphaned cache entries still get served even though the owner of the cached entry was deleted with the destroy action. I don't even know how to find the entry to delete it manually from Redis (where I assume it's served from, but it could be in a static file somewhere; I don't even know). If anything the app is too fast, so I would actually be fine with not caching file resources at all (images do need to be cached, though, with the ImageMagick processing and all). I could try to remove the cache directive from the config (the action dispatch flag) and rely on fragment caching for the processed images. That feels wrong, though.
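If the "action dispatch flag" here is Rails' built-in Rack::Cache switch, removing it would look like this (a sketch only; whether Refinery's Dragonfly middleware needs separate reconfiguration is an open question):

```ruby
# config/environments/production.rb — sketch, assuming Rack::Cache is the
# HTTP cache in play. Note this disables it app-wide, not just for Dragonfly.
Rails.application.configure do
  config.action_dispatch.rack_cache = false
end
```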
I'm unsure if this question belongs here or in Dragonfly upstream, but I'll try here.
I've uploaded hundreds of PDFs since I first created my Refinery app in 2015 (first 2.1.5, later upgraded to 4.0.3), and I've noticed this issue on and off. When I delete a `Refinery::Resource` in the CMS panel (say, to upload a new version of a document), the file is deleted from the filestore (I just use a local file store). However, the old direct link `/system/resources/base64reallylongstring/old-document.pdf` still works, so it's still being served from the Dragonfly cache. Needless to say, I deleted the old document for a reason, and I would really like the link to disappear from the internet (people could have forwarded a link to the old document by email, for instance), and I also want to free up the space without having to wait for a Redis LFU eviction. I don't know much about the low-level innards of Rails cache handling, but shouldn't it invalidate the key when the record is destroyed?
My site is extremely slow without caching due to some legacy architectural issues, so it's not an option to flush the entire cache when I delete a document.
Three questions:

1. Shouldn't destroying a `Refinery::Resource` invalidate the cached entry automatically?
2. If not, can I use the `id` or something (`file_uid`?) to look it up and then expire that specific key?
3. Why does deleting the resource leave a `.meta` file behind?

Thanks in advance! If this is upstream behaviour, feel free to tell me, and I'll file an issue there instead.
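On the "expire that specific key" idea: the desired behavior amounts to an after-destroy hook that deletes the cache entry derived from the resource's URL. A self-contained sketch of that shape, with a toy in-memory cache standing in for the real HTTP cache (all names here are illustrative, not Refinery's or Dragonfly's actual API):

```ruby
# Toy cache standing in for Rails.cache / Rack::Cache (illustrative only).
class ToyCache
  def initialize
    @store = {}
  end

  def write(key, value)
    @store[key] = value
  end

  def read(key)
    @store[key]
  end

  def delete(key)
    @store.delete(key)
  end
end

CACHE = ToyCache.new

# Stand-in for Refinery::Resource: on destroy, expire the cached response
# for this resource's public URL instead of waiting for LFU eviction.
class Resource
  attr_reader :file_uid

  def initialize(file_uid)
    @file_uid = file_uid
  end

  def url
    "/system/resources/#{file_uid}/document.pdf"
  end

  def destroy
    CACHE.delete(url) # the "expire that specific key" step
  end
end

res = Resource.new("abc123")
CACHE.write(res.url, "PDF bytes")
res.destroy
CACHE.read(res.url) # => nil
```

The open question for the real app is what the actual cache key looks like (Rack::Cache keys on the full request URL, so the resource's public path may be enough to derive it), which is why this stays a sketch.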