I had a crazy idea when I should have been sleeping that it would be handy to be able to perform a clear cache operation on the target publishing server when pushing, e.g. large image updates.
That came out of a fear that when a client of mine pushes a large set of edits at once, users of the site might see some screwy pages until the cache refreshes. I have seen this myself when updating media via git on the command line. I normally set high levels of caching for production servers.
You could do this by triggering another webhook on the editing instance. It could be a global checkbox on the file selection list UI.
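A minimal sketch of what the target-side end of such a webhook might look like. Everything here is an assumption for illustration: the port, the shared secret, and the GitHub-style `X-Hub-Signature-256` header; it just verifies an HMAC signature and shells out to Grav's cache-clear CLI command.

```python
# Hypothetical cache-clearing webhook receiver for the publishing target.
import hashlib
import hmac
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

SECRET = b"shared-webhook-secret"  # assumption: configured on both instances


def signature_ok(body: bytes, header: str) -> bool:
    """Check an HMAC-SHA256 signature, GitHub-webhook style."""
    expected = "sha256=" + hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, header)


class ClearCacheHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        if not signature_ok(body, self.headers.get("X-Hub-Signature-256", "")):
            self.send_response(403)
            self.end_headers()
            return
        # Grav ships a CLI command for clearing the cache; the working
        # directory / path to bin/grav is an assumption here.
        subprocess.run(["bin/grav", "clearcache"], check=False)
        self.send_response(200)
        self.end_headers()


def serve(port: int = 8400) -> None:
    """Run the receiver; the editing instance would POST here after a push."""
    HTTPServer(("", port), ClearCacheHandler).serve_forever()
```

The publishing side would then fire one extra POST at this endpoint after a successful push, gated by that global checkbox.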
This idea would also fit well with another half-baked plan of mine to split this plugin into "pushy-publisher" and "pushy-target".
Do you think the person publishing the changes is well placed to decide whether a cache clear is needed?
* Does the user know the size of the push?
* Does the user know when the issue might occur?
* Does the user know the performance cost of clearing the cache on a large site?
You're right, those are serious concerns. This came about because I'm expecting a client to push some big changes, including media, to the production server. It probably won't happen often, so I might try to coordinate the cache refresh myself.
When a copy/paste of a /user folder (perhaps excluding plugins/themes) on the same server is faster than a remote fetch (the git changes), it might be an idea to apply the git changes to a staging folder (which need not be a website) and then copy that folder into the production Grav folder.
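That two-step flow could be sketched roughly like this. The paths, the `rsync` flags, and the plugins/themes excludes are all assumptions, not anything the plugin actually does:

```python
# Sketch of the staging-then-copy idea: slow remote fetch first, fast
# local copy second, so the live site is inconsistent for less time.
import subprocess


def sync_command(staging: str, production: str) -> list[str]:
    """Build the local copy step: rsync from the staging folder into the
    live Grav user folder, skipping plugins and themes as suggested."""
    return [
        "rsync", "-a", "--delete",
        "--exclude", "plugins/",
        "--exclude", "themes/",
        staging.rstrip("/") + "/",
        production.rstrip("/") + "/",
    ]


def pull_then_sync(staging: str, production: str) -> None:
    # Slow part: fetch the remote git changes into the staging folder,
    # which does not need to be a running site...
    subprocess.run(["git", "-C", staging, "pull", "--ff-only"], check=True)
    # ...then the fast part: a same-server copy into the production folder.
    subprocess.run(sync_command(staging, production), check=True)
```

The point of the split is that the production folder only changes during the quick local `rsync`, not during the whole remote fetch.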
Or only run scheduled pulls on the remote server during hours when user traffic is low.
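A scheduled low-traffic pull is usually just a cron entry on the production server. The time, the site path, and the use of Grav's `bin/grav clearcache` command here are assumptions; adjust to whatever pull mechanism the plugin ends up exposing:

```
# m h dom mon dow  command  (03:15 picked as a likely low-traffic hour)
15 3 * * * cd /var/www/grav && git pull --ff-only && bin/grav clearcache
```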