Optimize/cache the packages.json index #81
There are a few places where caching could be implemented and various ways to do it, so I'm open to any suggestions. Caching the output would likely provide the greatest benefit, since checksums are generated for every release any time the index is requested. A few things need to be considered, such as how the cache gets invalidated and whether the output needs to vary by user or client.
It might also be possible to see some gains by caching the result of the directory scan so that doesn't have to be done on each load. That would be easier to invalidate, but wouldn't help with the checksums (I'm guessing that's slow, but I don't have any benchmarks). Does anyone else have any thoughts?
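For illustration only, a minimal sketch of what output caching could look like using WordPress transients; the builder function and transient name below are hypothetical placeholders, not part of SatisPress:

<?php
// Illustrative sketch: cache the generated packages.json output so the directory scan
// and checksum generation only run when the cache is empty or has been invalidated.
function satispress_cached_packages_json() {
	$json = get_transient( 'satispress_packages_json' );

	if ( false === $json ) {
		// Expensive step: scan the packages and generate checksums for every release.
		$json = satispress_build_packages_json(); // hypothetical builder function

		// Cache for 12 hours; a release/update event could delete this transient instead.
		set_transient( 'satispress_packages_json', $json, 12 * HOUR_IN_SECONDS );
	}

	return $json;
}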
Now at the point where we're timing out due to how long it takes to build the packages.json file.
If you're at the point where the request is timing out, it sounds like caching may only be part of the solution. Wouldn't the request time out when the cache goes stale and prevent any output? I've had a few people express interest in varying the output based on logged-in status and the identity of the user/client. Doing that is possible right now, but if the cache doesn't account for it, then it removes a lot of flexibility, so I think it really does need to be considered. What you're requesting is definitely the simplest solution and would probably be adequate for most people, so maybe a basic add-on/extension plugin would work?
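As an illustration of how a cache could still account for per-user output, a rough sketch that varies the transient key with the authenticated user; again, the helper names are hypothetical and not part of SatisPress:

<?php
// Sketch: key the cached index by user so output can vary per client
// without rebuilding packages.json on every request.
function satispress_cached_packages_json_for( $user_id ) {
	$key  = 'satispress_packages_' . md5( (string) $user_id );
	$json = get_transient( $key );

	if ( false === $json ) {
		$json = satispress_build_packages_json_for( $user_id ); // hypothetical per-user builder
		set_transient( $key, $json, 12 * HOUR_IN_SECONDS );
	}

	return $json;
}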
I think the real win here would be batch or chunk processing, or optimizing the build of the JSON file in some way. I think the caching just makes sense to have it available at a moment's notice to decrease overall build/composer install time.
One reason I asked about the number of plugins being managed is that the best approach depends on that scale. I agree caching by default would be worthwhile, but I do want to make sure it doesn't limit flexibility. I don't have a lot of time to dedicate to this, so I may look into putting together a basic add-on for now, but I'm happy to review any pull requests or make suggestions.
I realise this issue is a couple of years old now, but has anyone managed to speed up SatisPress at all? We're managing ~85 plugins with it. SatisPress has been huge for us, but it's getting very slow at this point.
@Tawmu, maybe this is useful for you: #117 (comment). We still haven't tried it on our live SatisPress instance, but it should work in theory and improve performance significantly.
In regards to performance and reliability, I was just throwing the idea out to our DevOps/SysAdmin team of hosting the package storage somewhere other than the WordPress install. I recognize that what I'm talking about is different than the performance of serving up many plugins in the packages.json index.
@timnolte The storage layer in SatisPress was abstracted so that you could use other services; you would just need to write an adapter that implements the Storage interface. I wrote the majority of an adapter for S3 a while back, but it's not totally complete. I don't think it'd have a noticeable impact on performance one way or another unless the SatisPress server was being used by a large internal team or publicly. I think the main problem is that as the number of plugins and cached releases grows, it takes more and more time to generate packages.json.
@bradyvercher right, I recognized that this topic is more about the performance of generating packages.json. I did start poking around and saw the Storage interface. However, it seems like there isn't any sort of hook to provide a custom storage adapter, which is what I think would be required here: satispress/src/ServiceProvider.php, lines 313 to 316 (at 12923cf).
Perhaps there is another way to provide an adapter that I'm just not finding.
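Purely illustrative, one way such a hook could look. SatisPress does not currently expose this filter; the filter name, container key, and adapter class below are hypothetical, and only the general shape of the container registration is assumed:

<?php
// Sketch: a filter in the container definition would let an add-on swap the storage adapter.
$container['storage'] = function( $container ) {
	$default = new SatisPress\Storage\Local( '/path/to/satispress-packages' ); // assumed local adapter
	return apply_filters( 'satispress_storage', $default, $container );
};

// An add-on could then return its own adapter implementing the Storage interface, e.g. for S3:
add_filter( 'satispress_storage', function( $storage ) {
	return new My_S3_Storage(); // hypothetical class implementing the Storage interface
} );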
Hi @bradyvercher, first of all, thank you for all your work on SatisPress; it's a vital part of our CI/CD workflow and we don't know what we'd do without it! I was happy to see that my colleagues @tyrann0us and @widoz have already contributed to SatisPress and even this discussion. (@tyrann0us is no doubt going to laugh at the length of this reply, and seeing it's from me, so be it 😇)

The SatisPress instance we run has grown to provide access to ~175 plugins and themes. I should add that this count could probably be brought down by 20–30% if we did some proper housekeeping. Still, a number below 100 seems like a stretch. In our case, the generated JSON currently weighs over 750 KB. We're pretty sure that caching the output for the packages.json endpoint would make a significant difference.

As @tyrann0us already pointed out referring to this comment, switching to a multisite approach to manage 'categories' of plugins helps us with the WordPress dashboard performance. Our DevOps person's first attempt at coming up with a caching mechanism independent of WordPress / PHP failed today.

In your opening reply, you wrote that there's interest in varying the output based on logged-in status and the identity of the user/client. How will the output vary, apart from the fact that user / API key combos that aren't registered are simply denied access?
Hi! Any progress on this issue, or any solution to suggest? SatisPress with ~100 plugins runs really slow for us. The multisite approach looks promising performance-wise, but I don't like the idea of having several SatisPress repos in the composer.json file, and it would be confusing to know which plugin is in which repo. Has anyone found a way around this?
This is a misunderstanding; SatisPress is network-active, so you only have one repository (one vendor name), even if the underlying WordPress installation is a multisite.
Hi @tyrann0us!
Aha, OK, so I install SatisPress on the main site so it will be network-active and active on all underlying sites. Then I create underlying sites and name them as I want (for example by vendor, as you did in some test), and install plugins/themes in those underlying sites (max 10-20 plugins per site). SatisPress on the main site will keep track of all of them, so I only need one repo in the composer.json file. This will also solve the problem with plugins not being compatible with each other; just install them on different underlying sites. Great! No performance issues with too many underlying sites so far? Do you recommend converting an existing SatisPress installation to multisite, or starting from scratch with a new one?
@perforsberg77 For clarity: with a WordPress multisite, plugins are installed and managed at the network level, not the site level. The performance aspect is just having plugins active, in order to trigger automatic updates and such, on a per-site basis.
Aha, OK, thanks @timnolte!
I have all plugins at the network level and then 12 subsites where I have activated 5-10 plugins per subsite. My problem now is that only themes/plugins activated on the main site are automatically updated, not any theme/plugin activated on a subsite. How can I enable auto-updates for all network-installed plugins that are not network-enabled, but subsite-activated?
Please see this comment #117 (comment):
In practice, this means that you need to set up a bash script like this:

#!/bin/bash
LOG_FILE="/path/to/logs/auto-update.log"
echo "#### Upgrading WordPress core #####" >> $LOG_FILE
wp core update &>> $LOG_FILE
for SITE in $(wp site list --field=url 2>/dev/null); do # iterating every site in the network is what does the trick
echo "#### Processing ${SITE} #####" >> $LOG_FILE
wp --url="${SITE}" theme update --all &>> $LOG_FILE
wp --url="${SITE}" plugin update --all &>> $LOG_FILE
wp --url="${SITE}" cron event run --due-now &>> $LOG_FILE
done

Then, execute this script from a cronjob. We have the cronjob set to run every four hours. After having this setup in place for quite some time, we noticed that some plugins sometimes don't get updated. We haven't checked yet what exactly is causing the issues, and whether their not being network-active could be the reason. But in general, this approach works fine.
As stated here:
#19 (comment)
This is a pretty big hit performance-wise. And seeing as there's not a strong reason NOT to cache this, I vote that we figure out a good static/cached file that gets updated any time a package is updated.
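A rough sketch of that idea: regenerate a static packages.json whenever packages change, so requests never trigger a full rebuild. The builder function and output path are hypothetical; upgrader_process_complete and wp_mkdir_p are standard WordPress APIs.

<?php
// Sketch: write the generated index to a static file and refresh it when packages update.
function satispress_write_static_index() {
	$json = satispress_build_packages_json(); // hypothetical builder (the expensive step)

	wp_mkdir_p( WP_CONTENT_DIR . '/uploads/satispress-cache' );
	file_put_contents( WP_CONTENT_DIR . '/uploads/satispress-cache/packages.json', $json );
}

// Rebuild after any core/plugin/theme update completes; an add-on could also hook
// whatever event fires when SatisPress caches a new release.
add_action( 'upgrader_process_complete', 'satispress_write_static_index' );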