PostgREST Benchmark #9
I've done the load tests with the client (k6) and the server (pg/pgrst) on separate EC2 instances in the same VPC (over local IP addresses). This avoids networking issues and makes the tests reproducible.

Setup

Database
VPC
(The description of the VPC can be found here)

Load test scenarios

The k6 scripts use constant request rates for 1 min. They can be found here.
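For illustration, a constant-request-rate test can be expressed in k6 with the constant-arrival-rate executor; the endpoint, rate and VU numbers below are assumptions, not the repo's actual scripts:

```js
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  scenarios: {
    getsingle: {
      executor: 'constant-arrival-rate', // fixed request rate, independent of response times
      rate: 1000,          // requests per timeUnit (assumed value)
      timeUnit: '1s',
      duration: '1m',
      preAllocatedVUs: 50, // VUs reserved up front to sustain the rate
    },
  },
};

export default function () {
  // Hypothetical single-row read through PostgREST over the VPC-local address
  const res = http.get('http://10.0.0.2:3000/artist?id=eq.1');
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```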
Results

Here are the summaries of the results. Full results for each test can be found here.

t3a.nano
t2.nano
Comments
---
I was not satisfied with the results I got from PostgREST, so I tried some things to increase the req/s:
Once I'm done with the above improvements, I should be able to get around

After that I'll proceed with writing the blog post. @kiwicopple What do you think? Is that good?

Edit: Improvements done. New GETSingle results on t3a.nano:

---
I think that's amazing @steve-chavez - great job increasing throughput by 10% and completing the PR. Good that we have this consistently running too, so that we can measure the changes to PostgREST over time. Looking forward to the blog post. Make sure you cover the changes you made to get it faster (and your failed attempts) - I'm very curious!

---
@steve-chavez could you run these on a larger instance (maybe t3a.large) and see if you can get any improvement? For KPS I can't improve on ~1200/s (have tried micro, medium, 2xlarge), even when I switch the instances to "Unlimited" mode.

---
@awalias Sure. For reads (GETSingle), I'm also getting similar results on t3a.nano, t3a.large and t3a.xlarge - all unlimited. However, with a c5.xlarge I get a noticeable improvement. I'll post the results in a while.

---
I've done more load tests on the latest version, this time using more t3a instances and c5.xlarge. The results are also on the supabase.io benchmarks project.

Latest version (7.0.1)

t3a.nano
t3a.micro
t3a.medium
t3a.large
t3a.xlarge
c5.xlarge
Comments
Edit 1: Corrected results for GETSingleEmbed on t3a.xlarge.

---
These are load tests on the new master version (unreleased) with the above improvements:

Master version

t3a.nano
t3a.xlarge
c5.xlarge
Comments
Edit 1: Corrected GETSingle t3a.xlarge results.

---
First of all, amazing job with the performance improvements @steve-chavez - 51% increase (!!) on nano for GET single 😲. Wow
I find this strange. Perhaps the t3 architecture has something unusual - are these on standard CPU or unlimited CPU? Perhaps they are on standard, and the CPU is being capped? Either way - this does make one thing clear:
From these numbers, it seems much better to scale horizontally than vertically. The cost of getting a couple of extra vCPUs inside the same box is very high.

---
@kiwicopple Double-checked that they were on unlimited. It turns out that the req/s were improving slightly on GETSingle. I've corrected the results above.

---
I've added load tests for RPC. I think with this we cover all the relevant PostgREST features. The scenarios are:
Results (only for t3a.nano, t3a.xlarge and c5.xlarge) are in the above comments: v7.0.1 and new version. (k6 scripts added in #7)

Comments
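For illustration, PostgREST exposes functions at POST /rpc/<function_name>; a minimal k6 iteration for such a scenario could look like this (host, function name and payload are assumptions, not from the repo's scripts):

```js
import http from 'k6/http';

export default function () {
  // Hypothetical RPC call; PostgREST routes POST /rpc/<fn> to a stored procedure
  const payload = JSON.stringify({ id: 1 });
  const params = { headers: { 'Content-Type': 'application/json' } };
  http.post('http://10.0.0.2:3000/rpc/get_artist', payload, params);
}
```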
---
A couple more findings here.

Unix socket connection from PostgREST to PostgreSQL

Using a Unix socket instead of a TCP socket. This is only possible if pgrest/pg are on the same instance. Basically, have this in the pgrest config:

db-uri = "postgres://postgres@/postgres"

Instead of:

db-uri = "postgres://postgres@localhost/postgres"

t3a.nano - Master version
Comments
---
Pool connections

Number of connections kept in the pgrest pool (db-pool, 10 by default).

t3a.nano - Master version

I've done GETSingle load tests for different pool sizes.
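For reference, the pool size is set with db-pool in the PostgREST config; a minimal sketch, where the schema and role values are assumptions:

```
db-uri = "postgres://postgres@localhost/postgres"
db-schema = "public"
db-anon-role = "postgres"
# connections kept open in the pool (10 by default)
db-pool = 10
```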
Comments
---
Added a test for updates (on #16):
t3a.nano - Nightly version

Ran the test for 15 minutes and got
---
Ant and I found that while doing PATCH load tests on the read schema (1 mil rows, indexed), PostgREST gave a lot of 503 errors. The load tests were done on a
What happens is that in pg, an UPDATE is actually an INSERT plus a DELETE; that has to happen for each row and takes a considerable amount of resources (the indexes have to be updated). This problem didn't appear in my previous PATCH test for chinook because it has a few hundred rows (less work for updating the index). So this is more of a db issue, and the simplest solution is to increase RAM. Still, I've run load tests on the different t3a instances.

Nightly version

Patching a single row on the read schema:
Comments
CREATE TABLE public.read (
  id bigserial,
  slug int,
  -- keep 30% free space on the index pages as well
  unique(id) with (fillfactor=70)
)
-- fill heap pages only to 70% so updated row versions can stay on the same page
WITH (fillfactor=70);
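The single-row PATCH scenario boils down to a request like the following in k6 (host, id and new value are assumptions):

```js
import http from 'k6/http';

export default function () {
  // PostgREST updates the rows matching the filter; here a single row by primary key
  const params = { headers: { 'Content-Type': 'application/json' } };
  http.patch('http://10.0.0.2:3000/read?id=eq.1', JSON.stringify({ slug: 42 }), params);
}
```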
---
I've added Nginx to the benchmark setup. A default Nginx config can lower the throughput by almost 30%, but a good config (unix socket + keepalive) can reduce the loss to about 10%.

t3a.nano - PostgREST nightly - Nginx with default config

The default Nginx config means that a TCP connection is used to connect Nginx to PostgREST and there's no keepalive configured to the upstream server.

GETSingle - 1437.202365/s
POSTSingle - 1160.350482/s
t3a.nano - PostgREST nightly - Nginx with best config

Here Nginx connects to PostgREST through a unix socket and has keepalive 64.

GETSingle - 1786.875745/s
POSTSingle - 1420.388499/s
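A sketch of what that "best config" looks like in Nginx terms; the socket path is an assumption, and upstream keepalive only takes effect with HTTP/1.1 and a cleared Connection header:

```nginx
upstream postgrest {
    # PostgREST listening on a unix socket instead of a TCP port
    server unix:/tmp/pgrst.sock;
    # keep up to 64 idle connections open to the upstream
    keepalive 64;
}

server {
    listen 80;
    location / {
        proxy_pass http://postgrest;
        # required for upstream keepalive to take effect
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```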
Comments
---
I've also tested a t3a.nano standard, zero CPU credits - standalone PostgREST nightly
Comments
---
Edit 1: Corrected the tests according to #30.

m5a instances

Benches on m5a instances for both pg and pgrest - with Nginx included.

m5a.large (50 VUs)

GETSingle - 2577.743879/s
POSTSingle - 2516.502337/s
POSTBulk - 1661.129334/s
m5a.xlarge (50 VUs)

GETSingle - 4430.749173/s
POSTSingle - 4173.195187/s
POSTBulk - 2730.889567/s
m5a.2xlarge (50 VUs)

GETSingle - 7363.037795/s
POSTSingle - 6725.846335/s
POSTBulk - 3771.989959/s
---
Edit: Updated the numbers with the changes discussed in #34.

Unlogged table

Using the same setup as above, but with an unlogged table:

m5a.large - POSTSingle - 2571.211303/s
m5a.xlarge - POSTSingle - 4482.84471/s
m5a.2xlarge - POSTSingle - 7532.095276/s
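For reference, an unlogged table skips WAL writes (and is truncated on crash recovery); it can be created or converted like this, with the table names being illustrative:

```sql
-- create a new unlogged copy of an existing table's structure
CREATE UNLOGGED TABLE public.read_unlogged (LIKE public.read INCLUDING ALL);

-- or convert an existing table in place
ALTER TABLE public.read SET UNLOGGED;
```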
Comments
---
Chore
Describe the chore
As part of our move from Alpha to Beta, we will want to be clear about the limitations and performance of each of the components in Supabase. We need to do 3 things:
Additional context
Steve mentioned that there were some old benchmarks; it might just be a case of running these with the latest version of pgrst.