newer versions benchmarks #2

Closed
c1tt1 opened this issue Mar 18, 2018 · 12 comments

@c1tt1 commented Mar 18, 2018

I ran the benchmarks with the latest versions of all frameworks (for japronto it's still the same v0.1.1).

Python version: 3.6

Here are my hardware specs:
MacBook Pro 2014
SSD storage
Processor 2.6 GHz Intel Core i5
RAM 8 GB

Aiohttp

wrk -d 10 -c 100 -t 12 --timeout 8 http://localhost:8000  # aiohttp
Running 10s test @ http://localhost:8000
  12 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    19.18ms    5.18ms  56.00ms   56.90%
    Req/Sec   418.88     59.88   646.00     78.49%
  50360 requests in 10.10s, 8.02MB read
Requests/sec:   4984.40
Transfer/sec:    812.89KB

wrk -d 10 -c 100 -t 12 --timeout 8 http://localhost:8000/db  # aiohttp
Running 10s test @ http://localhost:8000/db
  12 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    93.14ms   81.84ms 945.17ms   95.65%
    Req/Sec    99.92     25.83   171.00     78.02%
  11628 requests in 10.05s, 2.12MB read
Requests/sec:   1200.97
Transfer/sec:    215.60KB
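
As a quick sanity check on output like the above, wrk's Requests/sec line is just the total request count divided by the elapsed wall-clock time (wrk uses a more precise internal duration, so dividing by the rounded 10.10s lands slightly off the reported figure):

```python
# Sanity-check wrk's summary line: throughput = total requests / elapsed time.
# Figures taken from the aiohttp run above.
requests = 50360
elapsed_s = 10.10  # duration as reported by wrk, rounded

throughput = requests / elapsed_s
print(f"{throughput:.2f} req/s")  # ~4986 req/s, vs wrk's reported 4984.40
```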

Sanic

wrk -d 10 -c 100 -t 12 --timeout 8 http://localhost:8000  # sanic
Running 10s test @ http://localhost:8000
  12 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    24.92ms    9.54ms  72.07ms   57.54%
    Req/Sec   322.23     45.12   454.00     70.50%
  38703 requests in 10.07s, 4.98MB read
Requests/sec:   3845.06
Transfer/sec:    506.92KB

wrk -d 10 -c 100 -t 12 --timeout 8 http://localhost:8000/db  # sanic
Running 10s test @ http://localhost:8000/db
  12 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    99.10ms   18.04ms 203.67ms   72.95%
    Req/Sec    80.56     16.94   151.00     70.87%
  9689 requests in 10.06s, 1.48MB read
Requests/sec:    962.79
Transfer/sec:    150.44KB

I ran into lots of issues with japronto; in the end I didn't bother going ahead with it, as it's not a serious framework. The latest versions do show performance improvements in aiohttp, though.

@samuelcolvin (Owner)

Interesting, thanks for contributing. I'll leave this open to make it easier to find for others.

@samuelcolvin (Owner)

All benchmarks updated.

@akotlar commented Feb 7, 2019

Could you double-check your numbers? See hail-is/hail#5242; on Sanic's latest version I got very different results, and in general I was quite surprised by the reversal in performance order once a third-party asyncio library was introduced. At first blush, data reversals like this suggest a testing issue.

@c1tt1's numbers show, for instance, that aiohttp was just faster than Sanic, consistently. Your updated bench shows Sanic 2x faster, then 5x slower. This is obviously surprising. My results show a consistent 2x benefit in favor of Sanic.

@samuelcolvin (Owner)

Sorry, are you asking me to update the numbers in the readme, or asking about the numbers above?

Could you please provide your numbers for comparison?

We probably need to update all the benchmarks again; the most recent are from six months ago.

@akotlar commented Feb 7, 2019

Sure!

Sanic Run 1:
Running 1m test @ http://localhost:8000/db
  12 threads and 2000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   640.64ms  947.31ms   7.97s    85.89%
    Req/Sec   385.62    137.55     2.32k    77.21%
  274143 requests in 1.00m, 41.70MB read
  Socket errors: connect 0, read 2072, write 0, timeout 26
Requests/sec:   4563.11
Transfer/sec:    710.67KB

Sanic Run 2:
alexkotlar:~/projects/aiohttp-vs-sanic-vs-japronto:$ wrk -d 60 -c 2000 -t 12 --timeout 8 http://localhost:8000/db
Running 1m test @ http://localhost:8000/db
  12 threads and 2000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   615.91ms  878.25ms   7.86s    85.85%
    Req/Sec   391.30    118.76     1.61k    72.83%
  278943 requests in 1.00m, 42.46MB read
  Socket errors: connect 0, read 2079, write 0, timeout 12
Requests/sec:   4642.59
Transfer/sec:    723.58KB

Sanic Run 3 (very large background task spike in last 1-2s of run):
alexkotlar:~/projects/aiohttp-vs-sanic-vs-japronto:$ wrk -d 60 -c 2000 -t 12 --timeout 8 http://localhost:8000/db
Running 1m test @ http://localhost:8000/db
  12 threads and 2000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   543.65ms  839.00ms   7.93s    87.81%
    Req/Sec   392.47    118.69     1.42k    73.81%
  279206 requests in 1.00m, 42.54MB read
  Socket errors: connect 0, read 2101, write 0, timeout 35
Requests/sec:   4646.20
Transfer/sec:    724.97KB

Aiohttp Run 1:
alexkotlar:~/projects/aiohttp-vs-sanic-vs-japronto:$ wrk -d 60 -c 2000 -t 12 --timeout 8 http://localhost:8000/db
Running 1m test @ http://localhost:8000/db
  12 threads and 2000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   747.49ms    1.00s    7.88s    86.77%
    Req/Sec   280.95    103.65     1.60k    79.52%
  199147 requests in 1.00m, 36.47MB read
  Socket errors: connect 0, read 2058, write 1, timeout 45
Requests/sec:   3313.70
Transfer/sec:    621.36KB

Aiohttp Run 2:
Running 1m test @ http://localhost:8000/db
  12 threads and 2000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   696.00ms  967.04ms   7.93s    86.48%
    Req/Sec   289.87    115.90     1.90k    83.92%
  205188 requests in 1.00m, 37.54MB read
  Socket errors: connect 0, read 2041, write 0, timeout 38
Requests/sec:   3414.95
Transfer/sec:    639.84KB

Aiohttp Run 3:
Running 1m test @ http://localhost:8000/db
  12 threads and 2000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   670.88ms  898.81ms   7.89s    86.58%
    Req/Sec   318.17    108.06     1.47k    74.96%
  226300 requests in 1.00m, 41.34MB read
  Socket errors: connect 0, read 2053, write 0, timeout 19
Requests/sec:   3765.55
Transfer/sec:    704.34KB

I'm asking about the numbers in the readme. Your bench looks fine, but I'm trying to distinguish between 3 possibilities:

  1. Your run of the bench had some background task activity that was inconsistent, in favor of aiohttp during the Postgres run.
  2. Sanic 0.7 was just generating a large number of errors / really failed to serve responses in a timely fashion, and that this was corrected in 0.8+.
  3. The issue you saw is stochastic, and Sanic 0.8+ does not fix it. In that case, I don't want to use Sanic, because it isn't stable.

@samuelcolvin (Owner) commented Feb 7, 2019

(edited your comment to make it readable. Really helpful if you can take a second to make these things easily readable)

Afraid I don't have time to look into this right now, but I would:

  • try the same tests with sanic==0.7
  • check you're using the same version of aiohttp
  • read this, not sure if any or all of this is now fixed in sanic.

@akotlar commented Feb 7, 2019

> (edited your comment to make it readable. Really helpful if you can take a second to make these things easily readable)
>
> Afraid I don't have time to look into this right now, but I would:
>
>   • try the same tests with sanic==0.7
>   • check you're using the same version of aiohttp
>   • read this, not sure if any or all of this is now fixed in sanic.

Thanks. My request is that you try to re-run the test with Sanic >= 0.7. This is the first link that comes up when you Google "Sanic vs aiohttp". The results are either not reproducible, as suggested by @c1tt1's tests (assuming he was also using 0.7), or no longer relevant, as suggested by mine. In general, any time you see a dramatic reversal in the order of results for non-obvious reasons, you should be skeptical.

> read this, not sure if any or all of this is now fixed in sanic.

This is supposed to be fixed in 0.8.

@akotlar commented Feb 7, 2019

As a follow-up, I re-ran this with sanic==0.7 and aiohttp==3.4.0:

Running 1m test @ http://localhost:8000/db #sanic
  12 threads and 2000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   866.81ms    1.16s    8.00s    85.73%
    Req/Sec   350.21     95.11   790.00     69.88%
  250043 requests in 1.00m, 38.02MB read
  Socket errors: connect 0, read 198, write 3, timeout 204
Requests/sec:   4163.47
Transfer/sec:    648.24KB
Running 1m test @ http://localhost:8000/db #sanic run 2
  12 threads and 2000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   628.16ms  918.78ms   7.89s    86.06%
    Req/Sec   361.21    114.42     1.49k    77.14%
  255946 requests in 1.00m, 38.82MB read
  Socket errors: connect 0, read 2126, write 0, timeout 33
Requests/sec:   4258.64
Transfer/sec:    661.49KB
Running 1m test @ http://localhost:8000/db #sanic run 3
  12 threads and 2000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   692.16ms  982.13ms   7.94s    86.28%
    Req/Sec   346.24    102.01     1.58k    74.05%
  244206 requests in 1.00m, 37.25MB read
  Socket errors: connect 0, read 2093, write 0, timeout 40
Requests/sec:   4063.21
Transfer/sec:    634.74KB

---- aiohttp ----

Running 1m test @ http://localhost:8000/db #aiohttp
  12 threads and 2000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   851.56ms    1.27s    7.98s    85.64%
    Req/Sec   285.13    100.04     1.51k    79.29%
  201761 requests in 1.00m, 36.86MB read
  Socket errors: connect 0, read 2050, write 0, timeout 407
Requests/sec:   3357.16
Transfer/sec:    628.10KB
Running 1m test @ http://localhost:8000/db #aiohttp run 2
  12 threads and 2000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   884.92ms    1.28s    8.00s    85.73%
    Req/Sec   286.07     90.03     1.20k    74.28%
  202530 requests in 1.00m, 36.95MB read
  Socket errors: connect 0, read 2052, write 0, timeout 398
Requests/sec:   3370.01
Transfer/sec:    629.52KB
Running 1m test @ http://localhost:8000/db #aiohttp run 3
  12 threads and 2000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   842.50ms    1.23s    7.98s    85.58%
    Req/Sec   299.52     95.83     1.22k    73.82%
  211231 requests in 1.00m, 38.61MB read
  Socket errors: connect 0, read 2094, write 0, timeout 285
Requests/sec:   3514.60
Transfer/sec:    657.86KB

There is something deeply strange with your results.

@samuelcolvin (Owner) commented Feb 7, 2019

OK, looked into this and the problem was the file descriptor limit. Running ulimit -n 4096 first, I get:

wrk 130  102s ➤  wrk -d 60 -c 2000 -t 12 --timeout 8 http://127.0.0.1:8000/db  # aiohttp
Running 1m test @ http://127.0.0.1:8000/db
  12 threads and 2000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   884.92ms    1.50s    7.99s    85.18%
    Req/Sec   516.21    140.18     1.67k    70.28%
  369843 requests in 1.00m, 67.52MB read
  Socket errors: connect 0, read 0, write 0, timeout 2760
Requests/sec:   6156.76
Transfer/sec:      1.12MB
wrk 0 60.17s ➤  wrk -d 60 -c 2000 -t 12 --timeout 8 http://127.0.0.1:8000/db  # sanic
Running 1m test @ http://127.0.0.1:8000/db
  12 threads and 2000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   875.31ms    1.45s    8.00s    85.13%
    Req/Sec   699.99    180.35     2.34k    70.42%
  501335 requests in 1.00m, 76.10MB read
  Socket errors: connect 0, read 0, write 0, timeout 2129
Requests/sec:   8347.25
Transfer/sec:      1.27MB
wrk 0 60.18s ➤  

Which basically agrees with your numbers: Sanic is 20% to 35% quicker.

I'll update the benchmarks.
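
(The quoted range can be recovered from the Requests/sec figures posted in this thread; which runs pair up is my own reading of the comparison, so treat the exact percentages as illustrative:)

```python
# Sanic-vs-aiohttp throughput ratios, using Requests/sec figures posted above.
# Which runs map to the quoted "20% to 35%" is my own reading of the thread.
high = 8347.25 / 6156.76  # the two runs above, after raising the fd limit
low = 4258.64 / 3370.01   # sanic==0.7 run 2 vs aiohttp run 2, earlier comment

print(f"sanic faster by roughly {100 * (low - 1):.0f}% to {100 * (high - 1):.0f}%")
```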

@akotlar commented Feb 7, 2019

@samuelcolvin I really appreciate your time! That makes much more sense.

Also, it looks like you're using a machine whose CPU has much better IPC. I'm on a 2017 3.1GHz MacBook and have half the throughput!

@samuelcolvin (Owner) commented Feb 7, 2019

No problem, thanks for pointing this out.

There was a hint in the output: there were always 983 socket connect errors. Basically, it opened nearly 1024 connections, then couldn't open any more.
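
For what it's worth, the limit involved can be inspected (and raised, up to the hard limit) from inside the process itself; a minimal sketch using Python's stdlib resource module, Unix-only:

```python
import resource

# Per-process open-file-descriptor limit; the classic default soft limit
# of 1024 is what capped the benchmark server at roughly 1017 connections.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# Raise the soft limit toward 4096 without exceeding the hard limit --
# the in-process equivalent of running `ulimit -n 4096` in the shell.
target = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
```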

@akotlar commented Feb 7, 2019

Great catch.
