
How -R works and why I can't correlate the resulting Requests/sec with it #127

Open
remort opened this issue Jul 21, 2022 · 5 comments

@remort

remort commented Jul 21, 2022

I can't understand why the value passed to the -R parameter differs from the 'Requests/sec' figure in the resulting output.
Are only successful requests counted? If so, I should be able to find all the rest listed under the 'Socket errors:' section of the results.
As I understand it, -R ought to create a constant load, sending exactly as many requests per second as set with -R.
But the results look different.

@remort
Author

remort commented Aug 2, 2022

Anyone? How do you use -R?

@yukha-dw

yukha-dw commented Aug 5, 2022

I'm not sure if this is true, but as far as I understand, wrk2 operates in pulses every second and sends R requests per pulse. If a request takes longer than 1 second, it won't send another request.

Requests/sec itself should be an approximation of the number of requests made over the test duration.
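This is easy to sanity-check against the runs posted further down in this thread: dividing the reported total requests by the nominal 60-second duration roughly reproduces the reported Requests/sec (numbers below are taken from the -R 5000 run):

```shell
# "297454 requests in 1.00m" from the -R 5000 run below:
awk 'BEGIN { printf "%.2f\n", 297454 / 60 }'
# prints 4957.57, close to the reported 4957.04
# (the small gap is because the run lasted slightly over 60s)
```
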

@giltene
Owner

giltene commented Aug 5, 2022

@remort can you include an example of -R not matching Requests/sec? I can see that happening if -R was set high enough that the number of threads (-t) and/or connections (-c) could not sustain it…

@remort
Author

remort commented Aug 5, 2022

Just three measurements. The higher the load I request, the further the achieved RPS falls short of it. Yes, it looks as if the target host can't handle more connections, but shouldn't we see Socket errors if the server or client is overwhelmed with connections? Since there are no Socket errors, I assume we haven't reached the limits yet.

./wrk -t 8 -c 1000 -d 1m  'http://high' -R 5000 
Running 1m test @ http://high
  8 threads and 1000 connections
  Thread calibration: mean lat.: 11.929ms, rate sampling interval: 72ms
  Thread calibration: mean lat.: 11.099ms, rate sampling interval: 66ms
  Thread calibration: mean lat.: 40.153ms, rate sampling interval: 151ms
  Thread calibration: mean lat.: 13.325ms, rate sampling interval: 84ms
  Thread calibration: mean lat.: 43.297ms, rate sampling interval: 153ms
  Thread calibration: mean lat.: 13.755ms, rate sampling interval: 84ms
  Thread calibration: mean lat.: 13.159ms, rate sampling interval: 84ms
  Thread calibration: mean lat.: 55.567ms, rate sampling interval: 161ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    24.36ms   25.93ms 137.98ms   77.55%
    Req/Sec   628.59    134.88     0.99k    63.76%
  297454 requests in 1.00m, 82.27MB read
Requests/sec:   4957.04
Transfer/sec:      1.37MB
./wrk -t 8 -c 1000 -d 1m  'http://high' -R 10000
Running 1m test @ http://high
  8 threads and 1000 connections
  Thread calibration: mean lat.: 429.933ms, rate sampling interval: 2002ms
  Thread calibration: mean lat.: 445.679ms, rate sampling interval: 1976ms
  Thread calibration: mean lat.: 341.873ms, rate sampling interval: 2467ms
  Thread calibration: mean lat.: 71.994ms, rate sampling interval: 104ms
  Thread calibration: mean lat.: 407.248ms, rate sampling interval: 2224ms
  Thread calibration: mean lat.: 429.438ms, rate sampling interval: 2484ms
  Thread calibration: mean lat.: 387.566ms, rate sampling interval: 2228ms
  Thread calibration: mean lat.: 327.129ms, rate sampling interval: 2404ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.36s     3.44s   17.09s    86.45%
    Req/Sec     1.22k    79.38     1.94k    72.48%
  549017 requests in 1.00m, 151.84MB read
Requests/sec:   9149.88
Transfer/sec:      2.53MB
./wrk -t 8 -c 1000 -d 1m  'http://high' -R 50000
Running 1m test @ http://high
  8 threads and 1000 connections
  Thread calibration: mean lat.: 1967.326ms, rate sampling interval: 7909ms
  Thread calibration: mean lat.: 1959.484ms, rate sampling interval: 7864ms
  Thread calibration: mean lat.: 1205.163ms, rate sampling interval: 7516ms
  Thread calibration: mean lat.: 1876.303ms, rate sampling interval: 7532ms
  Thread calibration: mean lat.: 1948.730ms, rate sampling interval: 7790ms
  Thread calibration: mean lat.: 947.939ms, rate sampling interval: 6402ms
  Thread calibration: mean lat.: 1211.105ms, rate sampling interval: 6930ms
  Thread calibration: mean lat.: 1200.534ms, rate sampling interval: 7421ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    12.09s     8.86s   49.84s    67.39%
    Req/Sec     3.69k   339.06     4.37k    56.00%
  1744986 requests in 1.00m, 482.60MB read
Requests/sec:  29083.43
Transfer/sec:      8.04MB
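To make the trend explicit, here is the ratio of achieved to requested rate for the three runs above (numbers copied from the outputs):

```shell
# Achieved Requests/sec divided by the -R target, per run:
awk 'BEGIN { printf "%.2f %.2f %.2f\n",
             4957.04/5000, 9149.88/10000, 29083.43/50000 }'
# prints: 0.99 0.91 0.58
```
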

@giltene
Owner

giltene commented Aug 7, 2022

Try higher values for -c and -t, and see what happens. The attempted RPS is not limited by the server’s ability to serve requests, but the longer the server takes to respond, the more connections are needed to keep up the same rate.
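Little's law makes this concrete: the number of requests in flight equals rate × latency, and with at most one outstanding request per connection, the connection count bounds the achievable rate. A rough sketch using the ~2.36s mean latency reported for the -R 10000 run (assumed, for illustration, to approximate the per-request round-trip time):

```shell
# Little's law: in-flight requests = rate * latency.
# Sustaining 10000 req/s at ~2.36s per request needs roughly:
awk 'BEGIN { printf "%.0f connections\n", 10000 * 2.36 }'
# i.e. about 23600 connections -- far above the -c 1000 used,
# which is why the achieved rate falls short without socket errors.
```
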
