diff --git a/tests/results/dp-perf/1.6.0/1.6.0-oss.md b/tests/results/dp-perf/1.6.0/1.6.0-oss.md
new file mode 100644
index 0000000000..b74bf19770
--- /dev/null
+++ b/tests/results/dp-perf/1.6.0/1.6.0-oss.md
@@ -0,0 +1,92 @@
+# Results
+
+## Test environment
+
+NGINX Plus: false
+
+NGINX Gateway Fabric:
+
+- Commit: b61c61d3f9ca29c6eb93ce9b44e652c9a521b3a4
+- Date: 2025-01-13T16:47:24Z
+- Dirty: false
+
+GKE Cluster:
+
+- Node count: 12
+- k8s version: v1.30.6-gke.1596000
+- vCPUs per node: 16
+- RAM per node: 65853984Ki
+- Max pods per node: 110
+- Zone: us-west1-b
+- Instance Type: n2d-standard-16
+
+## Summary:
+
+- Performance stayed consistent with 1.5.0 results. Average latency slightly increased across all routing methods.
+- Errors are consistent with those seen in the previous results.
+
+## Test1: Running latte path based routing
+
+```text
+Requests [total, rate, throughput] 30000, 1000.04, 998.38
+Duration [total, attack, wait] 30s, 29.999s, 844.157µs
+Latencies [min, mean, 50, 90, 95, 99, max] 309.6µs, 718.534µs, 681.308µs, 786.633µs, 827.114µs, 971.115µs, 18.386ms
+Bytes In [total, mean] 4741568, 158.05
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 99.84%
+Status Codes [code:count] 200:29951 503:49
+Error Set:
+503 Service Temporarily Unavailable
+```
+
+## Test2: Running coffee header based routing
+
+```text
+Requests [total, rate, throughput] 30000, 1000.01, 999.98
+Duration [total, attack, wait] 30.001s, 30s, 708.18µs
+Latencies [min, mean, 50, 90, 95, 99, max] 519.443µs, 728.205µs, 716.283µs, 820.709µs, 859.918µs, 962.843µs, 6.974ms
+Bytes In [total, mean] 4770000, 159.00
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 100.00%
+Status Codes [code:count] 200:30000
+Error Set:
+```
+
+## Test3: Running coffee query based routing
+
+```text
+Requests [total, rate, throughput] 30000, 1000.02, 1000.00
+Duration [total, attack, wait] 30s, 29.999s, 746.102µs
+Latencies [min, mean, 50, 90, 95, 99, max] 533.22µs, 735.075µs, 722.549µs, 830.432µs, 871.714µs, 973.911µs, 6.9ms
+Bytes In [total, mean] 5010000, 167.00
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 100.00%
+Status Codes [code:count] 200:30000
+Error Set:
+```
+
+## Test4: Running tea GET method based routing
+
+```text
+Requests [total, rate, throughput] 30000, 1000.02, 1000.00
+Duration [total, attack, wait] 30s, 29.999s, 737.741µs
+Latencies [min, mean, 50, 90, 95, 99, max] 528.445µs, 724.715µs, 711.435µs, 816.76µs, 859.214µs, 967.474µs, 11.985ms
+Bytes In [total, mean] 4680000, 156.00
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 100.00%
+Status Codes [code:count] 200:30000
+Error Set:
+```
+
+## Test5: Running tea POST method based routing
+
+```text
+Requests [total, rate, throughput] 30000, 1000.03, 1000.01
+Duration [total, attack, wait] 30s, 29.999s, 643.191µs
+Latencies [min, mean, 50, 90, 95, 99, max] 538.368µs, 728.96µs, 714.974µs, 818.991µs, 860.142µs, 971.866µs, 11.543ms
+Bytes In [total, mean] 4680000, 156.00
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 100.00%
+Status Codes [code:count] 200:30000
+Error Set:
+```
diff --git a/tests/results/dp-perf/1.6.0/1.6.0-plus.md b/tests/results/dp-perf/1.6.0/1.6.0-plus.md
new file mode 100644
index 0000000000..0af8e72d12
--- /dev/null
+++ b/tests/results/dp-perf/1.6.0/1.6.0-plus.md
@@ -0,0 +1,90 @@
+# Results
+
+## Test environment
+
+NGINX Plus: true
+
+NGINX Gateway Fabric:
+
+- Commit: b61c61d3f9ca29c6eb93ce9b44e652c9a521b3a4
+- Date: 2025-01-13T16:47:24Z
+- Dirty: false
+
+GKE Cluster:
+
+- Node count: 12
+- k8s version: 
v1.30.6-gke.1596000 +- vCPUs per node: 16 +- RAM per node: 65853984Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- Performance stayed consistent with 1.5.0 results. Average latency slightly increased across all routing methods. + +## Test1: Running latte path based routing + +```text +Requests [total, rate, throughput] 30000, 1000.03, 1000.00 +Duration [total, attack, wait] 30s, 29.999s, 744.047µs +Latencies [min, mean, 50, 90, 95, 99, max] 535.49µs, 722.768µs, 702.708µs, 807.78µs, 849.575µs, 981.854µs, 21.041ms +Bytes In [total, mean] 4770000, 159.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test2: Running coffee header based routing + +```text +Requests [total, rate, throughput] 30000, 1000.01, 999.98 +Duration [total, attack, wait] 30s, 30s, 718.788µs +Latencies [min, mean, 50, 90, 95, 99, max] 558.587µs, 766.304µs, 750.921µs, 866.313µs, 907.422µs, 1.022ms, 10.872ms +Bytes In [total, mean] 4800000, 160.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test3: Running coffee query based routing + +```text +Requests [total, rate, throughput] 30000, 1000.02, 999.99 +Duration [total, attack, wait] 30s, 30s, 733.649µs +Latencies [min, mean, 50, 90, 95, 99, max] 572.624µs, 771.492µs, 758.449µs, 867.491µs, 907.997µs, 1.032ms, 11.906ms +Bytes In [total, mean] 5040000, 168.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test4: Running tea GET method based routing + +```text +Requests [total, rate, throughput] 30000, 1000.03, 1000.00 +Duration [total, attack, wait] 30s, 29.999s, 712.155µs +Latencies [min, mean, 50, 90, 95, 99, max] 549.224µs, 760.423µs, 746.75µs, 853.877µs, 894.554µs, 1.008ms, 8.12ms +Bytes In [total, mean] 4710000, 157.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test5: Running tea POST method based routing + +```text +Requests [total, rate, throughput] 30000, 1000.00, 999.98 +Duration [total, attack, wait] 30.001s, 30s, 778.666µs +Latencies [min, mean, 50, 90, 95, 99, max] 544.486µs, 762.077µs, 748.375µs, 852.722µs, 893.014µs, 1.009ms, 9.632ms +Bytes In [total, mean] 4710000, 157.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` diff --git a/tests/results/longevity/1.6.0/oss-cpu.png b/tests/results/longevity/1.6.0/oss-cpu.png new file mode 100644 index 0000000000..840f251d3a Binary files /dev/null and b/tests/results/longevity/1.6.0/oss-cpu.png differ diff --git a/tests/results/longevity/1.6.0/oss-memory.png b/tests/results/longevity/1.6.0/oss-memory.png new file mode 100644 index 0000000000..e3895348e7 Binary files /dev/null and b/tests/results/longevity/1.6.0/oss-memory.png differ diff --git a/tests/results/longevity/1.6.0/oss-ngf-memory.png b/tests/results/longevity/1.6.0/oss-ngf-memory.png new file mode 100644 index 0000000000..30fa7773a1 Binary files /dev/null and b/tests/results/longevity/1.6.0/oss-ngf-memory.png differ diff --git a/tests/results/longevity/1.6.0/oss-reload-time.png b/tests/results/longevity/1.6.0/oss-reload-time.png new file mode 100644 index 0000000000..5001dc0e8e Binary files /dev/null and b/tests/results/longevity/1.6.0/oss-reload-time.png differ diff --git a/tests/results/longevity/1.6.0/oss-reloads.png 
b/tests/results/longevity/1.6.0/oss-reloads.png
new file mode 100644
index 0000000000..8e8a62f9ad
Binary files /dev/null and b/tests/results/longevity/1.6.0/oss-reloads.png differ
diff --git a/tests/results/longevity/1.6.0/oss-stub-status.png b/tests/results/longevity/1.6.0/oss-stub-status.png
new file mode 100644
index 0000000000..c9900813d2
Binary files /dev/null and b/tests/results/longevity/1.6.0/oss-stub-status.png differ
diff --git a/tests/results/longevity/1.6.0/oss.md b/tests/results/longevity/1.6.0/oss.md
new file mode 100644
index 0000000000..8cc93cb36b
--- /dev/null
+++ b/tests/results/longevity/1.6.0/oss.md
@@ -0,0 +1,97 @@
+# Results
+
+## Test environment
+
+NGINX Plus: false
+
+NGINX Gateway Fabric:
+
+- Commit: 8be03e1fc5161a2b1bc0962fb0d8732114a9093d
+- Date: 2025-01-14T18:57:38Z
+- Dirty: true
+
+GKE Cluster:
+
+- Node count: 3
+- k8s version: v1.30.6-gke.1596000
+- vCPUs per node: 2
+- RAM per node: 4018128Ki
+- Max pods per node: 110
+- Zone: us-central1-c
+- Instance Type: e2-medium
+
+## Traffic
+
+HTTP:
+
+```text
+Running 5760m test @ http://cafe.example.com/coffee
+ 2 threads and 100 connections
+ Thread Stats Avg Stdev Max +/- Stdev
+ Latency 189.49ms 147.10ms 2.00s 78.44%
+ Req/Sec 293.54 193.84 1.95k 66.59%
+ 198532845 requests in 5760.00m, 67.91GB read
+ Socket errors: connect 0, read 309899, write 63, timeout 2396
+Requests/sec: 574.46
+Transfer/sec: 206.05KB
+```
+
+HTTPS:
+
+```text
+Running 5760m test @ https://cafe.example.com/tea
+ 2 threads and 100 connections
+ Thread Stats Avg Stdev Max +/- Stdev
+ Latency 179.59ms 121.50ms 1.99s 67.56%
+ Req/Sec 292.54 193.88 2.39k 66.47%
+ 197890521 requests in 5760.00m, 66.57GB read
+ Socket errors: connect 176, read 303560, write 0, timeout 7
+Requests/sec: 572.60
+Transfer/sec: 201.98KB
+```
+
+### Logs
+
+No error logs in nginx-gateway.
+
+No error logs in nginx.
+
+
+### Key Metrics
+
+#### Containers memory
+
+![oss-memory.png](oss-memory.png)
+
+#### NGF Container Memory
+
+![oss-ngf-memory.png](oss-ngf-memory.png)
+
+### Containers CPU
+
+![oss-cpu.png](oss-cpu.png)
+
+### NGINX metrics
+
+![oss-stub-status.png](oss-stub-status.png)
+
+### Reloads
+
+Rate of reloads - successful and errors:
+
+![oss-reloads.png](oss-reloads.png)
+
+Reload spikes correspond to the 1-hour periods of backend re-rollouts.
+
+No reloads finished with an error.
+
+Reload time distribution - counts:
+
+![oss-reload-time.png](oss-reload-time.png)
+
+
+## Comparison with previous results
+
+Graphs look similar to the 1.5.0 results. A few graphs swap colors between series, which makes comparison a little confusing.
+NGINX container memory decreased dramatically. The NGINX Stub Status graph is hard to interpret, which can make it seem
+quite different from the 1.5.0 results, but it is similar, only with an increase in requests.
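+
+For reference, the Traffic blocks above are standard wrk output. A run of the following shape (2 threads, 100
+connections, 5760-minute duration, matching the report headers above) would generate that load pattern; the exact
+harness invocation is an assumption, not the recorded command:
+
+```text
+# hypothetical reproduction sketch of the 4-day load above
+wrk -t2 -c100 -d5760m --latency http://cafe.example.com/coffee
+```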
diff --git a/tests/results/longevity/1.6.0/plus-cpu.png b/tests/results/longevity/1.6.0/plus-cpu.png
new file mode 100644
index 0000000000..93616b0e27
Binary files /dev/null and b/tests/results/longevity/1.6.0/plus-cpu.png differ
diff --git a/tests/results/longevity/1.6.0/plus-memory.png b/tests/results/longevity/1.6.0/plus-memory.png
new file mode 100644
index 0000000000..72c77e28dc
Binary files /dev/null and b/tests/results/longevity/1.6.0/plus-memory.png differ
diff --git a/tests/results/longevity/1.6.0/plus-ngf-memory.png b/tests/results/longevity/1.6.0/plus-ngf-memory.png
new file mode 100644
index 0000000000..b977c4e51a
Binary files /dev/null and b/tests/results/longevity/1.6.0/plus-ngf-memory.png differ
diff --git a/tests/results/longevity/1.6.0/plus-reload-time.png b/tests/results/longevity/1.6.0/plus-reload-time.png
new file mode 100644
index 0000000000..6209a397b0
Binary files /dev/null and b/tests/results/longevity/1.6.0/plus-reload-time.png differ
diff --git a/tests/results/longevity/1.6.0/plus-reloads.png b/tests/results/longevity/1.6.0/plus-reloads.png
new file mode 100644
index 0000000000..9e6f8d22a0
Binary files /dev/null and b/tests/results/longevity/1.6.0/plus-reloads.png differ
diff --git a/tests/results/longevity/1.6.0/plus-status.png b/tests/results/longevity/1.6.0/plus-status.png
new file mode 100644
index 0000000000..eaec2f62f1
Binary files /dev/null and b/tests/results/longevity/1.6.0/plus-status.png differ
diff --git a/tests/results/longevity/1.6.0/plus.md b/tests/results/longevity/1.6.0/plus.md
new file mode 100644
index 0000000000..bc74cbfc05
--- /dev/null
+++ b/tests/results/longevity/1.6.0/plus.md
@@ -0,0 +1,123 @@
+# Results
+
+## Test environment
+
+NGINX Plus: true
+
+NGINX Gateway Fabric:
+
+- Commit: 8be03e1fc5161a2b1bc0962fb0d8732114a9093d
+- Date: 2025-01-14T18:57:38Z
+- Dirty: true
+
+GKE Cluster:
+
+- Node count: 3
+- k8s version: v1.30.6-gke.1596000
+- vCPUs per node: 2
+- RAM per node: 4018128Ki
+- Max pods per node: 110
+- Zone: us-central1-c
+- Instance Type: e2-medium
+
+## Traffic
+
+HTTP:
+
+```text
+Running 5760m test @ http://cafe.example.com/coffee
+ 2 threads and 100 connections
+ Thread Stats Avg Stdev Max +/- Stdev
+ Latency 178.76ms 115.93ms 1.54s 65.67%
+ Req/Sec 298.56 193.44 2.46k 65.81%
+ 202236770 requests in 5760.00m, 69.39GB read
+ Socket errors: connect 0, read 68, write 118, timeout 4
+ Non-2xx or 3xx responses: 22514
+Requests/sec: 585.18
+Transfer/sec: 210.54KB
+```
+
+HTTPS:
+
+```text
+Running 5760m test @ https://cafe.example.com/tea
+ 2 threads and 100 connections
+ Thread Stats Avg Stdev Max +/- Stdev
+ Latency 178.97ms 115.95ms 1.45s 65.64%
+ Req/Sec 297.98 193.03 1.82k 65.83%
+ 201870214 requests in 5760.00m, 68.09GB read
+ Socket errors: connect 95, read 57, write 0, timeout 0
+ Non-2xx or 3xx responses: 6
+Requests/sec: 584.12
+Transfer/sec: 206.60KB
+```
+
+
+### Logs
+
+#### nginx-gateway
+
+```text
+error=pkg/mod/k8s.io/client-go@v0.32.0/tools/cache/reflector.go:251: Failed to watch *v1alpha1.ClientSettingsPolicy: clientsettingspolicies.gateway.nginx.org is forbidden: User "system:serviceaccount:nginx-gateway:ngf-longevity-nginx-gateway-fabric" cannot watch resource "clientsettingspolicies" in API group "gateway.nginx.org" at the cluster scope;level=error;logger=UnhandledError;msg=Unhandled Error;stacktrace=k8s.io/client-go/tools/cache.DefaultWatchErrorHandler
+ pkg/mod/k8s.io/client-go@v0.32.0/tools/cache/reflector.go:166
+k8s.io/client-go/tools/cache.(*Reflector).Run.func1
+ 
pkg/mod/k8s.io/client-go@v0.32.0/tools/cache/reflector.go:316
+k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
+ pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226
+k8s.io/apimachinery/pkg/util/wait.BackoffUntil
+ pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227
+k8s.io/client-go/tools/cache.(*Reflector).Run
+ pkg/mod/k8s.io/client-go@v0.32.0/tools/cache/reflector.go:314
+k8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2
+ pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:55
+k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1
+ pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:72;ts=2025-01-14T20:45:36Z
+```
+
+#### nginx
+
+```text
+2025/01/14 06:29:09 [error] 216#216: *345664926 no live upstreams while connecting to upstream, client: 10.128.0.34, server: cafe.example.com, request: "GET /coffee HTTP/1.1", upstream: "http://longevity_coffee_80/coffee", host: "cafe.example.com"
+
+10.128.0.34 - - [14/Jan/2025:06:29:09 +0000] "GET /coffee HTTP/1.1" 502 150 "-" "-"
+2025/01/14 06:29:09 [error] 216#216: *345664926 no live upstreams while connecting to upstream, client: 10.128.0.34, server: cafe.example.com, request: "GET /coffee HTTP/1.1", upstream: "http://longevity_coffee_80/coffee", host: "cafe.example.com"
+
+```
+
+### Key Metrics
+
+#### Containers memory
+
+![plus-memory.png](plus-memory.png)
+
+#### NGF Container Memory
+
+![plus-ngf-memory.png](plus-ngf-memory.png)
+
+### Containers CPU
+
+![plus-cpu.png](plus-cpu.png)
+
+### NGINX Plus metrics
+
+![plus-status.png](plus-status.png)
+
+### Reloads
+
+Rate of reloads - successful and errors:
+
+![plus-reloads.png](plus-reloads.png)
+
+Note: compared to OSS NGINX, there are not as many reloads here because NGF uses the NGINX Plus API to reconfigure NGINX
+for endpoint changes.
+
+Reload time distribution - counts:
+
+![plus-reload-time.png](plus-reload-time.png)
+
+## Comparison with previous results
+
+Graphs look similar to the 1.5.0 results. CPU usage increased slightly. There was a noticeable anomaly about two days in,
+where memory usage dipped heavily and so did the NGINX Plus status metrics; this could be a test error rather than a
+product error. There also appeared to be a reload event where past results had none. The NGINX errors differ from those in
+the previous results but are consistent with errors seen in the 1.5.0 test suite, and they did not coincide with the
+abnormalities on any of the graphs. The NGF error is something to keep an eye on.
diff --git a/tests/results/ngf-upgrade/1.6.0/1.6.0-oss.md b/tests/results/ngf-upgrade/1.6.0/1.6.0-oss.md
new file mode 100644
index 0000000000..e09e2c7ea4
--- /dev/null
+++ b/tests/results/ngf-upgrade/1.6.0/1.6.0-oss.md
@@ -0,0 +1,56 @@
+# Results
+
+## Test environment
+
+NGINX Plus: false
+
+NGINX Gateway Fabric:
+
+- Commit: b61c61d3f9ca29c6eb93ce9b44e652c9a521b3a4
+- Date: 2025-01-13T16:47:24Z
+- Dirty: false
+
+GKE Cluster:
+
+- Node count: 12
+- k8s version: v1.30.6-gke.1596000
+- vCPUs per node: 16
+- RAM per node: 65853984Ki
+- Max pods per node: 110
+- Zone: us-west1-b
+- Instance Type: n2d-standard-16
+
+## Summary:
+
+- Latency increased across both HTTPS and HTTP traffic.
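+
+The test blocks below are vegeta reports. A pipeline of the following shape (100 requests per second for 60
+seconds, matching the totals below) would produce such a report; the target URL and flags are illustrative
+assumptions, not the recorded command:
+
+```text
+# hypothetical reproduction sketch for the reports below
+echo "GET http://cafe.example.com/coffee" | vegeta attack -rate=100 -duration=60s | vegeta report
+```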
+ + +## Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 6000, 100.02, 100.02 +Duration [total, attack, wait] 59.991s, 59.99s, 532.496µs +Latencies [min, mean, 50, 90, 95, 99, max] 462.848µs, 904.959µs, 898.168µs, 1.053ms, 1.115ms, 1.268ms, 18.821ms +Bytes In [total, mean] 968026, 161.34 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:6000 +Error Set: +``` + +![http-oss.png](http-oss.png) + +## Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 6000, 100.02, 100.01 +Duration [total, attack, wait] 59.991s, 59.99s, 737.848µs +Latencies [min, mean, 50, 90, 95, 99, max] 705.036µs, 1.009ms, 973.41µs, 1.16ms, 1.217ms, 1.4ms, 15.604ms +Bytes In [total, mean] 932022, 155.34 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:6000 +Error Set: +``` + +![https-oss.png](https-oss.png) diff --git a/tests/results/ngf-upgrade/1.6.0/1.6.0-plus.md b/tests/results/ngf-upgrade/1.6.0/1.6.0-plus.md new file mode 100644 index 0000000000..fd3bfed126 --- /dev/null +++ b/tests/results/ngf-upgrade/1.6.0/1.6.0-plus.md @@ -0,0 +1,55 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: b61c61d3f9ca29c6eb93ce9b44e652c9a521b3a4 +- Date: 2025-01-13T16:47:24Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.30.6-gke.1596000 +- vCPUs per node: 16 +- RAM per node: 65853984Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- Performance stayed consistent with 1.5.0 results. + +## Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 6000, 100.02, 100.01 +Duration [total, attack, wait] 59.992s, 59.991s, 800.455µs +Latencies [min, mean, 50, 90, 95, 99, max] 608.736µs, 814.699µs, 794.88µs, 908.762µs, 953.288µs, 1.106ms, 9.306ms +Bytes In [total, mean] 967993, 161.33 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:6000 +Error Set: +``` + +![http-plus.png](http-plus.png) + +## Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 6000, 100.02, 100.01 +Duration [total, attack, wait] 59.992s, 59.991s, 880.798µs +Latencies [min, mean, 50, 90, 95, 99, max] 654.62µs, 940.714µs, 911.965µs, 1.074ms, 1.13ms, 1.359ms, 11.669ms +Bytes In [total, mean] 930000, 155.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:6000 +Error Set: +``` + +![https-plus.png](https-plus.png) diff --git a/tests/results/ngf-upgrade/1.6.0/http-oss.png b/tests/results/ngf-upgrade/1.6.0/http-oss.png new file mode 100644 index 0000000000..3762ead4c9 Binary files /dev/null and b/tests/results/ngf-upgrade/1.6.0/http-oss.png differ diff --git a/tests/results/ngf-upgrade/1.6.0/http-plus.png b/tests/results/ngf-upgrade/1.6.0/http-plus.png new file mode 100644 index 0000000000..028f8d66a0 Binary files /dev/null and b/tests/results/ngf-upgrade/1.6.0/http-plus.png differ diff --git a/tests/results/ngf-upgrade/1.6.0/https-oss.png b/tests/results/ngf-upgrade/1.6.0/https-oss.png new file mode 100644 index 0000000000..3762ead4c9 Binary files /dev/null and b/tests/results/ngf-upgrade/1.6.0/https-oss.png differ diff --git a/tests/results/ngf-upgrade/1.6.0/https-plus.png b/tests/results/ngf-upgrade/1.6.0/https-plus.png new file mode 100644 index 0000000000..028f8d66a0 Binary files /dev/null and b/tests/results/ngf-upgrade/1.6.0/https-plus.png differ diff --git a/tests/results/reconfig/1.6.0/1.6.0-oss.md 
b/tests/results/reconfig/1.6.0/1.6.0-oss.md new file mode 100644 index 0000000000..a29bb9bbb0 --- /dev/null +++ b/tests/results/reconfig/1.6.0/1.6.0-oss.md @@ -0,0 +1,210 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: b61c61d3f9ca29c6eb93ce9b44e652c9a521b3a4 +- Date: 2025-01-13T16:47:24Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.30.6-gke.1596000 +- vCPUs per node: 16 +- RAM per node: 65853984Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- Performance stayed consistent with 1.5.0 results. + +## Test 1: Resources exist before startup - NumResources 30 + +### Reloads and Time to Ready + +- TimeToReadyTotal: 2s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 2 +- NGINX Reload Average Time: 101ms +- Reload distribution: + - 500.0ms: 2 + - 1000.0ms: 2 + - 5000.0ms: 2 + - 10000.0ms: 2 + - 30000.0ms: 2 + - +Infms: 2 + +### Event Batch Processing + +- Event Batch Total: 6 +- Event Batch Processing Average Time: 53ms +- Event Batch Processing distribution: + - 500.0ms: 6 + - 1000.0ms: 6 + - 5000.0ms: 6 + - 10000.0ms: 6 + - 30000.0ms: 6 + - +Infms: 6 + +### NGINX Error Logs + + +## Test 1: Resources exist before startup - NumResources 150 + +### Reloads and Time to Ready + +- TimeToReadyTotal: 1s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 3 +- NGINX Reload Average Time: 134ms +- Reload distribution: + - 500.0ms: 3 + - 1000.0ms: 3 + - 5000.0ms: 3 + - 10000.0ms: 3 + - 30000.0ms: 3 + - +Infms: 3 + +### Event Batch Processing + +- Event Batch Total: 7 +- Event Batch Processing Average Time: 66ms +- Event Batch Processing distribution: + - 500.0ms: 7 + - 1000.0ms: 7 + - 5000.0ms: 7 + - 10000.0ms: 7 + - 30000.0ms: 7 + - +Infms: 7 + +### NGINX Error Logs + + +## Test 2: Start NGF, deploy Gateway, create many resources attached to GW - NumResources 30 + +### Reloads and Time to Ready + +- TimeToReadyTotal: 8s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 53 +- NGINX Reload Average Time: 149ms +- Reload distribution: + - 500.0ms: 53 + - 1000.0ms: 53 + - 5000.0ms: 53 + - 10000.0ms: 53 + - 30000.0ms: 53 + - +Infms: 53 + +### Event Batch Processing + +- Event Batch Total: 329 +- Event Batch Processing Average Time: 24ms +- Event Batch Processing distribution: + - 500.0ms: 329 + - 1000.0ms: 329 + - 5000.0ms: 329 + - 10000.0ms: 329 + - 30000.0ms: 329 + - +Infms: 329 + +### NGINX Error Logs + + +## Test 2: Start NGF, deploy Gateway, create many resources attached to GW - NumResources 150 + +### Reloads and Time to Ready + +- TimeToReadyTotal: 43s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 288 +- NGINX Reload Average Time: 150ms +- Reload distribution: + - 500.0ms: 288 + - 1000.0ms: 288 + - 5000.0ms: 288 + - 10000.0ms: 288 + - 30000.0ms: 288 + - +Infms: 288 + +### Event Batch Processing + +- Event Batch Total: 1641 +- Event Batch Processing Average Time: 26ms +- Event Batch Processing distribution: + - 500.0ms: 1641 + - 1000.0ms: 1641 + - 5000.0ms: 1641 + - 10000.0ms: 1641 + - 30000.0ms: 1641 + - +Infms: 1641 + +### NGINX Error Logs + + +## Test 3: Start NGF, create many resources attached to a Gateway, deploy the Gateway - NumResources 30 + +### Reloads and Time to Ready + +- TimeToReadyTotal: < 1s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 58 +- NGINX Reload Average Time: 138ms +- Reload distribution: + - 500.0ms: 58 + - 1000.0ms: 58 + - 5000.0ms: 58 + - 10000.0ms: 58 + - 30000.0ms: 58 + - +Infms: 58 + +### Event Batch Processing + +- Event Batch Total: 303 +- Event 
Batch Processing Average Time: 26ms +- Event Batch Processing distribution: + - 500.0ms: 303 + - 1000.0ms: 303 + - 5000.0ms: 303 + - 10000.0ms: 303 + - 30000.0ms: 303 + - +Infms: 303 + +### NGINX Error Logs + + +## Test 3: Start NGF, create many resources attached to a Gateway, deploy the Gateway - NumResources 150 + +### Reloads and Time to Ready + +- TimeToReadyTotal: < 1s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 328 +- NGINX Reload Average Time: 132ms +- Reload distribution: + - 500.0ms: 328 + - 1000.0ms: 328 + - 5000.0ms: 328 + - 10000.0ms: 328 + - 30000.0ms: 328 + - +Infms: 328 + +### Event Batch Processing + +- Event Batch Total: 1535 +- Event Batch Processing Average Time: 28ms +- Event Batch Processing distribution: + - 500.0ms: 1535 + - 1000.0ms: 1535 + - 5000.0ms: 1535 + - 10000.0ms: 1535 + - 30000.0ms: 1535 + - +Infms: 1535 + +### NGINX Error Logs diff --git a/tests/results/reconfig/1.6.0/1.6.0-plus.md b/tests/results/reconfig/1.6.0/1.6.0-plus.md new file mode 100644 index 0000000000..4f06bcd204 --- /dev/null +++ b/tests/results/reconfig/1.6.0/1.6.0-plus.md @@ -0,0 +1,210 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: b61c61d3f9ca29c6eb93ce9b44e652c9a521b3a4 +- Date: 2025-01-13T16:47:24Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.30.6-gke.1596000 +- vCPUs per node: 16 +- RAM per node: 65853984Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- Performance stayed consistent with 1.5.0 results. + +## Test 1: Resources exist before startup - NumResources 30 + +### Reloads and Time to Ready + +- TimeToReadyTotal: 2s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 2 +- NGINX Reload Average Time: 138ms +- Reload distribution: + - 500.0ms: 2 + - 1000.0ms: 2 + - 5000.0ms: 2 + - 10000.0ms: 2 + - 30000.0ms: 2 + - +Infms: 2 + +### Event Batch Processing + +- Event Batch Total: 6 +- Event Batch Processing Average Time: 61ms +- Event Batch Processing distribution: + - 500.0ms: 6 + - 1000.0ms: 6 + - 5000.0ms: 6 + - 10000.0ms: 6 + - 30000.0ms: 6 + - +Infms: 6 + +### NGINX Error Logs + + +## Test 1: Resources exist before startup - NumResources 150 + +### Reloads and Time to Ready + +- TimeToReadyTotal: 1s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 2 +- NGINX Reload Average Time: 125ms +- Reload distribution: + - 500.0ms: 2 + - 1000.0ms: 2 + - 5000.0ms: 2 + - 10000.0ms: 2 + - 30000.0ms: 2 + - +Infms: 2 + +### Event Batch Processing + +- Event Batch Total: 6 +- Event Batch Processing Average Time: 59ms +- Event Batch Processing distribution: + - 500.0ms: 6 + - 1000.0ms: 6 + - 5000.0ms: 6 + - 10000.0ms: 6 + - 30000.0ms: 6 + - +Infms: 6 + +### NGINX Error Logs + + +## Test 2: Start NGF, deploy Gateway, create many resources attached to GW - NumResources 30 + +### Reloads and Time to Ready + +- TimeToReadyTotal: 8s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 46 +- NGINX Reload Average Time: 153ms +- Reload distribution: + - 500.0ms: 46 + - 1000.0ms: 46 + - 5000.0ms: 46 + - 10000.0ms: 46 + - 30000.0ms: 46 + - +Infms: 46 + +### Event Batch Processing + +- Event Batch Total: 322 +- Event Batch Processing Average Time: 25ms +- Event Batch Processing distribution: + - 500.0ms: 322 + - 1000.0ms: 322 + - 5000.0ms: 322 + - 10000.0ms: 322 + - 30000.0ms: 322 + - +Infms: 322 + +### NGINX Error Logs + + +## Test 2: Start NGF, deploy Gateway, create many resources attached to GW - NumResources 150 + +### Reloads and Time to Ready + +- TimeToReadyTotal: 44s +- 
TimeToReadyAvgSingle: < 1s
+- NGINX Reloads: 246
+- NGINX Reload Average Time: 152ms
+- Reload distribution:
+  - 500.0ms: 246
+  - 1000.0ms: 246
+  - 5000.0ms: 246
+  - 10000.0ms: 246
+  - 30000.0ms: 246
+  - +Infms: 246
+
+### Event Batch Processing
+
+- Event Batch Total: 1595
+- Event Batch Processing Average Time: 27ms
+- Event Batch Processing distribution:
+  - 500.0ms: 1595
+  - 1000.0ms: 1595
+  - 5000.0ms: 1595
+  - 10000.0ms: 1595
+  - 30000.0ms: 1595
+  - +Infms: 1595
+
+### NGINX Error Logs
+
+
+## Test 3: Start NGF, create many resources attached to a Gateway, deploy the Gateway - NumResources 30
+
+### Reloads and Time to Ready
+
+- TimeToReadyTotal: < 1s
+- TimeToReadyAvgSingle: < 1s
+- NGINX Reloads: 46
+- NGINX Reload Average Time: 150ms
+- Reload distribution:
+  - 500.0ms: 46
+  - 1000.0ms: 46
+  - 5000.0ms: 46
+  - 10000.0ms: 46
+  - 30000.0ms: 46
+  - +Infms: 46
+
+### Event Batch Processing
+
+- Event Batch Total: 286
+- Event Batch Processing Average Time: 29ms
+- Event Batch Processing distribution:
+  - 500.0ms: 286
+  - 1000.0ms: 286
+  - 5000.0ms: 286
+  - 10000.0ms: 286
+  - 30000.0ms: 286
+  - +Infms: 286
+
+### NGINX Error Logs
+
+
+## Test 3: Start NGF, create many resources attached to a Gateway, deploy the Gateway - NumResources 150
+
+### Reloads and Time to Ready
+
+- TimeToReadyTotal: 1s
+- TimeToReadyAvgSingle: < 1s
+- NGINX Reloads: 241
+- NGINX Reload Average Time: 151ms
+- Reload distribution:
+  - 500.0ms: 241
+  - 1000.0ms: 241
+  - 5000.0ms: 241
+  - 10000.0ms: 241
+  - 30000.0ms: 241
+  - +Infms: 241
+
+### Event Batch Processing
+
+- Event Batch Total: 1458
+- Event Batch Processing Average Time: 30ms
+- Event Batch Processing distribution:
+  - 500.0ms: 1458
+  - 1000.0ms: 1458
+  - 5000.0ms: 1458
+  - 10000.0ms: 1458
+  - 30000.0ms: 1458
+  - +Infms: 1458
+
+### NGINX Error Logs
diff --git a/tests/results/scale/1.6.0/1.6.0-oss.md b/tests/results/scale/1.6.0/1.6.0-oss.md
new file mode 100644
index 0000000000..7cb5d424f0
--- /dev/null
+++ b/tests/results/scale/1.6.0/1.6.0-oss.md
@@ -0,0 +1,206 @@
+# Results
+
+## Test environment
+
+NGINX Plus: false
+
+NGINX Gateway Fabric:
+
+- Commit: b61c61d3f9ca29c6eb93ce9b44e652c9a521b3a4
+- Date: 2025-01-13T16:47:24Z
+- Dirty: false
+
+GKE Cluster:
+
+- Node count: 12
+- k8s version: v1.30.6-gke.1596000
+- vCPUs per node: 16
+- RAM per node: 65853984Ki
+- Max pods per node: 110
+- Zone: us-west1-b
+- Instance Type: n2d-standard-16
+
+## Summary:
+
+- Performance improved: average reload time and event batch processing time decreased across all test cases.
+- Errors are consistent with those seen in the previous results.
+
+## Test TestScale_Listeners
+
+### Reloads
+
+- Total: 127
+- Total Errors: 0
+- Average Time: 222ms
+- Reload distribution:
+  - 500.0ms: 127
+  - 1000.0ms: 127
+  - 5000.0ms: 127
+  - 10000.0ms: 127
+  - 30000.0ms: 127
+  - +Infms: 127
+
+### Event Batch Processing
+
+- Total: 386
+- Average Time: 150ms
+- Event Batch Processing distribution:
+  - 500.0ms: 339
+  - 1000.0ms: 384
+  - 5000.0ms: 386
+  - 10000.0ms: 386
+  - 30000.0ms: 386
+  - +Infms: 386
+
+### Errors
+
+- NGF errors: 1
+- NGF container restarts: 0
+- NGINX errors: 0
+- NGINX container restarts: 0
+
+### Graphs and Logs
+
+See [output directory](./TestScale_Listeners) for more details.
+The logs are attached only if there are errors.
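+
+The reload counts, averages, and bucketed distributions in these sections come from NGF's Prometheus histograms
+(the average is the histogram sum divided by its count). A check of the following shape would pull the raw
+series; the pod name and metrics port are assumptions, not values recorded in this run:
+
+```text
+# hypothetical sketch: scrape the NGF metrics endpoint and filter the reload series
+kubectl exec -n nginx-gateway ngf-pod-name -- curl -s http://127.0.0.1:9113/metrics | grep reload
+```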
+ +## Test TestScale_HTTPSListeners + +### Reloads + +- Total: 128 +- Total Errors: 0 +- Average Time: 244ms +- Reload distribution: + - 500.0ms: 128 + - 1000.0ms: 128 + - 5000.0ms: 128 + - 10000.0ms: 128 + - 30000.0ms: 128 + - +Infms: 128 + +### Event Batch Processing + +- Total: 451 +- Average Time: 141ms +- Event Batch Processing distribution: + - 500.0ms: 394 + - 1000.0ms: 449 + - 5000.0ms: 451 + - 10000.0ms: 451 + - 30000.0ms: 451 + - +Infms: 451 + +### Errors + +- NGF errors: 0 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_HTTPSListeners) for more details. +The logs are attached only if there are errors. + +## Test TestScale_HTTPRoutes + +### Reloads + +- Total: 1001 +- Total Errors: 0 +- Average Time: 1493ms +- Reload distribution: + - 500.0ms: 138 + - 1000.0ms: 327 + - 5000.0ms: 1001 + - 10000.0ms: 1001 + - 30000.0ms: 1001 + - +Infms: 1001 + +### Event Batch Processing + +- Total: 1007 +- Average Time: 1575ms +- Event Batch Processing distribution: + - 500.0ms: 131 + - 1000.0ms: 308 + - 5000.0ms: 1007 + - 10000.0ms: 1007 + - 30000.0ms: 1007 + - +Infms: 1007 + +### Errors + +- NGF errors: 0 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_HTTPRoutes) for more details. +The logs are attached only if there are errors. + +## Test TestScale_UpstreamServers + +### Reloads + +- Total: 94 +- Total Errors: 0 +- Average Time: 150ms +- Reload distribution: + - 500.0ms: 94 + - 1000.0ms: 94 + - 5000.0ms: 94 + - 10000.0ms: 94 + - 30000.0ms: 94 + - +Infms: 94 + +### Event Batch Processing + +- Total: 97 +- Average Time: 147ms +- Event Batch Processing distribution: + - 500.0ms: 97 + - 1000.0ms: 97 + - 5000.0ms: 97 + - 10000.0ms: 97 + - 30000.0ms: 97 + - +Infms: 97 + +### Errors + +- NGF errors: 2 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_UpstreamServers) for more details. +The logs are attached only if there are errors. 
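+
+The NGF error counts in the Errors sections are tallies of error-level lines in the container logs. A check of
+the following shape would reproduce such a count; the namespace, pod, and container names are assumptions:
+
+```text
+# hypothetical sketch: count error-level entries in the NGF container log
+kubectl logs -n nginx-gateway ngf-pod-name -c nginx-gateway | grep -c '"level":"error"'
+```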
+
+## Test TestScale_HTTPMatches
+
+```text
+Requests [total, rate, throughput] 30000, 1000.03, 998.18
+Duration [total, attack, wait] 30s, 29.999s, 659.549µs
+Latencies [min, mean, 50, 90, 95, 99, max] 356.253µs, 792.759µs, 762.379µs, 882.598µs, 930.755µs, 1.086ms, 13.881ms
+Bytes In [total, mean] 4801650, 160.06
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 99.82%
+Status Codes [code:count] 200:29945 503:55
+Error Set:
+503 Service Temporarily Unavailable
+```
+```text
+Requests [total, rate, throughput] 30000, 1000.04, 1000.01
+Duration [total, attack, wait] 30s, 29.999s, 926.018µs
+Latencies [min, mean, 50, 90, 95, 99, max] 625.417µs, 865.972µs, 846.518µs, 991.131µs, 1.047ms, 1.17ms, 12.133ms
+Bytes In [total, mean] 4800000, 160.00
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 100.00%
+Status Codes [code:count] 200:30000
+Error Set:
+```
diff --git a/tests/results/scale/1.6.0/1.6.0-plus.md b/tests/results/scale/1.6.0/1.6.0-plus.md
new file mode 100644
index 0000000000..bb3f92f7c1
--- /dev/null
+++ b/tests/results/scale/1.6.0/1.6.0-plus.md
@@ -0,0 +1,208 @@
+# Results
+
+## Test environment
+
+NGINX Plus: true
+
+NGINX Gateway Fabric:
+
+- Commit: b61c61d3f9ca29c6eb93ce9b44e652c9a521b3a4
+- Date: 2025-01-13T16:47:24Z
+- Dirty: false
+
+GKE Cluster:
+
+- Node count: 12
+- k8s version: v1.30.6-gke.1596000
+- vCPUs per node: 16
+- RAM per node: 65853984Ki
+- Max pods per node: 110
+- Zone: us-west1-b
+- Instance Type: n2d-standard-16
+
+## Summary:
+
+- Performance is consistent with 1.5.0 results, except for a large increase in NGF and NGINX errors in the
+  Scale Listeners and Scale HTTPS Listeners test cases.
+- Errors in the Scale Upstream Servers test case are expected and minor.
+- Errors in the Scale Listeners test case are expected and minor.
+- Errors in the Scale HTTPS Listeners test case have not been seen in previous results and could be concerning.
+
+## Test TestScale_Listeners
+
+### Reloads
+
+- Total: 128
+- Total Errors: 0
+- Average Time: 241ms
+- Reload distribution:
+  - 500.0ms: 128
+  - 1000.0ms: 128
+  - 5000.0ms: 128
+  - 10000.0ms: 128
+  - 30000.0ms: 128
+  - +Infms: 128
+
+### Event Batch Processing
+
+- Total: 387
+- Average Time: 167ms
+- Event Batch Processing distribution:
+  - 500.0ms: 332
+  - 1000.0ms: 385
+  - 5000.0ms: 387
+  - 10000.0ms: 387
+  - 30000.0ms: 387
+  - +Infms: 387
+
+### Errors
+
+- NGF errors: 2
+- NGF container restarts: 0
+- NGINX errors: 16
+- NGINX container restarts: 0
+
+### Graphs and Logs
+
+See [output directory](./TestScale_Listeners) for more details.
+The logs are attached only if there are errors.
+
+## Test TestScale_HTTPSListeners
+
+### Reloads
+
+- Total: 128
+- Total Errors: 0
+- Average Time: 258ms
+- Reload distribution:
+  - 500.0ms: 128
+  - 1000.0ms: 128
+  - 5000.0ms: 128
+  - 10000.0ms: 128
+  - 30000.0ms: 128
+  - +Infms: 128
+
+### Event Batch Processing
+
+- Total: 451
+- Average Time: 153ms
+- Event Batch Processing distribution:
+  - 500.0ms: 389
+  - 1000.0ms: 445
+  - 5000.0ms: 451
+  - 10000.0ms: 451
+  - 30000.0ms: 451
+  - +Infms: 451
+
+### Errors
+
+- NGF errors: 3
+- NGF container restarts: 0
+- NGINX errors: 13
+- NGINX container restarts: 0
+
+### Graphs and Logs
+
+See [output directory](./TestScale_HTTPSListeners) for more details.
+The logs are attached only if there are errors.
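+
+The NGINX errors counted above are "no live upstreams" entries (see the attached nginx-plus.log). Since this is
+an NGINX Plus deployment, upstream state can be inspected directly through the NGINX Plus API when investigating
+such errors; the port and API version here are assumptions, not the deployment's actual values:
+
+```text
+# hypothetical sketch: list upstream state from inside the NGINX container
+curl -s http://127.0.0.1:8765/api/9/http/upstreams
+```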
+ +## Test TestScale_HTTPRoutes + +### Reloads + +- Total: 1001 +- Total Errors: 0 +- Average Time: 1474ms +- Reload distribution: + - 500.0ms: 139 + - 1000.0ms: 334 + - 5000.0ms: 1001 + - 10000.0ms: 1001 + - 30000.0ms: 1001 + - +Infms: 1001 + +### Event Batch Processing + +- Total: 1008 +- Average Time: 1594ms +- Event Batch Processing distribution: + - 500.0ms: 120 + - 1000.0ms: 299 + - 5000.0ms: 1008 + - 10000.0ms: 1008 + - 30000.0ms: 1008 + - +Infms: 1008 + +### Errors + +- NGF errors: 0 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_HTTPRoutes) for more details. +The logs are attached only if there are errors. + +## Test TestScale_UpstreamServers + +### Reloads + +- Total: 2 +- Total Errors: 0 +- Average Time: 151ms +- Reload distribution: + - 500.0ms: 2 + - 1000.0ms: 2 + - 5000.0ms: 2 + - 10000.0ms: 2 + - 30000.0ms: 2 + - +Infms: 2 + +### Event Batch Processing + +- Total: 96 +- Average Time: 245ms +- Event Batch Processing distribution: + - 500.0ms: 94 + - 1000.0ms: 96 + - 5000.0ms: 96 + - 10000.0ms: 96 + - 30000.0ms: 96 + - +Infms: 96 + +### Errors + +- NGF errors: 2 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_UpstreamServers) for more details. +The logs are attached only if there are errors. + +## Test TestScale_HTTPMatches + +```text +Requests [total, rate, throughput] 30000, 1000.01, 999.98 +Duration [total, attack, wait] 30.001s, 30s, 810.806µs +Latencies [min, mean, 50, 90, 95, 99, max] 551.972µs, 787.907µs, 754.116µs, 890.196µs, 954.704µs, 1.121ms, 12.28ms +Bytes In [total, mean] 4800000, 160.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` +```text +Requests [total, rate, throughput] 30000, 1000.01, 999.98 +Duration [total, attack, wait] 30.001s, 30s, 831.434µs +Latencies [min, mean, 50, 90, 95, 99, max] 648.624µs, 876.41µs, 851.428µs, 1.018ms, 1.091ms, 1.221ms, 9.496ms +Bytes In [total, mean] 4800000, 160.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` diff --git a/tests/results/scale/1.6.0/TestScale_HTTPRoutes/cpu-oss.png b/tests/results/scale/1.6.0/TestScale_HTTPRoutes/cpu-oss.png new file mode 100644 index 0000000000..74132e31ea Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_HTTPRoutes/cpu-oss.png differ diff --git a/tests/results/scale/1.6.0/TestScale_HTTPRoutes/cpu-plus.png b/tests/results/scale/1.6.0/TestScale_HTTPRoutes/cpu-plus.png new file mode 100644 index 0000000000..b2e864780a Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_HTTPRoutes/cpu-plus.png differ diff --git a/tests/results/scale/1.6.0/TestScale_HTTPRoutes/memory-oss.png b/tests/results/scale/1.6.0/TestScale_HTTPRoutes/memory-oss.png new file mode 100644 index 0000000000..3756714875 Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_HTTPRoutes/memory-oss.png differ diff --git a/tests/results/scale/1.6.0/TestScale_HTTPRoutes/memory-plus.png b/tests/results/scale/1.6.0/TestScale_HTTPRoutes/memory-plus.png new file mode 100644 index 0000000000..665b171ff1 Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_HTTPRoutes/memory-plus.png differ diff --git a/tests/results/scale/1.6.0/TestScale_HTTPRoutes/ttr-oss.png b/tests/results/scale/1.6.0/TestScale_HTTPRoutes/ttr-oss.png new file mode 100644 index 0000000000..196c827798 Binary files 
/dev/null and b/tests/results/scale/1.6.0/TestScale_HTTPRoutes/ttr-oss.png differ diff --git a/tests/results/scale/1.6.0/TestScale_HTTPRoutes/ttr-plus.png b/tests/results/scale/1.6.0/TestScale_HTTPRoutes/ttr-plus.png new file mode 100644 index 0000000000..93c0c78e9a Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_HTTPRoutes/ttr-plus.png differ diff --git a/tests/results/scale/1.6.0/TestScale_HTTPSListeners/cpu-oss.png b/tests/results/scale/1.6.0/TestScale_HTTPSListeners/cpu-oss.png new file mode 100644 index 0000000000..f7d00ddf53 Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_HTTPSListeners/cpu-oss.png differ diff --git a/tests/results/scale/1.6.0/TestScale_HTTPSListeners/cpu-plus.png b/tests/results/scale/1.6.0/TestScale_HTTPSListeners/cpu-plus.png new file mode 100644 index 0000000000..4cb84276cc Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_HTTPSListeners/cpu-plus.png differ diff --git a/tests/results/scale/1.6.0/TestScale_HTTPSListeners/memory-oss.png b/tests/results/scale/1.6.0/TestScale_HTTPSListeners/memory-oss.png new file mode 100644 index 0000000000..8f921a08c5 Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_HTTPSListeners/memory-oss.png differ diff --git a/tests/results/scale/1.6.0/TestScale_HTTPSListeners/memory-plus.png b/tests/results/scale/1.6.0/TestScale_HTTPSListeners/memory-plus.png new file mode 100644 index 0000000000..f4683ff10c Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_HTTPSListeners/memory-plus.png differ diff --git a/tests/results/scale/1.6.0/TestScale_HTTPSListeners/ngf-plus.log b/tests/results/scale/1.6.0/TestScale_HTTPSListeners/ngf-plus.log new file mode 100644 index 0000000000..e1324c6730 --- /dev/null +++ b/tests/results/scale/1.6.0/TestScale_HTTPSListeners/ngf-plus.log @@ -0,0 +1,3 @@ +{"level":"debug","ts":"2025-01-13T22:51:14Z","logger":"controller-runtime.healthz","msg":"healthz check failed","checker":"readyz","error":"nginx has not yet become ready to accept traffic"} +{"level":"debug","ts":"2025-01-13T22:51:14Z","logger":"controller-runtime.healthz","msg":"healthz check failed","checker":"readyz","error":"nginx has not yet become ready to accept traffic"} +{"level":"debug","ts":"2025-01-13T22:51:15Z","logger":"controller-runtime.healthz","msg":"healthz check failed","checker":"readyz","error":"nginx has not yet become ready to accept traffic"} diff --git a/tests/results/scale/1.6.0/TestScale_HTTPSListeners/nginx-plus.log b/tests/results/scale/1.6.0/TestScale_HTTPSListeners/nginx-plus.log new file mode 100644 index 0000000000..9c0cd33180 --- /dev/null +++ b/tests/results/scale/1.6.0/TestScale_HTTPSListeners/nginx-plus.log @@ -0,0 +1,13 @@ +2025/01/13 22:52:26 [error] 704#704: *231 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 17.example.com, request: "GET / HTTP/2.0", upstream: "http://scale_backend-17_80/", host: "17.example.com" +2025/01/13 22:52:27 [error] 738#738: *243 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 18.example.com, request: "GET / HTTP/2.0", upstream: "http://scale_backend-18_80/", host: "18.example.com" +2025/01/13 22:52:32 [error] 909#909: *316 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 23.example.com, request: "GET / HTTP/2.0", upstream: "http://scale_backend-23_80/", host: "23.example.com" +2025/01/13 22:52:35 [error] 1010#1010: *361 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 26.example.com, request: "GET / 
HTTP/2.0", upstream: "http://scale_backend-26_80/", host: "26.example.com" +2025/01/13 22:52:36 [error] 1044#1044: *377 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 27.example.com, request: "GET / HTTP/2.0", upstream: "http://scale_backend-27_80/", host: "27.example.com" +2025/01/13 22:52:42 [error] 1180#1180: *441 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 31.example.com, request: "GET / HTTP/2.0", upstream: "http://scale_backend-31_80/", host: "31.example.com" +2025/01/13 22:52:52 [error] 1418#1418: *551 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 38.example.com, request: "GET / HTTP/2.0", upstream: "http://scale_backend-38_80/", host: "38.example.com" +2025/01/13 22:53:01 [error] 1589#1589: *633 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 43.example.com, request: "GET / HTTP/2.0", upstream: "http://scale_backend-43_80/", host: "43.example.com" +2025/01/13 22:53:03 [error] 1622#1622: *648 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 44.example.com, request: "GET / HTTP/2.0", upstream: "http://scale_backend-44_80/", host: "44.example.com" +2025/01/13 22:53:05 [error] 1656#1656: *669 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 45.example.com, request: "GET / HTTP/2.0", upstream: "http://scale_backend-45_80/", host: "45.example.com" +2025/01/13 22:53:13 [error] 1792#1792: *744 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 49.example.com, request: "GET / HTTP/2.0", upstream: "http://scale_backend-49_80/", host: "49.example.com" +2025/01/13 22:53:17 [error] 1860#1860: *777 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 51.example.com, request: "GET / HTTP/2.0", upstream: "http://scale_backend-51_80/", host: "51.example.com" +2025/01/13 22:53:43 [error] 2234#2234: *999 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 62.example.com, request: "GET / HTTP/2.0", upstream: "http://scale_backend-62_80/", host: "62.example.com" diff --git a/tests/results/scale/1.6.0/TestScale_HTTPSListeners/ttr-oss.png b/tests/results/scale/1.6.0/TestScale_HTTPSListeners/ttr-oss.png new file mode 100644 index 0000000000..cd2f1f61f9 Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_HTTPSListeners/ttr-oss.png differ diff --git a/tests/results/scale/1.6.0/TestScale_HTTPSListeners/ttr-plus.png b/tests/results/scale/1.6.0/TestScale_HTTPSListeners/ttr-plus.png new file mode 100644 index 0000000000..527f1950af Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_HTTPSListeners/ttr-plus.png differ diff --git a/tests/results/scale/1.6.0/TestScale_Listeners/cpu-oss.png b/tests/results/scale/1.6.0/TestScale_Listeners/cpu-oss.png new file mode 100644 index 0000000000..c0b655a968 Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_Listeners/cpu-oss.png differ diff --git a/tests/results/scale/1.6.0/TestScale_Listeners/cpu-plus.png b/tests/results/scale/1.6.0/TestScale_Listeners/cpu-plus.png new file mode 100644 index 0000000000..c4ce86f14b Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_Listeners/cpu-plus.png differ diff --git a/tests/results/scale/1.6.0/TestScale_Listeners/memory-oss.png b/tests/results/scale/1.6.0/TestScale_Listeners/memory-oss.png new file mode 100644 index 0000000000..c8511349b3 Binary files /dev/null and 
b/tests/results/scale/1.6.0/TestScale_Listeners/memory-oss.png differ diff --git a/tests/results/scale/1.6.0/TestScale_Listeners/memory-plus.png b/tests/results/scale/1.6.0/TestScale_Listeners/memory-plus.png new file mode 100644 index 0000000000..d6ebd3f6bb Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_Listeners/memory-plus.png differ diff --git a/tests/results/scale/1.6.0/TestScale_Listeners/ngf-oss.log b/tests/results/scale/1.6.0/TestScale_Listeners/ngf-oss.log new file mode 100644 index 0000000000..56794bbb0c --- /dev/null +++ b/tests/results/scale/1.6.0/TestScale_Listeners/ngf-oss.log @@ -0,0 +1 @@ +{"level":"debug","ts":"2025-01-13T22:48:04Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} diff --git a/tests/results/scale/1.6.0/TestScale_Listeners/ngf-plus.log b/tests/results/scale/1.6.0/TestScale_Listeners/ngf-plus.log new file mode 100644 index 0000000000..1fffe643a2 --- /dev/null +++ b/tests/results/scale/1.6.0/TestScale_Listeners/ngf-plus.log @@ -0,0 +1,2 @@ +{"level":"debug","ts":"2025-01-13T22:47:10Z","logger":"controller-runtime.healthz","msg":"healthz check failed","checker":"readyz","error":"nginx has not yet become ready to accept traffic"} +{"level":"debug","ts":"2025-01-13T22:47:10Z","logger":"controller-runtime.healthz","msg":"healthz check failed","checker":"readyz","error":"nginx has not yet become ready to accept traffic"} diff --git a/tests/results/scale/1.6.0/TestScale_Listeners/nginx-plus.log b/tests/results/scale/1.6.0/TestScale_Listeners/nginx-plus.log new file mode 100644 index 0000000000..19f2952764 --- /dev/null +++ b/tests/results/scale/1.6.0/TestScale_Listeners/nginx-plus.log @@ -0,0 +1,16 @@ +2025/01/13 22:48:16 [error] 398#398: *111 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 8.example.com, request: "GET / HTTP/1.1", upstream: "http://scale_backend-8_80/", host: "8.example.com" +2025/01/13 22:48:16 [error] 432#432: *126 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 9.example.com, request: "GET / HTTP/1.1", upstream: "http://scale_backend-9_80/", host: "9.example.com" +2025/01/13 22:48:23 [error] 738#738: *246 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 18.example.com, request: "GET / HTTP/1.1", upstream: "http://scale_backend-18_80/", host: "18.example.com" +2025/01/13 22:48:25 [error] 807#807: *274 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 20.example.com, request: "GET / HTTP/1.1", upstream: "http://scale_backend-20_80/", host: "20.example.com" +2025/01/13 22:48:26 [error] 875#875: *300 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 22.example.com, request: "GET / HTTP/1.1", upstream: "http://scale_backend-22_80/", host: "22.example.com" +2025/01/13 22:48:34 [error] 1113#1113: *405 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 29.example.com, request: "GET / HTTP/1.1", upstream: "http://scale_backend-29_80/", host: "29.example.com" +2025/01/13 22:48:37 [error] 1215#1215: *454 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 32.example.com, request: "GET / HTTP/1.1", upstream: "http://scale_backend-32_80/", host: "32.example.com" +2025/01/13 22:48:42 [error] 
1317#1317: *516 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 35.example.com, request: "GET / HTTP/1.1", upstream: "http://scale_backend-35_80/", host: "35.example.com" +2025/01/13 22:48:45 [error] 1385#1385: *547 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 37.example.com, request: "GET / HTTP/1.1", upstream: "http://scale_backend-37_80/", host: "37.example.com" +2025/01/13 22:48:48 [error] 1453#1453: *577 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 39.example.com, request: "GET / HTTP/1.1", upstream: "http://scale_backend-39_80/", host: "39.example.com" +2025/01/13 22:48:49 [error] 1487#1487: *592 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 40.example.com, request: "GET / HTTP/1.1", upstream: "http://scale_backend-40_80/", host: "40.example.com" +2025/01/13 22:48:51 [error] 1521#1521: *608 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 41.example.com, request: "GET / HTTP/1.1", upstream: "http://scale_backend-41_80/", host: "41.example.com" +2025/01/13 22:48:52 [error] 1555#1555: *624 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 42.example.com, request: "GET / HTTP/1.1", upstream: "http://scale_backend-42_80/", host: "42.example.com" +2025/01/13 22:48:54 [error] 1589#1589: *644 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 43.example.com, request: "GET / HTTP/1.1", upstream: "http://scale_backend-43_80/", host: "43.example.com" +2025/01/13 22:49:14 [error] 1964#1964: *830 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 54.example.com, request: "GET / HTTP/1.1", upstream: "http://scale_backend-54_80/", host: "54.example.com" +2025/01/13 22:49:18 [error] 2031#2031: *865 no live upstreams while connecting to upstream, client: 10.138.0.109, server: 56.example.com, request: "GET / HTTP/1.1", upstream: "http://scale_backend-56_80/", host: "56.example.com" diff --git a/tests/results/scale/1.6.0/TestScale_Listeners/ttr-oss.png b/tests/results/scale/1.6.0/TestScale_Listeners/ttr-oss.png new file mode 100644 index 0000000000..9b5a220f4a Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_Listeners/ttr-oss.png differ diff --git a/tests/results/scale/1.6.0/TestScale_Listeners/ttr-plus.png b/tests/results/scale/1.6.0/TestScale_Listeners/ttr-plus.png new file mode 100644 index 0000000000..c7ca3a0c2d Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_Listeners/ttr-plus.png differ diff --git a/tests/results/scale/1.6.0/TestScale_UpstreamServers/cpu-oss.png b/tests/results/scale/1.6.0/TestScale_UpstreamServers/cpu-oss.png new file mode 100644 index 0000000000..9e6bcf4858 Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_UpstreamServers/cpu-oss.png differ diff --git a/tests/results/scale/1.6.0/TestScale_UpstreamServers/cpu-plus.png b/tests/results/scale/1.6.0/TestScale_UpstreamServers/cpu-plus.png new file mode 100644 index 0000000000..98b869136b Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_UpstreamServers/cpu-plus.png differ diff --git a/tests/results/scale/1.6.0/TestScale_UpstreamServers/memory-oss.png b/tests/results/scale/1.6.0/TestScale_UpstreamServers/memory-oss.png new file mode 100644 index 0000000000..d77bf55efb Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_UpstreamServers/memory-oss.png differ diff --git 
a/tests/results/scale/1.6.0/TestScale_UpstreamServers/memory-plus.png b/tests/results/scale/1.6.0/TestScale_UpstreamServers/memory-plus.png new file mode 100644 index 0000000000..dc4f264be1 Binary files /dev/null and b/tests/results/scale/1.6.0/TestScale_UpstreamServers/memory-plus.png differ diff --git a/tests/results/scale/1.6.0/TestScale_UpstreamServers/ngf-oss.log b/tests/results/scale/1.6.0/TestScale_UpstreamServers/ngf-oss.log new file mode 100644 index 0000000000..1d354e86bd --- /dev/null +++ b/tests/results/scale/1.6.0/TestScale_UpstreamServers/ngf-oss.log @@ -0,0 +1,2 @@ +{"level":"debug","ts":"2025-01-13T23:24:30Z","logger":"controller-runtime.healthz","msg":"healthz check failed","checker":"readyz","error":"nginx has not yet become ready to accept traffic"} +{"level":"debug","ts":"2025-01-13T23:24:30Z","logger":"controller-runtime.healthz","msg":"healthz check failed","checker":"readyz","error":"nginx has not yet become ready to accept traffic"} diff --git a/tests/results/scale/1.6.0/TestScale_UpstreamServers/ngf-plus.log b/tests/results/scale/1.6.0/TestScale_UpstreamServers/ngf-plus.log new file mode 100644 index 0000000000..76abcc4cd4 --- /dev/null +++ b/tests/results/scale/1.6.0/TestScale_UpstreamServers/ngf-plus.log @@ -0,0 +1,2 @@ +{"level":"debug","ts":"2025-01-13T23:24:57Z","logger":"controller-runtime.healthz","msg":"healthz check failed","checker":"readyz","error":"nginx has not yet become ready to accept traffic"} +{"level":"debug","ts":"2025-01-13T23:24:58Z","logger":"controller-runtime.healthz","msg":"healthz check failed","checker":"readyz","error":"nginx has not yet become ready to accept traffic"} diff --git a/tests/results/zero-downtime-scale/1.6.0/1.6.0-oss.md b/tests/results/zero-downtime-scale/1.6.0/1.6.0-oss.md new file mode 100644 index 0000000000..7286725dbe --- /dev/null +++ b/tests/results/zero-downtime-scale/1.6.0/1.6.0-oss.md @@ -0,0 +1,286 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: b61c61d3f9ca29c6eb93ce9b44e652c9a521b3a4 +- Date: 2025-01-13T16:47:24Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.30.6-gke.1596000 +- vCPUs per node: 16 +- RAM per node: 65853984Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- Slight increase in average latency across all test cases. +- No errors. 
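+
+The scale-up and scale-down phases below change the NGF replica count while vegeta traffic is in flight; steps
+of the following shape would drive them manually (the deployment name and namespace are assumptions):
+
+```text
+# hypothetical sketch: scale the NGF deployment while load is running
+kubectl scale deployment/ngf-nginx-gateway-fabric -n nginx-gateway --replicas=3
+```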
+ +## One NGF Pod runs per node Test Results + +### Scale Up Gradually + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 884.151µs +Latencies [min, mean, 50, 90, 95, 99, max] 430.288µs, 894.252µs, 878.393µs, 1.057ms, 1.123ms, 1.348ms, 12.787ms +Bytes In [total, mean] 4625991, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-affinity-https-oss.png](gradual-scale-up-affinity-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 617.813µs +Latencies [min, mean, 50, 90, 95, 99, max] 411.796µs, 849.819µs, 839.38µs, 1.001ms, 1.067ms, 1.305ms, 12.671ms +Bytes In [total, mean] 4805960, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-affinity-http-oss.png](gradual-scale-up-affinity-http-oss.png) + +### Scale Down Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 48000, 100.00, 100.00 +Duration [total, attack, wait] 8m0s, 8m0s, 935.746µs +Latencies [min, mean, 50, 90, 95, 99, max] 413.772µs, 832.89µs, 828.193µs, 965.749µs, 1.02ms, 1.241ms, 11.886ms +Bytes In [total, mean] 7689523, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:48000 +Error Set: +``` + +![gradual-scale-down-affinity-http-oss.png](gradual-scale-down-affinity-http-oss.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 48000, 100.00, 100.00 +Duration [total, attack, wait] 8m0s, 8m0s, 839.262µs +Latencies [min, mean, 50, 90, 95, 99, max] 445.407µs, 859.764µs, 848.871µs, 993.325µs, 1.049ms, 1.258ms, 12.044ms +Bytes In [total, mean] 7401518, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:48000 +Error Set: +``` + +![gradual-scale-down-affinity-https-oss.png](gradual-scale-down-affinity-https-oss.png) + +### Scale Up Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 981.012µs +Latencies [min, mean, 50, 90, 95, 99, max] 449.478µs, 876.491µs, 863.628µs, 1.035ms, 1.091ms, 1.295ms, 11.09ms +Bytes In [total, mean] 1850379, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-affinity-https-oss.png](abrupt-scale-up-affinity-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 956.004µs +Latencies [min, mean, 50, 90, 95, 99, max] 426.158µs, 843.631µs, 838.899µs, 986.221µs, 1.038ms, 1.225ms, 9.264ms +Bytes In [total, mean] 1922412, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-affinity-http-oss.png](abrupt-scale-up-affinity-http-oss.png) + +### Scale Down Abruptly + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.159ms +Latencies [min, mean, 50, 90, 95, 99, max] 407.742µs, 908.068µs, 906.137µs, 1.073ms, 1.125ms, 1.26ms, 8.881ms +Bytes In [total, mean] 1922385, 160.20 +Bytes 
Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-affinity-http-oss.png](abrupt-scale-down-affinity-http-oss.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.144ms +Latencies [min, mean, 50, 90, 95, 99, max] 420.068µs, 940.104µs, 930.864µs, 1.113ms, 1.166ms, 1.305ms, 8.861ms +Bytes In [total, mean] 1850378, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-affinity-https-oss.png](abrupt-scale-down-affinity-https-oss.png) + +## Multiple NGF Pods run per node Test Results + +### Scale Up Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 858.203µs +Latencies [min, mean, 50, 90, 95, 99, max] 434.406µs, 878.34µs, 863.759µs, 1.034ms, 1.097ms, 1.376ms, 12.273ms +Bytes In [total, mean] 4806002, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-http-oss.png](gradual-scale-up-http-oss.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 975.731µs +Latencies [min, mean, 50, 90, 95, 99, max] 452.033µs, 904.699µs, 886.261µs, 1.057ms, 1.119ms, 1.404ms, 13.241ms +Bytes In [total, mean] 4626020, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-https-oss.png](gradual-scale-up-https-oss.png) + +### Scale Down Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 96000, 100.00, 100.00 +Duration [total, attack, wait] 16m0s, 16m0s, 941.177µs +Latencies [min, mean, 50, 90, 95, 99, max] 399.257µs, 854.526µs, 844.063µs, 1.006ms, 1.068ms, 1.305ms, 12.186ms +Bytes In [total, mean] 15378988, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:96000 +Error Set: +``` + +![gradual-scale-down-http-oss.png](gradual-scale-down-http-oss.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 96000, 100.00, 100.00 +Duration [total, attack, wait] 16m0s, 16m0s, 1.044ms +Latencies [min, mean, 50, 90, 95, 99, max] 408.91µs, 875.414µs, 861.002µs, 1.027ms, 1.094ms, 1.346ms, 17.722ms +Bytes In [total, mean] 14803207, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:96000 +Error Set: +``` + +![gradual-scale-down-https-oss.png](gradual-scale-down-https-oss.png) + +### Scale Up Abruptly + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 806.048µs +Latencies [min, mean, 50, 90, 95, 99, max] 422.292µs, 857.343µs, 847.614µs, 1.029ms, 1.103ms, 1.285ms, 5.302ms +Bytes In [total, mean] 1922384, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-http-oss.png](abrupt-scale-up-http-oss.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 941.644µs +Latencies [min, mean, 50, 90, 95, 99, max] 437.845µs, 881.542µs, 861.654µs, 
1.052ms, 1.126ms, 1.303ms, 6.146ms +Bytes In [total, mean] 1850379, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-https-oss.png](abrupt-scale-up-https-oss.png) + +### Scale Down Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 803.102µs +Latencies [min, mean, 50, 90, 95, 99, max] 456.04µs, 909.975µs, 899.85µs, 1.073ms, 1.129ms, 1.292ms, 9.787ms +Bytes In [total, mean] 1850444, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-https-oss.png](abrupt-scale-down-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 794.084µs +Latencies [min, mean, 50, 90, 95, 99, max] 432.036µs, 882.08µs, 877.001µs, 1.036ms, 1.087ms, 1.259ms, 9.732ms +Bytes In [total, mean] 1922353, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-http-oss.png](abrupt-scale-down-http-oss.png) diff --git a/tests/results/zero-downtime-scale/1.6.0/1.6.0-plus.md b/tests/results/zero-downtime-scale/1.6.0/1.6.0-plus.md new file mode 100644 index 0000000000..6aa589b750 --- /dev/null +++ b/tests/results/zero-downtime-scale/1.6.0/1.6.0-plus.md @@ -0,0 +1,286 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: b61c61d3f9ca29c6eb93ce9b44e652c9a521b3a4 +- Date: 2025-01-13T16:47:24Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.30.6-gke.1596000 +- vCPUs per node: 16 +- RAM per node: 65853984Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- Slight increase in average latency across all test cases. +- No errors. 
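+
+The "slight increase in average latency" bullet above can be spot-checked mechanically against the previous release. Below is a hedged sketch that extracts every mean latency from two results files and averages them per file; the 1.5.0 path is an assumed mirror of this file's layout, and the field order is taken from the `[min, mean, 50, 90, 95, 99, max]` header used in every block:
+
+```go
+package main
+
+import (
+	"bufio"
+	"fmt"
+	"os"
+	"regexp"
+	"strings"
+	"time"
+)
+
+// latencyRe matches the "Latencies" row of a vegeta report block.
+var latencyRe = regexp.MustCompile(`Latencies\s+\[min, mean, 50, 90, 95, 99, max\]\s+(.+)`)
+
+// meanLatencies collects the mean latency field from every report block in a file.
+func meanLatencies(path string) ([]time.Duration, error) {
+	f, err := os.Open(path)
+	if err != nil {
+		return nil, err
+	}
+	defer f.Close()
+
+	var means []time.Duration
+	sc := bufio.NewScanner(f)
+	for sc.Scan() {
+		m := latencyRe.FindStringSubmatch(sc.Text())
+		if m == nil {
+			continue
+		}
+		// Field order follows the header: min, mean, 50, 90, 95, 99, max.
+		fields := strings.Split(m[1], ", ")
+		if len(fields) < 2 {
+			continue
+		}
+		mean, err := time.ParseDuration(fields[1]) // handles both "µs" and "ms" values
+		if err != nil {
+			return nil, err
+		}
+		means = append(means, mean)
+	}
+	return means, sc.Err()
+}
+
+func main() {
+	// The 1.5.0 path is an assumption, mirrored from this file's location.
+	for _, path := range []string{
+		"tests/results/zero-downtime-scale/1.5.0/1.5.0-plus.md",
+		"tests/results/zero-downtime-scale/1.6.0/1.6.0-plus.md",
+	} {
+		means, err := meanLatencies(path)
+		if err != nil || len(means) == 0 {
+			fmt.Fprintf(os.Stderr, "%s: %v (%d blocks)\n", path, err, len(means))
+			continue
+		}
+		var sum time.Duration
+		for _, d := range means {
+			sum += d
+		}
+		avg := (sum / time.Duration(len(means))).Round(time.Microsecond)
+		fmt.Printf("%s: %d blocks, average mean latency %s\n", path, len(means), avg)
+	}
+}
+```
+
+Run it from the repository root; the two printed averages give the release-over-release delta behind the summary bullet.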
+ +## One NGF Pod runs per node Test Results + +### Scale Up Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 731.836µs +Latencies [min, mean, 50, 90, 95, 99, max] 405.262µs, 951.414µs, 942.674µs, 1.133ms, 1.196ms, 1.441ms, 12.654ms +Bytes In [total, mean] 4805936, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-affinity-http-plus.png](gradual-scale-up-affinity-http-plus.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 928.875µs +Latencies [min, mean, 50, 90, 95, 99, max] 422.584µs, 974.005µs, 961.01µs, 1.153ms, 1.225ms, 1.478ms, 17.862ms +Bytes In [total, mean] 4626034, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-affinity-https-plus.png](gradual-scale-up-affinity-https-plus.png) + +### Scale Down Gradually + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 48000, 100.00, 100.00 +Duration [total, attack, wait] 8m0s, 8m0s, 955.351µs +Latencies [min, mean, 50, 90, 95, 99, max] 461.146µs, 993.488µs, 983.448µs, 1.177ms, 1.24ms, 1.439ms, 18.887ms +Bytes In [total, mean] 7401553, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:48000 +Error Set: +``` + +![gradual-scale-down-affinity-https-plus.png](gradual-scale-down-affinity-https-plus.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 48000, 100.00, 100.00 +Duration [total, attack, wait] 8m0s, 8m0s, 912.825µs +Latencies [min, mean, 50, 90, 95, 99, max] 419.392µs, 932.199µs, 931.633µs, 1.104ms, 1.162ms, 1.359ms, 14.565ms +Bytes In [total, mean] 7689592, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:48000 +Error Set: +``` + +![gradual-scale-down-affinity-http-plus.png](gradual-scale-down-affinity-http-plus.png) + +### Scale Up Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.009ms +Latencies [min, mean, 50, 90, 95, 99, max] 437.81µs, 958.122µs, 943.806µs, 1.159ms, 1.239ms, 1.45ms, 8.401ms +Bytes In [total, mean] 1850481, 154.21 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-affinity-https-plus.png](abrupt-scale-up-affinity-https-plus.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.026ms +Latencies [min, mean, 50, 90, 95, 99, max] 404.512µs, 899.693µs, 901.006µs, 1.077ms, 1.14ms, 1.335ms, 7.566ms +Bytes In [total, mean] 1922365, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-affinity-http-plus.png](abrupt-scale-up-affinity-http-plus.png) + +### Scale Down Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 780.518µs +Latencies [min, mean, 50, 90, 95, 99, max] 406.603µs, 961.899µs, 949.821µs, 1.142ms, 1.209ms, 1.355ms, 27.837ms +Bytes In [total, mean] 1850421, 154.20 +Bytes 
Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-affinity-https-plus.png](abrupt-scale-down-affinity-https-plus.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 931.573µs +Latencies [min, mean, 50, 90, 95, 99, max] 424.508µs, 934.592µs, 926.134µs, 1.112ms, 1.174ms, 1.32ms, 39.399ms +Bytes In [total, mean] 1922394, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-affinity-http-plus.png](abrupt-scale-down-affinity-http-plus.png) + +## Multiple NGF Pods run per node Test Results + +### Scale Up Gradually + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 895.686µs +Latencies [min, mean, 50, 90, 95, 99, max] 428.719µs, 977.84µs, 958.198µs, 1.175ms, 1.253ms, 1.52ms, 20.388ms +Bytes In [total, mean] 4626083, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-https-plus.png](gradual-scale-up-https-plus.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 875.402µs +Latencies [min, mean, 50, 90, 95, 99, max] 390.731µs, 940.618µs, 924.826µs, 1.12ms, 1.205ms, 1.509ms, 12.669ms +Bytes In [total, mean] 4806006, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-http-plus.png](gradual-scale-up-http-plus.png) + +### Scale Down Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 96000, 100.00, 100.00 +Duration [total, attack, wait] 16m0s, 16m0s, 770.017µs +Latencies [min, mean, 50, 90, 95, 99, max] 399.823µs, 935.884µs, 927.357µs, 1.122ms, 1.191ms, 1.406ms, 53.699ms +Bytes In [total, mean] 15379358, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:96000 +Error Set: +``` + +![gradual-scale-down-http-plus.png](gradual-scale-down-http-plus.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 96000, 100.00, 100.00 +Duration [total, attack, wait] 16m0s, 16m0s, 1.031ms +Latencies [min, mean, 50, 90, 95, 99, max] 404.664µs, 968.668µs, 952.902µs, 1.171ms, 1.249ms, 1.459ms, 62.262ms +Bytes In [total, mean] 14803402, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:96000 +Error Set: +``` + +![gradual-scale-down-https-plus.png](gradual-scale-down-https-plus.png) + +### Scale Up Abruptly + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 584.702µs +Latencies [min, mean, 50, 90, 95, 99, max] 438.473µs, 908.694µs, 894.28µs, 1.094ms, 1.172ms, 1.432ms, 11.703ms +Bytes In [total, mean] 1922409, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-http-plus.png](abrupt-scale-up-http-plus.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 843.545µs +Latencies [min, mean, 50, 90, 95, 99, max] 459.238µs, 
1.002ms, 955.233µs, 1.182ms, 1.285ms, 1.786ms, 44.019ms +Bytes In [total, mean] 1850373, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-https-plus.png](abrupt-scale-up-https-plus.png) + +### Scale Down Abruptly + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.06ms +Latencies [min, mean, 50, 90, 95, 99, max] 452.602µs, 992.347µs, 976.316µs, 1.207ms, 1.285ms, 1.474ms, 34.554ms +Bytes In [total, mean] 1922450, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-http-plus.png](abrupt-scale-down-http-plus.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.249ms +Latencies [min, mean, 50, 90, 95, 99, max] 471.173µs, 1.047ms, 1.024ms, 1.267ms, 1.349ms, 1.555ms, 43.182ms +Bytes In [total, mean] 1850373, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-https-plus.png](abrupt-scale-down-https-plus.png) diff --git a/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-affinity-http-oss.png b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-affinity-http-oss.png new file mode 100644 index 0000000000..2eadf7ab87 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-affinity-http-oss.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-affinity-http-plus.png b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-affinity-http-plus.png new file mode 100644 index 0000000000..4971cbccea Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-affinity-http-plus.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-affinity-https-oss.png b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-affinity-https-oss.png new file mode 100644 index 0000000000..2eadf7ab87 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-affinity-https-oss.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-affinity-https-plus.png b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-affinity-https-plus.png new file mode 100644 index 0000000000..4971cbccea Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-affinity-https-plus.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-http-oss.png b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-http-oss.png new file mode 100644 index 0000000000..2252c17eb5 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-http-oss.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-http-plus.png b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-http-plus.png new file mode 100644 index 0000000000..6a1af2fd33 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-http-plus.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-https-oss.png b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-https-oss.png new file mode 100644 index 0000000000..2252c17eb5 Binary files /dev/null and 
b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-https-oss.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-https-plus.png b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-https-plus.png new file mode 100644 index 0000000000..6a1af2fd33 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-down-https-plus.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-affinity-http-oss.png b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-affinity-http-oss.png new file mode 100644 index 0000000000..6f400767ea Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-affinity-http-oss.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-affinity-http-plus.png b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-affinity-http-plus.png new file mode 100644 index 0000000000..246e6fd4f3 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-affinity-http-plus.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-affinity-https-oss.png b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-affinity-https-oss.png new file mode 100644 index 0000000000..6f400767ea Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-affinity-https-oss.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-affinity-https-plus.png b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-affinity-https-plus.png new file mode 100644 index 0000000000..246e6fd4f3 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-affinity-https-plus.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-http-oss.png b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-http-oss.png new file mode 100644 index 0000000000..5f0b80e35e Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-http-oss.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-http-plus.png b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-http-plus.png new file mode 100644 index 0000000000..a98ceb0145 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-http-plus.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-https-oss.png b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-https-oss.png new file mode 100644 index 0000000000..5f0b80e35e Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-https-oss.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-https-plus.png b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-https-plus.png new file mode 100644 index 0000000000..a98ceb0145 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/abrupt-scale-up-https-plus.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-affinity-http-oss.png b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-affinity-http-oss.png new file mode 100644 index 0000000000..f3ef54e7c0 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-affinity-http-oss.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-affinity-http-plus.png b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-affinity-http-plus.png new file mode 100644 index 
0000000000..252788f90e Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-affinity-http-plus.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-affinity-https-oss.png b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-affinity-https-oss.png new file mode 100644 index 0000000000..f3ef54e7c0 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-affinity-https-oss.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-affinity-https-plus.png b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-affinity-https-plus.png new file mode 100644 index 0000000000..252788f90e Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-affinity-https-plus.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-http-oss.png b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-http-oss.png new file mode 100644 index 0000000000..b2b31a0912 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-http-oss.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-http-plus.png b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-http-plus.png new file mode 100644 index 0000000000..c91b5a6f80 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-http-plus.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-https-oss.png b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-https-oss.png new file mode 100644 index 0000000000..b2b31a0912 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-https-oss.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-https-plus.png b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-https-plus.png new file mode 100644 index 0000000000..c91b5a6f80 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-down-https-plus.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-affinity-http-oss.png b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-affinity-http-oss.png new file mode 100644 index 0000000000..11238457c2 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-affinity-http-oss.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-affinity-http-plus.png b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-affinity-http-plus.png new file mode 100644 index 0000000000..7a0fa352f5 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-affinity-http-plus.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-affinity-https-oss.png b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-affinity-https-oss.png new file mode 100644 index 0000000000..11238457c2 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-affinity-https-oss.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-affinity-https-plus.png b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-affinity-https-plus.png new file mode 100644 index 0000000000..7a0fa352f5 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-affinity-https-plus.png differ diff --git 
a/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-http-oss.png b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-http-oss.png new file mode 100644 index 0000000000..089326bcef Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-http-oss.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-http-plus.png b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-http-plus.png new file mode 100644 index 0000000000..04646b9656 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-http-plus.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-https-oss.png b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-https-oss.png new file mode 100644 index 0000000000..089326bcef Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-https-oss.png differ diff --git a/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-https-plus.png b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-https-plus.png new file mode 100644 index 0000000000..04646b9656 Binary files /dev/null and b/tests/results/zero-downtime-scale/1.6.0/gradual-scale-up-https-plus.png differ