
High CPU usage #251

Open
dwMiguelM opened this issue Oct 17, 2022 · 6 comments
dwMiguelM commented Oct 17, 2022

Hey folks,

I'd like to give more attention to this issue (issues/221), since it is about the exact same problem, but let me show you the impact in a real environment.

This test was made by generating 500k r/s (500,000 connections per second), with the connections generated from 15k unique IPs.
Currently, the VTS configuration has 17 filter keys.
Be aware that those filter keys catch everything that happens in the web server, so they are updated heavily.

Heavy load without VTS: https://prnt.sc/_M-8xNbPPTk1
Heavy load with VTS: https://prnt.sc/PEQYLxSi5vE6

The VTS configuration/keys and further detailed information about the performance impact caused by VTS can be provided to a VTS contributor/developer in a private conversation.

I hope we can work together to optimize this :)

Best Regards!

vozlt (Owner) commented Oct 17, 2022

@dwMiguelM
Thanks for testing and reporting. If you test again while gradually reducing the number of filters, does the load go down accordingly? As mentioned here, I know that the more filters there are, the lower the performance, because of ngx_shmtx_lock(). To improve this, it seems necessary to update the shared memory periodically after collecting statistics in the individual nginx workers. This feature is currently being considered.
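For illustration, here is a minimal sketch in C of the approach described above, not the module's actual code: each worker accumulates its statistics in ordinary process memory, and a periodic timer merges the deltas into the shared zone, so ngx_shmtx_lock() is taken once per interval instead of once per request. Names like vts_local_counters_t, vts_count_request() and vts_flush_interval are hypothetical, and the timer is assumed to be armed once per worker (e.g. from an init_process handler).

```c
#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_event.h>

/* Hypothetical per-worker counters kept in ordinary process memory;
 * nginx workers are single-threaded, so no lock is needed to update them. */
typedef struct {
    ngx_uint_t  requests;
    ngx_uint_t  bytes_in;
    ngx_uint_t  bytes_out;
} vts_local_counters_t;

static vts_local_counters_t  vts_local;                 /* per-worker deltas */
static ngx_event_t           vts_flush_event;           /* periodic timer    */
static ngx_msec_t            vts_flush_interval = 1000; /* flush every 1s    */

/* Hot path, called per request: no shared-memory lock is taken here. */
static void
vts_count_request(size_t bytes_in, size_t bytes_out)
{
    vts_local.requests++;
    vts_local.bytes_in  += bytes_in;
    vts_local.bytes_out += bytes_out;
}

/* Timer handler: merge the accumulated deltas into the shared zone
 * under a single ngx_shmtx_lock(), then re-arm the timer. */
static void
vts_flush_handler(ngx_event_t *ev)
{
    ngx_slab_pool_t       *shpool = ev->data;   /* the stats zone's slab pool */
    vts_local_counters_t   snap;

    snap = vts_local;
    ngx_memzero(&vts_local, sizeof(vts_local_counters_t));

    ngx_shmtx_lock(&shpool->mutex);
    /* ... add snap.requests / snap.bytes_in / snap.bytes_out to the
     *     corresponding node(s) in the shared rbtree ... */
    ngx_shmtx_unlock(&shpool->mutex);

    ngx_add_timer(ev, vts_flush_interval);
}

/* Assumed to run once per worker (e.g. from an init_process handler):
 *   vts_flush_event.handler = vts_flush_handler;
 *   vts_flush_event.data    = shpool;
 *   vts_flush_event.log     = cycle->log;
 *   ngx_add_timer(&vts_flush_event, vts_flush_interval);
 */
```

The trade-off of such a scheme is that the shared counters lag behind by up to one flush interval, which is usually acceptable for monitoring output.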

dwMiguelM (Author) commented

> @dwMiguelM Thanks for testing and reporting. If you test again while gradually reducing the number of filters, does the load go down accordingly? As mentioned here, I know that the more filters there are, the lower the performance, because of ngx_shmtx_lock(). To improve this, it seems necessary to update the shared memory periodically after collecting statistics in the individual nginx workers. This feature is currently being considered.

Sorry for asking, but I need to request something from you. Is it possible to get a contact of yours?

Thank you in advance :)

vozlt (Owner) commented Oct 18, 2022

@dwMiguelM
Email: [email protected]

testn commented Oct 21, 2022

@vozlt this sounds similar to this one, right? Kong/kong-plugin-prometheus@fd844dc

u5surf (Collaborator) commented Dec 23, 2022

@testn
Yes, it seems to fit what @vozlt said.
lua-resty-counter can provide the individual per-worker counter.
As far as I can see, that PR uses the module to count per request and sync to shm periodically.
It does not seem easy to implement the same mechanism as lua-resty-counter in this module.

climagabriel commented

JFYI, the CPU usage is due to lock contention:

```
#0  0x00007fabf1e91106 in ?? () from target:/lib/x86_64-linux-gnu/libc.so.6
#1  0x00007fabf1e9ccf8 in ?? () from target:/lib/x86_64-linux-gnu/libc.so.6
#2  0x0000563ce4835388 in ngx_shmtx_lock (mtx=mtx@entry=0x7faab7300068) at src/core/ngx_shmtx.c:111
#3  0x0000563ce497361a in ngx_http_vhost_traffic_status_shm_add_node (r=r@entry=0x563cee18f058, key=key@entry=0x7ffe36de47d0, type=type@entry=0) at gcore/modules/ngx_http_vts_module/src/ngx_http_vhost_traffic_status_shm.c:107
#4  0x0000563ce4973ae1 in ngx_http_vhost_traffic_status_shm_add_server (r=r@entry=0x563cee18f058) at gcore/modules/ngx_http_vts_module/src/ngx_http_vhost_traffic_status_shm.c:398
#5  0x0000563ce497275a in ngx_http_vhost_traffic_status_handler (r=0x563cee18f058) at gcore/modules/ngx_http_vts_module/src/ngx_http_vhost_traffic_status_module.c:287
#6  0x0000563ce488c5bf in ngx_http_log_request (r=r@entry=0x563cee18f058) at src/http/ngx_http_request.c:3814
#7  0x0000563ce488e4ba in ngx_http_free_request (r=r@entry=0x563cee18f058, rc=0) at src/http/ngx_http_request.c:3755
#8  0x0000563ce488e820 in ngx_http_close_request (r=0x563cee18f058, rc=0) at src/http/ngx_http_request.c:3701
#9  0x0000563ce488e136 in ngx_http_run_posted_requests (c=0x563cfab47068) at src/http/ngx_http_request.c:2478
#10 0x0000563ce489150c in ngx_http_process_request_headers (rev=rev@entry=0x563cfbcc65a0) at src/http/ngx_http_request.c:1560
#11 0x0000563ce4891c0e in ngx_http_process_request_line (rev=0x563cfbcc65a0) at src/http/ngx_http_request.c:1204
#12 0x0000563ce4857fa3 in ngx_epoll_process_events (cycle=0x563ced000058, timer=<optimized out>, flags=<optimized out>) at src/event/modules/ngx_epoll_module.c:914
#13 0x0000563ce484a07a in ngx_process_events_and_timers (cycle=cycle@entry=0x563ced000058) at src/event/ngx_event.c:250
#14 0x0000563ce485498a in ngx_worker_process_cycle (cycle=cycle@entry=0x563ced000058, data=data@entry=0x0) at src/os/unix/ngx_process_cycle.c:945
#15 0x0000563ce48528e5 in ngx_spawn_process (cycle=cycle@entry=0x563ced000058, proc=proc@entry=0x563ce4854820 <ngx_worker_process_cycle>, data=data@entry=0x0, name=name@entry=0x563ce4db8886 "worker process", respawn=respawn@entry=-4) at src/os/unix/ngx_process.c:203
#16 0x0000563ce4853f08 in ngx_start_worker_processes (cycle=cycle@entry=0x563ced000058, n=92, type=type@entry=-4) at src/os/unix/ngx_process_cycle.c:530
#17 0x0000563ce48564dc in ngx_master_process_cycle (cycle=0x563ced000058) at src/os/unix/ngx_process_cycle.c:416
#18 0x0000563ce48226fa in main (argc=<optimized out>, argv=<optimized out>) at src/core/nginx.c:489
```
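For context, here is a simplified sketch (not the actual module source) of the per-request path in frames #2-#5 above: the log-phase handler takes the shared zone's mutex for every request and every filter key, and that single lock is what all workers end up contending on under load. `shpool` is assumed to be the stats zone's ngx_slab_pool_t.

```c
#include <ngx_config.h>
#include <ngx_core.h>

/* Simplified illustration of the contended path; not the module's code. */
static ngx_int_t
vts_shm_add_node_sketch(ngx_slab_pool_t *shpool, ngx_str_t *key)
{
    ngx_shmtx_lock(&shpool->mutex);     /* frame #2: ngx_shmtx_lock() */

    /* look up (or insert) the node for `key` in the shared rbtree and
     * increment its counters while holding the lock; with 17 filter
     * keys this happens many times per request in every worker */

    ngx_shmtx_unlock(&shpool->mutex);

    return NGX_OK;
}
```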
