Investigate if the http client library is closing sockets as expected #564

Closed
3 tasks done
TomasTurina opened this issue Jan 30, 2025 · 4 comments · Fixed by #699
TomasTurina commented Jan 30, 2025

Description

The http client library uses Boost.Beast HTTP and HTTPS streams to execute requests.

Every time a request is executed, a new socket is created to establish the communication with the server.

Additionally, sockets have a close function that must be called at the end of the operation to properly close the connection. However, this function is not currently being called, and its use needs to be evaluated.
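As a rough illustration of the pattern described above, a minimal synchronous request with Boost.Beast looks like this (a hypothetical sketch, not the agent's actual code; PerformRequest is a made-up name):

#include <boost/asio/io_context.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <string>

namespace beast = boost::beast;
namespace http = beast::http;
namespace net = boost::asio;
using tcp = net::ip::tcp;

// Hypothetical sketch of the per-request pattern described above:
// a fresh socket per request and no explicit shutdown/close.
http::response<http::string_body> PerformRequest(
    net::io_context& ioc, const std::string& host,
    const std::string& port, const std::string& target)
{
    tcp::resolver resolver(ioc);
    beast::tcp_stream stream(ioc); // new socket for every request

    stream.connect(resolver.resolve(host, port));

    http::request<http::empty_body> req{http::verb::get, target, 11};
    req.set(http::field::host, host);
    http::write(stream, req);

    beast::flat_buffer buffer;
    http::response<http::string_body> res;
    http::read(stream, buffer, res);

    return res;
    // No shutdown()/close() here: the tcp_stream destructor releases
    // the descriptor, without a graceful TCP shutdown first.
}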

Tasks

  • Investigate the use of the socket close functions and the cases in which they should be called.
  • If necessary, adapt the current http client code to use them.
  • Be sure to test this change on all supported operating systems.

Nicogp commented Mar 19, 2025

As the description of the issue says, the current code creates a new connection for each request but does not explicitly close the previous one, leaving that responsibility to the operating system. This can be seen in the following test: the agent's connections remain in the TIME_WAIT state until the OS eventually closes them.

Windows:

PS C:\Users\54358\Downloads> netstat -ano | findstr :27000
  TCP    127.0.0.1:27000        0.0.0.0:0              LISTENING       22332
  TCP    192.168.0.4:62813      192.168.0.177:27000    TIME_WAIT       0
  TCP    192.168.0.4:62814      192.168.0.177:27000    ESTABLISHED     14616
  TCP    192.168.0.4:62815      192.168.0.177:27000    TIME_WAIT       0
  TCP    192.168.0.4:62816      192.168.0.177:27000    TIME_WAIT       0
PS C:\Users\54358\Downloads> netstat -ano | findstr :27000
  TCP    127.0.0.1:27000        0.0.0.0:0              LISTENING       22332
  TCP    192.168.0.4:62813      192.168.0.177:27000    TIME_WAIT       0
  TCP    192.168.0.4:62814      192.168.0.177:27000    ESTABLISHED     14616
  TCP    192.168.0.4:62815      192.168.0.177:27000    TIME_WAIT       0
  TCP    192.168.0.4:62816      192.168.0.177:27000    TIME_WAIT       0
PS C:\Users\54358\Downloads> netstat -ano | findstr :27000
  TCP    127.0.0.1:27000        0.0.0.0:0              LISTENING       22332
  TCP    192.168.0.4:62814      192.168.0.177:27000    ESTABLISHED     14616
PS C:\Users\54358\Downloads> netstat -ano | findstr :27000
  TCP    127.0.0.1:27000        0.0.0.0:0              LISTENING       22332
PS C:\Users\54358\Downloads> netstat -ano | findstr :27000
  TCP    127.0.0.1:27000        0.0.0.0:0              LISTENING       22332

Ubuntu:

root@nico-VirtualBox:/home/nico/wazuh-agent/build# ss -tanp | grep :27000
TIME-WAIT 0      0       192.168.0.59:52254 192.168.0.177:27000                                                      
ESTAB     0      0       192.168.0.59:50656 192.168.0.177:27000 users:(("wazuh-agent",pid=7375,fd=16))               
TIME-WAIT 0      0       192.168.0.59:52270 192.168.0.177:27000                                                      
root@nico-VirtualBox:/home/nico/wazuh-agent/build# ss -tanp | grep :27000
TIME-WAIT 0      0       192.168.0.59:52254 192.168.0.177:27000                                                      
ESTAB     0      0       192.168.0.59:50656 192.168.0.177:27000 users:(("wazuh-agent",pid=7375,fd=16))               
TIME-WAIT 0      0       192.168.0.59:52270 192.168.0.177:27000                                                      
root@nico-VirtualBox:/home/nico/wazuh-agent/build# ss -tanp | grep :27000
TIME-WAIT 0      0       192.168.0.59:48822 192.168.0.177:27000                                                      
ESTAB     0      0       192.168.0.59:34580 192.168.0.177:27000 users:(("wazuh-agent",pid=7375,fd=16))               
TIME-WAIT 0      0       192.168.0.59:48810 192.168.0.177:27000                                                      
root@nico-VirtualBox:/home/nico/wazuh-agent/build# ss -tanp | grep :27000
ESTAB     0      0       192.168.0.59:41926 192.168.0.177:27000 users:(("wazuh-agent",pid=7375,fd=16))               
TIME-WAIT 0      0       192.168.0.59:54208 192.168.0.177:27000                                                      
TIME-WAIT 0      0       192.168.0.59:54212 192.168.0.177:27000

Explicitly closing the connection is recommended good practice: it frees resources sooner and avoids potential connection problems.


Nicogp commented Mar 20, 2025

Research for HTTP

For HTTP connections the Wazuh agent uses an object of the class boost::beast::tcp_stream. The official documentation for this class does not describe a correct or recommended way to close the socket.

Browsing further through the Beast documentation, however, there are client/server example applications that show how to close the socket:
https://www.boost.org/doc/libs/1_87_0/libs/beast/doc/html/beast/examples.html#beast.examples.clients

http_client_sync

// Gracefully close the socket
beast::error_code ec;
stream.socket().shutdown(tcp::socket::shutdown_both, ec);

// not_connected happens sometimes, so don't bother reporting it.
if(ec && ec != beast::errc::not_connected)
    throw beast::system_error{ec};

// If we get here then the connection is closed gracefully

The http_client_async and http_client_awaitable examples at the same link close the socket in the same way once the response has been read.
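For reference, this is roughly how the async example does it inside its read completion handler (condensed from the linked example, so some details are elided; fail is that example's error-reporting helper):

void on_read(beast::error_code ec, std::size_t bytes_transferred)
{
    boost::ignore_unused(bytes_transferred);

    if(ec)
        return fail(ec, "read");

    // Gracefully close the socket once the full response has been read
    stream_.socket().shutdown(tcp::socket::shutdown_both, ec);

    // not_connected happens sometimes, so don't bother reporting it
    if(ec && ec != beast::errc::not_connected)
        return fail(ec, "shutdown");

    // If we get here then the connection is closed gracefully
}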

Research for HTTPS

For HTTPS connections the Wazuh agent uses an object of the class boost::beast::ssl_stream. As with tcp_stream, the official documentation for this class does not describe a correct or recommended way to close the socket.

Again, the Beast documentation provides client/server example applications that show how to close the socket:
https://www.boost.org/doc/libs/1_85_0/libs/beast/doc/html/beast/examples.html#beast.examples.clients

https://www.boost.org/doc/libs/1_85_0/libs/beast/example/http/client/sync-ssl/http_client_sync_ssl.cpp

https://www.boost.org/doc/libs/1_85_0/libs/beast/example/http/client/async-ssl/http_client_async_ssl.cpp

https://www.boost.org/doc/libs/1_85_0/libs/beast/example/http/client/awaitable-ssl/http_client_awaitable_ssl.cpp
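The key difference for TLS is that ssl_stream has its own shutdown(), which performs the close_notify exchange before the underlying TCP socket goes away. A condensed sketch of what the linked sync-ssl example does (exact error handling varies slightly across Boost versions, so treat the details as an approximation):

// Gracefully close the TLS stream: shutdown() sends a close_notify
// alert before the underlying TCP connection is torn down.
beast::error_code ec;
stream.shutdown(ec);

// Many servers drop the TCP connection without sending close_notify;
// this surfaces as stream_truncated and can usually be ignored here.
if(ec == net::ssl::error::stream_truncated)
    ec = {};

if(ec)
    throw beast::system_error{ec};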


Nicogp commented Mar 21, 2025

Update 21/03/2025

After further investigation, it was found that the socket is in fact being closed: the destructor of the tcp_stream object closes it.

TIME_WAIT is a valid state and part of the TCP protocol. When a connection is closed from the client side, TCP keeps it in TIME_WAIT for roughly 60-120 seconds, i.e. twice the Maximum Segment Lifetime (MSL). This prevents stray packets from an old connection from interfering with new connections on the same IP/port pair.

The missing step in the disconnection is to perform a shutdown prior to the close; the tcp_stream destructor does not do this. The previous comment shows examples of how to handle it, and a possible adaptation is sketched below.
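A minimal sketch of that adaptation (CloseGracefully is a hypothetical helper, not existing agent code):

#include <boost/asio/ip/tcp.hpp>
#include <boost/beast/core.hpp>

// Hypothetical helper: perform a graceful shutdown before the
// tcp_stream destructor (which only closes the descriptor) runs.
void CloseGracefully(boost::beast::tcp_stream& stream)
{
    boost::beast::error_code ec;

    // Signal an orderly TCP shutdown in both directions first.
    stream.socket().shutdown(
        boost::asio::ip::tcp::socket::shutdown_both, ec);

    // not_connected just means the peer is already gone; other errors
    // are worth logging but should not abort the close.
    if(ec && ec != boost::beast::errc::not_connected)
    {
        // e.g. log the error code here
    }

    // Release the descriptor explicitly rather than waiting for the destructor.
    stream.close();
}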


aritosteles commented Mar 25, 2025

Update

  • 21-03-2025:
    Got up to date on the issue. Started implementing changes based on the previous investigation.
  • 25-03-2025:
    Implemented the necessary changes. Now testing.

aritosteles linked a pull request Mar 25, 2025 that will close this issue
wazuhci moved this from In progress to Done in XDR+SIEM/Release 5.0.0 Mar 26, 2025