Sending a high volume of packets in a short amount of time causes packets to be dropped #2060
First of all, you can change the maximum packet size; you've mentioned the relevant setting yourself. What happens in the client application after the loop emitting all the chunks ends?
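Presumably the setting in question is max_http_buffer_size, which is quoted from the documentation further down; a minimal sketch of raising it on the server, with the 10 MB value chosen arbitrarily:

    from flask import Flask
    from flask_socketio import SocketIO

    app = Flask(__name__)
    # The default per-message limit is 1,000,000 bytes; raise it to 10 MB.
    socketio = SocketIO(app, max_http_buffer_size=10_000_000)

    if __name__ == '__main__':
        socketio.run(app)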
After the client is done sending all chunks, it emits one more message and then disconnects, after which the process ends.
Well, that is likely the problem. If you are sending a lot of data it may take a while for the background tasks to flush everything out. If you disconnect the WebSocket, some data may still be waiting to go out. Three suggestions:
The callback method works for me. Thanks!

Update: the first method seems to disconnect the client, because the ping messages no longer come through due to the large backed-up queue. As sending a large amount of data is not done often, this is fine for this application, but ideally the keep-alive pings would not time out because of the message volume.
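For context, a minimal sketch of what the callback approach can look like with the python-socketio client; the event name and payload are made up for illustration. The callback fires once the server acknowledges the event, so the client knows the last message has actually been delivered before it disconnects:

    import threading

    import socketio

    sio = socketio.Client()
    sio.connect('http://localhost:5000')

    acked = threading.Event()

    # Ask for an acknowledgement on the final emit and wait for it
    # before tearing the connection down.
    sio.emit('chunked_data', '{"chunk_id": 266, "data": "..."}',
             callback=lambda *args: acked.set())
    acked.wait(timeout=30)

    sio.disconnect()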
There are two ways to send events; I forgot to mention using
I'm typically sending small messages, but in some instances a large amount of data has to be sent over the socket.
For this, to get around the maximum packet size, I chunk up the input bytes on the sender side and stitch them back together on the receiver side.
For a small number of chunks (~10), this works fine without any problems.
For a large number of chunks (>40), the receiver starts losing packets.
Below is the client-side code; create_chunks creates JSON objects which contain the data, the total number of chunks, and the chunk_id, so I know which packets have been received.
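A hypothetical sketch of what the client side could look like, based on the description above; the chunk size, field names, event name and URL are assumptions:

    import json
    import socketio

    CHUNK_SIZE = 500_000  # assumed; the client logs below show payloads of roughly 524 KB

    def create_chunks(data: bytes):
        # Split the raw bytes into JSON packets that carry the payload,
        # the total number of chunks, and this packet's chunk_id.
        chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
        for chunk_id, chunk in enumerate(chunks):
            yield json.dumps({
                'chunk_id': chunk_id,
                'total_chunks': len(chunks),
                'data': chunk.hex(),
            })

    sio = socketio.Client()
    sio.connect('http://localhost:5000')

    payload = b'\x00' * 10_000_000  # placeholder for the real data
    for chunk_id, packet in enumerate(create_chunks(payload)):
        print(f'Emitting packet with id {chunk_id} {len(packet)}')
        sio.emit('chunked_data', packet)

    sio.disconnect()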
The server-side code waits for the data and stitches it back together.
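A hypothetical Flask-SocketIO sketch of the receiving side, using the same assumed event and field names; a real implementation would need to key the buffer per client session:

    import json
    import logging

    from flask import Flask
    from flask_socketio import SocketIO

    logging.basicConfig(level=logging.INFO,
                        format='%(asctime)s [%(levelname)s] %(message)s')

    app = Flask(__name__)
    socketio = SocketIO(app)

    received = {}  # chunk_id -> raw bytes

    @socketio.on('chunked_data')
    def handle_chunk(packet):
        msg = json.loads(packet)
        logging.info('Receiving packet with id %s', msg['chunk_id'])
        received[msg['chunk_id']] = bytes.fromhex(msg['data'])
        if len(received) == msg['total_chunks']:
            data = b''.join(received[i] for i in sorted(received))
            # hand the reassembled payload to the application here

    if __name__ == '__main__':
        socketio.run(app)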
Logs from client side:
Emitting packet with id 0 524351
Emitting packet with id 1 524351
...
Emitting packet with id 265 524352
Emitting packet with id 266 219229
Logs on server side:
2024-05-01 19:09:43,249 [INFO] Receiving packet with id 0
2024-05-01 19:09:43,249 [INFO] Receiving packet with id 1
...
2024-05-01 19:09:43,260 [INFO] Receiving packet with id 38
2024-05-01 19:09:43,260 [INFO] Receiving packet with id 39
I can 'get around' this problem by adding a time.sleep(0.2) in between packet sends on the client side. Ideally I would not delay the sending of packets unnecessarily, but this does indicate that the problem has to do with some kind of internal buffer filling up.
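In terms of the hypothetical client loop sketched earlier (reusing its create_chunks, payload and sio names), the workaround amounts to throttling the emits:

    import time

    for chunk_id, packet in enumerate(create_chunks(payload)):
        sio.emit('chunked_data', packet)
        time.sleep(0.2)  # give the background write task time to drain its queue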
The only thing I found in the flask-socketio documentation regarding message size concerns a single message's size, not the total aggregated size of all messages in the buffer:
max_http_buffer_size – The maximum size of a message when using the polling transport. The default is 1,000,000 bytes.