connect-es/packages/connect/src/protocol/async-iterable.ts, lines 1008 to 1017 in 36af3f2:

This causes a repeated memcpy on the CPU every time a new chunk comes in.

Imagine a long streaming response where a 50 KB response streams in 10 bytes at a time. Toward the end of the stream, the entire accumulated buffer is copied again every time 10 more bytes arrive, so the total copying grows quadratically with the response size.
The implementation here should mimic realloc, which generally grows the underlying buffer exponentially to avoid repeated copies.
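For illustration, a realloc-style accumulator might look like the following (a minimal sketch with a hypothetical `GrowableBuffer` helper, not the actual connect-es code):

```ts
// Sketch: the backing buffer doubles whenever it runs out of space, so
// appending n bytes costs O(n) amortized copies instead of O(n²).
class GrowableBuffer {
  private buf = new Uint8Array(0);
  private len = 0;

  append(chunk: Uint8Array): void {
    if (this.len + chunk.byteLength > this.buf.byteLength) {
      // Grow exponentially, as realloc implementations typically do.
      const capacity = Math.max(
        this.buf.byteLength * 2,
        this.len + chunk.byteLength,
        1024,
      );
      const next = new Uint8Array(capacity);
      next.set(this.buf.subarray(0, this.len));
      this.buf = next;
    }
    this.buf.set(chunk, this.len);
    this.len += chunk.byteLength;
  }

  // A view over the bytes written so far, without copying.
  get bytes(): Uint8Array {
    return this.buf.subarray(0, this.len);
  }
}
```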
One option is the uint8arraylist npm module. I had some success integrating it, but it only supports ESM, whereas Connect needs to support CJS as well.
Thanks for the issue! It's certainly possible to avoid a lot of allocations and copies.
Uint8ArrayList keeps references to the chunks and defers concatenation.
GrowableUint8Array allocates headroom in a buffer.
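For illustration, the deferred-concatenation idea boils down to something like this (a toy sketch, not the actual Uint8ArrayList API):

```ts
// Sketch: appending only stores a reference (O(1), no copy); the single
// memcpy pass happens once, when the caller asks for contiguous bytes.
class ChunkList {
  private chunks: Uint8Array[] = [];
  byteLength = 0;

  append(chunk: Uint8Array): void {
    this.chunks.push(chunk); // no copy here
    this.byteLength += chunk.byteLength;
  }

  // Concatenate on demand: one allocation, one pass of copies.
  toBytes(): Uint8Array {
    const out = new Uint8Array(this.byteLength);
    let offset = 0;
    for (const chunk of this.chunks) {
      out.set(chunk, offset);
      offset += chunk.byteLength;
    }
    return out;
  }
}
```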
The downside of both approaches is that neither implementation is a TypedArray, so they can't be used with views (e.g. Uint8Array, DataView). This complicates using them efficiently for reading envelope sizes.
The proper solution is resizable ArrayBuffers. The API allows growing on demand, makes it easy to allocate for an envelope, and is compatible with views. Unfortunately, it's relatively new, and we have to hold back on using it to support older browsers and Node.js < v20.
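For reference, a minimal sketch of what that could look like (assuming the standard 5-byte envelope head of 1 flag byte plus a 4-byte big-endian length; the 1 MiB maxByteLength is an arbitrary cap chosen for the example):

```ts
// Resizable ArrayBuffers are ES2024, available in Node.js >= 20.
const buffer = new ArrayBuffer(0, { maxByteLength: 1024 * 1024 });
const bytes = new Uint8Array(buffer); // length-tracking view, follows resizes
let length = 0;

function append(chunk: Uint8Array): void {
  buffer.resize(length + chunk.byteLength); // grows in place, no manual copy
  bytes.set(chunk, length);
  length += chunk.byteLength;
}

// Views work as usual, e.g. reading an envelope's size prefix:
if (length >= 5) {
  const size = new DataView(buffer).getUint32(1); // big-endian by default
  console.log("next envelope payload size:", size);
}
```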
Besides transformSplitEnvelope, readAllBytes from the same file could also make use of resizable ArrayBuffers, as could several other places in the code base and in Protobuf-ES. So I'm not sure it's worth putting much work into optimizing transformSplitEnvelope when the more general optimization with native support is in sight.