perf: reduce overhead #93
Conversation
@yusukebe |
Hi @tsctx! Super cool! The five changes you made all seem good. The tests passed, and some real-world examples with this version, including streaming responses, work fine. The benchmark I'm running is super simple, just a "Hello World" application:

```ts
import { serve } from '@hono/node-server'
import { Hono } from 'hono'

const app = new Hono()
app.get('/', (c) => c.text('Hi'))

serve(app)
```

And I access it with Bombardier using this command:

```
bombardier -d 10s --fasthttp http://localhost:3000
```

The results: It really has become faster! In this case, it does not use streaming, so we may have to test another pattern, but currently, this simple pattern is a good reference. I'd like to merge it into "main". Do you have any more changes? |
You may merge them. |
Thanks! Okay, let's ship it. If you have any ideas for improvement, we'd be glad if you created an issue or PR! |
Hi @yusukebe,
I saw your post on X (Twitter) and tried to optimize it.

For performance improvement:

- Use `writeHead` instead of `setHeader`.
- Write the `ReadableStream` directly instead of converting it to a `Readable`. (Do not call `Readable.fromWeb`.)
- Do not use `Readable.pipeline`.
- Use `Response#arrayBuffer` instead of `Response#text` and `Buffer.byteLength`.
- Use `Headers#getSetCookie` since it is already split.

In other words, this change reduces the overhead of handling the body:
Before:

```
incomingMessage => Request => handling => Response => ReadableStream => Readable => outgoingMessage
```

After:

```
incomingMessage => Request => handling => Response => ReadableStream => outgoingMessage
```
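A minimal sketch of how several of these techniques could fit together when bridging a Fetch-API `Response` to a Node `http.ServerResponse`. This is an illustration only, not the actual `@hono/node-server` implementation; the `send` helper and its shape are assumptions:

```typescript
import { createServer, type ServerResponse } from 'node:http'

// Illustrative helper (not library API): write a Fetch-API Response
// to a Node ServerResponse using the techniques listed above.
async function send(res: Response, out: ServerResponse): Promise<void> {
  // Collect headers into one object so a single writeHead call
  // replaces repeated setHeader calls.
  const headers: Record<string, string | string[]> = {}
  for (const [key, value] of res.headers) {
    if (key === 'set-cookie') continue
    headers[key] = value
  }
  // getSetCookie() already returns one entry per cookie; no manual splitting.
  const cookies = res.headers.getSetCookie()
  if (cookies.length > 0) headers['set-cookie'] = cookies

  out.writeHead(res.status, headers)

  if (res.body) {
    // Read the web ReadableStream directly and write each chunk to the
    // outgoing message, skipping Readable.fromWeb and pipeline entirely.
    const reader = res.body.getReader()
    for (;;) {
      const { done, value } = await reader.read()
      if (done) break
      out.write(value)
    }
  }
  out.end()
}
```

This mirrors the shortened pipeline above: the `ReadableStream` feeds the outgoing message without an intermediate `Readable`.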
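For the `Response#arrayBuffer` change, the point is that `arrayBuffer()` yields the raw bytes, whose length is already known, while `text()` decodes the body into a string that then has to be re-measured with `Buffer.byteLength`. A hedged sketch; both helper names are made up for illustration:

```typescript
// Sketch only: two ways to get a Response body's byte length.

// Before: decode the body into a string, then re-encode to count bytes.
const viaText = async (res: Response): Promise<number> => {
  const text = await res.text()
  return Buffer.byteLength(text)
}

// After: read the raw bytes once; the length comes for free.
const viaArrayBuffer = async (res: Response): Promise<number> => {
  const buf = await res.arrayBuffer()
  return buf.byteLength
}
```

Both return the same number, but the second avoids a UTF-8 decode and a second encode.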