I wrote a simple CLI for talking to LLM APIs and I pipe markdown-formatted output into `glow` for rendering. I would like to be able to use streaming responses from LLM APIs, write the result to stdout chunk by chunk, and have `glow` render it incrementally. But it seems `glow` always reads the full input before rendering:

glow/main.go, lines 246 to 247 in 2430b0a (https://github.com/charmbracelet/glow/blob/2430b0a/main.go#L246-L247)

I understand why it is that way; I'm sure it's dramatically simpler. It's certainly what I would have done.

Alternatives

- Don't use `glow` when I have a streaming response (or stream raw markdown to stdout and then re-render with `glow` once it's done)
- Use the `mods` CLI instead, which does support streaming but does not support Claude

Update for anyone interested in this: I'm able to solve this outside of `glow` by accumulating the input and clearing and re-rendering the whole thing on each chunk.

```ts
import $ from "jsr:@david/dax"

let inputBuffer = ""
const decoder = new TextDecoder()

for await (const chunk of Deno.stdin.readable) {
  // { stream: true } keeps multi-byte characters that span chunks intact
  inputBuffer += decoder.decode(chunk, { stream: true })
  // --style auto is there to force it to output styled
  // https://github.com/charmbracelet/glow/blob/2430b0a/main.go#L158
  const output = await $`glow --style auto`.stdinText(inputBuffer).text()
  console.clear()
  console.log(output)
}
```