
Doesn't handle long messages well #15

Open
dmatora opened this issue Sep 11, 2024 · 5 comments
dmatora commented Sep 11, 2024

It does a great job wrapping code blocks, but it's really missing the ability to collapse long messages into smaller "Toggle Full Message" blocks like BoltAI does.
When you paste 64K messages, the dialog quickly becomes laggy to the point of being unusable.

avarayr commented Sep 11, 2024

@alrifat2 can you investigate this? We need some creative solutions to overcome React.js rendering overhead when messages get too big.
We already have virtual scrolling, which means only visible chat bubbles actually get rendered, but if one single message is too big, it will be rendered regardless.

Maybe upgrading to React 19 and enabling the React Compiler? My bottleneck with this was a framer-motion incompatibility, but it seems they have a beta that is compatible.
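For reference, a minimal sketch of what enabling the React Compiler beta might look like; this assumes the `babel-plugin-react-compiler` package and is not taken from this repo's actual build config:

```javascript
// babel.config.js — hypothetical sketch, assuming the React Compiler beta
// (babel-plugin-react-compiler). Per the React Compiler docs, the plugin
// should run first in the plugin list so it sees the original source.
module.exports = {
  plugins: [
    ["babel-plugin-react-compiler", {}], // options object can stay empty
  ],
};
```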

alrifat2 self-assigned this Sep 11, 2024
avarayr commented Sep 11, 2024

@dmatora could you please check out #16 (git checkout feat/react-compiler ... bun install) and let me know if the new React Compiler fixes this issue?
In my testing it handles large messages very well.

dmatora commented Sep 12, 2024

I think I can see an improvement on Sonoma; I don't think I can on Monterey.
Either way, I don't think that solves it.

  1. After the initial render of a message, scrolling through that message is no longer choppy (at least on Sonoma), but when you scroll between messages, rather than inside a single message, the issue is still there.
  2. You still have to do a huge amount of unnecessary scrolling, since these messages are huge. In a long conversation with dozens of huge code blocks, navigation would still be impossible, even without the lag.
  3. Initial render time for these conversations is currently very long, and it should be instant.

I'd say this needs the BoltAI approach: a toggle (expand/collapse) button, plus a button to copy the entire message without expanding it.
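The collapse behavior described above could start from a pure helper like the sketch below: keep the full text in state, but only mount a short preview until the user hits "Toggle Full Message". The function name, `maxChars` default, and line-boundary heuristic are all assumptions for illustration, not code from this project:

```typescript
// Hypothetical helper: split a long message into a short preview plus a
// truncation flag, so the renderer mounts only the preview while collapsed.
export interface TruncatedMessage {
  preview: string;    // what gets rendered while collapsed
  truncated: boolean; // whether a "Toggle Full Message" control is needed
}

export function truncateMessage(text: string, maxChars = 2000): TruncatedMessage {
  if (text.length <= maxChars) {
    return { preview: text, truncated: false };
  }
  // Cut on a line boundary where possible, so a code line isn't split in half.
  const slice = text.slice(0, maxChars);
  const lastNewline = slice.lastIndexOf("\n");
  const preview = lastNewline > 0 ? slice.slice(0, lastNewline) : slice;
  return { preview, truncated: true };
}
```

A copy button can then read the full original string (e.g. via `navigator.clipboard.writeText`) without ever expanding the collapsed view, which addresses point 2 above.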

avarayr commented Sep 12, 2024


For your use case, do you need the entire message to collapse when large, or only code blocks?

dmatora commented Sep 12, 2024

That's a good question.
I'd start with collapsing code blocks, and then give it some time in testing to see whether that's really sufficient. Right now I can't think of a particular case where the large text I pasted wasn't code, but I'm sure such cases exist, e.g. when code isn't recognised.
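Collapsing only code blocks could be sketched as a pass that splits a message on fenced code blocks and flags the long ones as collapsible. This is a naive illustration (the fence regex drops the language tag and assumes well-formed, non-nested fences), and none of these names come from the project itself:

```typescript
// Hypothetical splitter: pull out ``` fenced blocks and flag any block
// longer than `maxLines` as collapsible. The renderer would then mount
// collapsible blocks behind an expand toggle, with a copy button that
// copies `body` without expanding.
type Segment =
  | { kind: "text"; body: string }
  | { kind: "code"; body: string; collapsible: boolean };

export function segmentMessage(markdown: string, maxLines = 20): Segment[] {
  const segments: Segment[] = [];
  // Naive fence split: lines starting with ``` delimit code, so after
  // splitting, even indices are text and odd indices are code.
  const parts = markdown.split(/^```.*$/m);
  parts.forEach((part, i) => {
    if (i % 2 === 0) {
      if (part.length > 0) segments.push({ kind: "text", body: part });
    } else {
      const lines = part.replace(/^\n/, "").split("\n");
      segments.push({
        kind: "code",
        body: part.trim(),
        collapsible: lines.length > maxLines,
      });
    }
  });
  return segments;
}
```

Unrecognised code pasted as plain text (the case mentioned above) would slip past this, which is one argument for eventually combining it with whole-message collapsing as well.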
