Part Bug Report
Part Feature Request
Sometimes a model will start spitting out 0-token replies over and over, and the model needs to be changed.
Sometimes it would be nice to be able to switch from a vision model to a non-vision model mid-conversation.
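One wrinkle with a vision-to-non-vision switch is that the existing history may contain image attachments a text-only model cannot accept. A minimal sketch, assuming OpenAI-style chat messages (the helper name and message shape are assumptions, not Mantella's actual code):

```python
def strip_image_parts(messages):
    """Drop image attachments so a text-only model can accept a
    history built for a vision model (hypothetical helper)."""
    cleaned = []
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, list):
            # Vision messages carry a list of parts; keep only the text parts.
            text = " ".join(
                part.get("text", "")
                for part in content
                if part.get("type") == "text"
            )
            cleaned.append({**msg, "content": text.strip()})
        else:
            # Plain-text messages pass through unchanged.
            cleaned.append(msg)
    return cleaned
```

Something like this could run once at switch time, before the first request to the new model.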
The web UI allows selecting a new model, and the new model is confirmed in the Python window.

A summary is apparently saved (though this summary, "OK", is really terrible, and appears to have been written by Lumimaid), and then the "OK" summary is loaded.
The next step is expected, but not ideal: we return to 'Waiting for player to select NPC'.
Then the NPC is correctly selected (again expected, but not ideal).
Then I get an error message about the override file (one that I do not get on initial load).
Then comes the repeatable bug.
TTS; Connecting to XTTS...
...and this becomes an infinite wait that ultimately forces a restart of Mantella from the MCM.
BUGFIX:
Changing models should not hang Mantella indefinitely when using XTTS.
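One way the hang could be avoided is by bounding the "Connecting to XTTS..." step with a timeout and retry limit instead of waiting forever. A sketch, assuming only that the XTTS server listens on a TCP port (the default port and function name here are assumptions, not Mantella's actual implementation):

```python
import socket

def wait_for_xtts(host="127.0.0.1", port=8020, retries=3, timeout_s=5.0):
    """Check that the XTTS server port is reachable, with a bounded
    number of attempts instead of an infinite wait (sketch; the
    default port is an assumption and should match the config)."""
    for _ in range(retries):
        try:
            # Succeeds as soon as the server accepts a connection.
            with socket.create_connection((host, port), timeout=timeout_s):
                return True
        except OSError:
            continue  # refused or timed out; try again
    raise RuntimeError(
        f"XTTS not reachable on {host}:{port} after {retries} attempts; "
        "aborting the model change instead of hanging"
    )
```

With a bound like this, a failed reconnect after a model change would surface an error (and could fall back to the previous model) rather than freezing until a restart from the MCM.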
FEATURE REQUEST:
Changing models should be seamless when using OpenRouter.
Current Behavior (post-bugfix): Close Conversation, Summarize, Change Model, Load Summary, Continue from summary
Proposed Behavior: Write Conversation JSON, No Summary, Change Model, then send the entire collected context from the previous LLM to the new LLM, including the entirety of the 'current' conversation from the conversation JSON log we just wrote (relying on OpenRouter's "middle-out" culling for context-overrun cases). The conversation is then effectively seamless, though the reply style and tone will change when shifting from, say, Anthropic Vision to Hermes No-Vision, or from Hermes 405B down to Lumimaid 8B...
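The proposed flow maps fairly directly onto OpenRouter's documented request format: the full message history is replayed under the new model, and the `transforms: ["middle-out"]` option asks OpenRouter to compress the middle of the prompt when it exceeds the model's context window. A minimal sketch (the helper names are illustrative; the endpoint and `transforms` field are from OpenRouter's public API):

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_switch_request(new_model, history):
    """Build an OpenRouter request that replays the full conversation
    history under the newly selected model."""
    return {
        "model": new_model,
        "messages": history,  # entire conversation from the JSON log
        "transforms": ["middle-out"],  # cull the middle on context overrun
    }

def send_request(payload, api_key):
    """POST the request to OpenRouter (network sketch, not run here)."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Since OpenRouter handles the overflow culling server-side, the client would not need its own summarization step at all; the switch becomes "same messages, different `model` field".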