
Using Miku bombards my text generation server with requests #74

Open
jobobby04 opened this issue Jan 3, 2024 · 1 comment

Comments

@jobobby04

It just keeps looping requests, never giving my server any time to generate text. I have to kill Miku to stop it.

@underhill-gb commented Oct 9, 2024

I've observed the same behavior with Miku built from the latest source when using Ollama, Oobabooga, and TabbyAPI.

After configuring Miku to work with each of these engines, I found that sending a single chat message (such as "Hello") to a chatbot caused Miku to send a continuous stream of requests to the /v1/completions endpoint.

These POST requests persisted indefinitely, as shown in the Ollama server logs below, yet the chatbot never returned or displayed a proper response:

[GIN] 2024/10/08 - 19:11:17 | 200 | 40.1179ms | 192.168.10.159 | POST "/v1/completions"
[GIN] 2024/10/08 - 19:11:18 | 200 | 41.3535ms | 192.168.10.159 | POST "/v1/completions"
[GIN] 2024/10/08 - 19:11:18 | 200 | 43.2242ms | 192.168.10.159 | POST "/v1/completions"
... Etc.
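
For comparison, a single well-formed call to an OpenAI-compatible /v1/completions endpoint should produce exactly one log line like the above per chat message. A rough Python sketch of such a call (the host, port, and model name here are placeholders for my setup, not anything taken from Miku's code):

import requests

# One non-streaming completion request; an OpenAI-compatible server should
# answer it with a single JSON response containing the generated text.
resp = requests.post(
    "http://localhost:11434/v1/completions",  # Ollama's default port; adjust for Oobabooga/TabbyAPI
    json={
        "model": "llama3",      # placeholder model name
        "prompt": "Hello",
        "max_tokens": 128,
        "stream": False,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])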

Aphrodite was the only engine that worked successfully, suggesting that this issue may lie in Miku's handling of API requests across other OpenAI-compatible engines.
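
To illustrate the kind of client-side pattern that would produce the flood of 200s above, here is a purely hypothetical sketch (the function and field names are invented for illustration and are not Miku's actual code): if the response parser rejects every reply and the client retries unconditionally, each successful response still triggers another POST.

import requests

def generate(prompt, url="http://localhost:11434/v1/completions"):
    # Hypothetical retry loop, for illustration only.
    while True:
        resp = requests.post(
            url,
            json={"model": "llama3", "prompt": prompt, "max_tokens": 128},
            timeout=60,
        )
        text = resp.json().get("choices", [{}])[0].get("text", "")
        if text:
            return text
        # If this check misjudges a valid reply (e.g. because one engine's
        # response shape differs slightly from what the client expects),
        # the loop re-posts forever, matching the server log above.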
