Is it compatible with SakuraLLM? #157
I have a branch that adds support for llama.cpp via ollama (https://github.com/machinewrapped/gpt-subtrans/tree/ollama-support), but it's currently on hold because the ollama server stops responding after the first request and I need assistance to understand why (ollama/ollama-python#109). Customising prompts for the model is possible, but it would require modifying the code: the ollama provider would need to return a custom prompt, and if the model returns the translation in a specific format rather than following the instructions, the client would also need to provide a custom parser. TL;DR: yes, but not without writing code, as it stands.
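For context, a minimal sketch of driving a locally served model through the ollama Python client, as the on-hold branch does; the model tag and prompt here are placeholders, not the branch's actual code:

```python
import ollama  # pip install ollama

# Send one chat request to a locally running ollama server.
# "sakura-13b" is a placeholder model tag for illustration only.
response = ollama.chat(
    model="sakura-13b",
    messages=[{"role": "user", "content": "将下面的日文文本翻译成中文：こんにちは"}],
)
print(response["message"]["content"])
```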
The main branch can communicate with llama.cpp via the OpenAI API and appears to work. The main problem is the prompt; please formally support llama.cpp (not llama-cpp-python) if possible.
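As an illustration of that route, a sketch of pointing the openai client at llama.cpp's built-in HTTP server, which exposes an OpenAI-compatible endpoint; the port, model name, and server invocation are assumptions, not project defaults:

```python
from openai import OpenAI  # pip install openai

# Start llama.cpp's HTTP server first, e.g.:
#   ./server -m sakura-13b.gguf --port 8080
# It exposes OpenAI-compatible endpoints under /v1; the api_key is
# ignored by the server but required by the client.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    model="sakura-13b",  # placeholder; the server uses whatever model it loaded
    messages=[{"role": "user", "content": "将下面的日文文本翻译成中文：こんにちは"}],
)
print(response.choices[0].message.content)
```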
SakuraLLM is a specialized Japanese-to-Chinese translation LLM.
It can be deployed locally on Windows via llama.cpp.
However, it has special prompt requirements, so the existing gpt-subtrans can hardly be used with it.
SakuraLLM repository: https://github.com/SakuraLLM/Sakura-13B-Galgame
Prompt construction:
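A minimal sketch of the ChatML-style prompt construction described in the Sakura-13B-Galgame README; the exact system-prompt wording varies between model versions, so treat the strings below as an approximation:

```python
# Sketch of SakuraLLM's expected prompt format (ChatML-style), based on
# the Sakura-13B-Galgame README. The system prompt text is approximate
# and differs between model versions.
def build_sakura_prompt(japanese_text: str) -> str:
    system = (
        "你是一个轻小说翻译模型，可以流畅通顺地以日本轻小说的风格"
        "将日文翻译成简体中文，并联系上下文正确使用人称代词。"
    )
    query = "将下面的日文文本翻译成中文：" + japanese_text
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{query}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_sakura_prompt("お嬢様、お茶をどうぞ。"))
```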