
[Error] Mistral-7B-Instruct-v0.2 Model is creating garbage responses to prompts #114

Open
kalyan-nakka opened this issue Apr 18, 2024 · 0 comments


I'm building the MLC Chat Android app using the prebuilt models provided in this repository, following the instructions at https://llm.mlc.ai/docs/prebuilt_models.html#overview. All of the LLMs work and respond as expected, except for the Mistral-7B-Instruct-v0.2 model, which generates garbage responses, as shown in the screenshot below.

This happens for every kind of prompt sent to the model.

Please help me resolve this error.

[Screenshot attached: Screenshot_2024-04-18-16-09-21-52_39ee8d39e9d9fa2b63462cd99e6b41fe — garbled model output in the chat screen]
