Using candle-vllm as a crate in Rust? #62
Hi @gkvoelkl! Candle-vllm is a great option: you can see an example of how to build such a chatbot in pure Rust here: openai_server.rs. I would also recommend that you check out mistral.rs, as it not only has PagedAttention but also Metal and CPU support, vision models, adapter models, quantization, and a plethora of other features, including a crate meant for use inside an application (docs, examples).
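For reference, here is a minimal sketch of what using mistral.rs as a crate can look like, adapted from its published examples (it assumes tokio and anyhow as dependencies); the builder methods, quantization level, and model ID may differ from the current API, so treat it as illustrative rather than canonical:

```rust
use anyhow::Result;
use mistralrs::{IsqType, TextMessageRole, TextMessages, TextModelBuilder};

#[tokio::main]
async fn main() -> Result<()> {
    // Load a chat model by Hugging Face model ID with in-situ quantization.
    // The model ID and quantization level here are placeholders.
    let model = TextModelBuilder::new("meta-llama/Meta-Llama-3-8B-Instruct")
        .with_isq(IsqType::Q8_0)
        .with_logging()
        .build()
        .await?;

    // Build an OpenAI-style message list and send a chat request.
    let messages = TextMessages::new()
        .add_message(
            TextMessageRole::System,
            "You are a helpful NPC in a fantasy game.",
        )
        .add_message(TextMessageRole::User, "Greetings, traveler! What brings you here?");

    let response = model.send_chat_request(messages).await?;
    println!("{}", response.choices[0].message.content.as_ref().unwrap());
    Ok(())
}
```

The message/response shapes deliberately mirror the OpenAI chat API, which is what makes this style of crate convenient to embed directly in an application.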
Hello @EricLBuehler! Amazing work, btw!
Hello Eric, I've tried various crates, but I haven't found the right one yet. I want to use an LLM to drive the non-player characters in a game. I work with Rust and Bevy, so the easiest way would be to integrate Llama 3 directly into my game: I need the model and a Rust crate that can work with it. The crate should be as simple as possible, like a chat client, like the OpenAI API. Which crate would you recommend? Thanks.

Kind regards, Gerhard
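One way to get that "chat client, like the OpenAI API" feel today is to run candle-vllm's OpenAI-compatible server and call it over HTTP from the game. A minimal sketch using the reqwest and serde_json crates; the port, model name, and prompts are assumptions to adapt to your own server configuration:

```rust
use serde_json::{json, Value};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // candle-vllm exposes an OpenAI-compatible HTTP API once its server is
    // running; the port and model name below are assumptions.
    let body = json!({
        "model": "llama3",
        "messages": [
            {"role": "system", "content": "You are a villager in a fantasy game."},
            {"role": "user", "content": "Good morning! Any news in the village?"}
        ]
    });

    let client = reqwest::Client::new();
    let resp: Value = client
        .post("http://localhost:2000/v1/chat/completions")
        .json(&body) // requires reqwest's "json" feature
        .send()
        .await?
        .json()
        .await?;

    // Pull the assistant's reply out of the OpenAI-style response shape.
    println!("{}", resp["choices"][0]["message"]["content"].as_str().unwrap_or(""));
    Ok(())
}
```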
Hi Eric, great Rust program.
I am looking for a crate so I can use a chatbot function within my Rust program. I tried to do that with Candle; I hope it will be better documented in the future.
Will it be possible to call a function of candle-vllm without starting an explicit server, so that I can use candle-vllm within my program?
Thanks
Best regards, Gerhard
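Until an in-process API is exposed, one pragmatic workaround is to let the game spawn and manage the server itself, so nothing has to be started by hand. A sketch using only std::process; the binary path and CLI flags are assumptions to adjust to however you build and run candle-vllm locally:

```rust
use std::process::{Child, Command};

/// Spawn the candle-vllm server as a child process so the game owns its
/// lifetime. The binary path and flags here are assumptions.
fn spawn_llm_server() -> std::io::Result<Child> {
    Command::new("./target/release/candle-vllm")
        .args(["--port", "2000"])
        .spawn()
}

fn main() -> std::io::Result<()> {
    let mut server = spawn_llm_server()?;
    // ... run the game loop, talking to http://localhost:2000 over HTTP
    //     as in the chat-completions sketch above ...
    server.kill()?; // stop the server when the game shuts down
    Ok(())
}
```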