Use local model, besides Huggingface API #47

Open

GhostBP112 opened this issue Oct 2, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

@GhostBP112

Is it planned, or possible, to use a local LLM for processing?
Running locally could significantly increase generation speed (given suitable hardware) and would also make offline use possible.

@barun-saha
Owner

Hi,

Thanks for your interest in SlideDeck AI.

There is no concrete plan for this as such. However, supporting local LLMs has been on my mind lately.

Regarding speed: yes, token generation with Mistral Nemo does appear to take longer. I have been contemplating switching back to Mistral, or at least offering it as an alternative.

Let me create some tasks in this general direction.

barun-saha added the enhancement (New feature or request) label on Oct 2, 2024
@barun-saha
Owner

Hi @GhostBP112 ,

Just added support for offline LLMs via Ollama. An environment variable needs to be set to access this mode. Detailed steps are available in the project description.
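
For anyone curious, here is a minimal sketch of what querying a locally served model through Ollama's Python client can look like. This is a generic illustration, not SlideDeck AI's actual integration; the model name is just an example, and it assumes the Ollama server is already running:

```python
import ollama  # pip install ollama

# "mistral" is an example model name; pull it first with `ollama pull mistral`.
response = ollama.chat(
    model="mistral",
    messages=[
        {
            "role": "user",
            "content": "Outline a five-slide deck introducing large language models.",
        }
    ],
)

# Print the locally generated completion.
print(response["message"]["content"])
```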

Let me know if you get a chance to try it out and whether it works for you.
