Support local model with inference-engine mlx #475
Conversation
I like the idea here, but I think that rather than relying on a folder structure, this should use config files or command-line arguments to specify paths to model implementations and populate things like the model card list at runtime. I'm considering refactoring the inference engine to take model implementations by default and use the shard downloader as one of a few possible routes to get weights. Automatically instantiating and parsing a default directory structure for this purpose creates a lot of potential for issues down the line.
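For what it's worth, a minimal sketch of that config-file approach could look like the following. Everything here is illustrative: the config schema, `MODEL_CARDS`, and `register_local_models` are hypothetical names, not exo's actual API.

```python
import argparse
import json
from pathlib import Path

# Hypothetical runtime registry; exo's real model card list may differ.
MODEL_CARDS: dict[str, dict] = {}

def register_local_models(config_path: Path) -> None:
    """Read a user-supplied JSON config and register each local model at runtime."""
    config = json.loads(config_path.read_text())
    for name, entry in config.get("local_models", {}).items():
        model_dir = Path(entry["path"]).expanduser()
        if not model_dir.is_dir():
            raise FileNotFoundError(f"model path does not exist: {model_dir}")
        MODEL_CARDS[name] = {
            "path": str(model_dir),
            "engine": entry.get("engine", "mlx"),
        }

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--model-config", type=Path,
                        help="JSON file mapping model names to local paths")
    args = parser.parse_args()
    if args.model_config:
        register_local_models(args.model_config)
    print(MODEL_CARDS)
```

A matching config could be as simple as `{"local_models": {"my-llama": {"path": "~/models/llama-3-8b", "engine": "mlx"}}}`, which keeps paths out of any fixed directory layout and makes populating the model card list a pure runtime concern.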
Force-pushed from f6fc665 to 438eae4
@blindcrone
… and automatic download steps.
New Updates:
Note: This feature only supports the MLX inference engine.
Enhancement: support local and custom models #165
This is a modified version of my existing code, so the code quality may not be ideal. It supports running local-path models with MLX for both the CLI and the ChatAPI. (For now, you still need to manually place the model in the ~/.cache/exo directory before use; I'm working on automating this, but my unfamiliarity with gRPC requires further research.)
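As a rough usage sketch (not this PR's actual code), loading a model that has been copied under ~/.cache/exo could look like this with the `mlx_lm` helpers; the directory name is hypothetical:

```python
from pathlib import Path

from mlx_lm import generate, load  # MLX LLM helpers from the mlx-lm package

# This PR expects models to be placed manually under ~/.cache/exo for now.
MODEL_NAME = "my-local-model"  # hypothetical directory name
model_dir = Path("~/.cache/exo").expanduser() / MODEL_NAME

# mlx_lm.load accepts a local path as well as a Hugging Face repo id.
model, tokenizer = load(str(model_dir))
print(generate(model, tokenizer, prompt="Hello", max_tokens=32))
```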
I would greatly appreciate any suggestions on how to improve or optimize the code.
Changes: