Pinned
- LLaMA-Factory (Public, forked from hiyouga/LLaMA-Factory)
  Unify Efficient Fine-Tuning of 100+ LLMs
  Python
- unsloth (Public, forked from unslothai/unsloth)
  Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory
  Python
- server (Public, forked from triton-inference-server/server)
  The Triton Inference Server provides an optimized cloud and edge inferencing solution.
  Python
- NeMo (Public, forked from NVIDIA/NeMo)
  A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
  Python
- servers (Public, forked from modelcontextprotocol/servers)
  Model Context Protocol Servers
  JavaScript
- continue (Public, forked from continuedev/continue)
  ⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains
  TypeScript