
Releases: meta-llama/llama-stack

v0.1.0

24 Jan 17:47

We are excited to announce a stable API release of Llama Stack, which enables developers to build RAG applications and agents using tools and safety shields, monitor those agents with telemetry, and evaluate them with scoring functions.

Context

GenAI application developers need more than just an LLM - they need to integrate tools, connect with their data sources, establish guardrails, and ground the LLM responses effectively. Currently, developers must piece together various tools and APIs, complicating the development lifecycle and increasing costs. The result is that developers are spending more time on these integrations rather than focusing on the application logic itself. The bespoke coupling of components also makes it challenging to adopt state-of-the-art solutions in the rapidly evolving GenAI space. This is particularly difficult for open models like Llama, as best practices are not widely established in the open.

Llama Stack was created to provide developers with a comprehensive and coherent interface that simplifies AI application development and codifies best practices across the Llama ecosystem. Since our launch in September 2024, we have seen a huge uptick in interest in Llama Stack APIs from both AI developers and partners building AI services with Llama models. Partners like Nvidia, Fireworks, and Ollama have collaborated with us to develop implementations across various APIs, including inference, memory, and safety.

With Llama Stack, you can easily build a RAG agent that can also search the web, do complex math, and call custom tools. You can use telemetry to inspect those traces and convert telemetry into eval datasets. And with Llama Stack's plugin architecture and prepackaged distributions, you can run your agent anywhere: in the cloud with our partners, in your own environment using virtualenv, conda, or Docker, locally with Ollama, or even on mobile devices with our SDKs. Llama Stack offers unprecedented flexibility while also simplifying the developer experience.
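
As a concrete orientation, here is a minimal sketch of such an agent using the llama-stack-client Python SDK. The port, model ID, and toolgroup name are assumptions for illustration, and exact class and parameter names may differ across SDK versions:

```python
# Minimal agent sketch with the llama-stack-client Python SDK.
# Assumes a Llama Stack server is already running locally (e.g. via
# `llama stack run`); the identifiers below are illustrative.
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.lib.agents.event_logger import EventLogger
from llama_stack_client.types.agent_create_params import AgentConfig

client = LlamaStackClient(base_url="http://localhost:8321")  # port is an assumption

agent_config = AgentConfig(
    model="meta-llama/Llama-3.3-70B-Instruct",   # any model your distribution serves
    instructions="You are a helpful assistant.",
    toolgroups=["builtin::websearch"],           # assumed registered toolgroup
    enable_session_persistence=False,
)
agent = Agent(client, agent_config)
session_id = agent.create_session("demo-session")

# Each turn may call registered tools; events stream back for inspection,
# and the same traces can later be exported through the telemetry API.
response = agent.create_turn(
    messages=[{"role": "user", "content": "Search the web for Llama Stack."}],
    session_id=session_id,
)
for log in EventLogger().log(response):
    log.print()
```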

Release

After iterating on the APIs for the last 3 months, today we're launching a stable release (V1) of the Llama Stack APIs and the corresponding llama-stack server and client packages (v0.1.0). We now have automated tests that verify every provider implementation, so developers can easily and reliably select distributions or providers based on their specific requirements.

There are example standalone apps in llama-stack-apps.

Key Features of this release

  • Unified API Layer

    • Inference: Run LLMs
    • RAG: Store and retrieve knowledge for RAG
    • Agents: Build multi-step agentic workflows
    • Tools: Register tools that can be called by the agent
    • Safety: Apply content filtering and safety policies
    • Evaluation: Test model and agent quality
    • Telemetry: Collect and analyze usage data and complex agentic traces
    • Post Training (coming soon): Fine-tune models for specific use cases
  • Rich Provider Ecosystem

    • Local Development: Meta's Reference, Ollama
    • Cloud: Fireworks, Together, Nvidia, AWS Bedrock, Groq, Cerebras
    • On-premises: Nvidia NIM, vLLM, TGI, Dell-TGI
    • On-device: iOS and Android support
  • Built for Production

    • Pre-packaged distributions for common deployment scenarios
    • Backwards compatibility across model versions
    • Comprehensive evaluation capabilities
    • Full observability and monitoring
  • Multiple developer interfaces

    • CLI: Command line interface
    • Python SDK (see the sketch after this list)
    • Swift iOS SDK
    • Kotlin Android SDK
  • Sample Llama Stack applications

    • Python
    • iOS
    • Android
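
For the Python SDK specifically, a basic inference call looks roughly like the sketch below. The server URL and model ID are assumptions, and method names may differ between client versions:

```python
# Plain chat completion through the Python SDK; a hedged sketch assuming
# a local server and a model already registered with the distribution.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed local server

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",  # assumed to be served locally
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "What does Llama Stack provide?"},
    ],
)
print(response.completion_message.content)
```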

What's Changed

  • [4/n][torchtune integration] support lazy load model during inference by @SLR722 in #620
  • remove unused telemetry related code for console by @dineshyv in #659
  • Fix Meta reference GPU implementation by @ashwinb in #663
  • Fixed imports for inference by @cdgamarose-nv in #661
  • fix trace starting in library client by @dineshyv in #655
  • Add Llama 70B 3.3 to fireworks by @aidando73 in #654
  • Tools API with brave and MCP providers by @dineshyv in #639
  • [torchtune integration] post training + eval by @SLR722 in #670
  • Fix post training apis broken by torchtune release by @SLR722 in #674
  • Add missing venv option in --image-type by @terrytangyuan in #677
  • Removed unnecessary CONDA_PREFIX env var in installation guide by @terrytangyuan in #683
  • Add 3.3 70B to Ollama inference provider by @aidando73 in #681
  • docs: update evals_reference/index.md by @eltociear in #675
  • [remove import ][1/n] clean up import & in apis/ by @yanxi0830 in #689
  • [bugfix] fix broken vision inference, change serialization for bytes by @yanxi0830 in #693
  • Minor Quick Start documentation updates. by @derekslager in #692
  • [bugfix] fix meta-reference agents w/ safety multiple model loading pytest by @yanxi0830 in #694
  • [bugfix] fix prompt_adapter interleaved_content_convert_to_raw by @yanxi0830 in #696
  • Add missing "inline::" prefix for providers in building_distro.md by @terrytangyuan in #702
  • Fix failing flake8 E226 check by @terrytangyuan in #701
  • Add missing newlines before printing the Dockerfile content by @terrytangyuan in #700
  • Add JSON structured outputs to Ollama Provider by @aidando73 in #680
  • [#407] Agents: Avoid calling tools that haven't been explicitly enabled by @aidando73 in #637
  • Made changes to readme and pinning to llamastack v0.0.61 by @heyjustinai in #624
  • [rag evals][1/n] refactor base scoring fn & data schema check by @yanxi0830 in #664
  • [Post Training] Fix missing import by @SLR722 in #705
  • Import from the right path by @SLR722 in #708
  • [#432] Add Groq Provider - chat completions by @aidando73 in #609
  • Change post training run.yaml inference config by @SLR722 in #710
  • [Post training] make validation steps configurable by @SLR722 in #715
  • Fix incorrect entrypoint for broken llama stack run by @terrytangyuan in #706
  • Fix assert message and call to completion_request_to_prompt in remote:vllm by @terrytangyuan in #709
  • Fix Groq invalid self.config reference by @aidando73 in #719
  • support llama3.1 8B instruct in post training by @SLR722 in #698
  • remove default logger handlers when using libcli with notebook by @dineshyv in #718
  • move DataSchemaValidatorMixin into standalone utils by @yanxi0830 in #720
  • add 3.3 to together inference provider by @yanxi0830 in #729
  • Update CODEOWNERS - add sixianyi0721 as the owner by @sixianyi0721 in #731
  • fix links for distro by @yanxi0830 in #733
  • add --version to llama stack CLI & /version endpoint by @yanxi0830 in #732
  • agents to use tools api by @dineshyv in #673
  • Add X-LlamaStack-Client-Version, rename ProviderData -> Provider-Data by @ashwinb in #735
  • Check version incompatibility by @ashwinb in #738
  • Add persistence for localfs datasets by @VladOS95-cyber in #557
  • Fixed typo in default VLLM_URL in remote-vllm.md by @terrytangyuan in #723
  • Consolidating Memory tests under client-sdk by @vladimirivic in #703
  • Expose LLAMASTACK_PORT in cli.stack.run by @terrytangyuan in #722
  • remove conflicting default for tool prompt format in chat completion by @dineshyv in #742
  • rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars by @raghotham in #744
  • Add inline vLLM inference provider to regression tests and fix regressions by @frreiss in #662
  • [CICD] github workflow to push nightly package to testpypi by @yanxi0830 in #734
  • Replaced zrangebylex method in the range method by @che...

v0.1.0rc12

22 Jan 22:24
Pre-release

What's Changed


v0.0.63

18 Dec 07:17

A small but important bug-fix release to update the URL datatype for the client SDKs. The issue especially affected multimodal agentic turns.

Full Changelog: v0.0.62...v0.0.63

v0.0.62

18 Dec 02:39

What's Changed

A few important updates, some of which are backwards incompatible. You must update your run.yaml files when upgrading. As always, look to templates/<distro>/run.yaml for reference; a sketch of what a provider entry looks like follows the list below.

  • Make embedding generation go through inference by @dineshyv in #606
  • [/scoring] add ability to define aggregation functions for scoring functions & refactors by @yanxi0830 in #597
  • Update the "InterleavedTextMedia" type by @ashwinb in #635
  • [NEW!] Experimental post-training APIs! #540, #593, etc.
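
For orientation, a provider entry in run.yaml looks roughly like the following. This is an illustrative sketch, not a verbatim template; the provider ID, type, and config fields are assumptions, and templates/<distro>/run.yaml remains the source of truth:

```yaml
# Illustrative run.yaml fragment (not a verbatim template) showing one
# remote inference provider; check your distribution's template for the
# exact schema after upgrading.
providers:
  inference:
    - provider_id: ollama
      provider_type: remote::ollama
      config:
        url: ${env.OLLAMA_URL}  # env placeholders are supported in run.yaml
```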

A variety of other fixes and enhancements also landed.

New Contributors

Full Changelog: v0.0.61...v0.0.62

v0.0.61

10 Dec 20:50

What's Changed

New Contributors

Full Changelog: v0.0.55...v0.0.61

v0.0.55 release

23 Nov 17:14

What's Changed

  • Fix TGI inference adapter
  • Fix llama stack build in 0.0.54 by @dltn in #505
  • Several documentation related improvements
  • Fix opentelemetry adapter by @dineshyv in #510
  • Update Ollama supported llama model list by @hickeyma in #483

Full Changelog: v0.0.54...v0.0.55

Llama Stack 0.0.54 Release

22 Nov 00:36

What's Changed

  • Bug-fix release on top of 0.0.53
  • Don't depend on templates.py when print llama stack build messages by @ashwinb in #496
  • Restructure docs by @dineshyv in #494
  • Since we are pushing for HF repos, we should accept them in inference configs by @ashwinb in #497
  • Fix fp8 quantization script. by @liyunlu0618 in #500
  • use logging instead of prints by @dineshyv in #499

New Contributors

Full Changelog: v0.0.53...v0.0.54

Llama Stack 0.0.53 Release

20 Nov 22:18

🚀 Initial Release Notes for Llama Stack!

Added

  • Resource-oriented design for models, shields, memory banks, datasets and eval tasks
  • Persistence for registered objects with distribution
  • Ability to persist memory banks created with FAISS
  • PostgreSQL KVStore implementation
  • Environment variable placeholder support in run.yaml files
  • Comprehensive Zero-to-Hero notebooks and quickstart guides
  • Support for quantized models in Ollama
  • Vision model support for Together, Fireworks, Meta-Reference, Ollama, and vLLM
  • Bedrock distribution with safety shields support
  • Evals API with task registration and scoring functions
  • MMLU and SimpleQA benchmark scoring functions
  • Huggingface dataset provider integration for benchmarks
  • Support for custom dataset registration from local paths
  • Benchmark evaluation CLI tools with visualization tables
  • RAG evaluation scoring functions and metrics
  • Local persistence for datasets and eval tasks

Changed

  • Split safety into distinct providers (llama-guard, prompt-guard, code-scanner)
  • Changed provider naming convention (impls → inline, adapters → remote)
  • Updated API signatures for dataset and eval task registration
  • Restructured folder organization for providers
  • Enhanced Docker build configuration
  • Added version prefixing for REST API routes
  • Enhanced evaluation task registration workflow
  • Improved benchmark evaluation output formatting
  • Restructured evals folder organization for better modularity

Removed

  • llama stack configure command

What's Changed
