diff --git a/ai/builders/get-started.mdx b/ai/builders/get-started.mdx
index 92e13c2b..0fef028d 100644
--- a/ai/builders/get-started.mdx
+++ b/ai/builders/get-started.mdx
@@ -2,7 +2,7 @@
 title: Building on the AI Subnet
 ---
 
-The AI Subnet, currently in its **Alpha** stage, is fully operational and
+The AI Subnet, currently in its **Beta** stage, is fully operational and
 already hosts several quality applications. At this stage, efforts are
 concentrated on **network optimization** and **user experience enhancement**.
 While official SDKs are still under development, developers can interact directly with the AI
diff --git a/ai/gateways/start-gateway.mdx b/ai/gateways/start-gateway.mdx
index 8030fae7..689f2ef0 100644
--- a/ai/gateways/start-gateway.mdx
+++ b/ai/gateways/start-gateway.mdx
@@ -4,7 +4,7 @@ title: Start your AI Gateway
 
 The AI Subnet is not yet integrated into the main
 [go-livepeer](https://github.com/livepeer/go-livepeer) software due to its
-**Alpha** status. To enable AI inference capabilities on your Gateway node,
+**Beta** status. To enable AI inference capabilities on your Gateway node,
 please use the `ai-video` branch of
 [go-livepeer](https://github.com/livepeer/go-livepeer/tree/ai-video).
 This branch contains the necessary software for the AI Gateway node. Currently, there
diff --git a/ai/introduction.mdx b/ai/introduction.mdx
index 488dbbfb..069e1749 100644
--- a/ai/introduction.mdx
+++ b/ai/introduction.mdx
@@ -5,7 +5,7 @@ iconType: regular
 ---
 
 <Warning>
-  The Livepeer _AI Video Subnet_ is in its Alpha phase. Bugs or issues may be
+  The Livepeer _AI Video Subnet_ is in its **Beta** phase. Bugs or issues may be
   encountered. Contributions to improvement are appreciated - please report
   problems via the
   [issue tracker](https://github.com/livepeer/go-livepeer/issues/new/choose). Feedback
@@ -202,7 +202,7 @@ place to expand support to include other model types in future updates.
 
 ### Current Limitations and Future Directions
 
-- **Alpha Phase**: The AI Subnet is currently in its Alpha phase, and users may
+- **Beta Phase**: The AI Subnet is currently in its **Beta** phase, and users may
   encounter bugs or issues during this early stage. It is not yet meant to be
   used with high demand production workloads.
 - **Supports Limited Set of Open-source Models**: The AI Subnet currently
diff --git a/ai/orchestrators/models-config.mdx b/ai/orchestrators/models-config.mdx
index f7b08634..27c5a871 100644
--- a/ai/orchestrators/models-config.mdx
+++ b/ai/orchestrators/models-config.mdx
@@ -71,7 +71,7 @@ currently **recommended** models and their respective prices.
 ### Key Configuration Fields
 
 <Note>
-  During the **Alpha** phase, only one "warm" model per GPU is supported.
+  During the **Beta** phase, only one "warm" model per GPU is supported.
 </Note>
 
diff --git a/ai/orchestrators/models-download.mdx b/ai/orchestrators/models-download.mdx
index 3955db13..82ff8578 100644
--- a/ai/orchestrators/models-download.mdx
+++ b/ai/orchestrators/models-download.mdx
@@ -34,10 +34,10 @@ downloading the currently **recommended** models for the AI Subnet.
 
   ```bash
   cd ~/.lpData
-  curl -s https://raw.githubusercontent.com/livepeer/ai-worker/main/runner/dl_checkpoints.sh | bash -s -- --alpha
+  curl -s https://raw.githubusercontent.com/livepeer/ai-worker/main/runner/dl_checkpoints.sh | bash -s -- --beta
   ```
 
-  This command downloads the recommended models for the AI Subnet and stores them in your machine's `~/.lpData/models` directory. To obtain a complete set of AI Subnet models, omit the `--alpha` flag. This requires additional disk space.
+  This command downloads the recommended models for the AI Subnet and stores them in your machine's `~/.lpData/models` directory. To obtain a complete set of AI Subnet models, omit the `--beta` flag. This requires additional disk space.
 
diff --git a/ai/orchestrators/onchain.mdx b/ai/orchestrators/onchain.mdx
index 9302c940..1df5f8dc 100644
--- a/ai/orchestrators/onchain.mdx
+++ b/ai/orchestrators/onchain.mdx
@@ -16,7 +16,7 @@ to the AI Subnet and earn fees for processing AI inference jobs.
 
 ## Ensure you can Redeem Tickets
 
-As the AI Subnet is in its Alpha phase,
+As the AI Subnet is in its **Beta** phase,
 [its software](https://github.com/livepeer/go-livepeer/tree/ai-video) isn't
 integrated with the Mainnet Transcoding Network
 [software stack](https://github.com/livepeer/go-livepeer) yet. This means that
@@ -143,7 +143,7 @@ contract on the [Arbitrum Mainnet](https://arbitrum.io).
 
 <Note>
-  Currently, setting your AI service URI using the Livepeer CLI is not supported during the **Alpha** phase of the AI Subnet. This feature is planned for inclusion in future releases.
+  Currently, setting your AI service URI using the Livepeer CLI is not supported during the **Beta** phase of the AI Subnet. This feature is planned for inclusion in future releases.
 </Note>
 
diff --git a/ai/orchestrators/start-orchestrator.mdx b/ai/orchestrators/start-orchestrator.mdx
index 166a860f..f769fb77 100644
--- a/ai/orchestrators/start-orchestrator.mdx
+++ b/ai/orchestrators/start-orchestrator.mdx
@@ -3,14 +3,14 @@ title: Start your AI Orchestrator
 ---
 
 <Warning>
-  The AI Subnet is currently in its **Alpha** stage and is undergoing active
+  The AI Subnet is currently in its **Beta** stage and is undergoing active
   development. Running it on the same machine as your main Orchestrator or
   Gateway node may cause stability issues. Please proceed with caution.
 </Warning>
 
 The AI Subnet is not yet integrated into the main
 [go-livepeer](https://github.com/livepeer/go-livepeer) software due to its
-**Alpha** status. To equip your Orchestrator node with AI inference
+**Beta** status. To equip your Orchestrator node with AI inference
 capabilities, please use the `ai-video` branch of
 [go-livepeer](https://github.com/livepeer/go-livepeer/tree/ai-video).
 This branch contains the necessary software for the AI Orchestrator. Currently, there
diff --git a/ai/pipelines/overview.mdx b/ai/pipelines/overview.mdx
index 3a78c1ff..cec0191d 100644
--- a/ai/pipelines/overview.mdx
+++ b/ai/pipelines/overview.mdx
@@ -16,7 +16,7 @@ section.
 
 ### Warm Models
 
-During the **Alpha** phase of the AI Subnet, Orchestrators are encouraged to
+During the **Beta** phase of the AI Subnet, Orchestrators are encouraged to
 keep at least **one model** per pipeline active on their GPUs ("warm models").
 This approach ensures quicker response times for **early builders** on the
 Subnet. We're optimizing GPU model loading/unloading to relax this requirement.
@@ -34,7 +34,7 @@ The current warm models for each pipeline are listed on their respective pages.
 Orchestrators can theoretically load **any**
 [diffusion model](https://huggingface.co/docs/diffusers/en/index) from
 [Hugging Face](https://huggingface.co/models) on-demand, optimizing GPU
-resources by loading models only when needed. However, during the **Alpha**
+resources by loading models only when needed. However, during the **Beta**
 phase, Orchestrators need to pre-download a model.
 
diff --git a/mint.json b/mint.json
index b814ca97..5c8b9207 100644
--- a/mint.json
+++ b/mint.json
@@ -357,7 +357,7 @@
       "url": "sdks"
     },
     {
-      "name": "AI Video (Alpha)",
+      "name": "AI Video (Beta)",
       "icon": "microchip-ai",
       "iconType": "regular",
       "url": "ai"