---
title: ComfyUI Extension
description: Installing and configuring the ComfyUI extension
---

# ComfyUI Extension

ComfyUI is a powerful image generation and manipulation tool that can create images from text, images from other images, and more. It is a key component of AI Server, providing a wide range of image processing capabilities.

To make the ComfyUI API more accessible, AI Server supports ComfyUI as a provider type. This lets you integrate ComfyUI into your AI Server instance as a remote self-hosted agent capable of processing image requests and other modalities.

## Installing the ComfyUI Extension

To simplify installation, [we have put together a Docker image and a Docker Compose file](https://github.com/serviceStack/agent-comfy) that bundles ComfyUI, the ComfyUI extension, and all necessary dependencies.

### Running the ComfyUI Extension

To run the ComfyUI extension, follow these steps:

1. **Clone the Repository**: Clone the ComfyUI extension repository from GitHub.

```sh
git clone https://github.com/ServiceStack/agent-comfy.git
```

2. **Create your .env File**: Copy the example.env file to `.env`.

```sh
cp example.env .env
```

Then edit the `.env` file with your desired settings:

```sh
DEFAULT_MODELS=sdxl-lightning,flux-schnell
API_KEY=your_agent_api_key
HF_TOKEN=your_hf_token
CIVITAI_TOKEN=your_civitai_api_key
```

3. **Run Docker Compose**: Start the ComfyUI extension with Docker Compose.

```sh
docker compose up
```

### .env Configuration

The `.env` file configures the ComfyUI extension during the initial setup, and is the easiest way to get started.

The keys available in the `.env` file are:

- **DEFAULT_MODELS**: Comma-separated list of models to load on startup. These models and their related dependencies are downloaded automatically. The full list of options can be found on your AI Server at `/lib/data/ai-models.json`.
- **API_KEY**: The API key your AI Server uses to authenticate with the ComfyUI agent. If not provided, no authentication is required to access your ComfyUI instance.
- **HF_TOKEN**: The Hugging Face token used to authenticate with the Hugging Face API when downloading models. If not provided, models requiring Hugging Face authentication, such as those with user agreements, will not be downloaded.
- **CIVITAI_TOKEN**: The Civitai API key used to authenticate with the Civitai API when downloading models. If not provided, models requiring Civitai authentication, such as those with user agreements, will not be downloaded.

> Models requiring authentication to download are also flagged in the `/lib/data/ai-models.json` file.
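
The four keys above can also be written to a fresh `.env` in one step; a sketch with placeholder values you would replace with your own:

```sh
# Write a minimal .env for the agent; all values below are placeholders
cat > .env <<'EOF'
DEFAULT_MODELS=sdxl-lightning,flux-schnell
API_KEY=your_agent_api_key
HF_TOKEN=your_hf_token
CIVITAI_TOKEN=your_civitai_api_key
EOF

# Confirm the keys were written
grep '^DEFAULT_MODELS=' .env
```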

### Accessing the ComfyUI Extension

Once the ComfyUI extension is running, you can access it at [http://localhost:7860](http://localhost:7860) and use it as a standard ComfyUI instance.

AI Server has pre-defined workflows that interact with your ComfyUI instance to generate images, audio, text, and more. These workflows are found in the AI Server AppHost project under `workflows`, and are templated JSON versions of workflows you save in the ComfyUI web interface.

### Advanced Configuration

ComfyUI workflows can be changed or overridden on a per-model basis by editing the `workflows` folder in the AI Server AppHost project. Flux Schnell is an example of overriding the text-to-image workflow for a single model; the code can be found in `AiServer/Configure.AppHost.cs`.
---
title: Self-hosted AI Providers with Ollama
---

# Self-hosted AI Providers with Ollama

Ollama can be used as an AI Provider type to process LLM requests in AI Server.

## Setting up Ollama

When using Ollama as an AI Provider, you will need to ensure the models you want to use are available in your Ollama instance.

Models can be downloaded from the [Ollama library](https://ollama.com/library) with the command `ollama pull <model-name>`.
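
For example, to make a model available locally (the model name here is illustrative; any model from the library works the same way, and Ollama must already be installed and running):

```sh
# Download a model from the Ollama library
ollama pull llama3.1:8b

# Verify the model is now available locally
ollama list
```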

Once a model is downloaded, and your Ollama instance is running and accessible to AI Server, you can configure Ollama as an AI Provider in the AI Server Admin Portal.

## Configuring Ollama in AI Server

In the AI Server Admin Portal, select the **AI Providers** menu item in the left sidebar.

![AI Providers](/images/ai-server/ai-providers.png)

Click on the **New Provider** button at the top of the grid.

![New Provider](/images/ai-server/new-provider.png)

Select Ollama as the Provider Type at the top of the form, and fill in the required fields:

- **Name**: A friendly name for the provider.
- **Endpoint**: The URL of your Ollama instance, e.g. `http://localhost:11434`.
- **API Key**: Optional API key used to authenticate with your Ollama instance.
- **Priority**: The priority of the provider, used to determine the order of provider selection when multiple providers serve the same model.

![Ollama Provider](/images/ai-server/ollama-provider.png)

Once the URL and API Key are set, AI Server will query your Ollama instance for its available models, which are then displayed as options you can enable for the provider you are configuring.

![Ollama Models](/images/ai-server/ollama-models.png)

Select the models you want to enable for this provider, and click **Save** to save the provider configuration.
## Using Ollama models in AI Server | ||
|
||
Once configured, you can make requests to AI Server to process LLM requests using the models available in your Ollama instance. | ||
|
||
Model names in AI Server are common across all providers, enabling you to switch or load balance between providers without changing your client code. See [Usage](https://docs.servicestack.net/ai-server/usage/) for more information on making requests to AI Server. |
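
As a rough sketch of what such a request can look like, here is a chat completion sent with curl. The host, port, endpoint path, model name, and API key are all assumptions for illustration; consult your own AI Server instance's API documentation for the exact route and request shape.

```sh
# Hypothetical example: URL, model name, and API key are placeholders
curl http://localhost:5006/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your_ai_server_api_key" \
  -d '{
    "model": "llama3.1:8b",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```

Because the model name rather than the provider is what the request specifies, the same request continues to work if AI Server routes it to a different provider offering that model.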