WIP AI server docs.
Layoric committed Oct 1, 2024
1 parent 6f99076 commit 3378f98
Showing 5 changed files with 134 additions and 10 deletions.
2 changes: 1 addition & 1 deletion MyApp/_pages/ai-server/index.md
@@ -38,7 +38,7 @@ AI Server simplifies the integration and management of AI capabilities in your a
## Getting Started for Developers

1. **Setup**: Follow the Quick Start guide to deploy AI Server.
- 2. **Configuration**: Use the Admin UI to add your AI providers and generate API keys.
+ 2. **Configuration**: Use the Admin Portal to add your AI providers and generate API keys.
3. **Integration**: Choose your preferred language and use ServiceStack's Add ServiceStack Reference to generate type-safe client libraries.
4. **Development**: Start making API calls to AI Server from your application, leveraging the full suite of AI capabilities.
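
As a sketch of steps 3 and 4, assuming the ServiceStack `x` dotnet tool is installed (`dotnet tool install -g x`) and your AI Server instance is listening on port 5005:

```sh
# Step 3: generate type-safe DTOs for your preferred language,
# e.g. TypeScript or C#, directly from your AI Server instance
x typescript http://localhost:5005
x csharp http://localhost:5005
```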

68 changes: 68 additions & 0 deletions MyApp/_pages/ai-server/install/comfy-extension.md
@@ -0,0 +1,68 @@
---
title: ComfyUI Extension
description: Installing and configuring the ComfyUI extension
---

# ComfyUI Extension

ComfyUI is a powerful image generation and manipulation tool that can be used to create images from text, images from other images, and more. It is a key component of AI Server, providing a wide range of image processing capabilities.
To make the ComfyUI API more accessible, AI Server supports ComfyUI as a provider type, letting you integrate a ComfyUI instance into your AI Server as a remote, self-hosted agent capable of processing image requests and other modalities.

## Installing the ComfyUI Extension

To simplify installation, [we have put together a Docker image and a Docker Compose file](https://github.com/serviceStack/agent-comfy) that comes pre-bundled with the ComfyUI extension and all the necessary dependencies, so you can get started with ComfyUI in AI Server quickly.

### Running the ComfyUI Extension

To run the ComfyUI extension, follow these steps:

1. **Clone the Repository**: Clone the ComfyUI extension repository from GitHub.

```sh
git clone https://github.com/ServiceStack/agent-comfy.git
```

2. **Configure the .env File**: Copy the provided `example.env` file to `.env`.

```sh
cp example.env .env
```

And then edit the `.env` file with your desired settings:

```sh
DEFAULT_MODELS=sdxl-lightning,flux-schnell
API_KEY=your_agent_api_key
HF_TOKEN=your_hf_token
CIVITAI_TOKEN=your_civitai_api_key
```

3. **Run the Docker Compose**: Start the ComfyUI extension with Docker Compose.

```sh
docker compose up
```
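
If you prefer to run the agent in the background, Docker Compose's detached mode works as usual:

```sh
# Start the ComfyUI agent in the background and follow its logs
docker compose up -d
docker compose logs -f
```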

### .env Configuration

The `.env` file is used to configure the ComfyUI extension during the initial setup, and is the easiest way to get started.

The keys available in the `.env` file are:

- **DEFAULT_MODELS**: Comma-separated list of models to load on startup. These will be downloaded automatically along with their related dependencies. The full list of options can be found on your AI Server at `/lib/data/ai-models.json`.
- **API_KEY**: The API key your AI Server will use to authenticate with the ComfyUI agent. If not provided, no authentication will be required to access your ComfyUI instance.
- **HF_TOKEN**: The Hugging Face token used to authenticate with the Hugging Face API when downloading models. If not provided, models requiring Hugging Face authentication, such as those with user agreements, cannot be downloaded.
- **CIVITAI_TOKEN**: The Civitai API key used to authenticate with the Civitai API when downloading models. If not provided, models requiring Civitai authentication, such as those with user agreements, cannot be downloaded.

> Models requiring authentication to download are also flagged in the `/lib/data/ai-models.json` file.
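
For example, to inspect the model catalog, including which models require a `HF_TOKEN` or `CIVITAI_TOKEN` to download (assuming AI Server is running locally on port 5005):

```sh
# Fetch the model catalog served by your AI Server instance
curl -s http://localhost:5005/lib/data/ai-models.json | less
```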

### Accessing the ComfyUI Extension

Once the ComfyUI extension is running, you can access the ComfyUI instance at [http://localhost:7860](http://localhost:7860), where it can be used as a standard ComfyUI instance.
AI Server has pre-defined workflows for interacting with your ComfyUI instance to generate images, audio, text, and more.

These workflows are found in the AI Server AppHost project under `workflows`. They are templated JSON versions of workflows saved from the ComfyUI web interface.

### Advanced Configuration

ComfyUI workflows can be changed or overridden on a per-model basis by editing the `workflows` folder in the AI Server AppHost project. Flux Schnell is an example that overrides the text-to-image workflow for a single model; the related code can be found in `AiServer/Configure.AppHost.cs`.
26 changes: 18 additions & 8 deletions MyApp/_pages/ai-server/install/configuration.md
@@ -5,25 +5,27 @@ title: Configuring AI Server
# Configuring AI Server

AI Server makes orchestration of various AI providers easy by providing a unified gateway to process LLM, AI, and image transformation requests.
- It comes with an Admin Dashboard that allows you to configure your AI providers and generate API keys to control access.
+ It comes with an Admin Portal that allows you to configure your AI providers and generate API keys to control access.

- ## Accessing the Admin Dashboard
+ ## Accessing the Admin Portal

Running AI Server will land you on a page providing access to:

- - **[Admin Dashboard](http://localhost:5005/admin)**: Centralized management of AI providers and API keys.
+ - **[Admin Portal](http://localhost:5005/admin)**: Centralized management of AI providers and API keys.
- **[Admin UI](http://localhost:5005/admin-ui)**: ServiceStack's built-in Admin UI to manage your AI Server.
- **[API Explorer](http://localhost:5005/ui)**: Explore and test the AI Server API endpoints in a friendly UI.
- **[AI Server Documentation](https://docs.servicestack.net/ai-server/)**: Detailed documentation on how to use AI Server.

> The default password to access the Admin Portal is `p@55wOrd`; this can be changed in your `.env` file by setting the `AUTH_SECRET` key.
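
For example, a minimal `.env` override (the value shown is an illustrative placeholder):

```sh
# .env — replaces the default Admin Portal password
AUTH_SECRET=my-strong-secret
```
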
## Configuring AI Providers

AI Providers are the external LLM-based services, such as OpenAI, Google, and Mistral, that AI Server interacts with to process chat requests.

There are two ways to configure AI Providers:

1. **.env File**: Update the `.env` file with your API keys and run the AI Server for the first time.
- 2. **Admin Dashboard**: Use the Admin Dashboard to add, edit, or remove AI Providers and generate AI Server API keys.
+ 2. **Admin Portal**: Use the Admin Portal to add, edit, or remove AI Providers and generate AI Server API keys.

### Using the .env File

@@ -39,11 +41,11 @@ The .env file is located in the root of the AI Server repository and contains th

Providing the API keys in the .env file will automatically configure the AI Providers when you run the AI Server for the first time.
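
As a sketch, the provider entries in your `.env` file look something like the following; the exact key names are defined in the repository's `example.env`, so treat these as illustrative:

```sh
# .env — API keys for hosted AI providers (key names are illustrative)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=...
MISTRAL_API_KEY=...
GOOGLE_API_KEY=...
```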

- ### Using the Admin Dashboard
+ ### Using the Admin Portal

- The Admin Dashboard provides a more interactive way to manage your AI Providers after the AI Server is running.
+ The Admin Portal provides a more interactive way to manage your AI Providers after the AI Server is running.

- To access the Admin Dashboard:
+ To access the Admin Portal:

1. Navigate to [http://localhost:5005/admin](http://localhost:5005/admin).
2. Log in with the default password `p@55wOrd`.
@@ -62,10 +64,18 @@ AI Server supports the following AI Providers:

## Generating AI Server API Keys

- API keys are used to authenticate requests to AI Server and are generated via the Admin Dashboard.
+ API keys are used to authenticate requests to AI Server and are generated via the Admin Portal.

Here you can create new API keys, view existing keys, and revoke keys as needed.

Keys can be created with expiration dates and restrictions to specific API endpoints, along with notes to help identify each key's purpose.
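
Once issued, a key is passed as a Bearer token on each request; a hypothetical example (`/api/MyEndpoint` is a placeholder, see the API Explorer on your instance for real routes):

```sh
# Authenticate an AI Server request with a generated API key
curl -H "Authorization: Bearer $AI_SERVER_API_KEY" \
     http://localhost:5005/api/MyEndpoint
```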


## Stored File Management

AI Server stores the results of AI operations in pre-configured paths:

- **Artifacts**: AI-generated images, audio, and video files; the default path is `App_Data/artifacts`.
- **Files**: Cached variants and processed files; the default path is `App_Data/files`.

These paths can be configured in the `.env` file by setting the `ARTIFACTS_PATH` and `AI_FILES_PATH` keys.
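
For example, to relocate both stores to a mounted data volume (the paths shown are illustrative):

```sh
# .env — override the default storage locations
ARTIFACTS_PATH=/data/ai-server/artifacts
AI_FILES_PATH=/data/ai-server/files
```
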
2 changes: 1 addition & 1 deletion MyApp/_pages/ai-server/install/index.md
@@ -56,7 +56,7 @@ docker compose up

## Accessing AI Server

- Once the AI Server is running, you can access the Admin UI at [http://localhost:5005](http://localhost:5005) to configure your AI providers and generate API keys.
+ Once the AI Server is running, you can access the Admin Portal at [http://localhost:5005/admin](http://localhost:5005/admin) to configure your AI providers and generate API keys.
If you first ran the AI Server with API Keys configured in your `.env` file, your providers will be automatically configured for the related services.

> You can reset the process by deleting your local `App_Data` directory and rerunning `docker compose up`.
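
A minimal reset, run from the repository root, looks like:

```sh
# Wipe local state (configured providers, API keys, cached files)
# and start over
rm -rf App_Data
docker compose up
```
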
46 changes: 46 additions & 0 deletions MyApp/_pages/ai-server/install/ollama.md
@@ -0,0 +1,46 @@
---
title: Self-hosted AI Providers with Ollama
---

# Self-hosted AI Providers with Ollama

Ollama can be used as an AI Provider type to process LLM requests in AI Server.

## Setting up Ollama

When using Ollama as an AI Provider, you will need to ensure the models you want to use are available in your Ollama instance.

This can be done via the command `ollama pull <model-name>` to download the model from the [Ollama library](https://ollama.com/library).
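
For example, to make two popular models from the Ollama library available:

```sh
# Download models into your local Ollama instance
ollama pull llama3.1
ollama pull mistral
```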

Once the model is downloaded and your Ollama instance is running and accessible to AI Server, you can configure Ollama as an AI Provider in the AI Server Admin Portal.

## Configuring Ollama in AI Server

Navigate to the Admin Portal in AI Server and select the **AI Providers** menu item in the left sidebar.

![AI Providers](/images/ai-server/ai-providers.png)

Click on the **New Provider** button at the top of the grid.

![New Provider](/images/ai-server/new-provider.png)

Select Ollama as the Provider Type at the top of the form, and fill in the required fields:

- **Name**: A friendly name for the provider.
- **Endpoint**: The URL of your Ollama instance, e.g. `http://localhost:11434`.
- **API Key**: Optional API key to authenticate with your Ollama instance.
- **Priority**: The priority of the provider, used to determine the order of provider selection when multiple providers offer the same model.

![Ollama Provider](/images/ai-server/ollama-provider.png)

Once the URL and API Key are set, requests will be made to your Ollama instance to list its available models. These are then displayed as options you can enable for the provider you are configuring.

![Ollama Models](/images/ai-server/ollama-models.png)

Select the models you want to enable for this provider, and click **Save** to save the provider configuration.

## Using Ollama models in AI Server

Once configured, you can send LLM requests to AI Server using the models available in your Ollama instance.

Model names in AI Server are common across all providers, enabling you to switch or load balance between providers without changing your client code. See [Usage](https://docs.servicestack.net/ai-server/usage/) for more information on making requests to AI Server.
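
As a sketch of such a request, assuming AI Server's OpenAI-compatible chat endpoint and a model enabled on your Ollama provider (check the API Explorer on your instance for the exact route and request shape):

```sh
# Route a chat request through AI Server to an Ollama-served model
# (model name and endpoint path are illustrative)
curl http://localhost:5005/v1/chat/completions \
  -H "Authorization: Bearer $AI_SERVER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "Hello from AI Server"}]
  }'
```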
