@@ -622,6 +95,7 @@ Configuration is updated at ~/.dstack/config.yml
## What's next?
-1. Follow [quickstart](../quickstart.md)
-2. Browse [examples :material-arrow-top-right-thin:{ .external }](https://github.com/dstackai/dstack/blob/master/examples/README.md)
-3. Join the community via [Discord :material-arrow-top-right-thin:{ .external }](https://discord.gg/u8SmfwPpMd)
\ No newline at end of file
+1. Check the [`server/config.yml` reference](../reference/server/config.yml.md) to learn how to configure backends
+2. Follow [quickstart](../quickstart.md)
+3. Browse [examples :material-arrow-top-right-thin:{ .external }](https://github.com/dstackai/dstack/tree/master/examples)
+4. Join the community via [Discord :material-arrow-top-right-thin:{ .external }](https://discord.gg/u8SmfwPpMd)
\ No newline at end of file
diff --git a/docs/docs/quickstart.md b/docs/docs/quickstart.md
index 7e3d4250d..96908b21c 100644
--- a/docs/docs/quickstart.md
+++ b/docs/docs/quickstart.md
@@ -1,6 +1,6 @@
# Quickstart
-??? info "Prerequisites"
+??? info "Installation"
To use the open-source version, make sure to [install the server](installation/index.md) and configure backends.
If you're using [dstack Sky :material-arrow-top-right-thin:{ .external }](https://sky.dstack.ai){:target="_blank"}, install the CLI and run the `dstack config` command:
@@ -134,5 +134,5 @@ To exclude any files from uploading, use `.gitignore`.
1. Read about [dev environments](concepts/dev-environments.md), [tasks](concepts/tasks.md),
[services](concepts/services.md), and [pools](concepts/pools.md)
-2. Browse [examples :material-arrow-top-right-thin:{ .external }](https://github.com/dstackai/dstack/blob/master/examples/README.md){:target="_blank"}
+2. Browse [examples :material-arrow-top-right-thin:{ .external }](https://github.com/dstackai/dstack/tree/master/examples){:target="_blank"}
3. Join the community via [Discord :material-arrow-top-right-thin:{ .external }](https://discord.gg/u8SmfwPpMd)
\ No newline at end of file
diff --git a/docs/docs/reference/dstack.yml/service.md b/docs/docs/reference/dstack.yml/service.md
index ba59460ef..f702cf675 100644
--- a/docs/docs/reference/dstack.yml/service.md
+++ b/docs/docs/reference/dstack.yml/service.md
@@ -2,10 +2,9 @@
The `service` configuration type allows running [services](../../concepts/services.md).
-!!! info "Filename"
- Configuration files must have a name ending with `.dstack.yml` (e.g., `.dstack.yml` or `serve.dstack.yml` are both acceptable)
- and can be located in the project's root directory or any nested folder.
- Any configuration can be run via [`dstack run`](../cli/index.md#dstack-run).
+> Configuration files must have a name ending with `.dstack.yml` (e.g., `.dstack.yml` or `serve.dstack.yml` are both acceptable)
+> and can be located in the project's root directory or any nested folder.
+> Any configuration can be run via [`dstack run . -f PATH`](../cli/index.md#dstack-run).
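+
+For example, if a configuration is saved as `serve.dstack.yml`, it can be run like this:
+
+```shell
+$ dstack run . -f serve.dstack.yml
+```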
## Examples
@@ -69,7 +68,7 @@ port: 8000
port: 8000
```
-### OpenAI-compatible interface
+### OpenAI-compatible interface { #model-mapping }
By default, if you run a service, its endpoint is accessible at `https://.`.
@@ -104,7 +103,46 @@ model:
In this case, with such a configuration, once the service is up, you'll be able to access the model at
`https://gateway.` via the OpenAI-compatible interface.
-See [services](../../concepts/services.md#configure-model-mapping) for more detail.
+
+The `format` property supports only `tgi` (Text Generation Inference)
+and `openai` (Text Generation Inference or vLLM running in OpenAI-compatible mode).
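+
+For instance, a vLLM-based service running in OpenAI-compatible mode could map its model like this (a minimal sketch; the model name is illustrative):
+
+```yaml
+model:
+  type: chat
+  name: mistralai/Mistral-7B-Instruct-v0.2
+  format: openai
+```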
+
+??? info "Chat template"
+
+ By default, `dstack` loads the [chat template](https://huggingface.co/docs/transformers/main/en/chat_templating)
+ from the model's repository. If it is not present there, manual configuration is required.
+
+ ```yaml
+ type: service
+
+ image: ghcr.io/huggingface/text-generation-inference:latest
+ env:
+ - MODEL_ID=TheBloke/Llama-2-13B-chat-GPTQ
+ commands:
+ - text-generation-launcher --port 8000 --trust-remote-code --quantize gptq
+ port: 8000
+
+ resources:
+ gpu: 80GB
+
+ # Enable the OpenAI-compatible endpoint
+ model:
+ type: chat
+ name: TheBloke/Llama-2-13B-chat-GPTQ
+ format: tgi
+      chat_template: "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = false %}{% endif %}{% for message in loop_messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if loop.index0 == 0 and system_message != false %}{% set content = '<<SYS>>\\n' + system_message + '\\n<</SYS>>\\n\\n' + message['content'] %}{% else %}{% set content = message['content'] %}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + content.strip() + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ ' ' + content.strip() + ' </s>' }}{% endif %}{% endfor %}"
+      eos_token: "</s>"
+ ```
+
+ ##### Limitations
+
+ Please note that model mapping is an experimental feature with the following limitations:
+
+    1. Doesn't work if your `chat_template` uses `bos_token`. As a workaround, replace `bos_token` inside `chat_template` with the token content itself (e.g. `<s>` for Llama 2 models).
+ 2. Doesn't work if `eos_token` is defined in the model repository as a dictionary. As a workaround, set `eos_token` manually, as shown in the example above (see Chat template).
+
+ If you encounter any other issues, please make sure to file a [GitHub issue](https://github.com/dstackai/dstack/issues/new/choose).
+
### Replicas and auto-scaling
diff --git a/docs/docs/reference/dstack.yml/task.md b/docs/docs/reference/dstack.yml/task.md
index cb080d162..f1fadb460 100644
--- a/docs/docs/reference/dstack.yml/task.md
+++ b/docs/docs/reference/dstack.yml/task.md
@@ -167,7 +167,7 @@ The following environment variables are available in any run and are passed by `
| `DSTACK_NODE_RANK` | The rank of the node |
| `DSTACK_MASTER_NODE_IP` | The internal IP address of the master node |
-### Nodes
+### Nodes { #_nodes }
By default, the task runs on a single node. However, you can run it on a cluster of nodes.
@@ -226,8 +226,15 @@ commands:
-Now, you can pass your arguments to the `dstack run` command.
-See [tasks](../../concepts/tasks.md#parametrize-tasks) for more detail.
+Now, you can pass your arguments to the `dstack run` command:
+
+
+
+```shell
+$ dstack run . -f train.dstack.yml --train_batch_size=1 --num_train_epochs=100
+```
+
+
### Web applications
diff --git a/docs/docs/reference/server/config.yml.md b/docs/docs/reference/server/config.yml.md
index 26a3bbdba..bc1615bb5 100644
--- a/docs/docs/reference/server/config.yml.md
+++ b/docs/docs/reference/server/config.yml.md
@@ -3,16 +3,24 @@
The `~/.dstack/server/config.yml` file is used by the `dstack` server
to [configure](../../installation/index.md#configure-backends) cloud accounts.
-!!! info "Projects"
- For flexibility, `dstack` server permits you to configure backends for multiple projects.
- If you intend to use only one project, name it `main`.
+> The `dstack` server allows you to configure backends for multiple projects.
+> If you don't need multiple projects, use only the `main` project.
-### Examples
+Each cloud account must be configured under the `backends` property of the respective project.
+See the examples below.
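+
+For example, a minimal `~/.dstack/server/config.yml` with a single `main` project and two backends could look like this (a sketch; the backend types and credentials are placeholders taken from the examples below):
+
+```yaml
+projects:
+- name: main
+  backends:
+  - type: aws
+    creds:
+      type: default
+  - type: gcp
+    project_id: gcp-project-id
+    creds:
+      type: default
+```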
-#### AWS
+## Examples
+
+### AWS
+
+There are two ways to configure AWS: using an access key or using the default credentials.
=== "Access key"
+    Create an access key by following [this guide :material-arrow-top-right-thin:{ .external }](https://docs.aws.amazon.com/cli/latest/userguide/cli-authentication-user.html#cli-authentication-user-get).
+ Once you've downloaded the `.csv` file with your IAM user's Access key ID and Secret access key, proceed to
+ configure the backend.
+
```yaml
@@ -30,6 +38,8 @@ to [configure](../../installation/index.md#configure-backends) cloud accounts.
=== "Default credentials"
+ If you have default credentials set up (e.g. in `~/.aws/credentials`), configure the backend like this:
+
```yaml
@@ -43,9 +53,82 @@ to [configure](../../installation/index.md#configure-backends) cloud accounts.
-#### Azure
+??? info "VPC"
+ By default, `dstack` uses the default VPC. It's possible to customize it:
-=== "Client"
+ === "vpc_name"
+
+ ```yaml
+ projects:
+ - name: main
+ backends:
+ - type: aws
+ creds:
+ type: default
+
+ vpc_name: my-vpc
+ ```
+
+ === "vpc_ids"
+ ```yaml
+ projects:
+ - name: main
+ backends:
+ - type: aws
+ creds:
+ type: default
+
+ vpc_ids:
+ us-east-1: vpc-0a2b3c4d5e6f7g8h
+ us-east-2: vpc-9i8h7g6f5e4d3c2b
+ us-west-1: vpc-4d3c2b1a0f9e8d7
+ ```
+
+    Note that the VPCs must have a public subnet.
+
+??? info "Required AWS permissions"
+ The following AWS policy permissions are sufficient for `dstack` to work:
+
+ ```
+ {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "ec2:*"
+ ],
+ "Resource": "*"
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "servicequotas:*"
+ ],
+ "Resource": "*"
+ }
+ ]
+ }
+ ```
+
+### Azure
+
+There are two ways to configure Azure: using a client secret or using the default credentials.
+
+=== "Client secret"
+
+ A client secret can be created using the [Azure CLI :material-arrow-top-right-thin:{ .external }](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli):
+
+ ```shell
+ SUBSCRIPTION_ID=...
+    az ad sp create-for-rbac \
+ --name dstack-app \
+ --role $DSTACK_ROLE \
+ --scopes /subscriptions/$SUBSCRIPTION_ID \
+ --query "{ tenant_id: tenant, client_id: appId, client_secret: password }"
+ ```
+
+ Once you have `tenant_id`, `client_id`, and `client_secret`, go ahead and configure the backend.
@@ -66,183 +149,392 @@ to [configure](../../installation/index.md#configure-backends) cloud accounts.
=== "Default credentials"
+ Obtain the `subscription_id` and `tenant_id` via the [Azure CLI :material-arrow-top-right-thin:{ .external }](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli):
+
+ ```shell
+ az account show --query "{subscription_id: id, tenant_id: tenantId}"
+ ```
+
+ Then proceed to configure the backend:
+
+
+
+ ```yaml
+ projects:
+ - name: main
+ backends:
+ - type: azure
+ subscription_id: 06c82ce3-28ff-4285-a146-c5e981a9d808
+ tenant_id: f84a7584-88e4-4fd2-8e97-623f0a715ee1
+ creds:
+ type: default
+ ```
+
+
+
+ !!! info "NOTE:"
+ If you don't know your `subscription_id`, run
+
+ ```shell
+ az account show --query "{subscription_id: id}"
+ ```
+
+ ??? info "Required Azure permissions"
+ The following Azure permissions are sufficient for `dstack` to work:
+ ```
+ {
+ "properties": {
+ "roleName": "dstack-role",
+ "description": "Minimal reqired permissions for using Azure with dstack",
+ "assignableScopes": [
+ "/subscriptions/${YOUR_SUBSCRIPTION_ID}"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Compute/availabilitySets/*",
+ "Microsoft.Compute/locations/*",
+ "Microsoft.Compute/virtualMachines/*",
+ "Microsoft.Compute/virtualMachineScaleSets/*",
+ "Microsoft.Compute/cloudServices/*",
+ "Microsoft.Compute/disks/write",
+ "Microsoft.Compute/disks/read",
+ "Microsoft.Compute/disks/delete",
+ "Microsoft.Network/networkSecurityGroups/*",
+ "Microsoft.Network/locations/*",
+ "Microsoft.Network/virtualNetworks/*",
+ "Microsoft.Network/networkInterfaces/*",
+ "Microsoft.Network/publicIPAddresses/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/write",
+ "Microsoft.Resources/subscriptions/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+ }
+ ```
+
+ ### GCP
+
+ ??? info "Enable APIs"
+ First, ensure the required APIs are enabled in your GCP `project_id`.
+
+ ```shell
+ PROJECT_ID=...
+ gcloud config set project $PROJECT_ID
+ gcloud services enable cloudapis.googleapis.com
+ gcloud services enable compute.googleapis.com
+ ```
+
+ There are two ways to configure GCP: using a service account or using the default credentials.
+
+ === "Service account"
+
+ To create a service account, follow [this guide :material-arrow-top-right-thin:{ .external }](https://cloud.google.com/iam/docs/service-accounts-create).
+ Make sure to grant it the `Service Account User` and `Compute Admin` roles.
+
+        After setting up the service account, [create a key :material-arrow-top-right-thin:{ .external }](https://cloud.google.com/iam/docs/keys-create-delete) for it
+ and download the corresponding JSON file.
+
+ Then go ahead and configure the backend by specifying the downloaded file path.
+
+
+
+ ```yaml
+ projects:
+ - name: main
+ backends:
+ - type: gcp
+ project_id: gcp-project-id
+ creds:
+ type: service_account
+ filename: ~/.dstack/server/gcp-024ed630eab5.json
+ ```
+
+
+
+ === "Default credentials"
+
+
+
+ ```yaml
+ projects:
+ - name: main
+ backends:
+ - type: gcp
+ project_id: gcp-project-id
+ creds:
+ type: default
+ ```
+
+
+
+ !!! info "NOTE:"
+ If you don't know your GCP project ID, run
+
+ ```shell
+ gcloud projects list --format="json(projectId)"
+ ```
+
+ ??? info "Required GCP permissions"
+ The following GCP permissions are sufficient for `dstack` to work:
+
+ ```
+ compute.disks.create
+ compute.firewalls.create
+ compute.images.useReadOnly
+ compute.instances.create
+ compute.instances.delete
+ compute.instances.get
+ compute.instances.setLabels
+ compute.instances.setMetadata
+ compute.instances.setTags
+ compute.networks.updatePolicy
+ compute.regions.list
+ compute.subnetworks.use
+ compute.subnetworks.useExternalIp
+ compute.zoneOperations.get
+ ```
+
+ ### Lambda
+
+ Log into your [Lambda Cloud :material-arrow-top-right-thin:{ .external }](https://lambdalabs.com/service/gpu-cloud) account, click API keys in the sidebar, and then click the `Generate API key`
+ button to create a new API key.
+
+ Then, go ahead and configure the backend:
+
-
+
```yaml
projects:
- name: main
- backends:
- - type: azure
- subscription_id: 06c82ce3-28ff-4285-a146-c5e981a9d808
- tenant_id: f84a7584-88e4-4fd2-8e97-623f0a715ee1
+ backends:
+ - type: lambda
creds:
- type: default
+ type: api_key
+ api_key: eersct_yrpiey-naaeedst-tk-_cb6ba38e1128464aea9bcc619e4ba2a5.iijPMi07obgt6TZ87v5qAEj61RVxhd0p
```
-
+
-#### GCP
+ ### TensorDock
-=== "Service account"
+ Log into your [TensorDock :material-arrow-top-right-thin:{ .external }](https://marketplace.tensordock.com/) account, click API in the sidebar, and use the `Create an Authorization`
+ section to create a new authorization key.
+
+ Then, go ahead and configure the backend:
-
+
```yaml
projects:
- name: main
- backends:
- - type: gcp
- project_id: gcp-project-id
+ backends:
+ - type: tensordock
creds:
- type: service_account
- filename: ~/.dstack/server/gcp-024ed630eab5.json
+ type: api_key
+ api_key: 248e621d-9317-7494-dc1557fa5825b-98b
+ api_token: FyBI3YbnFEYXdth2xqYRnQI7hiusssBC
```
-
+
-=== "Default credentials"
+ !!! info "NOTE:"
+        The `tensordock` backend supports on-demand instances only. Spot instance support is coming soon.
+
+ ### Vast.ai
+
+ Log into your [Vast.ai :material-arrow-top-right-thin:{ .external }](https://cloud.vast.ai/) account, click Account in the sidebar, and copy your
+ API Key.
+
+ Then, go ahead and configure the backend:
-
+
```yaml
projects:
- name: main
- backends:
- - type: gcp
- project_id: gcp-project-id
+ backends:
+ - type: vastai
creds:
- type: default
+ type: api_key
+ api_key: d75789f22f1908e0527c78a283b523dd73051c8c7d05456516fc91e9d4efd8c5
```
-
-
-
-#### Lambda
-
-
-```yaml
-projects:
-- name: main
- backends:
- - type: lambda
- creds:
- type: api_key
- api_key: eersct_yrpiey-naaeedst-tk-_cb6ba38e1128464aea9bcc619e4ba2a5.iijPMi07obgt6TZ87v5qAEj61RVxhd0p
-```
+
-
+ !!! info "NOTE:"
+        The `vastai` backend supports on-demand instances only. Spot instance support is coming soon.
-#### TensorDock
+ ### CUDO
-
+ Log into your [CUDO Compute :material-arrow-top-right-thin:{ .external }](https://compute.cudo.org/) account, click API keys in the sidebar, and click the `Create an API key` button.
-```yaml
-projects:
-- name: main
- backends:
- - type: tensordock
- creds:
- type: api_key
- api_key: 248e621d-9317-7494-dc1557fa5825b-98b
- api_token: FyBI3YbnFEYXdth2xqYRnQI7hiusssBC
-```
+    Ensure you've created a project with CUDO Compute, then proceed to configure the backend.
-
+
-#### Vast.ai
+ ```yaml
+ projects:
+ - name: main
+ backends:
+ - type: cudo
+ project_id: my-cudo-project
+ creds:
+ type: api_key
+ api_key: 7487240a466624b48de22865589
+ ```
-
+
-```yaml
-projects:
-- name: main
- backends:
- - type: vastai
- creds:
- type: api_key
- api_key: d75789f22f1908e0527c78a283b523dd73051c8c7d05456516fc91e9d4efd8c5
-```
+ ### RunPod
-
+ Log into your [RunPod :material-arrow-top-right-thin:{ .external }](https://www.runpod.io/console/) console, click Settings in the sidebar, expand the `API Keys` section, and click
+ the button to create a key.
-#### CUDO
+    Then proceed to configure the backend.
-
+
-```yaml
-projects:
-- name: main
- backends:
- - type: cudo
- project_id: my-cudo-project
- creds:
- type: api_key
- api_key: 7487240a466624b48de22865589
-```
+ ```yaml
+ projects:
+ - name: main
+ backends:
+ - type: runpod
+ creds:
+ type: api_key
+ api_key: US9XTPDIV8AR42MMINY8TCKRB8S4E7LNRQ6CAUQ9
+ ```
-
+
-#### DataCrunch
+ !!! warning "NOTE:"
+ If you're using a custom Docker image, its entrypoint cannot be anything other than `/bin/bash` or `/bin/sh`.
+ See the [issue :material-arrow-top-right-thin:{ .external }](https://github.com/dstackai/dstack/issues/1137) for more details.
-
+ !!! info "NOTE:"
+        The `runpod` backend supports on-demand instances only. Spot instance support is coming soon.
-```yaml
-projects:
-- name: main
- backends:
- - type: datacrunch
- creds:
- type: api_key
- client_id: xfaHBqYEsArqhKWX-e52x3HH7w8T
- client_secret: B5ZU5Qx9Nt8oGMlmMhNI3iglK8bjMhagTbylZy4WzncZe39995f7Vxh8
-```
+ ### DataCrunch
-
+    Log into your [DataCrunch :material-arrow-top-right-thin:{ .external }](https://cloud.datacrunch.io/signin) account, click Account Settings in the sidebar, find the `REST API Credentials` area, and then click the `Generate Credentials` button.
-#### Kubernetes
+ Then, go ahead and configure the backend:
-=== "Self-managed"
```yaml
projects:
- name: main
- backends:
- - type: kubernetes
- kubeconfig:
- filename: ~/.kube/config
- networking:
- ssh_host: localhost # The external IP address of any node
- ssh_port: 32000 # Any port accessible outside of the cluster
+ backends:
+ - type: datacrunch
+ creds:
+ type: api_key
+ client_id: xfaHBqYEsArqhKWX-e52x3HH7w8T
+ client_secret: B5ZU5Qx9Nt8oGMlmMhNI3iglK8bjMhagTbylZy4WzncZe39995f7Vxh8
```
-=== "Managed"
-
+ ### Kubernetes
- ```yaml
- projects:
- - name: main
- backends:
- - type: kubernetes
- kubeconfig:
- filename: ~/.kube/config
- networking:
- ssh_port: 32000 # Any port accessible outside of the cluster
- ```
+    `dstack` supports both self-managed and managed Kubernetes clusters.
-
+ ??? info "Prerequisite"
+        To use GPUs with Kubernetes, the cluster must have the
+        [NVIDIA GPU Operator :material-arrow-top-right-thin:{ .external }](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/index.html) installed.
+
+ [//]: # (TODO: Provide short yet clear instructions. Elaborate on whether it works with Kind.)
+
+ To configure a Kubernetes backend, specify the path to the kubeconfig file,
+ and the port that `dstack` can use for proxying SSH traffic.
+ In case of a self-managed cluster, also specify the IP address of any node in the cluster.
+
+ [//]: # (TODO: Mention that the Kind context has to be selected via `current-context` )
+
+ === "Self-managed"
+
+ Here's how to configure the backend to use a self-managed cluster.
+
+
+
+ ```yaml
+ projects:
+ - name: main
+ backends:
+ - type: kubernetes
+ kubeconfig:
+ filename: ~/.kube/config
+ networking:
+ ssh_host: localhost # The external IP address of any node
+ ssh_port: 32000 # Any port accessible outside of the cluster
+ ```
+
+
+
+        The port specified as `ssh_port` must be accessible outside of the cluster.
+
+ For example, if you are using Kind, make sure to add it via `extraPortMappings`:
+
+
+
+ ```yaml
+ kind: Cluster
+ apiVersion: kind.x-k8s.io/v1alpha4
+ nodes:
+ - role: control-plane
+ extraPortMappings:
+ - containerPort: 32000 # Must be same as `ssh_port`
+ hostPort: 32000 # Must be same as `ssh_port`
+ ```
+
+
+
+ [//]: # (TODO: Elaborate on the Kind's IP address on Linux)
+
+ === "Managed"
+ Here's how to configure the backend to use a managed cluster (AWS, GCP, Azure).
+
+
+
+ ```yaml
+ projects:
+ - name: main
+ backends:
+ - type: kubernetes
+ kubeconfig:
+ filename: ~/.kube/config
+ networking:
+ ssh_port: 32000 # Any port accessible outside of the cluster
+ ```
+
+
+
+        The port specified as `ssh_port` must be accessible outside of the cluster.
+
+ For example, if you are using EKS, make sure to add it via an ingress rule
+ of the corresponding security group:
+
+ ```shell
+    aws ec2 authorize-security-group-ingress --group-id <cluster-security-group-id> \
+        --protocol tcp --port 32000 --cidr 0.0.0.0/0
+ ```
+
+ [//]: # (TODO: Elaborate on gateways, and what backends allow configuring them)
-For more details on configuring clouds, please refer to [Installation](../../installation/index.md#configure-backends).
+ [//]: # (TODO: Should we automatically detect ~/.kube/config)
-### Root reference
+## Root reference
#SCHEMA# dstack._internal.server.services.config.ServerConfig
overrides:
show_root_heading: false
-### `projects[n]` { #projects data-toc-label="projects" }
+## `projects[n]` { #projects data-toc-label="projects" }
#SCHEMA# dstack._internal.server.services.config.ProjectConfig
overrides:
@@ -250,7 +542,7 @@ For more details on configuring clouds, please refer to [Installation](../../ins
backends:
type: 'Union[AWSConfigInfoWithCreds, AzureConfigInfoWithCreds, GCPConfigInfoWithCreds, LambdaConfigInfoWithCreds, TensorDockConfigInfoWithCreds, VastAIConfigInfoWithCreds, KubernetesConfig]'
-### `projects[n].backends[type=aws]` { #aws data-toc-label="backends[type=aws]" }
+## `projects[n].backends[type=aws]` { #aws data-toc-label="backends[type=aws]" }
#SCHEMA# dstack._internal.server.services.config.AWSConfig
overrides:
@@ -259,7 +551,7 @@ For more details on configuring clouds, please refer to [Installation](../../ins
required: true
item_id_prefix: aws-
-### `projects[n].backends[type=aws].creds` { #aws-creds data-toc-label="backends[type=aws].creds" }
+## `projects[n].backends[type=aws].creds` { #aws-creds data-toc-label="backends[type=aws].creds" }
=== "Access key"
#SCHEMA# dstack._internal.core.models.backends.aws.AWSAccessKeyCreds
@@ -275,7 +567,7 @@ For more details on configuring clouds, please refer to [Installation](../../ins
type:
required: true
-### `projects[n].backends[type=azure]` { #azure data-toc-label="backends[type=azure]" }
+## `projects[n].backends[type=azure]` { #azure data-toc-label="backends[type=azure]" }
#SCHEMA# dstack._internal.server.services.config.AzureConfig
overrides:
@@ -284,7 +576,7 @@ For more details on configuring clouds, please refer to [Installation](../../ins
required: true
item_id_prefix: azure-
-### `projects[n].backends[type=azure].creds` { #azure-creds data-toc-label="backends[type=azure].creds" }
+## `projects[n].backends[type=azure].creds` { #azure-creds data-toc-label="backends[type=azure].creds" }
=== "Client"
#SCHEMA# dstack._internal.core.models.backends.azure.AzureClientCreds
@@ -300,7 +592,7 @@ For more details on configuring clouds, please refer to [Installation](../../ins
type:
required: true
-### `projects[n].backends[type=datacrunch]` { #datacrunch data-toc-label="backends[type=datacrunch]" }
+## `projects[n].backends[type=datacrunch]` { #datacrunch data-toc-label="backends[type=datacrunch]" }
#SCHEMA# dstack._internal.server.services.config.DataCrunchConfig
overrides:
@@ -309,7 +601,7 @@ For more details on configuring clouds, please refer to [Installation](../../ins
required: true
item_id_prefix: datacrunch-
-### `projects[n].backends[type=datacrunch].creds` { #datacrunch-creds data-toc-label="backends[type=datacrunch].creds" }
+## `projects[n].backends[type=datacrunch].creds` { #datacrunch-creds data-toc-label="backends[type=datacrunch].creds" }
#SCHEMA# dstack._internal.core.models.backends.datacrunch.DataCrunchAPIKeyCreds
overrides:
@@ -317,7 +609,7 @@ For more details on configuring clouds, please refer to [Installation](../../ins
type:
required: true
-### `projects[n].backends[type=gcp]` { #gcp data-toc-label="backends[type=gcp]" }
+## `projects[n].backends[type=gcp]` { #gcp data-toc-label="backends[type=gcp]" }
#SCHEMA# dstack._internal.server.services.config.GCPConfig
overrides:
@@ -326,7 +618,7 @@ For more details on configuring clouds, please refer to [Installation](../../ins
required: true
item_id_prefix: gcp-
-### `projects[n].backends[type=gcp].creds` { #gcp-creds data-toc-label="backends[type=gcp].creds" }
+## `projects[n].backends[type=gcp].creds` { #gcp-creds data-toc-label="backends[type=gcp].creds" }
=== "Service account"
#SCHEMA# dstack._internal.server.services.config.GCPServiceAccountCreds
@@ -342,7 +634,7 @@ For more details on configuring clouds, please refer to [Installation](../../ins
type:
required: true
-### `projects[n].backends[type=lambda]` { #lambda data-toc-label="backends[type=lambda]" }
+## `projects[n].backends[type=lambda]` { #lambda data-toc-label="backends[type=lambda]" }
#SCHEMA# dstack._internal.server.services.config.LambdaConfig
overrides:
@@ -351,7 +643,7 @@ For more details on configuring clouds, please refer to [Installation](../../ins
required: true
item_id_prefix: lambda-
-### `projects[n].backends[type=lambda].creds` { #lambda-creds data-toc-label="backends[type=lambda].creds" }
+## `projects[n].backends[type=lambda].creds` { #lambda-creds data-toc-label="backends[type=lambda].creds" }
#SCHEMA# dstack._internal.core.models.backends.lambdalabs.LambdaAPIKeyCreds
overrides:
@@ -359,7 +651,7 @@ For more details on configuring clouds, please refer to [Installation](../../ins
type:
required: true
-### `projects[n].backends[type=tensordock]` { #tensordock data-toc-label="backends[type=tensordock]" }
+## `projects[n].backends[type=tensordock]` { #tensordock data-toc-label="backends[type=tensordock]" }
#SCHEMA# dstack._internal.server.services.config.TensorDockConfig
overrides:
@@ -368,7 +660,7 @@ For more details on configuring clouds, please refer to [Installation](../../ins
required: true
item_id_prefix: tensordock-
-### `projects[n].backends[type=tensordock].creds` { #tensordock-creds data-toc-label="backends[type=tensordock].creds" }
+## `projects[n].backends[type=tensordock].creds` { #tensordock-creds data-toc-label="backends[type=tensordock].creds" }
#SCHEMA# dstack._internal.core.models.backends.tensordock.TensorDockAPIKeyCreds
overrides:
@@ -376,7 +668,7 @@ For more details on configuring clouds, please refer to [Installation](../../ins
type:
required: true
-### `projects[n].backends[type=vastai]` { #vastai data-toc-label="backends[type=vastai]" }
+## `projects[n].backends[type=vastai]` { #vastai data-toc-label="backends[type=vastai]" }
#SCHEMA# dstack._internal.server.services.config.VastAIConfig
overrides:
@@ -385,7 +677,7 @@ For more details on configuring clouds, please refer to [Installation](../../ins
required: true
item_id_prefix: vastai-
-### `projects[n].backends[type=vastai].creds` { #vastai-creds data-toc-label="backends[type=vastai].creds" }
+## `projects[n].backends[type=vastai].creds` { #vastai-creds data-toc-label="backends[type=vastai].creds" }
#SCHEMA# dstack._internal.core.models.backends.vastai.VastAIAPIKeyCreds
overrides:
@@ -393,7 +685,7 @@ For more details on configuring clouds, please refer to [Installation](../../ins
type:
required: true
-### `projects[n].backends[type=kubernetes]` { #kubernetes data-toc-label="backends[type=kubernetes]" }
+## `projects[n].backends[type=kubernetes]` { #kubernetes data-toc-label="backends[type=kubernetes]" }
#SCHEMA# dstack._internal.server.services.config.KubernetesConfig
overrides:
@@ -401,13 +693,13 @@ For more details on configuring clouds, please refer to [Installation](../../ins
type:
required: true
-### `projects[n].backends[type=kubernetes].kubeconfig` { #kubeconfig data-toc-label="kubeconfig" }
+## `projects[n].backends[type=kubernetes].kubeconfig` { #kubeconfig data-toc-label="kubeconfig" }
##SCHEMA# dstack._internal.server.services.config.KubeconfigConfig
overrides:
show_root_heading: false
-### `projects[n].backends[type=kubernetes].networking` { #networking data-toc-label="networking" }
+## `projects[n].backends[type=kubernetes].networking` { #networking data-toc-label="networking" }
##SCHEMA# dstack._internal.core.models.backends.kubernetes.KubernetesNetworkingConfig
overrides:
diff --git a/docs/overrides/home.html b/docs/overrides/home.html
index 561e76d3e..52d59b097 100644
--- a/docs/overrides/home.html
+++ b/docs/overrides/home.html
@@ -279,6 +279,88 @@ Pools
+