Releases: dstackai/dstack
0.18.42
Volume attachments
It's now possible to see volume attachments when listing volumes. The dstack volume -v command shows which fleets the volumes are attached to in the ATTACHED column:
$ dstack volume -v
 NAME             BACKEND  REGION                  STATUS  ATTACHED  CREATED      ERROR
 my-gcp-volume-1  gcp      europe-west4            active  my-dev    1 weeks ago
                           (europe-west4-c)
 my-aws-volume-1  aws      eu-west-1 (eu-west-1a)  active  -         3 days ago
This helps you decide whether to reuse an existing volume for a run or to create a new one if all existing volumes are occupied.
You can also check which volumes are currently attached and which are not via the API:
import os

import requests

url = os.environ["DSTACK_URL"]
token = os.environ["DSTACK_TOKEN"]
project = os.environ["DSTACK_PROJECT"]

print("Getting volumes...")
resp = requests.post(
    url=f"{url}/api/project/{project}/volumes/list",
    headers={"Authorization": f"Bearer {token}"},
)
volumes = resp.json()

print("Checking volumes attachments...")
for volume in volumes:
    is_attached = len(volume["attachments"]) > 0
    print(f"Volume {volume['name']} attached: {is_attached}")
$ python check_attachments.py
Getting volumes...
Checking volumes attachments...
Volume my-gcp-volume-1 attached: True
Volume my-aws-volume-1 attached: False
Bugfixes
This release contains several important bugfixes, including a fix for fleets with placement: cluster (#2302).
What's Changed
- Add Deepseek and Intel Examples by @Bihan in #2291
- Add volume attachments info to the API and CLI by @r4victor in #2298
- Fix and test offers and pool instances filtering by @r4victor in #2303
Full Changelog: 0.18.41...0.18.42
0.18.41
GPU blocks
Previously, dstack could process only one workload per instance at a time, even if the instance had enough resources to handle several. With the new blocks fleet property, you can split an instance into blocks (virtual subinstances), allowing multiple workloads to run on it simultaneously, each utilizing a fraction of the GPU, CPU, and memory resources.
Cloud fleet
type: fleet
name: my-fleet
nodes: 1
resources:
  gpu: 8:24GB
blocks: 4 # split into 4 blocks, 2 GPUs per block
SSH fleet
type: fleet
name: my-fleet
ssh_config:
  user: ubuntu
  identity_file: ~/.ssh/id_rsa
  hosts:
    - hostname: 3.255.177.51
      blocks: auto # as many as possible, e.g., 8 GPUs -> 8 blocks
You can see how many instance blocks are currently busy in the dstack fleet output:
$ dstack fleet
 FLEET         INSTANCE  BACKEND       RESOURCES                                          PRICE  STATUS    CREATED
 fleet-gaudi2  0         ssh (remote)  152xCPU, 1007GB, 8xGaudi2 (96GB), 387.0GB (disk)   $0.0   3/8 busy  56 sec ago
The remaining blocks can be used for new runs.
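For example, a run that requests only part of the instance can be placed on a single block, leaving the other blocks free for more runs. Below is an illustrative sketch (the task name and command are placeholders), assuming the cloud fleet above with 2 GPUs per block:
type: task
name: my-block-task
commands:
  - ...
resources:
  # Fits into one 2-GPU block of the fleet above
  gpu: 2:24GB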
SSH fleets with head node
With a new proxy_jump fleet property, dstack now supports network configurations where worker nodes are located behind a head node and are not reachable directly:
type: fleet
name: my-fleet
ssh_config:
  user: ubuntu
  identity_file: ~/.ssh/worker_node_key
  hosts:
    # worker nodes
    - 3.255.177.51
    - 3.255.177.52
  # head node proxy; can also be configured per worker node
  proxy_jump:
    hostname: 3.255.177.50
    user: ubuntu
    identity_file: ~/.ssh/head_node_key
Check the documentation for details.
Inactivity duration
You can now configure dev environments to automatically stop after a period of inactivity by specifying inactivity_duration:
type: dev-environment
ide: vscode
# Stop if inactive for 2 hours
inactivity_duration: 2h
A dev environment is considered inactive if there are no SSH connections to it, including VS Code connections, ssh <run name> shells, and attached dstack apply or dstack attach commands. For more details on using inactivity_duration, see the docs.
Multiple EFA interfaces
dstack now attaches the maximum possible number of EFA interfaces when provisioning AWS instances with EFA support. For example, when provisioning a p5.48xlarge instance, dstack configures an optimal setup with 32 interfaces, providing a total network bandwidth capacity of 3,200 Gbps, of which up to 800 Gbps can be utilized for IP network traffic.
Note: Multiple EFA interfaces are enabled only if the aws backend config has public_ips: false set. If instances have public IPs, only one EFA interface is enabled per instance due to AWS limitations.
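For reference, here is a minimal sketch of how public_ips: false might look in the aws backend entry of the server config (typically ~/.dstack/server/config.yml); the project name and credentials type are assumptions:
projects:
  - name: main
    backends:
      - type: aws
        creds:
          type: default
        # Required for multiple EFA interfaces
        public_ips: false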
Volumes for distributed tasks
You can now use single-attach volumes such as AWS EBS with distributed tasks by attaching different volumes to different nodes. This is done using dstack variable interpolation:
type: task
nodes: 8
commands:
  - ...
volumes:
  - name: data-volume-${{ dstack.node_rank }}
    path: /volume_data
Tip: To create volumes for all nodes using one volume configuration, specify the volume name with -n:
$ for i in {0..7}; do dstack apply -f vol.dstack.yml -n data-volume-$i -y; done
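A possible vol.dstack.yml for the loop above is a volume configuration that omits the name property so that -n supplies it; the backend, region, and size here are just placeholders:
type: volume
backend: aws
region: eu-west-1
size: 100GB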
Availability zones
It's now possible to specify availability_zone in volume configurations:
type: volume
name: my-volume
backend: aws
region: eu-west-1
availability_zone: eu-west-1c
size: 100GB
and availability_zones in fleet and run configurations:
type: fleet
name: my-fleet
nodes: 2
availability_zones: [eu-west-1c]
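Run configurations accept the same property. A minimal illustrative sketch (the resources are placeholders):
type: dev-environment
ide: vscode
availability_zones: [eu-west-1c]
resources:
  gpu: 24GB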
This has multiple use cases:
- Specify the same availability zone when provisioning volumes and fleets to ensure they can be used together.
- Specify a volume availability zone that offers the instance types you work with.
- Create volumes for all availability zones to be able to use any zone and improve GPU availability.
The dstack fleet -v and dstack volume -v commands now display availability zones along with regions.
Deployment considerations
- If you deploy the dstack server using rolling deployments (old and new replicas co-exist), it's advised to stop runs and fleets before deploying 0.18.41. Otherwise, you may see error logs from the old replica. It should not have major implications.
What's Changed
- Implement GPU blocks property by @un-def in #2253
- Show deleted runs in the UI by @olgenn in #2272
- [Bug]: The UI issues many API requests when stopping multiple runs by @olgenn in #2273
- Ensure frontend displays errors when getting 400 from the server by @olgenn in #2275
- Support --name for all configurations by @r4victor in #2269
- Support per-job volumes by @r4victor in #2276
- Full EFA attachment for non-public IPs by @solovyevt in #2271
- Return deleted runs in /api/runs/list by @r4victor in #2158
- Fix process_submitted_jobs instance lock by @un-def in #2279
- Change dstack fleet STATUS for block instances by @un-def in #2280
- [Docs] Restructure concept pages to ensure dstack apply is not lost at the end of the page by @peterschmidt85 in #2283
- Allow specifying Azure resource_group by @r4victor in #2288
- Allow configuring availability zones by @r4victor in #2266
- Track SSH connections in dstack-runner by @jvstme in #2287
- Add the inactivity_timeout configuration option by @jvstme in #2289
- Show dev environment inactivity in dstack ps -v by @jvstme in #2290
- Support non-root Docker images in RunPod by @jvstme in #2286
- Fix terminating runs when job is terminated by @jvstme in #2295
- [Docs]: Dev environment inactivity duration by @jvstme in #2296
- [Docs]: Add availability_zones to offer filters by @jvstme in #2297
- Add head node support for SSH fleets by @un-def in #2292
- Support services with head node setup by @un-def in #2299
Full Changelog: 0.18.40...0.18.41
0.18.40
Volumes
Optional instance volumes
Instance volumes can now be made optional. When a volume is marked as optional, it will be mounted only if the backend supports instance volumes; otherwise, it will not be mounted.
type: dev-environment
ide: vscode
volumes:
  - instance_path: /dstack-cache
    path: /root/.cache/
    optional: true
Optional instance volumes are useful for caching, allowing runs to work with backends that don't support them, such as runpod, vastai, and kubernetes.
Services
Path prefix
Previously, if you were running services without a gateway, it was not possible to deploy certain web apps, such as Dash. This was due to the path prefix /proxy/services/<project name>/<run name>/ in the endpoint URL.
With this new update, it's now possible to configure a service so that such web apps work without a gateway. To do this, set the strip_prefix property to false and pass the prefix to the web app. Here's an example with a Dash app:
type: service
name: my-dash-app
gateway: false
# Disable authorization
auth: false
# Do not strip the path prefix
strip_prefix: false
env:
  # Configure Dash to work with a path prefix
  - DASH_ROUTES_PATHNAME_PREFIX=/proxy/services/main/my-dash-app/
commands:
  - pip install dash
  - python app.py
port: 8050
Git
Branches
When you run dstack apply, before dstack starts a container, it fetches the code from the repository where dstack apply was invoked. If the repository is a remote Git repo, dstack clones it using the user's Git credentials.
Previously, dstack always cloned only a single branch in this scenario (to ensure faster startup).
With this update, for development environments, dstack now clones all branches by default. You can override this behavior using the new single_branch property.
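To keep the previous behavior, you can turn single-branch cloning back on. A minimal sketch, assuming single_branch is set at the top level of the run configuration:
type: dev-environment
ide: vscode
# Clone only the current branch, as before this update
single_branch: true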
SSH
If you override the user property in your run configuration, dstack runs the container as that user. Previously, when accessing the dev environment via VS Code or connecting to the run with the ssh <run name> command, you were still logged in as the root user and had to switch manually. Now, you are automatically logged in as the configured user.
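For illustration, a sketch of a dev environment that overrides the user; the user name is an assumption and must exist in the image:
type: dev-environment
ide: vscode
# VS Code and ssh sessions log in as this user
user: ubuntu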
What's changed
- Update contributing guide on runs and jobs by @r4victor in #2247
- Remove no offers warnings for SSH fleets by @jvstme in #2249
- Allow configuring single_branch by @r4victor in #2256
- Fix SSH fleet and gateway configuration change detection by @r4victor in #2258
- Update contributing guide on shim by @un-def in #2255
- Configuring if service path prefix is stripped by @jvstme in #2254
- [dstack-runner] Back up and restore ~/.ssh files by @un-def in #2261
- Use user property as the default user for SSH by @un-def in #2263
- Support optional instance volumes by @jvstme in #2260
- Extend request/response logging by @jvstme in #2265
Full changelog: 0.18.39...0.18.40
0.18.39
This release fixes a backward compatibility bug introduced in 0.18.38. The bug caused the CLI version 0.18.38 to fail with older servers when applying fleet configurations.
What's Changed
Full Changelog: 0.18.38...0.18.39
0.18.38
Intel Gaudi
dstack now supports Intel Gaudi accelerators with SSH fleets.
To use Intel Gaudi with dstack, create an SSH fleet, and once it's up, feel free to specify gaudi, gaudi2, or gaudi3 as a GPU name (or intel as a vendor name) in your run configuration:
type: dev-environment
python: "3.12"
ide: vscode
resources:
  gpu: gaudi2:8 # 8 × Gaudi 2
Note
To use SSH fleets with Intel Gaudi, ensure that the Gaudi software and drivers are installed on each host. This should include the drivers, hl-smi, and Habana Container Runtime.
Volumes
Stop duration and force detachment
In some cases, a volume may get stuck in the detaching state. When this happens, the run is marked as stopped, but the instance remains in an inconsistent state, preventing its deletion or reuse. Additionally, the volume cannot be used with other runs.
To address this, dstack now ensures that the run remains in the terminating state until the volume is fully detached. By default, dstack waits for 5m before forcing a detach. You can override this using stop_duration by setting a different duration or disabling it (off) for an unlimited duration.
Note
Force detaching a volume may corrupt the file system and should only be used as a last resort. If volumes frequently require force detachment, contact your cloud provider’s support to identify the root cause.
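For illustration, here is a sketch of overriding the default, assuming stop_duration is set at the top level of a run configuration that attaches a volume (the volume name and command are placeholders):
type: task
commands:
  - ...
volumes:
  - name: my-volume
    path: /volume_data
# Wait up to 10 minutes for the volume to detach before forcing it
stop_duration: 10m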
Bug-fixes
This update also resolves an issue where dstack mistakenly marked a volume as attached even though it was actually detached.
UI
Fleets
The UI has been updated to simplify fleet and instance management. The Fleets page now allows users to terminate fleets and displays both active and terminated fleets. The new Instances page shows active and terminated instances across all fleets.
What's changed
- Add Intel Gaudi support for SSH fleets by @un-def in #2216
- Support models with non-standard finish_reason by @jvstme in #2229
- [Internal]: Ensure all files end with a newline by @jvstme in #2227
- [chore]: Refactor gateway modules by @jvstme in #2226
- [chore]: Move connection pool to proxy deps by @jvstme in #2235
- [chore]: Update migration ffa99edd1988 by @jvstme in #2217
- [chore]: Update/remove dstack-proxy TODOs by @jvstme in #2239
- [UI] New UI for fleets and instances by @olgenn in #2236
- Improve UX when no offers found by @jvstme in #2240
- Implement volumes force detach by @r4victor in #2242
Full changelog: 0.18.37...0.18.38
0.18.37
0.18.36
Vultr
Cluster placement
The vultr backend can now provision fleets with cluster placement.
type: fleet
nodes: 4
placement: cluster
resources:
  gpu: MI300X:8
backends: [vultr]
Nodes in such a cluster will be interconnected and can be used to run distributed tasks.
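For example, a distributed task can target such a fleet; the sketch below is illustrative, and the commands are placeholders:
type: task
nodes: 4
commands:
  - ...
resources:
  gpu: MI300X:8
backends: [vultr]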
Performance
The update optimizes the performance of dstack server, allowing a single server replica to handle up to 150 active runs, jobs, and instances. Capacity can be further increased by using PostgreSQL and running multiple server replicas.
Lastly, getting instance offers from backends when you run dstack apply has also been optimized and now takes less time.
What's changed
- Increase max active resources supported by server by @r4victor in #2189
- Implement bridge network mode for jobs by @un-def in #2191
- [Internal] Fix python-json-logger deprecation warning by @jvstme in #2201
- Fix local backend by @r4victor in #2203
- Implement offers cache by @r4victor in #2197
- Add /api/instances/list by @jvstme in #2199
- Allow getting by ID in /api/project/_/fleets/get by @jvstme in #2200
- Add termination reason and message to the runner API by @r4victor in #2204
- Add vpc cluster support in Vultr by @Bihan in #2196
- Fix instance_types not respected for pool instances by @r4victor in #2205
- Delete manually created empty fleets by @r4victor in #2206
- Return repo errors from runner by @r4victor in #2207
- Fix caching offers with GPU requirements by @jvstme in #2210
- Fix filtering idle instances by instance type by @jvstme in #2214
- Add more project URLs on PyPI by @jvstme in #2215
Full changelog: 0.18.35...0.18.36
0.18.35
Vultr
This update introduces initial integration with Vultr. This cloud provider offers a diverse range of NVIDIA and AMD accelerators, from cost-effective fractional GPUs to multi-GPU bare-metal hosts.
$ dstack apply -f examples/.dstack.yml
 #   BACKEND  REGION  RESOURCES                                      PRICE
 1   vultr    ewr     2xCPU, 8GB, 1xA16 (2GB), 50.0GB (disk)         $0.059
 2   vultr    ewr     1xCPU, 5GB, 1xA40 (2GB), 90.0GB (disk)         $0.075
 3   vultr    ewr     1xCPU, 6GB, 1xA100 (4GB), 70.0GB (disk)        $0.123
 ...
 18  vultr    ewr     32xCPU, 375GB, 2xL40S (48GB), 2200.0GB (disk)  $3.342
 19  vultr    ewr     24xCPU, 240GB, 2xA100 (80GB), 1400.0GB (disk)  $4.795
 20  vultr    ewr     96xCPU, 960GB, 16xA16 (16GB), 1700.0GB (disk)  $7.534
 21  vultr    ewr     96xCPU, 1024GB, 4xA100 (80GB), 450.0GB (disk)  $9.589
See the docs for detailed instructions on configuring the vultr backend.
Note
This release includes all dstack features except support for volumes and clusters. These features will be added in an upcoming update.
Vast.ai
Previously, the vastai
backend only allowed using Docker images where root
is the default user. This limitation has been removed, so you can now run NVIDIA NIM or any other image regardless of the user.
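As an illustration, a sketch of a vastai service using an image whose default user is not root; the image name, command, and port are hypothetical placeholders:
type: service
name: my-nonroot-service
backends: [vastai]
# Hypothetical image whose default user is not root
image: my-registry/my-nonroot-image:latest
commands:
  - ...
port: 8000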
Backward compatibility
If you are going to configure the vultr backend, make sure you update all your dstack CLI and API clients to the latest version. Clients prior to 0.18.35 will not work when Vultr is configured.
What's changed
- [dstack-shim] Revamp logging and CLI by @un-def in #2176
- Download dstack-runner to a well-known location by @un-def in #2179
- Add Vultr Support by @Bihan in #2132
- Support non-root Docker images in Vast.ai by @jvstme in #2185
- Refactor idle instance termination by @jvstme in #2188
- Retry instance termination in case of errors by @jvstme in #2190
- Update PyPI Development Status classifier by @jvstme in #2192
- Add Vultr to Concepts and Reference pages by @Bihan in #2186
Full changelog: 0.18.34...0.18.35
0.18.34
Idle duration
If provisioned fleet instances aren't used, they are marked as idle for reuse within the configured idle duration. After this period, instances are automatically deleted. This behavior was previously configured using the termination_policy and termination_idle_time properties in run or fleet configurations.
With this update, we replace these two properties with idle_duration, a simpler way to configure this behavior. This property can be set to a specific duration or to off for unlimited time.
type: dev-environment
name: vscode
python: "3.11"
ide: vscode
# Terminate instances idle for more than 1 hour
idle_duration: 1h
resources:
  gpu: 24GB
Docker
Previously, dstack had limitations on Docker images for dev environments, tasks, and services. These have now been lifted, allowing images based on various Linux distributions like Alpine, Rocky Linux, and Fedora.
dstack now also supports Docker images with built-in OpenSSH servers, which previously caused issues.
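For instance, a task built on an Alpine image now works out of the box; a minimal sketch (the command is just an example):
type: task
name: alpine-task
image: alpine
commands:
  - echo "Hello from Alpine"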
Documentation
The documentation has been significantly improved:
- Backend configuration has been moved from the Reference page to Concepts→Backends.
- Major examples related to dev environments, tasks, and services have been relocated from the Reference page to their respective Concepts pages.
Deprecations
- The termination_idle_time and termination_policy parameters in run configurations have been deprecated in favor of idle_duration.
What's changed
- [dstack-shim] Implement Future API by @un-def in #2141
- [API] Add API support to get runs by id by @r4victor in #2157
- [TPU] Update TPU v5e runtime and update vllm-tpu example by @Bihan in #2155
- [Internal] Skip docs-build on PRs from forks by @r4victor in #2159
- [dstack-shim] Add API v2 compat support to ShimClient by @un-def in #2156
- [Run configurations] Support Alpine and more RPM-based images by @un-def in #2151
- [Internal] Omit id field in (API)Client.runs.get() method by @un-def in #2174
- [dstack-shim] Remove API v1 by @un-def in #2160
- [Volumes] Fix volume attachment with dstack backend by @un-def in #2175
- Replace termination_policy and termination_idle_time with idle_duration: int|str|off by @peterschmidt85 in #2167
- Allow running sshd in dstack runs by @jvstme in #2178
- [Docs] Many docs improvements by @peterschmidt85 in #2171
Full changelog: 0.18.33...0.18.34
0.18.33
This update fixes TPU v6e support and a potential gateway upgrade issue.
What's Changed
- Fix runtime version for TPU v6e by @r4victor in #2149
- Update state.json migration on gateways by @jvstme in #2152
- Optimize gateway startup and service update time by @jvstme in #2153
Full Changelog: 0.18.32...0.18.33