refactor(examples) Update quickstart-pytorch to show how to run with GPU (#4395)

Co-authored-by: jafermarq <[email protected]>
zjh199683 and jafermarq authored Oct 29, 2024
1 parent 4de7e98 commit 8fac6fc
Showing 2 changed files with 16 additions and 0 deletions.
11 changes: 11 additions & 0 deletions examples/quickstart-pytorch/README.md
@@ -48,7 +48,11 @@ You can run your Flower project in both _simulation_ and _deployment_ mode witho

### Run with the Simulation Engine

> [!TIP]
> This example might run faster when the `ClientApp`s have access to a GPU. If your system has one, you can make use of it by configuring the `backend.client-resources` component in `pyproject.toml`. If you want to try running the example with a GPU right away, use the `local-simulation-gpu` federation as shown below.
```bash
# Run with the default federation (CPU only)
flwr run .
```
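
As a point of reference for the tip above, a GPU-enabled federation entry in `pyproject.toml` might look like the sketch below; the values mirror the `local-simulation-gpu` federation that this commit adds (see the `pyproject.toml` diff further down).

```toml
# Sketch of a GPU-enabled federation (mirrors the entry added in this commit)
[tool.flwr.federations.local-simulation-gpu]
options.num-supernodes = 10
options.backend.client-resources.num-cpus = 2   # CPUs reserved per ClientApp
options.backend.client-resources.num-gpus = 0.2 # fraction of a GPU per ClientApp
```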

@@ -58,6 +62,13 @@ You can also override some of the settings for your `ClientApp` and `ServerApp`
flwr run . --run-config "num-server-rounds=5 learning-rate=0.05"
```

Run the project in the `local-simulation-gpu` federation, which assigns CPU and GPU resources to each `ClientApp`. By default, at most five `ClientApp`s will run in parallel on the available GPU. You can tweak the degree of parallelism by adjusting the settings of this federation in `pyproject.toml`.

```bash
# Run with the `local-simulation-gpu` federation
flwr run . local-simulation-gpu
```
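
The degree of parallelism follows from the fractional GPU share each `ClientApp` reserves: with `num-gpus = 0.2`, up to five `ClientApp`s share one GPU. As an illustration only (this value is not part of the commit), reserving a larger share reduces concurrency:

```toml
# Hypothetical tweak: each ClientApp reserves a quarter of a GPU,
# so at most 4 ClientApps run in parallel on a single GPU
options.backend.client-resources.num-gpus = 0.25
```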

> [!TIP]
> For a more detailed walk-through, check our [quickstart PyTorch tutorial](https://flower.ai/docs/framework/tutorial-quickstart-pytorch.html)
5 changes: 5 additions & 0 deletions examples/quickstart-pytorch/pyproject.toml
@@ -36,3 +36,8 @@ default = "local-simulation"

[tool.flwr.federations.local-simulation]
options.num-supernodes = 10

[tool.flwr.federations.local-simulation-gpu]
options.num-supernodes = 10
options.backend.client-resources.num-cpus = 2 # each ClientApp is assumed to use 2 CPUs
options.backend.client-resources.num-gpus = 0.2 # at most 5 ClientApps will run on a given GPU
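
As a usage illustration combining two commands documented in the README above (the exact flag combination is an assumption, not shown in this commit), the GPU federation can be selected together with run-config overrides:

```bash
# Select the GPU federation and override two run-config values
flwr run . local-simulation-gpu --run-config "num-server-rounds=5 learning-rate=0.05"
```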
