Merge branch 'main' into fds-add-vertical-even-partitioner
jafermarq authored Dec 18, 2024
2 parents a73f623 + 0a27cc4 commit bbb8317
Showing 437 changed files with 13,628 additions and 7,213 deletions.
4 changes: 2 additions & 2 deletions .github/CODEOWNERS
Original file line number Diff line number Diff line change
@@ -22,10 +22,10 @@ README.md @jafermarq @tanertopal @danieljanes
/src/py/flwr/cli/new/templates @jafermarq @tanertopal @danieljanes

# Changelog
/doc/source/ref-changelog.md @jafermarq @tanertopal @danieljanes
/framework/docs/source/ref-changelog.md @jafermarq @tanertopal @danieljanes

# Translations
/doc/locales @charlesbvll @tanertopal @danieljanes
/framework/docs/locales @charlesbvll @tanertopal @danieljanes

# GitHub Actions and Workflows
/.github/workflows @Robert-Steiner @tanertopal @danieljanes
2 changes: 1 addition & 1 deletion .github/workflows/baselines.yml
@@ -36,7 +36,7 @@ jobs:
FILTER+=$(echo "$DIR: ${BASELINES_PATH}/**\n")
done < <(find baselines -maxdepth 1 \
-name ".*" -prune -o \
-path "baselines/doc" -prune -o \
-path "baselines/docs" -prune -o \
-path "baselines/dev" -prune -o \
-path "baselines/baseline_template" -prune -o \
-path "baselines/flwr_baselines" -prune -o \
9 changes: 5 additions & 4 deletions .github/workflows/docs.yml
@@ -45,7 +45,8 @@ jobs:
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
DOCS_BUCKET: flower.ai
run: |
aws s3 sync --delete --exclude ".*" --exclude "v/*" --cache-control "no-cache" ./doc/build/html/ s3://${{ env.DOCS_BUCKET }}/docs/framework
aws s3 sync --delete --exclude ".*" --exclude "v/*" --cache-control "no-cache" ./baselines/doc/build/html/ s3://${{ env.DOCS_BUCKET }}/docs/baselines
aws s3 sync --delete --exclude ".*" --exclude "v/*" --cache-control "no-cache" ./examples/doc/build/html/ s3://${{ env.DOCS_BUCKET }}/docs/examples
aws s3 sync --delete --exclude ".*" --exclude "v/*" --cache-control "no-cache" ./datasets/doc/build/html/ s3://${{ env.DOCS_BUCKET }}/docs/datasets
cp -r doc/build/html/v* framework/docs/build/html
aws s3 sync --delete --exclude ".*" --exclude "v/*" --cache-control "no-cache" ./framework/docs/build/html/ s3://${{ env.DOCS_BUCKET }}/docs/framework
aws s3 sync --delete --exclude ".*" --exclude "v/*" --cache-control "no-cache" ./baselines/docs/build/html/ s3://${{ env.DOCS_BUCKET }}/docs/baselines
aws s3 sync --delete --exclude ".*" --exclude "v/*" --cache-control "no-cache" ./examples/docs/build/html/ s3://${{ env.DOCS_BUCKET }}/docs/examples
aws s3 sync --delete --exclude ".*" --exclude "v/*" --cache-control "no-cache" ./datasets/docs/build/html/ s3://${{ env.DOCS_BUCKET }}/docs/datasets
4 changes: 2 additions & 2 deletions .github/workflows/update_translations.yml
@@ -38,7 +38,7 @@ jobs:

- name: Update text and translations for all locales
run: |
cd doc
cd framework/docs
make update-text
for langDir in locales/*; do
if [ -d "$langDir" ]; then
@@ -52,7 +52,7 @@
run: |
git config --local user.email "41898282+github-actions[bot]@users.noreply.github.com"
git config --local user.name "github-actions[bot]"
git add doc/locales
git add framework/docs/locales
git commit -m "Update text and language files"
continue-on-error: true

12 changes: 9 additions & 3 deletions .gitignore
@@ -1,9 +1,9 @@
# Flower
.flower_ops
data/
doc/source/api_documentation
doc/source/_build
doc/source/dataset/
framework/docs/source/api_documentation
framework/docs/source/_build
framework/docs/source/dataset/
flwr_logs
.cache

@@ -17,6 +17,9 @@ examples/**/dataset/**
# Flower Baselines
baselines/datasets/leaf

# Exclude ee package
src/py/flwr/ee

# macOS
.DS_Store

@@ -183,3 +186,6 @@ app/src/main/assets
/captures
.externalNativeBuild
.cxx

# Pyright
pyrightconfig.json
10 changes: 5 additions & 5 deletions README.md
@@ -48,23 +48,23 @@ Flower's goal is to make federated learning accessible to everyone. This series

0. **What is Federated Learning?**

[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-what-is-federated-learning.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-series-what-is-federated-learning.ipynb))
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/framework/docs/source/tutorial-series-what-is-federated-learning.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/framework/docs/source/tutorial-series-what-is-federated-learning.ipynb))

1. **An Introduction to Federated Learning**

[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-get-started-with-flower-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-series-get-started-with-flower-pytorch.ipynb))
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/framework/docs/source/tutorial-series-get-started-with-flower-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/framework/docs/source/tutorial-series-get-started-with-flower-pytorch.ipynb))

2. **Using Strategies in Federated Learning**

[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb))
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/framework/docs/source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/framework/docs/source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb))

3. **Building Strategies for Federated Learning**

[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb))
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/framework/docs/source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/framework/docs/source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb))

4. **Custom Clients for Federated Learning**

[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-customize-the-client-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-series-customize-the-client-pytorch.ipynb))
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/framework/docs/source/tutorial-series-customize-the-client-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/framework/docs/source/tutorial-series-customize-the-client-pytorch.ipynb))

Stay tuned, more tutorials are coming soon. Topics include **Privacy and Security in Federated Learning**, and **Scaling Federated Learning**.

File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
11 changes: 8 additions & 3 deletions baselines/doc/source/conf.py → baselines/docs/source/conf.py
@@ -12,12 +12,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Config for Sphinx docs."""


import datetime
import os
import sys
from sphinx.application import ConfigError


# Configuration file for the Sphinx documentation builder.
#
@@ -120,11 +121,15 @@

nbsphinx_execute = "never"

_open_in_colab_button = """
colab_link = (
"https://colab.research.google.com/github/adap/flower/blob/main/"
"doc/source/{{ env.doc2path(env.docname, base=None) }}"
)
_open_in_colab_button = f"""
.. raw:: html
<br/>
<a href="https://colab.research.google.com/github/adap/flower/blob/main/doc/source/{{ env.doc2path(env.docname, base=None) }}">
<a href="{colab_link}">
<img alt="Open in Colab" src="https://colab.research.google.com/assets/colab-badge.svg"/>
</a>
"""
File renamed without changes.
85 changes: 42 additions & 43 deletions baselines/fedrep/README.md
@@ -36,91 +36,90 @@ dataset: [CIFAR-10, CIFAR-100]

These two models are modified from the [official repo](https://github.com/rahulv0205/fedrep_experiments)'s implementations. Note that the official models contain no BatchNorm (BN) layers; without BN layers, however, training collapses.

Please see how the models are implemented using so-called model manager and model split classes, since FedRep splits a neural network into head and base layers. These classes are defined in the `models.py` file and are then used when building new models in the `/implemented_models` directory. Feel free to extend and add new models.
Please see how the models are implemented using so-called model manager and model split classes, since FedRep splits a neural network into head and base layers. These classes are defined in the `models.py` file. Feel free to extend and add new models.

**Dataset:** CIFAR-10, CIFAR-100. CIFAR-10/100 is partitioned based on the number of classes each client receives, e.g., 4 allocated classes could be [1, 3, 5, 9].
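The class-based allocation described above can be sketched in a few lines of plain Python. This is an illustrative sketch only, not the baseline's actual partitioning code, and the helper name `assign_classes` is hypothetical:

```python
import random
from typing import Dict, List


def assign_classes(
    num_clients: int,
    num_classes_total: int,
    classes_per_client: int,
    seed: int = 42,
) -> Dict[int, List[int]]:
    """Randomly allocate a fixed number of distinct label classes to each client."""
    rng = random.Random(seed)
    allocation = {}
    for client_id in range(num_clients):
        # Sample without replacement, so each client's classes are distinct.
        allocation[client_id] = sorted(
            rng.sample(range(num_classes_total), classes_per_client)
        )
    return allocation


# Example: CIFAR-10 (10 classes), 100 clients, 4 classes per client.
alloc = assign_classes(num_clients=100, num_classes_total=10, classes_per_client=4)
print(alloc[0])  # a sorted list of 4 distinct class ids in [0, 9]
```

Each client would then keep only the training samples whose labels fall in its allocated class list.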

**Training Hyperparameters:** The hyperparameters can be found in `conf/base.yaml` file which is the configuration file for the main script.

| Description | Default Value |
| --------------------- | ----------------------------------- |
| `num_clients` | `100` |
| `num_rounds` | `100` |
| `num_local_epochs` | `5` |
| `num_rep_epochs` | `1` |
| `enable_finetune` | `False` |
| `num_finetune_epochs` | `5` |
| `use_cuda` | `true` |
| `specified_device` | `null` |
| `client resources` | `{'num_cpus': 2, 'num_gpus': 0.5 }` |
| `learning_rate` | `0.01` |
| `batch_size` | `50` |
| `model_name` | `cnncifar10` |
| `algorithm` | `fedrep` |
**Training Hyperparameters:** The hyperparameters can be found in the `pyproject.toml` file under the `[tool.flwr.app.config]` section.

| Description | Default Value |
|-------------------------|-------------------------------------|
| `num-server-rounds` | `100` |
| `num-local-epochs` | `5` |
| `num-rep-epochs` | `1` |
| `enable-finetune` | `False` |
| `num-finetune-epochs` | `5` |
| `use-cuda` | `true` |
| `specified-cuda-device` | `null` |
| `client-resources` | `{'num-cpus': 2, 'num-gpus': 0.5 }` |
| `learning-rate` | `0.01` |
| `batch-size` | `50` |
| `model-name` | `cnncifar10` |
| `algorithm` | `fedrep` |


## Environment Setup

To construct the Python environment follow these steps:
Create a new Python environment using [pyenv](https://github.com/pyenv/pyenv) and [virtualenv plugin](https://github.com/pyenv/pyenv-virtualenv), then install the baseline project:

```bash
# Set Python 3.10
pyenv local 3.10.12
# Tell poetry to use python 3.10
poetry env use 3.10.12
# Create the environment
pyenv virtualenv 3.10.12 fedrep-env

# Install the base Poetry environment
poetry install
# Activate it
pyenv activate fedrep-env

# Activate the environment
poetry shell
# Then install the project
pip install -e .
```

## Running the Experiments

```
python -m fedrep.main # this will run using the default settings in the `conf/base.yaml`
flwr run . # this will run using the default settings in the `pyproject.toml`
```

While the config files contain a large number of settings, the ones below are the main ones you'd likely want to modify to .
While the config files contain a large number of settings, the ones below are the main ones you'd likely want to modify.
```bash
algorithm: fedavg, fedrep # these are currently supported
dataset.name: cifar10, cifar100
dataset.num_classes: 2, 5, 20 (only for CIFAR-100)
model_name: cnncifar10, cnncifar100
algorithm = "fedavg", "fedrep" # these are currently supported
dataset-name = "cifar10", "cifar100"
dataset-split-num-classes = 2, 5, 20 (only for CIFAR-100)
model-name = "cnncifar10", "cnncifar100"
```

See, for instance, the configuration files for CIFAR-10 and CIFAR-100 under the `conf` directory.

## Expected Results
The default algorithm used by all configuration files is `fedrep`. To use `fedavg` please change the `algorithm` property in the respective configuration file. The default federated environment consists of 100 clients.

When the execution completes, a new directory `results` will be created with a json file that contains the running configurations and the results per round.

> [!NOTE]
> All plots shown below are generated using the `docs/make_plots.py` script. The script reads all json files generated by the baseline inside the `results` directory.
### CIFAR-10 (100, 2)

```
python -m fedrep.main --config-name cifar10_100_2 algorithm=fedrep
python -m fedrep.main --config-name cifar10_100_2 algorithm=fedavg
flwr run . --run-config conf/cifar10_2.toml
```
<img src="_static/cifar10_100_2.png" width="400"/>

### CIFAR-10 (100, 5)

```
python -m fedrep.main --config-name cifar10_100_5 algorithm=fedrep
python -m fedrep.main --config-name cifar10_100_5 algorithm=fedavg
flwr run . --run-config conf/cifar10_5.toml
```
<img src="_static/cifar10_100_5.png" width="400"/>

### CIFAR-100 (100, 5)

```
python -m fedrep.main --config-name cifar100_100_5 algorithm=fedrep
python -m fedrep.main --config-name cifar100_100_5 algorithm=fedavg
flwr run . --run-config conf/cifar100_5.toml
```
<img src="_static/cifar100_100_5.png" width="400"/>

### CIFAR-100 (100, 20)

```
python -m fedrep.main --config-name cifar100_100_20 algorithm=fedrep
python -m fedrep.main --config-name cifar100_100_20 algorithm=fedavg
flwr run . --run-config conf/cifar100_20.toml
```
<img src="_static/cifar100_100_20.png" width="400"/>
<img src="_static/cifar100_100_20.png" width="400"/>
Binary file modified baselines/fedrep/_static/cifar100_100_20.png
Binary file modified baselines/fedrep/_static/cifar100_100_5.png
Binary file modified baselines/fedrep/_static/cifar10_100_2.png
Binary file modified baselines/fedrep/_static/cifar10_100_5.png
11 changes: 11 additions & 0 deletions baselines/fedrep/conf/cifar100_20.toml
@@ -0,0 +1,11 @@
algorithm = "fedrep"

# model specs
model-name = "cnncifar100"

# dataset specs
dataset-name = "cifar100"
dataset-split = "sample"
dataset-split-num-classes = 20
dataset-split-seed = 42
dataset-split-fraction = 0.83
11 changes: 11 additions & 0 deletions baselines/fedrep/conf/cifar100_5.toml
@@ -0,0 +1,11 @@
algorithm = "fedrep"

# model specs
model-name = "cnncifar100"

# dataset specs
dataset-name = "cifar100"
dataset-split = "sample"
dataset-split-num-classes = 5
dataset-split-seed = 42
dataset-split-fraction = 0.83
8 changes: 8 additions & 0 deletions baselines/fedrep/conf/cifar10_2.toml
@@ -0,0 +1,8 @@
algorithm = "fedrep"

# dataset specs
dataset-name = "cifar10"
dataset-split = "sample"
dataset-split-num-classes = 2
dataset-split-seed = 42
dataset-split-fraction = 0.83
8 changes: 8 additions & 0 deletions baselines/fedrep/conf/cifar10_5.toml
@@ -0,0 +1,8 @@
algorithm = "fedrep"

# dataset specs
dataset-name = "cifar10"
dataset-split = "sample"
dataset-split-num-classes = 5
dataset-split-seed = 42
dataset-split-fraction = 0.83
50 changes: 50 additions & 0 deletions baselines/fedrep/docs/make_plots.py
@@ -0,0 +1,50 @@
"""Generate plots from json files."""

import json
import os
from typing import List, Tuple

import matplotlib.pyplot as plt

# Get the current working directory
DIR = os.path.dirname(os.path.abspath(__file__))


def read_from_results(path: str) -> Tuple[str, str, List[float], str, str]:
"""Load the json file with recorded configurations and results."""
with open(path, "r", encoding="UTF-8") as fin:
data = json.load(fin)
algorithm = data["run_config"]["algorithm"]
model = data["run_config"]["model-name"]
accuracies = [res["accuracy"] * 100 for res in data["round_res"]]
dataset = data["run_config"]["dataset-name"]
num_classes = data["run_config"]["dataset-split-num-classes"]

return algorithm, model, accuracies, dataset, num_classes


def make_plot(dir_path: str, plt_title: str) -> None:
"""Given a directory with json files, generated a plot using the provided title."""
plt.figure()
with os.scandir(dir_path) as files:
for file in files:
file_name = os.path.join(dir_path, file.name)
print(file_name, flush=True)
algo, m, acc, d, n = read_from_results(file_name)
rounds = [i + 1 for i in range(len(acc))]
print(f"Max accuracy ({algo}): {max(acc):.2f}")
plt.plot(rounds, acc, label=f"{algo}-{d}-{n}classes")
plt.xlabel("Rounds")
plt.ylabel("Accuracy")
plt.title(plt_title)
plt.grid()
plt.legend()
plt.savefig(os.path.join(DIR, f"{plt_title}-{algo}"))


if __name__ == "__main__":
# Plot results generated by the baseline.
# Combine them into a full file path.
res_dir = os.path.join(DIR, "../results/")
title = "Federated Accuracy over Rounds"
make_plot(res_dir, plt_title=title)
2 changes: 1 addition & 1 deletion baselines/fedrep/fedrep/__init__.py
@@ -1 +1 @@
"""Template baseline package."""
"""fedrep: A Flower Baseline."""
