refactor(baselines) Upgrade FedProx Baseline to new flwr format #4937

Open · wants to merge 6 commits into base: main
170 changes: 166 additions & 4 deletions baselines/fedprox/.gitignore
@@ -1,4 +1,166 @@
dataset/
outputs/
playground.ipynb
multirun/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# Flower directory
.flwr

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

# Project-Specific
results/
2 changes: 1 addition & 1 deletion baselines/fedprox/LICENSE
@@ -199,4 +199,4 @@
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
limitations under the License.
43 changes: 19 additions & 24 deletions baselines/fedprox/README.md
@@ -4,9 +4,9 @@ url: https://arxiv.org/abs/1812.06127
labels: [image classification, cross-device, stragglers]
dataset: [MNIST]
---

# FedProx: Federated Optimization in Heterogeneous Networks


> Note: If you use this baseline in your work, please remember to cite the original authors of the paper as well as the Flower paper.

**Paper:** [arxiv.org/abs/1812.06127](https://arxiv.org/abs/1812.06127)
@@ -19,20 +19,18 @@ dataset: [MNIST]
## About this baseline
**What's implemented:** The code in this directory replicates the experiments in *Federated Optimization in Heterogeneous Networks* (Li et al., 2018) for MNIST, which proposed the FedProx algorithm. Concretely, it replicates the results for MNIST in Figure 1 and 7.
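The distinguishing ingredient of FedProx is the proximal term added to each client's local objective. The sketch below shows one way to express it in PyTorch; the helper name `fedprox_loss` is hypothetical and this is an illustration of the technique, not the baseline's exact code.

```python
import torch


def fedprox_loss(criterion, outputs, targets, model, global_params, mu):
    """Local loss plus the FedProx proximal term (mu / 2) * ||w - w_t||^2.

    `global_params` holds the server weights received at the start of the
    round; `mu` is the proximal coefficient (mu = 0 recovers plain FedAvg).
    """
    prox_term = 0.0
    for local_p, global_p in zip(model.parameters(), global_params):
        prox_term = prox_term + torch.square(local_p - global_p).sum()
    return criterion(outputs, targets) + (mu / 2.0) * prox_term
```

With `mu = 0.0` this reduces to the plain criterion, which is why the FedAvg configs below pin `mu` to zero.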

**Datasets:** MNIST from PyTorch's Torchvision
**Datasets:** MNIST

**Hardware Setup:** These experiments were run on a desktop machine with 24 CPU threads. Any machine with 4 or more CPU cores would be able to run it in a reasonable amount of time. Note: we install PyTorch with GPU support but, by default, the entire experiment runs in CPU-only mode.

**Contributors:** Charles Beauville and Javier Fernandez-Marques
**Contributors:** Charles Beauville, Javier Fernandez-Marques and Andrej Jovanović


## Experimental Setup

**Task:** Image classification

**Model:** This directory implements two models:
* A logistic regression model used in the FedProx paper for MNIST (see `models/LogisticRegression`). This is the model used by default.
* A two-layer CNN network as used in the FedAvg paper (see `models/Net`)
**Model:** A logistic regression model as used in the FedProx paper for MNIST (see `model`).

**Dataset:** This baseline only includes the MNIST dataset. By default, it will be partitioned into 1000 clients following a pathological split where each client has examples of two (out of ten) class labels. The number of examples in each client is derived by sampling from a powerlaw distribution. The settings are as follows:

@@ -41,7 +39,7 @@ dataset: [MNIST]
| MNIST | 10 | 1000 | pathological with power law | 2 classes per client |
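The pathological power-law split summarized in the table can be sketched as follows. This is illustrative only: the lognormal stand-in for the power law (echoing the `mu`/`sigma` keys in the `[dataset]` config sections below) and the round-robin label-assignment rule are assumptions, not the baseline's exact partitioner.

```python
import numpy as np

rng = np.random.default_rng(42)
num_clients, num_labels = 1000, 10

# Per-client sample counts drawn from a heavy-tailed (lognormal) distribution,
# with a small preassigned minimum so no client is empty.
sample_counts = rng.lognormal(mean=0.0, sigma=2.0, size=num_clients).astype(int) + 10

# Pathological split: each client sees exactly two of the ten class labels.
client_labels = [
    sorted({(2 * cid) % num_labels, (2 * cid + 1) % num_labels})
    for cid in range(num_clients)
]
```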

**Training Hyperparameters:**
The following table shows the main hyperparameters for this baseline with their default value (i.e. the value used if you run `python main.py` directly)
The following table shows the main hyperparameters for this baseline with their default value (i.e. the value used if you run `flwr run .` directly)

| Description | Default Value |
| ----------- | ----- |
@@ -60,50 +58,47 @@ To construct the Python environment, simply run:

```bash
# Set directory to use python 3.10 (install with `pyenv install <version>` if you don't have it)
pyenv local 3.10.12
pyenv virtualenv 3.10.14 <name-of-your-baseline-env>
# Reviewer suggestion (Contributor): pyenv virtualenv 3.10.14 fedprox
# "I'm thinking it's better to give it a reasonable name directly so people
# that want to run the baseline can simply copy/paste the commands"
# Tell poetry to use python3.10
poetry env use 3.10.12
pyenv activate <name-of-your-baseline-env>
# Reviewer suggestion (Contributor): pyenv activate fedprox
# Install
poetry install
pip install -e .
```

## Running the Experiments

To run this FedProx with MNIST baseline, first ensure you have activated your Poetry environment (execute `poetry shell` from this directory), then:
To run this FedProx baseline with MNIST, first ensure you have activated your environment as above, then:

```bash
python -m fedprox.main # this will run using the default settings in the `conf/config.yaml`
flwr run . # this will run using the default settings in the `pyproject.toml`

# you can override settings directly from the command line
python -m fedprox.main mu=1 num_rounds=200 # will set proximal mu to 1 and the number of rounds to 200
flwr run . --run-config "algorithm.mu=2 dataset.mu=2 algorithm.num_server_rounds=200" # will set proximal mu to 2 and the number of rounds to 200

# if you run this baseline with a larger model, you might want to use the GPU (not used by default).
# you can enable this by overriding the `server_device` and `client_resources` config. For example
# you can enable this by overriding the federation config. For example
# the below will run the server model on the GPU and 4 clients will be allowed to run concurrently on a GPU (assuming you also meet the CPU criteria for clients)
python -m fedprox.main server_device=cuda client_resources.num_gpus=0.25
flwr run . gpu-simulation
```

To run using FedAvg:
```bash
# this will use a variation of FedAvg that drops the clients that were flagged as stragglers
# This is done to match the experimental setup in the FedProx paper
python -m fedprox.main --config-name fedavg

# this config can also be overridden from the CLI
flwr run . --run-config conf/fedavg_sf_0.9.toml
```

## Expected results

With the following command, we run both FedProx and FedAvg configurations while iterating through different values of `mu` and `stragglers_fraction`. We ran each experiment five times (this is achieved by artificially adding an extra element to the config but it doesn't have an impact on the FL setting `'+repeat_num=range(5)'`)
With the following command, we run both FedProx and FedAvg configurations while iterating through different values of `mu` and `stragglers_fraction`. We ran each experiment five times to ensure that the results are statistically meaningful.

```bash
python -m fedprox.main --multirun mu=0.0,2.0 stragglers_fraction=0.0,0.5,0.9 '+repeat_num=range(5)'
# note that for FedAvg we don't want to change the proximal term mu since it should be kept at 0.0
python -m fedprox.main --config-name fedavg --multirun stragglers_fraction=0.0,0.5,0.9 '+repeat_num=range(5)'
bash ./run_experiments.sh
```
The configurations of the specific experiments within this one large run can be found in the `conf` directory.

The above commands would generate results that you can plot and would look like the plot shown below. This plot was generated using the jupyter notebook in the `docs/` directory of this baseline after running the `--multirun` commands above.
The above command generates results that you can plot, and they would look like the plot shown below. This plot was generated using the Jupyter notebook in the `docs/` directory of this baseline after running the command above.

![](_static/FedProx_mnist.png)
![](_static/FedProx_mnist.png)
Binary file modified baselines/fedprox/_static/FedProx_mnist.png
32 changes: 32 additions & 0 deletions baselines/fedprox/conf/fedavg_sf_0.0.toml
Review comment (Contributor): In these config files, is it necessary to specify all the settings in the experiment? Or is it enough to only set those that differ from those in `pyproject.toml`? For example, `fraction_evaluate` is always the same.

Reply (Author): I changed this so the config files only specify the arguments that they are changing :D
@@ -0,0 +1,32 @@
[algorithm]
name = "FedAvg"
num_server_rounds = 100
fraction_fit = 0.01
fraction_evaluate = 0.0
min_evaluate_clients = 0
min_available_clients = 1000
min_fit_clients = 1000
local_epochs = 10
stragglers_fraction = 0.0
learning_rate = 0.03
mu = 0.0 # Always 0 when using FedAvg
num_clients = 1000

[dataset]
power_law = true
num_unique_labels_per_partition = 2
num_unique_labels = 10
preassigned_num_samples_per_label = 5
seed = 42
mu = 0.0 # Always 0 when using FedAvg
sigma = 2.0
val_ratio = 0.1
batch_size = 10

[fit]
drop_client = true # FedAvg drops stragglers; with FedProx, clients shouldn't be dropped even if they are stragglers

[model]
name = "LogisticRegression"
num_classes = 10
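The `stragglers_fraction` and `drop_client` settings interact as follows; here is a minimal sketch of the idea (the helper name `split_stragglers` and the sampling scheme are assumptions, not the baseline's exact logic).

```python
import random


def split_stragglers(client_ids, stragglers_fraction, seed=42):
    """Flag a fraction of the sampled clients as stragglers.

    With drop_client = true (the FedAvg variant) the flagged clients are
    discarded for the round; FedProx instead keeps them and lets them
    complete fewer local epochs.
    """
    rng = random.Random(seed)
    num_flagged = int(len(client_ids) * stragglers_fraction)
    flagged = set(rng.sample(client_ids, num_flagged))
    kept = [cid for cid in client_ids if cid not in flagged]
    return kept, flagged
```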
32 changes: 32 additions & 0 deletions baselines/fedprox/conf/fedavg_sf_0.5.toml
@@ -0,0 +1,32 @@
[algorithm]
name = "FedAvg"
num_server_rounds = 100
fraction_fit = 0.01
fraction_evaluate = 0.0
min_evaluate_clients = 0
min_available_clients = 1000
min_fit_clients = 1000
local_epochs = 10
stragglers_fraction = 0.5
learning_rate = 0.03
mu = 0.0 # Always 0 when using FedAvg
num_clients = 1000

[dataset]
power_law = true
num_unique_labels_per_partition = 2
num_unique_labels = 10
preassigned_num_samples_per_label = 5
seed = 42
mu = 0.0 # Always 0 when using FedAvg
sigma = 2.0
val_ratio = 0.1
batch_size = 10

[fit]
drop_client = true # FedAvg drops stragglers; with FedProx, clients shouldn't be dropped even if they are stragglers

[model]
name = "LogisticRegression"
num_classes = 10
32 changes: 32 additions & 0 deletions baselines/fedprox/conf/fedavg_sf_0.9.toml
@@ -0,0 +1,32 @@
[algorithm]
name = "FedAvg"
num_server_rounds = 100
fraction_fit = 0.01
fraction_evaluate = 0.0
min_evaluate_clients = 0
min_available_clients = 1000
min_fit_clients = 1000
local_epochs = 10
stragglers_fraction = 0.9
learning_rate = 0.03
mu = 0.0 # Always 0 when using FedAvg
num_clients = 1000

[dataset]
power_law = true
num_unique_labels_per_partition = 2
num_unique_labels = 10
preassigned_num_samples_per_label = 5
seed = 42
mu = 0.0 # Always 0 when using FedAvg
sigma = 2.0
val_ratio = 0.1
batch_size = 10

[fit]
drop_client = true # FedAvg drops stragglers; with FedProx, clients shouldn't be dropped even if they are stragglers

[model]
name = "LogisticRegression"
num_classes = 10