From 4c2a0ca6d4137357715caf6707457c4990aa43e3 Mon Sep 17 00:00:00 2001 From: jafermarq Date: Thu, 21 Dec 2023 19:07:27 +0100 Subject: [PATCH] README tweaks --- baselines/heterofl/README.md | 133 +++++++++-------------------------- 1 file changed, 33 insertions(+), 100 deletions(-) diff --git a/baselines/heterofl/README.md b/baselines/heterofl/README.md index fa3dc037c9ce..d978b98d1633 100644 --- a/baselines/heterofl/README.md +++ b/baselines/heterofl/README.md @@ -1,11 +1,11 @@ --- -title: HeteroFL - Computation And Communication Efficient Federated Learning For Heterogeneous Clients +title: "HeteroFL: Computation And Communication Efficient Federated Learning For Heterogeneous Clients" url: https://openreview.net/forum?id=TNkPBBYFkXg labels: [system heterogeneity, image classification] -dataset: [MNIST, CIFAR10] +dataset: [MNIST, CIFAR-10] --- -# HeteroFL : Computation And Communication Efficient Federated Learning For Heterogeneous Clients +# HeteroFL: Computation And Communication Efficient Federated Learning For Heterogeneous Clients **Paper:** [openreview.net/forum?id=TNkPBBYFkXg](https://openreview.net/forum?id=TNkPBBYFkXg) @@ -16,23 +16,23 @@ dataset: [MNIST, CIFAR10] ## About this baseline -**What’s implemented:** The code in this directory is an implementation of HeteroFL in pytorch using flower. The code incorporates references from the authors' implementation. Implementation of custom model split and aggregation as suggested by @negedng, is available [here](https://github.com/msck72/heterofl_custom_aggregation). By modifying the configuration in the base.yaml, the results in the paper can be replicated, with both fixed and dynamic computational complexities among clients. +**What’s implemented:** The code in this directory is an implementation of HeteroFL in PyTorch using Flower. The code incorporates references from the authors' implementation. 
An implementation of custom model split and aggregation, as suggested by [@negedng](https://github.com/negedng), is available [here](https://github.com/msck72/heterofl_custom_aggregation). By modifying the configuration in `base.yaml`, the results in the paper can be replicated, with both fixed and dynamic computational complexities among clients. **Key Terminology:** -+ *Model rate* defines the computational complextiy of a client. Authors have defined five different computation complexity levels {a, b, c, d, e} with the hidden channel shrinkage ratio r = 0.5. ++ *Model rate* defines the computational complexity of a client. The authors define five computation complexity levels {a, b, c, d, e} with a hidden channel shrinkage ratio r = 0.5. -+ *Model split mode* specifies whether the computaional complexities of clients are fixed (throughout the experiment), or whether they are dynamic (change their mode_rate/computational-complexity every-round). ++ *Model split mode* specifies whether the computational complexities of clients are fixed (throughout the experiment) or dynamic (clients change their model rate every round). -+ *Model mode* determines the proportionality of clients with various computation complexity levels, for example, a4-b2-e4 determines at each round, proportion of clients with computational complexity level a = 4 / (4 + 2 + 4) * num_clients , similarly, proportion of clients with computational complexity level b = 2 / (4 + 2 + 4) * num_clients and so on. ++ *Model mode* determines the proportion of clients at each computation complexity level. For example, a4-b2-e4 means that at each round the proportion of clients with complexity level a = 4 / (4 + 2 + 4) * num_clients; similarly, the proportion with level b = 2 / (4 + 2 + 4) * num_clients, and so on.
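To make the model-rate and model-mode arithmetic above concrete, here is a small illustrative sketch. It is not part of the baseline's code; the helper names are hypothetical.

```python
# Illustrative sketch only: function names are hypothetical,
# not taken from the baseline's implementation.

LEVELS = "abcde"  # the five computation complexity levels

def model_rate(level: str, r: float = 0.5) -> float:
    # Hidden-channel width ratio for a level: a = 1, b = r, c = r^2, ...
    return r ** LEVELS.index(level)

def client_proportions(model_mode: str) -> dict:
    # Parse e.g. "a4-b2-e4" into per-level client proportions.
    parts = [(p[0], int(p[1:])) for p in model_mode.split("-")]
    total = sum(weight for _, weight in parts)
    return {level: weight / total for level, weight in parts}

print(model_rate("c"))                 # 0.25
print(client_proportions("a4-b2-e4"))  # {'a': 0.4, 'b': 0.2, 'e': 0.4}
```

Under this reading, mode a4-b2-e4 with 100 clients would assign 40 clients the full model (rate 1.0), 20 clients rate 0.5, and 40 clients rate 0.0625 each round.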
**Implementation Insights:** *ModelRateManager* manages each client's model rate in the simulation, changing it according to the model mode of the setup, while *ClientManagerHeterofl* keeps track of the clients' model rates so that `configure_fit` knows which subset of the model needs to be sent to each client. **Datasets:** The code uses the benchmark MNIST and CIFAR-10 datasets from PyTorch's torchvision. -**Hardware Setup:** The experiments were run on Google colab pro with 50GB RAM and T4 TPU. For MNIST dataset & CNN model, it approximatemy takes 1.5 hours to complete 200 rounds while for CIFAR10 dataset & ResNet18 model it takes around 3-4 hours to complete 400 rounds (may vary based on the model-mode of the setup). +**Hardware Setup:** The experiments were run on Google Colab Pro with 50GB RAM and a T4 GPU. For the MNIST dataset & CNN model, it takes approximately 1.5 hours to complete 200 rounds, while for the CIFAR-10 dataset & ResNet18 model it takes around 3-4 hours to complete 400 rounds (may vary based on the model mode of the setup). -**Contributors:** M S Chaitanya Kumar [(github.com/msck72)](github.com/msck72) +**Contributors:** M S Chaitanya Kumar [(github.com/msck72)](https://github.com/msck72) ## Experimental Setup @@ -45,102 +45,35 @@ dataset: [MNIST, CIFAR10] These models use static batch normalization (sBN) and they incorporate a Scaler module following each convolutional layer. **Dataset:** This baseline includes MNIST and CIFAR10 datasets. - - - - - - - - - - - - - - - - - - - - - -
Dataset#classesIID partitionnon-IID partition
MNIST10Distribution of equal number of data examples among n clientsDistribution of data examples such that each client has at most 2 (customizable) classes
CIFAR1010
+ +| Dataset | #Classes | IID Partition | non-IID Partition |
+| :---: | :---: | :---: | :---: |
+| MNIST<br>CIFAR10 | 10 | Distribution of equal number of data examples among n clients | Distribution of data examples such that each client has at most 2 (customizable) classes |
+ **Training Hyperparameters:** - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
DescriptionMNISTCIFAR10
total clients100
clients per round100
#local epochs5
number of roundsIID200400
non-IID400800
optimizerSGD
momentum5.00e-04
weight-decay0.9
learning rate0.010.1
decay scheduleIID[100][200]
non-IID[150, 250][300, 500]
hidden layers[64 , 128 , 256 , 512]
+ +| Description | Data Setting | MNIST | CIFAR-10 |
+| :---: | :---: | :---: | :---: |
+| Total Clients | both | 100 | 100 |
+| Clients Per Round | both | 100 | 100 |
+| Local Epochs | both | 5 | 5 |
+| Num. Rounds | IID<br>non-IID | 200<br>400 | 400<br>800 |
+| Optimizer | both | SGD | SGD |
+| Momentum | both | 0.9 | 0.9 |
+| Weight Decay | both | 5.00e-04 | 5.00e-04 |
+| Learning Rate | both | 0.01 | 0.1 |
+| Decay Schedule | IID<br>non-IID | [100]<br>[150, 250] | [200]<br>[300, 500] |
+| Hidden Layers | both | [64, 128, 256, 512] | [64, 128, 256, 512] |
+ The hyperparameters of the FedAvg baseline are available in [Liang et al (2020)](https://arxiv.org/abs/2001.01523). ## Environment Setup -``` +To construct the Python environment, simply run: + +```bash # Set python version pyenv install 3.10.6 pyenv local 3.10.6 @@ -215,9 +148,9 @@ Results of the combination of various computation complexity levels for **MNIST**
Results of the combination of various computation complexity levels for the **CIFAR10** dataset with the **dynamic** scenario (where a client does not belong to a fixed computational complexity level): -> *The HeteroFL paper reports a model with 1.8M parameters for their FedAvg baseline. However, as stated by the paper authors, those results are borrowed from [Liang et al (2020)](https://arxiv.org/abs/2001.01523), which uses a small CNN with fewer parameters (~64K as shown in this table below). We believe the HeteroFL authors made a mistake when reporting the number of parameters. We borrowed the model from Liang et al (2020)'s [repo](https://github.com/pliang279/LG-FedAvg/blob/master/models/Nets.py)* +> *The HeteroFL paper reports a model with 1.8M parameters for their FedAvg baseline. However, as stated by the paper authors, those results are borrowed from [Liang et al (2020)](https://arxiv.org/abs/2001.01523), which uses a small CNN with fewer parameters (~64K, as shown in the table below). We believe the HeteroFL authors made a mistake when reporting the number of parameters. We borrowed the model from Liang et al (2020)'s [repo](https://github.com/pliang279/LG-FedAvg/blob/master/models/Nets.py).* - +
Model