diff --git a/README.md b/README.md index e4433e517b88..38a11d951fe7 100644 --- a/README.md +++ b/README.md @@ -1,16 +1,16 @@ # Flower: A Friendly Federated Learning Framework

- - Flower Website + + Flower Website

- Website | - Blog | - Docs | - Conference | - Slack + Website | + Blog | + Docs | + Conference | + Slack

@@ -18,7 +18,7 @@ [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/adap/flower/blob/main/CONTRIBUTING.md) ![Build](https://github.com/adap/flower/actions/workflows/framework.yml/badge.svg) [![Downloads](https://static.pepy.tech/badge/flwr)](https://pepy.tech/project/flwr) -[![Slack](https://img.shields.io/badge/Chat-Slack-red)](https://flower.dev/join-slack) +[![Slack](https://img.shields.io/badge/Chat-Slack-red)](https://flower.ai/join-slack) Flower (`flwr`) is a framework for building federated learning systems. The design of Flower is based on a few guiding principles: @@ -39,7 +39,7 @@ design of Flower is based on a few guiding principles: - **Understandable**: Flower is written with maintainability in mind. The community is encouraged to both read and contribute to the codebase. -Meet the Flower community on [flower.dev](https://flower.dev)! +Meet the Flower community on [flower.ai](https://flower.ai)! ## Federated Learning Tutorial @@ -73,19 +73,19 @@ Stay tuned, more tutorials are coming soon. Topics include **Privacy and Securit ## Documentation -[Flower Docs](https://flower.dev/docs): +[Flower Docs](https://flower.ai/docs): -- [Installation](https://flower.dev/docs/framework/how-to-install-flower.html) -- [Quickstart (TensorFlow)](https://flower.dev/docs/framework/tutorial-quickstart-tensorflow.html) -- [Quickstart (PyTorch)](https://flower.dev/docs/framework/tutorial-quickstart-pytorch.html) -- [Quickstart (Hugging Face)](https://flower.dev/docs/framework/tutorial-quickstart-huggingface.html) -- [Quickstart (PyTorch Lightning)](https://flower.dev/docs/framework/tutorial-quickstart-pytorch-lightning.html) -- [Quickstart (Pandas)](https://flower.dev/docs/framework/tutorial-quickstart-pandas.html) -- [Quickstart (fastai)](https://flower.dev/docs/framework/tutorial-quickstart-fastai.html) -- [Quickstart (JAX)](https://flower.dev/docs/framework/tutorial-quickstart-jax.html) -- [Quickstart (scikit-learn)](https://flower.dev/docs/framework/tutorial-quickstart-scikitlearn.html) -- [Quickstart (Android [TFLite])](https://flower.dev/docs/framework/tutorial-quickstart-android.html) -- [Quickstart (iOS [CoreML])](https://flower.dev/docs/framework/tutorial-quickstart-ios.html) +- [Installation](https://flower.ai/docs/framework/how-to-install-flower.html) +- [Quickstart (TensorFlow)](https://flower.ai/docs/framework/tutorial-quickstart-tensorflow.html) +- [Quickstart (PyTorch)](https://flower.ai/docs/framework/tutorial-quickstart-pytorch.html) +- [Quickstart (Hugging Face)](https://flower.ai/docs/framework/tutorial-quickstart-huggingface.html) +- [Quickstart (PyTorch Lightning)](https://flower.ai/docs/framework/tutorial-quickstart-pytorch-lightning.html) +- [Quickstart (Pandas)](https://flower.ai/docs/framework/tutorial-quickstart-pandas.html) +- [Quickstart (fastai)](https://flower.ai/docs/framework/tutorial-quickstart-fastai.html) +- [Quickstart (JAX)](https://flower.ai/docs/framework/tutorial-quickstart-jax.html) +- [Quickstart (scikit-learn)](https://flower.ai/docs/framework/tutorial-quickstart-scikitlearn.html) +- [Quickstart (Android [TFLite])](https://flower.ai/docs/framework/tutorial-quickstart-android.html) +- [Quickstart (iOS [CoreML])](https://flower.ai/docs/framework/tutorial-quickstart-ios.html) ## Flower Baselines @@ -112,9 +112,9 @@ Flower Baselines is a collection of community-contributed projects that reproduc - [FedAvg](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/fedavg_mnist) - 
[FedOpt](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/adaptive_federated_optimization) -Please refer to the [Flower Baselines Documentation](https://flower.dev/docs/baselines/) for a detailed categorization of baselines and for additional info including: -* [How to use Flower Baselines](https://flower.dev/docs/baselines/how-to-use-baselines.html) -* [How to contribute a new Flower Baseline](https://flower.dev/docs/baselines/how-to-contribute-baselines.html) +Please refer to the [Flower Baselines Documentation](https://flower.ai/docs/baselines/) for a detailed categorization of baselines and for additional info including: +* [How to use Flower Baselines](https://flower.ai/docs/baselines/how-to-use-baselines.html) +* [How to contribute a new Flower Baseline](https://flower.ai/docs/baselines/how-to-contribute-baselines.html) ## Flower Usage Examples @@ -151,7 +151,7 @@ Other [examples](https://github.com/adap/flower/tree/main/examples): ## Community -Flower is built by a wonderful community of researchers and engineers. [Join Slack](https://flower.dev/join-slack) to meet them, [contributions](#contributing-to-flower) are welcome. +Flower is built by a wonderful community of researchers and engineers. [Join Slack](https://flower.ai/join-slack) to meet them, [contributions](#contributing-to-flower) are welcome. diff --git a/baselines/README.md b/baselines/README.md index a18c0553b2b4..3a84df02d8de 100644 --- a/baselines/README.md +++ b/baselines/README.md @@ -1,7 +1,7 @@ # Flower Baselines -> We are changing the way we structure the Flower baselines. While we complete the transition to the new format, you can still find the existing baselines in the `flwr_baselines` directory. Currently, you can make use of baselines for [FedAvg](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/fedavg_mnist), [FedOpt](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/adaptive_federated_optimization), and [LEAF-FEMNIST](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/leaf/femnist). +> We are changing the way we structure the Flower baselines. While we complete the transition to the new format, you can still find the existing baselines in the `flwr_baselines` directory. Currently, you can make use of baselines for [FedAvg](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/fedavg_mnist), [FedOpt](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/adaptive_federated_optimization), and [LEAF-FEMNIST](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/leaf/femnist). > The documentation below has been updated to reflect the new way of using Flower baselines. @@ -23,7 +23,7 @@ Please note that some baselines might include additional files (e.g. a `requirem ## Running the baselines -Each baseline is self-contained in its own directory. Furthermore, each baseline defines its own Python environment using [Poetry](https://python-poetry.org/docs/) via a `pyproject.toml` file and [`pyenv`](https://github.com/pyenv/pyenv). If you haven't setup `Poetry` and `pyenv` already on your machine, please take a look at the [Documentation](https://flower.dev/docs/baselines/how-to-use-baselines.html#setting-up-your-machine) for a guide on how to do so. +Each baseline is self-contained in its own directory. 
Furthermore, each baseline defines its own Python environment using [Poetry](https://python-poetry.org/docs/) via a `pyproject.toml` file and [`pyenv`](https://github.com/pyenv/pyenv). If you haven't set up `Poetry` and `pyenv` already on your machine, please take a look at the [Documentation](https://flower.ai/docs/baselines/how-to-use-baselines.html#setting-up-your-machine) for a guide on how to do so. Assuming `pyenv` and `Poetry` are already installed on your system, running a baseline can be done by: @@ -54,7 +54,7 @@ The steps to follow are: ```bash # This will create a new directory with the same structure as `baseline_template`. ./dev/create-baseline.sh - ``` + ``` 3. Then, go inside your baseline directory and continue with the steps detailed in `EXTENDED_README.md` and `README.md`. 4. Once your code is ready and you have checked that following the instructions in your `README.md` the Python environment can be created correctly and that running the code following your instructions can reproduce the experiments in the paper, you just need to create a Pull Request (PR). Then, the process to merge your baseline into the Flower repo will begin! diff --git a/baselines/fedpara/README.md b/baselines/fedpara/README.md index 068366aa261c..82efe5fac537 100644 --- a/baselines/fedpara/README.md +++ b/baselines/fedpara/README.md @@ -5,7 +5,7 @@ labels: [image classification, personalization, low-rank training, tensor decomp dataset: [CIFAR-10, CIFAR-100, MNIST] --- -# FedPara: Low-rank Hadamard Product for Communication-Efficient Federated Learning +# FedPara: Low-rank Hadamard Product for Communication-Efficient Federated Learning > Note: If you use this baseline in your work, please remember to cite the original authors of the paper as well as the Flower paper. @@ -43,7 +43,7 @@ Specifically, it replicates the results for CIFAR-10 and CIFAR-100 in Figure 3 On a machine with RTX 3090Ti (24GB VRAM) it takes approximately 1h to run each CIFAR-10/100 experiment while using < 12GB of VRAM. You can lower the VRAM footprint by reducing the number of clients allowed to run in parallel on your GPU (do this by raising `client_resources.num_gpus`). -**Contributors:** Yahia Salaheldin Shaaban, Omar Mokhtar and Roeia Amr +**Contributors:** Yahia Salaheldin Shaaban, Omar Mokhtar and Roeia Amr ## Experimental Setup @@ -52,48 +52,48 @@ On a machine with RTX 3090Ti (24GB VRAM) it takes approximately 1h to run each C **Model:** This baseline implements VGG16 with group normalization.
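For readers unfamiliar with why group normalization is used here, below is a minimal, illustrative PyTorch sketch (not the baseline's actual model code) of a VGG16-style convolutional block that uses `GroupNorm` instead of `BatchNorm2d`, so normalization does not depend on client-local batch statistics; the group count and layer sizes are assumed values for illustration only.

```python
# Illustrative sketch only (not the fedpara baseline's code): a VGG16-style conv
# block using GroupNorm instead of BatchNorm, so normalization does not rely on
# per-client batch statistics. num_groups=2 is an assumed example value.
import torch
import torch.nn as nn


def conv_gn_block(in_ch: int, out_ch: int, num_groups: int = 2) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.GroupNorm(num_groups, out_ch),  # replaces nn.BatchNorm2d(out_ch)
        nn.ReLU(inplace=True),
    )


# First VGG16 stage as an example: two conv blocks followed by max-pooling.
features = nn.Sequential(conv_gn_block(3, 64), conv_gn_block(64, 64), nn.MaxPool2d(2))
print(features(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 64, 16, 16])
```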
-**Dataset:** +**Dataset:** -| Dataset | #classes | #partitions | partitioning method IID | partitioning method non-IID | -|:---------|:--------:|:-----------:|:----------------------:| :----------------------:| -| CIFAR-10 | 10 | 100 | random split | Dirichlet distribution ($\alpha=0.5$)| -| CIFAR-100 | 100 | 50 | random split| Dirichlet distribution ($\alpha=0.5$)| +| Dataset | #classes | #partitions | partitioning method IID | partitioning method non-IID | +| :-------- | :------: | :---------: | :---------------------: | :-----------------------------------: | +| CIFAR-10 | 10 | 100 | random split | Dirichlet distribution ($\alpha=0.5$) | +| CIFAR-100 | 100 | 50 | random split | Dirichlet distribution ($\alpha=0.5$) | **Training Hyperparameters:** -| | Cifar10 IID | Cifar10 Non-IID | Cifar100 IID | Cifar100 Non-IID | MNIST | -|---|-------|-------|------|-------|----------| -| Fraction of client (K) | 16 | 16 | 8 | 8 | 10 | -| Total rounds (T) | 200 | 200 | 400 | 400 | 100 | -| Number of SGD epochs (E) | 10 | 5 | 10 | 5 | 5 | -| Batch size (B) | 64 | 64 | 64 | 64 | 10 | -| Initial learning rate (η) | 0.1 | 0.1 | 0.1 | 0.1 | 0.1-0.01 | -| Learning rate decay (τ) | 0.992 | 0.992 | 0.992| 0.992 | 0.999 | -| Regularization coefficient (λ) | 1 | 1 | 1 | 1 | 0 | +| | Cifar10 IID | Cifar10 Non-IID | Cifar100 IID | Cifar100 Non-IID | MNIST | +| ------------------------------ | ----------- | --------------- | ------------ | ---------------- | -------- | +| Fraction of client (K) | 16 | 16 | 8 | 8 | 10 | +| Total rounds (T) | 200 | 200 | 400 | 400 | 100 | +| Number of SGD epochs (E) | 10 | 5 | 10 | 5 | 5 | +| Batch size (B) | 64 | 64 | 64 | 64 | 10 | +| Initial learning rate (η) | 0.1 | 0.1 | 0.1 | 0.1 | 0.1-0.01 | +| Learning rate decay (τ) | 0.992 | 0.992 | 0.992 | 0.992 | 0.999 | +| Regularization coefficient (λ) | 1 | 1 | 1 | 1 | 0 | As for the parameters ratio ($\gamma$) we use the following model sizes. As in the paper, $\gamma=0.1$ is used for CIFAR-10 and $\gamma=0.4$ for CIFAR-100: | Parameters ratio ($\gamma$) | CIFAR-10 | CIFAR-100 | -|----------|--------|--------| -| 1.0 (original) | 15.25M | 15.30M | -| 0.1 | 1.55M | - | -| 0.4 | - | 4.53M | +| --------------------------- | -------- | --------- | +| 1.0 (original) | 15.25M | 15.30M | +| 0.1 | 1.55M | - | +| 0.4 | - | 4.53M | -### Notes: +### Notes: - Notably, Fedpara's low-rank training technique heavily relies on initialization, with our experiments revealing that employing a 'Fan-in' He initialization (or Kaiming) renders the model incapable of convergence, resulting in a performance akin to that of a random classifier. We found that only Fan-out initialization yielded the anticipated results, and we postulated that this is attributed to the variance conservation during backward propagation. - The paper lacks explicit guidance on calculating the rank, aside from the "Rank_min - Rank_max" equation. To address this, we devised an equation aligning with the literature's explanation and constraint, solving a quadratic equation to determine max_rank and utilizing proposition 2 from the paper to establish min_rank. - The Jacobian correction was not incorporated into our implementation, primarily due to the lack of explicit instructions in the paper regarding the specific implementation of the dual update principle mentioned in the Jacobian correction section. 
-- It was observed that data generation is crutial for model convergence +- It was observed that data generation is crucial for model convergence. ## Environment Setup To construct the Python environment follow these steps: -It is assumed that `pyenv` is installed, `poetry` is installed and python 3.10.6 is installed using `pyenv`. Refer to this [documentation](https://flower.dev/docs/baselines/how-to-usef-baselines.html#setting-up-your-machine) to ensure that your machine is ready. +It is assumed that `pyenv` and `poetry` are installed and that Python 3.10.6 is installed using `pyenv`. Refer to this [documentation](https://flower.ai/docs/baselines/how-to-use-baselines.html#setting-up-your-machine) to ensure that your machine is ready. ```bash # Set Python 3.10 @@ -112,7 +112,7 @@ poetry shell Running `FedPara` is easy. You can run it with default parameters directly or by tweaking them directly on the command line. Some command examples are shown below. -```bash +```bash # To run fedpara with default parameters python -m fedpara.main @@ -138,45 +138,45 @@ To reproduce the curves shown below (which correspond to those in Figure 3 in th ```bash # To run fedpara for non-iid CIFAR-10 on vgg16 for lowrank and original schemes -python -m fedpara.main --multirun model.param_type=standard,lowrank +python -m fedpara.main --multirun model.param_type=standard,lowrank # To run fedpara for non-iid CIFAR-100 on vgg16 for lowrank and original schemes -python -m fedpara.main --config-name cifar100 --multirun model.param_type=standard,lowrank +python -m fedpara.main --config-name cifar100 --multirun model.param_type=standard,lowrank # To run fedpara for iid CIFAR-10 on vgg16 for lowrank and original schemes -python -m fedpara.main --multirun model.param_type=standard,lowrank num_epochs=10 dataset_config.partition=iid +python -m fedpara.main --multirun model.param_type=standard,lowrank num_epochs=10 dataset_config.partition=iid # To run fedpara for iid CIFAR-100 on vgg16 for lowrank and original schemes python -m fedpara.main --config-name cifar100 --multirun model.param_type=standard,lowrank num_epochs=10 dataset_config.partition=iid -# To run fedavg for non-iid MINST on FC -python -m fedpara.main --config-name mnist_fedavg -# To run fedper for non-iid MINST on FC -python -m fedpara.main --config-name mnist_fedper -# To run pfedpara for non-iid MINST on FC -python -m fedpara.main --config-name mnist_pfedpara +# To run fedavg for non-iid MNIST on FC +python -m fedpara.main --config-name mnist_fedavg +# To run fedper for non-iid MNIST on FC +python -m fedpara.main --config-name mnist_fedper +# To run pfedpara for non-iid MNIST on FC +python -m fedpara.main --config-name mnist_pfedpara ``` -#### Communication Cost: -Communication costs as measured as described in the paper: +#### Communication Cost: +Communication costs are measured as described in the paper: *"FL evaluation typically measures the required rounds to achieve the target accuracy as communication costs, but we instead assess total transferred bit sizes, 2 × (#participants)×(model size)×(#rounds)"* ### CIFAR-100 (Accuracy vs Communication Cost) -| IID | Non-IID | -|:----:|:----:| -|![Cifar100 iid](_static/Cifar100_iid.jpeg) | ![Cifar100 non-iid](_static/Cifar100_noniid.jpeg) | +| IID | Non-IID | +| :----------------------------------------: | :-----------------------------------------------: | +| ![Cifar100 iid](_static/Cifar100_iid.jpeg) | ![Cifar100 non-iid](_static/Cifar100_noniid.jpeg) | ### CIFAR-10 (Accuracy vs Communication Cost) -| 
IID | Non-IID | -|:----:|:----:| -|![CIFAR10 iid](_static/Cifar10_iid.jpeg) | ![CIFAR10 non-iid](_static/Cifar10_noniid.jpeg) | +| IID | Non-IID | +| :--------------------------------------: | :---------------------------------------------: | +| ![CIFAR10 iid](_static/Cifar10_iid.jpeg) | ![CIFAR10 non-iid](_static/Cifar10_noniid.jpeg) | ### NON-IID MNIST (FedAvg vs FedPer vs pFedPara) Only the federated averaging (FedAvg) implementation replicates the results outlined in the paper. However, challenges with convergence were encountered when applying the `pFedPara` and `FedPer` methods. -![Personalization algorithms](_static/non-iid_mnist_personalization.png) +![Personalization algorithms](_static/non-iid_mnist_personalization.png) ## Code Acknowledgments Our code is inspired by these repos: diff --git a/datasets/README.md b/datasets/README.md index 876b6f453fa5..61292fe988bf 100644 --- a/datasets/README.md +++ b/datasets/README.md @@ -4,9 +4,9 @@ [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/adap/flower/blob/main/CONTRIBUTING.md) ![Build](https://github.com/adap/flower/actions/workflows/framework.yml/badge.svg) ![Downloads](https://pepy.tech/badge/flwr-datasets) -[![Slack](https://img.shields.io/badge/Chat-Slack-red)](https://flower.dev/join-slack) +[![Slack](https://img.shields.io/badge/Chat-Slack-red)](https://flower.ai/join-slack) -Flower Datasets (`flwr-datasets`) is a library to quickly and easily create datasets for federated learning, federated evaluation, and federated analytics. It was created by the `Flower Labs` team that also created Flower: A Friendly Federated Learning Framework. +Flower Datasets (`flwr-datasets`) is a library to quickly and easily create datasets for federated learning, federated evaluation, and federated analytics. It was created by the `Flower Labs` team that also created Flower: A Friendly Federated Learning Framework. Flower Datasets library supports: * **downloading datasets** - choose the dataset from Hugging Face's `datasets`, * **partitioning datasets** - customize the partitioning scheme, @@ -14,10 +14,10 @@ Flower Datasets library supports: Thanks to Hugging Face's `datasets` being used under the hood, Flower Datasets integrates with the following popular formats/frameworks: * Hugging Face, -* PyTorch, -* TensorFlow, -* Numpy, -* Pandas, +* PyTorch, +* TensorFlow, +* Numpy, +* Pandas, * Jax, * Arrow. @@ -25,7 +25,7 @@ Create **custom partitioning schemes** or choose from the **implemented partitio * Partitioner (the abstract base class) `Partitioner` * IID partitioning `IidPartitioner(num_partitions)` * Natural ID partitioner `NaturalIdPartitioner` -* Size partitioner (the abstract base class for the partitioners dictating the division based the number of samples) `SizePartitioner` +* Size partitioner (the abstract base class for the partitioners dictating the division based on the number of samples) `SizePartitioner` * Linear partitioner `LinearPartitioner` * Square partitioner `SquarePartitioner` * Exponential partitioner `ExponentialPartitioner` @@ -83,7 +83,7 @@ Here are a few of the things that we will work on in future releases: * ✅ Support for more datasets (especially the ones that have user id present). * ✅ Creation of custom `Partitioner`s. * ✅ More out-of-the-box `Partitioner`s. -* ✅ Passing `Partitioner`s via `FederatedDataset`'s `partitioners` argument. +* ✅ Passing `Partitioner`s via `FederatedDataset`'s `partitioners` argument. 
* ✅ Customization of the dataset splitting before the partitioning. * Simplification of the dataset transformation to the popular frameworks/types. * Creation of the synthetic data, diff --git a/examples/advanced-pytorch/README.md b/examples/advanced-pytorch/README.md index 9101105b2618..c1ba85b95879 100644 --- a/examples/advanced-pytorch/README.md +++ b/examples/advanced-pytorch/README.md @@ -1,6 +1,6 @@ # Advanced Flower Example (PyTorch) -This example demonstrates an advanced federated learning setup using Flower with PyTorch. This example uses [Flower Datasets](https://flower.dev/docs/datasets/) and it differs from the quickstart example in the following ways: +This example demonstrates an advanced federated learning setup using Flower with PyTorch. This example uses [Flower Datasets](https://flower.ai/docs/datasets/) and it differs from the quickstart example in the following ways: - 10 clients (instead of just 2) - Each client holds a local dataset of 5000 training examples and 1000 test examples (note that using the `run.sh` script will only select 10 data samples by default, as the `--toy` argument is set). diff --git a/examples/advanced-tensorflow/README.md b/examples/advanced-tensorflow/README.md index b21c0d2545ca..59866fd99a06 100644 --- a/examples/advanced-tensorflow/README.md +++ b/examples/advanced-tensorflow/README.md @@ -1,6 +1,6 @@ # Advanced Flower Example (TensorFlow/Keras) -This example demonstrates an advanced federated learning setup using Flower with TensorFlow/Keras. This example uses [Flower Datasets](https://flower.dev/docs/datasets/) and it differs from the quickstart example in the following ways: +This example demonstrates an advanced federated learning setup using Flower with TensorFlow/Keras. This example uses [Flower Datasets](https://flower.ai/docs/datasets/) and it differs from the quickstart example in the following ways: - 10 clients (instead of just 2) - Each client holds a local dataset of 1/10 of the train datasets and 80% is training examples and 20% as test examples (note that by default only a small subset of this data is used when running the `run.sh` script) diff --git a/examples/custom-metrics/README.md b/examples/custom-metrics/README.md index debcd7919839..317fb6336106 100644 --- a/examples/custom-metrics/README.md +++ b/examples/custom-metrics/README.md @@ -9,7 +9,7 @@ The main takeaways of this implementation are: - the use of the `output_dict` on the client side - inside `evaluate` method on `client.py` - the use of the `evaluate_metrics_aggregation_fn` - to aggregate the metrics on the server side, part of the `strategy` on `server.py` -This example is based on the `quickstart-tensorflow` with CIFAR-10, source [here](https://flower.dev/docs/quickstart-tensorflow.html), with the addition of [Flower Datasets](https://flower.dev/docs/datasets/index.html) to retrieve the CIFAR-10. +This example is based on the `quickstart-tensorflow` with CIFAR-10, source [here](https://flower.ai/docs/quickstart-tensorflow.html), with the addition of [Flower Datasets](https://flower.ai/docs/datasets/index.html) to retrieve the CIFAR-10. Using the CIFAR-10 dataset for classification, this is a multi-class classification problem, thus some changes on how to calculate the metrics using `average='micro'` and `np.argmax` is required. For binary classification, this is not required. Also, for unsupervised learning tasks, such as using a deep autoencoder, a custom metric based on reconstruction error could be implemented on client side. 
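To make the server-side aggregation described above concrete, here is a simplified sketch (assuming `flwr` is installed; this is not the example's exact `server.py`) of a weighted-average `evaluate_metrics_aggregation_fn` plugged into a `FedAvg` strategy; the metric keys are assumed to match whatever the clients report from `evaluate`.

```python
# Simplified sketch (not the example's exact server.py): aggregate client-reported
# evaluation metrics on the server, weighting each client by its number of examples.
from typing import List, Tuple

import flwr as fl
from flwr.common import Metrics


def weighted_average(metrics: List[Tuple[int, Metrics]]) -> Metrics:
    total_examples = sum(num_examples for num_examples, _ in metrics)
    keys = metrics[0][1].keys()  # e.g. "accuracy", "acc", "rec", "prec", "f1"
    return {
        key: sum(num_examples * m[key] for num_examples, m in metrics) / total_examples
        for key in keys
    }


# The aggregation function is plugged into the strategy on the server side.
strategy = fl.server.strategy.FedAvg(evaluate_metrics_aggregation_fn=weighted_average)
```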
@@ -91,16 +91,16 @@ chmod +x run.sh ./run.sh ``` -You will see that Keras is starting a federated training. Have a look to the [Flower Quickstarter documentation](https://flower.dev/docs/quickstart-tensorflow.html) for a detailed explanation. You can add `steps_per_epoch=3` to `model.fit()` if you just want to evaluate that everything works without having to wait for the client-side training to finish (this will save you a lot of time during development). +You will see that Keras is starting a federated training. Have a look to the [Flower Quickstarter documentation](https://flower.ai/docs/quickstart-tensorflow.html) for a detailed explanation. You can add `steps_per_epoch=3` to `model.fit()` if you just want to evaluate that everything works without having to wait for the client-side training to finish (this will save you a lot of time during development). Running `run.sh` will result in the following output (after 3 rounds): ```shell INFO flwr 2024-01-17 17:45:23,794 | app.py:228 | app_fit: metrics_distributed { - 'accuracy': [(1, 0.10000000149011612), (2, 0.10000000149011612), (3, 0.3393000066280365)], - 'acc': [(1, 0.1), (2, 0.1), (3, 0.3393)], - 'rec': [(1, 0.1), (2, 0.1), (3, 0.3393)], - 'prec': [(1, 0.1), (2, 0.1), (3, 0.3393)], + 'accuracy': [(1, 0.10000000149011612), (2, 0.10000000149011612), (3, 0.3393000066280365)], + 'acc': [(1, 0.1), (2, 0.1), (3, 0.3393)], + 'rec': [(1, 0.1), (2, 0.1), (3, 0.3393)], + 'prec': [(1, 0.1), (2, 0.1), (3, 0.3393)], 'f1': [(1, 0.10000000000000002), (2, 0.10000000000000002), (3, 0.3393)] } ``` diff --git a/examples/flower-via-docker-compose/README.md b/examples/flower-via-docker-compose/README.md index 1d830e46cbdb..3ef1ac37bcda 100644 --- a/examples/flower-via-docker-compose/README.md +++ b/examples/flower-via-docker-compose/README.md @@ -1,7 +1,7 @@ # Leveraging Flower and Docker for Device Heterogeneity Management in Federated Learning

- Flower Website + Flower Website Docker Logo

@@ -141,7 +141,7 @@ By following these steps, you will have a fully functional federated learning en ### Data Pipeline with FLWR-Datasets -We have integrated [`flwr-datasets`](https://flower.dev/docs/datasets/) into our data pipeline, which is managed within the `load_data.py` file in the `helpers/` directory. This script facilitates standardized access to datasets across the federated network and incorporates a `data_sampling_percentage` argument. This argument allows users to specify the percentage of the dataset to be used for training and evaluation, accommodating devices with lower memory capabilities to prevent Out-of-Memory (OOM) errors. +We have integrated [`flwr-datasets`](https://flower.ai/docs/datasets/) into our data pipeline, which is managed within the `load_data.py` file in the `helpers/` directory. This script facilitates standardized access to datasets across the federated network and incorporates a `data_sampling_percentage` argument. This argument allows users to specify the percentage of the dataset to be used for training and evaluation, accommodating devices with lower memory capabilities to prevent Out-of-Memory (OOM) errors. ### Model Selection and Dataset diff --git a/examples/pytorch-from-centralized-to-federated/README.md b/examples/pytorch-from-centralized-to-federated/README.md index fccb14158ecd..06ee89dddcac 100644 --- a/examples/pytorch-from-centralized-to-federated/README.md +++ b/examples/pytorch-from-centralized-to-federated/README.md @@ -2,7 +2,7 @@ This example demonstrates how an already existing centralized PyTorch-based machine learning project can be federated with Flower. -This introductory example for Flower uses PyTorch, but you're not required to be a PyTorch expert to run the example. The example will help you to understand how Flower can be used to build federated learning use cases based on existing machine learning projects. This example uses [Flower Datasets](https://flower.dev/docs/datasets/) to download, partition and preprocess the CIFAR-10 dataset. +This introductory example for Flower uses PyTorch, but you're not required to be a PyTorch expert to run the example. The example will help you to understand how Flower can be used to build federated learning use cases based on existing machine learning projects. This example uses [Flower Datasets](https://flower.ai/docs/datasets/) to download, partition and preprocess the CIFAR-10 dataset. ## Project Setup diff --git a/examples/quickstart-huggingface/README.md b/examples/quickstart-huggingface/README.md index fd868aa1fcce..5fdba887f181 100644 --- a/examples/quickstart-huggingface/README.md +++ b/examples/quickstart-huggingface/README.md @@ -1,6 +1,6 @@ # Federated HuggingFace Transformers using Flower and PyTorch -This introductory example to using [HuggingFace](https://huggingface.co) Transformers with Flower with PyTorch. This example has been extended from the [quickstart-pytorch](https://flower.dev/docs/examples/quickstart-pytorch.html) example. The training script closely follows the [HuggingFace course](https://huggingface.co/course/chapter3?fw=pt), so you are encouraged to check that out for a detailed explanation of the transformer pipeline. +This introductory example to using [HuggingFace](https://huggingface.co) Transformers with Flower with PyTorch. This example has been extended from the [quickstart-pytorch](https://flower.ai/docs/examples/quickstart-pytorch.html) example. 
The training script closely follows the [HuggingFace course](https://huggingface.co/course/chapter3?fw=pt), so you are encouraged to check that out for a detailed explanation of the transformer pipeline. Like `quickstart-pytorch`, running this example in itself is also meant to be quite easy. diff --git a/examples/quickstart-pandas/README.md b/examples/quickstart-pandas/README.md index a25e6ea6ee36..efcda43cf34d 100644 --- a/examples/quickstart-pandas/README.md +++ b/examples/quickstart-pandas/README.md @@ -1,6 +1,6 @@ # Flower Example using Pandas -This introductory example to Flower uses Pandas, but deep knowledge of Pandas is not necessarily required to run the example. However, it will help you understand how to adapt Flower to your use case. This example uses [Flower Datasets](https://flower.dev/docs/datasets/) to +This introductory example to Flower uses Pandas, but deep knowledge of Pandas is not necessarily required to run the example. However, it will help you understand how to adapt Flower to your use case. This example uses [Flower Datasets](https://flower.ai/docs/datasets/) to download, partition and preprocess the dataset. Running this example in itself is quite easy. @@ -79,4 +79,4 @@ Start client 2 in the second terminal: $ python3 client.py --node-id 1 ``` -You will see that the server is printing aggregated statistics about the dataset distributed amongst clients. Have a look to the [Flower Quickstarter documentation](https://flower.dev/docs/quickstart-pandas.html) for a detailed explanation. +You will see that the server is printing aggregated statistics about the dataset distributed amongst clients. Have a look to the [Flower Quickstarter documentation](https://flower.ai/docs/quickstart-pandas.html) for a detailed explanation. diff --git a/examples/quickstart-pytorch-lightning/README.md b/examples/quickstart-pytorch-lightning/README.md index 1287b50bca65..1d404a5d714f 100644 --- a/examples/quickstart-pytorch-lightning/README.md +++ b/examples/quickstart-pytorch-lightning/README.md @@ -1,6 +1,6 @@ # Flower Example using PyTorch Lightning -This introductory example to Flower uses PyTorch, but deep knowledge of PyTorch Lightning is not necessarily required to run the example. However, it will help you understand how to adapt Flower to your use case. Running this example in itself is quite easy. This example uses [Flower Datasets](https://flower.dev/docs/datasets/) to download, partition and preprocess the MNIST dataset. +This introductory example to Flower uses PyTorch, but deep knowledge of PyTorch Lightning is not necessarily required to run the example. However, it will help you understand how to adapt Flower to your use case. Running this example in itself is quite easy. This example uses [Flower Datasets](https://flower.ai/docs/datasets/) to download, partition and preprocess the MNIST dataset. ## Project Setup diff --git a/examples/quickstart-pytorch/README.md b/examples/quickstart-pytorch/README.md index 6de0dcf7ab32..3b9b9b310608 100644 --- a/examples/quickstart-pytorch/README.md +++ b/examples/quickstart-pytorch/README.md @@ -1,6 +1,6 @@ # Flower Example using PyTorch -This introductory example to Flower uses PyTorch, but deep knowledge of PyTorch is not necessarily required to run the example. However, it will help you understand how to adapt Flower to your use case. Running this example in itself is quite easy. This example uses [Flower Datasets](https://flower.dev/docs/datasets/) to download, partition and preprocess the CIFAR-10 dataset. 
+This introductory example to Flower uses PyTorch, but deep knowledge of PyTorch is not necessarily required to run the example. However, it will help you understand how to adapt Flower to your use case. Running this example in itself is quite easy. This example uses [Flower Datasets](https://flower.ai/docs/datasets/) to download, partition and preprocess the CIFAR-10 dataset. ## Project Setup diff --git a/examples/quickstart-sklearn-tabular/README.md b/examples/quickstart-sklearn-tabular/README.md index d62525c96c18..373aaea5999c 100644 --- a/examples/quickstart-sklearn-tabular/README.md +++ b/examples/quickstart-sklearn-tabular/README.md @@ -3,7 +3,7 @@ This example of Flower uses `scikit-learn`'s `LogisticRegression` model to train a federated learning system on "iris" (tabular) dataset. It will help you understand how to adapt Flower for use with `scikit-learn`. -Running this example in itself is quite easy. This example uses [Flower Datasets](https://flower.dev/docs/datasets/) to +Running this example in itself is quite easy. This example uses [Flower Datasets](https://flower.ai/docs/datasets/) to download, partition and preprocess the dataset. ## Project Setup diff --git a/examples/quickstart-tensorflow/README.md b/examples/quickstart-tensorflow/README.md index 92d38c9340d7..8d5e9434b086 100644 --- a/examples/quickstart-tensorflow/README.md +++ b/examples/quickstart-tensorflow/README.md @@ -1,7 +1,7 @@ # Flower Example using TensorFlow/Keras This introductory example to Flower uses Keras but deep knowledge of Keras is not necessarily required to run the example. However, it will help you understand how to adapt Flower to your use case. -Running this example in itself is quite easy. This example uses [Flower Datasets](https://flower.dev/docs/datasets/) to download, partition and preprocess the CIFAR-10 dataset. +Running this example in itself is quite easy. This example uses [Flower Datasets](https://flower.ai/docs/datasets/) to download, partition and preprocess the CIFAR-10 dataset. ## Project Setup diff --git a/examples/simulation-pytorch/README.md b/examples/simulation-pytorch/README.md index 11b7a3364376..5ba5ec70dc3e 100644 --- a/examples/simulation-pytorch/README.md +++ b/examples/simulation-pytorch/README.md @@ -1,6 +1,6 @@ # Flower Simulation example using PyTorch -This introductory example uses the simulation capabilities of Flower to simulate a large number of clients on a single machine. Take a look at the [Documentation](https://flower.dev/docs/framework/how-to-run-simulations.html) for a deep dive into how Flower simulation works. This example uses [Flower Datasets](https://flower.dev/docs/datasets/) to download, partition and preprocess the MNIST dataset. This examples uses 100 clients by default. +This introductory example uses the simulation capabilities of Flower to simulate a large number of clients on a single machine. Take a look at the [Documentation](https://flower.ai/docs/framework/how-to-run-simulations.html) for a deep dive into how Flower simulation works. This example uses [Flower Datasets](https://flower.ai/docs/datasets/) to download, partition and preprocess the MNIST dataset. This examples uses 100 clients by default. ## Running the example (via Jupyter Notebook) @@ -79,4 +79,4 @@ python sim.py --num_cpus=2 python sim.py --num_cpus=2 --num_gpus=0.2 ``` -Take a look at the [Documentation](https://flower.dev/docs/framework/how-to-run-simulations.html) for more details on how you can customise your simulation. 
+Take a look at the [Documentation](https://flower.ai/docs/framework/how-to-run-simulations.html) for more details on how you can customise your simulation. diff --git a/examples/simulation-tensorflow/README.md b/examples/simulation-tensorflow/README.md index f0d94f343d37..75be823db2eb 100644 --- a/examples/simulation-tensorflow/README.md +++ b/examples/simulation-tensorflow/README.md @@ -1,6 +1,6 @@ # Flower Simulation example using TensorFlow/Keras -This introductory example uses the simulation capabilities of Flower to simulate a large number of clients on a single machine. Take a look at the [Documentation](https://flower.dev/docs/framework/how-to-run-simulations.html) for a deep dive into how Flower simulation works. This example uses [Flower Datasets](https://flower.dev/docs/datasets/) to download, partition and preprocess the MNIST dataset. This examples uses 100 clients by default. +This introductory example uses the simulation capabilities of Flower to simulate a large number of clients on a single machine. Take a look at the [Documentation](https://flower.ai/docs/framework/how-to-run-simulations.html) for a deep dive into how Flower simulation works. This example uses [Flower Datasets](https://flower.ai/docs/datasets/) to download, partition and preprocess the MNIST dataset. This examples uses 100 clients by default. ## Running the example (via Jupyter Notebook) @@ -78,4 +78,4 @@ python sim.py --num_cpus=2 python sim.py --num_cpus=2 --num_gpus=0.2 ``` -Take a look at the [Documentation](https://flower.dev/docs/framework/how-to-run-simulations.html) for more details on how you can customise your simulation. +Take a look at the [Documentation](https://flower.ai/docs/framework/how-to-run-simulations.html) for more details on how you can customise your simulation. diff --git a/examples/sklearn-logreg-mnist/README.md b/examples/sklearn-logreg-mnist/README.md index ee3cdfc9768e..50576d98ba3d 100644 --- a/examples/sklearn-logreg-mnist/README.md +++ b/examples/sklearn-logreg-mnist/README.md @@ -1,7 +1,7 @@ # Flower Example using scikit-learn This example of Flower uses `scikit-learn`'s `LogisticRegression` model to train a federated learning system. It will help you understand how to adapt Flower for use with `scikit-learn`. -Running this example in itself is quite easy. This example uses [Flower Datasets](https://flower.dev/docs/datasets/) to download, partition and preprocess the MNIST dataset. +Running this example in itself is quite easy. This example uses [Flower Datasets](https://flower.ai/docs/datasets/) to download, partition and preprocess the MNIST dataset. ## Project Setup diff --git a/examples/whisper-federated-finetuning/README.md b/examples/whisper-federated-finetuning/README.md index e89a09519fed..ddebe51247b2 100644 --- a/examples/whisper-federated-finetuning/README.md +++ b/examples/whisper-federated-finetuning/README.md @@ -110,7 +110,7 @@ An overview of the FL pipeline built with Flower for this example is illustrated 3. Once on-site training is completed, each client sends back the (now updated) classification head to the Flower server. 4. The Flower server aggregates (via FedAvg) the classification heads in order to obtain a new _global_ classification head. This head will be shared with clients in the next round. -Flower supports two ways of doing Federated Learning: simulated and non-simulated FL. 
The former, managed by the [`VirtualClientEngine`](https://flower.dev/docs/framework/how-to-run-simulations.html), allows you to run large-scale workloads in a system-aware manner, that scales with the resources available on your system (whether it is a laptop, a desktop with a single GPU, or a cluster of GPU servers). The latter is better suited for settings where clients are unique devices (e.g. a server, a smart device, etc). This example shows you how to use both. +Flower supports two ways of doing Federated Learning: simulated and non-simulated FL. The former, managed by the [`VirtualClientEngine`](https://flower.ai/docs/framework/how-to-run-simulations.html), allows you to run large-scale workloads in a system-aware manner, that scales with the resources available on your system (whether it is a laptop, a desktop with a single GPU, or a cluster of GPU servers). The latter is better suited for settings where clients are unique devices (e.g. a server, a smart device, etc). This example shows you how to use both. ### Preparing the dataset @@ -147,7 +147,7 @@ INFO flwr 2023-11-08 14:03:57,557 | app.py:229 | app_fit: metrics_centralized {' With just 5 FL rounds, the global model should be reaching ~95% validation accuracy. A test accuracy of 97% can be reached with 10 rounds of FL training using the default hyperparameters. On an RTX 3090Ti, each round takes ~20-30s depending on the amount of data the clients selected in a round have. -Take a look at the [Documentation](https://flower.dev/docs/framework/how-to-run-simulations.html) for more details on how you can customize your simulation. +Take a look at the [Documentation](https://flower.ai/docs/framework/how-to-run-simulations.html) for more details on how you can customize your simulation. ### Federated Finetuning (non-simulated) diff --git a/examples/xgboost-comprehensive/README.md b/examples/xgboost-comprehensive/README.md index 97ecc39b47f2..01fed646d056 100644 --- a/examples/xgboost-comprehensive/README.md +++ b/examples/xgboost-comprehensive/README.md @@ -1,7 +1,7 @@ # Flower Example using XGBoost (Comprehensive) This example demonstrates a comprehensive federated learning setup using Flower with XGBoost. -We use [HIGGS](https://archive.ics.uci.edu/dataset/280/higgs) dataset to perform a binary classification task. This examples uses [Flower Datasets](https://flower.dev/docs/datasets/) to retrieve, partition and preprocess the data for each Flower client. +We use [HIGGS](https://archive.ics.uci.edu/dataset/280/higgs) dataset to perform a binary classification task. This examples uses [Flower Datasets](https://flower.ai/docs/datasets/) to retrieve, partition and preprocess the data for each Flower client. It differs from the [xgboost-quickstart](https://github.com/adap/flower/tree/main/examples/xgboost-quickstart) example in the following ways: - Arguments parsers of server and clients for hyperparameters selection. @@ -91,7 +91,7 @@ pip install -r requirements.txt ## Run Federated Learning with XGBoost and Flower -You can run this example in two ways: either by manually launching the server, and then several clients that connect to it; or by launching a Flower simulation. Both run the same workload, yielding identical results. The former is ideal for deployments on different machines, while the latter makes it easy to simulate large client cohorts in a resource-aware manner. You can read more about how Flower Simulation works in the [Documentation](https://flower.dev/docs/framework/how-to-run-simulations.html). 
The commands shown below assume you have activated your environment (if you decide to use Poetry, you can activate it via `poetry shell`). +You can run this example in two ways: either by manually launching the server, and then several clients that connect to it; or by launching a Flower simulation. Both run the same workload, yielding identical results. The former is ideal for deployments on different machines, while the latter makes it easy to simulate large client cohorts in a resource-aware manner. You can read more about how Flower Simulation works in the [Documentation](https://flower.ai/docs/framework/how-to-run-simulations.html). The commands shown below assume you have activated your environment (if you decide to use Poetry, you can activate it via `poetry shell`). ### Independent Client/Server Setup @@ -143,7 +143,7 @@ python sim.py --train-method=cyclic --pool-size=5 --num-rounds=30 --centralised- ``` In addition, we provide more options to customise the experimental settings, including data partitioning and centralised/distributed evaluation (see `utils.py`). -Check the [tutorial](https://flower.dev/docs/framework/tutorial-quickstart-xgboost.html) for a detailed explanation. +Check the [tutorial](https://flower.ai/docs/framework/tutorial-quickstart-xgboost.html) for a detailed explanation. ### Expected Experimental Results diff --git a/examples/xgboost-quickstart/README.md b/examples/xgboost-quickstart/README.md index 5174c236c668..cd99cd4c2895 100644 --- a/examples/xgboost-quickstart/README.md +++ b/examples/xgboost-quickstart/README.md @@ -85,4 +85,4 @@ poetry run ./run.sh ``` Look at the [code](https://github.com/adap/flower/tree/main/examples/xgboost-quickstart) -and [tutorial](https://flower.dev/docs/framework/tutorial-quickstart-xgboost.html) for a detailed explanation. +and [tutorial](https://flower.ai/docs/framework/tutorial-quickstart-xgboost.html) for a detailed explanation.
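Several of the examples above launch their workloads through Flower's simulation engine and expose `--num_cpus`/`--num_gpus` flags or a `client_resources` setting. The sketch below illustrates that general pattern, assuming `flwr[simulation]` is installed; the `DummyClient`, client count, and resource numbers are placeholders for illustration and are not taken from any of these examples.

```python
# Minimal sketch of Flower's simulation API with per-client resource limits.
# The DummyClient below is a placeholder, not a client from the examples above.
import flwr as fl
import numpy as np


class DummyClient(fl.client.NumPyClient):
    def get_parameters(self, config):
        return [np.zeros(1)]

    def fit(self, parameters, config):
        return parameters, 1, {}

    def evaluate(self, parameters, config):
        return 0.0, 1, {"accuracy": 0.0}


def client_fn(cid: str):
    # In a real example this would build a client holding data partition `cid`.
    return DummyClient()


history = fl.simulation.start_simulation(
    client_fn=client_fn,
    num_clients=100,
    config=fl.server.ServerConfig(num_rounds=3),
    strategy=fl.server.strategy.FedAvg(),
    client_resources={"num_cpus": 2, "num_gpus": 0.0},  # mirrors --num_cpus/--num_gpus
)
```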