From 675cf53a9d184929ab5c849cbb40907e957f342f Mon Sep 17 00:00:00 2001 From: Charles Beauville Date: Fri, 15 Mar 2024 14:36:53 +0000 Subject: [PATCH] Update docs text for translations --- doc/locales/fr/LC_MESSAGES/framework-docs.po | 5683 ++++-- .../pt_BR/LC_MESSAGES/framework-docs.po | 15966 +++++++++------- .../zh_Hans/LC_MESSAGES/framework-docs.po | 6333 ++++-- 3 files changed, 18166 insertions(+), 9816 deletions(-) diff --git a/doc/locales/fr/LC_MESSAGES/framework-docs.po b/doc/locales/fr/LC_MESSAGES/framework-docs.po index e7c7783c48ff..d76138ade28a 100644 --- a/doc/locales/fr/LC_MESSAGES/framework-docs.po +++ b/doc/locales/fr/LC_MESSAGES/framework-docs.po @@ -3,7 +3,7 @@ msgid "" msgstr "" "Project-Id-Version: Flower Docs\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" -"POT-Creation-Date: 2024-02-13 11:23+0100\n" +"POT-Creation-Date: 2024-03-15 14:32+0000\n" "PO-Revision-Date: 2023-09-05 17:54+0000\n" "Last-Translator: Charles Beauville \n" "Language: fr\n" @@ -13,7 +13,7 @@ msgstr "" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" -"Generated-By: Babel 2.13.1\n" +"Generated-By: Babel 2.14.0\n" #: ../../source/contributor-explanation-architecture.rst:2 msgid "Flower Architecture" @@ -27,9 +27,7 @@ msgstr "Moteur client Edge" msgid "" "`Flower `_ core framework architecture with Edge " "Client Engine" -msgstr "" -"`Flower `_ architecture de base avec Edge Client " -"Engine" +msgstr "`Flower `_ architecture de base avec Edge Client Engine" #: ../../source/contributor-explanation-architecture.rst:13 msgid "Virtual Client Engine" @@ -40,8 +38,8 @@ msgid "" "`Flower `_ core framework architecture with Virtual " "Client Engine" msgstr "" -"`Flower `_ architecture de base avec moteur de client" -" virtuel" +"`Flower `_ architecture de base avec moteur de client " +"virtuel" #: ../../source/contributor-explanation-architecture.rst:21 msgid "Virtual Client Engine and Edge Client Engine in the same workload" @@ -86,9 +84,8 @@ msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:19 msgid "" -"Please follow the first section on `Run Flower using Docker " -"`_ " -"which covers this step in more detail." +"Please follow the first section on :doc:`Run Flower using Docker ` which covers this step in more detail." msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:23 @@ -303,7 +300,7 @@ msgid "" "to help us in our effort to make Federated Learning accessible to as many" " people as possible by contributing to those translations! This might " "also be a great opportunity for those wanting to become open source " -"contributors with little prerequistes." +"contributors with little prerequisites." msgstr "" #: ../../source/contributor-how-to-contribute-translations.rst:13 @@ -355,7 +352,7 @@ msgstr "" #: ../../source/contributor-how-to-contribute-translations.rst:47 msgid "" -"You input your translation in the textbox at the top and then, once you " +"You input your translation in the text box at the top and then, once you " "are happy with it, you either press ``Save and continue`` (to save the " "translation and go to the next untranslated string), ``Save and stay`` " "(to save the translation and stay on the same page), ``Suggest`` (to add " @@ -393,8 +390,8 @@ msgstr "" #: ../../source/contributor-how-to-contribute-translations.rst:69 msgid "" "If you want to add a new language, you will first have to contact us, " -"either on `Slack `_, or by opening an " -"issue on our `GitHub repo `_." 
+"either on `Slack `_, or by opening an issue" +" on our `GitHub repo `_." msgstr "" #: ../../source/contributor-how-to-create-new-messages.rst:2 @@ -438,12 +435,13 @@ msgid "Message Types for Protocol Buffers" msgstr "Types de messages pour les tampons de protocole" #: ../../source/contributor-how-to-create-new-messages.rst:32 +#, fuzzy msgid "" "The first thing we need to do is to define a message type for the RPC " "system in :code:`transport.proto`. Note that we have to do it for both " "the request and response messages. For more details on the syntax of " -"proto3, please see the `official documentation " -"`_." +"proto3, please see the `official documentation `_." msgstr "" "La première chose à faire est de définir un type de message pour le " "système RPC dans :code:`transport.proto`. Notez que nous devons le faire " @@ -592,9 +590,10 @@ msgstr "" "conteneur." #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:11 +#, fuzzy msgid "" "Source: `Official VSCode documentation " -"`_" +"`_" msgstr "" "Source : `Documentation officielle de VSCode " "`_" @@ -648,9 +647,10 @@ msgstr "" "cas-là, consulte les sources suivantes :" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:23 +#, fuzzy msgid "" "`Developing inside a Container " -"`_" msgstr "" "`Développement à l'intérieur d'un conteneur " @@ -658,9 +658,10 @@ msgstr "" "requirements>`_" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:24 +#, fuzzy msgid "" "`Remote development in Containers " -"`_" +"`_" msgstr "" "`Développement à distance dans les conteneurs " "`_" @@ -961,8 +962,8 @@ msgstr "Ajoute une nouvelle section ``Unreleased`` dans ``changelog.md``." #: ../../source/contributor-how-to-release-flower.rst:25 msgid "" -"Merge the pull request on the same day (i.e., before a new nightly release" -" gets published to PyPI)." +"Merge the pull request on the same day (i.e., before a new nightly " +"release gets published to PyPI)." msgstr "" "Fusionne la pull request le jour même (c'est-à-dire avant qu'une nouvelle" " version nightly ne soit publiée sur PyPI)." @@ -977,11 +978,12 @@ msgstr "Nom de la pré-version" #: ../../source/contributor-how-to-release-flower.rst:33 msgid "" -"PyPI supports pre-releases (alpha, beta, release candidate). Pre-releases " -"MUST use one of the following naming patterns:" +"PyPI supports pre-releases (alpha, beta, release candidate). Pre-releases" +" MUST use one of the following naming patterns:" msgstr "" -"PyPI prend en charge les préversions (alpha, bêta, version candidate). Les" -" préversions DOIVENT utiliser l'un des modèles de dénomination suivants :" +"PyPI prend en charge les préversions (alpha, bêta, version candidate). " +"Les préversions DOIVENT utiliser l'un des modèles de dénomination " +"suivants :" #: ../../source/contributor-how-to-release-flower.rst:35 msgid "Alpha: ``MAJOR.MINOR.PATCHaN``" @@ -1318,21 +1320,23 @@ msgid "Request for Flower Baselines" msgstr "Demande pour une nouvelle Flower Baseline" #: ../../source/contributor-ref-good-first-contributions.rst:25 +#, fuzzy msgid "" "If you are not familiar with Flower Baselines, you should probably check-" -"out our `contributing guide for baselines `_." +"out our `contributing guide for baselines " +"`_." msgstr "" "Si tu n'es pas familier avec les Flower Baselines, tu devrais " "probablement consulter notre `guide de contribution pour les baselines " "`_." 
#: ../../source/contributor-ref-good-first-contributions.rst:27 +#, fuzzy msgid "" "You should then check out the open `issues " "`_" " for baseline requests. If you find a baseline that you'd like to work on" -" and that has no assignes, feel free to assign it to yourself and start " +" and that has no assignees, feel free to assign it to yourself and start " "working on it!" msgstr "" "Tu devrais ensuite consulter les `issues ouvertes " @@ -1444,9 +1448,8 @@ msgstr "" #, fuzzy msgid "" "If you're familiar with how contributing on GitHub works, you can " -"directly checkout our `getting started guide for contributors " -"`_." +"directly checkout our :doc:`getting started guide for contributors " +"`." msgstr "" "Si tu es familier avec le fonctionnement des contributions sur GitHub, tu" " peux directement consulter notre `guide de démarrage pour les " @@ -1454,21 +1457,22 @@ msgstr "" "contributors.html>`_ et des exemples de `bonnes premières contributions " "`_." -#: ../../source/contributor-tutorial-contribute-on-github.rst:11 +#: ../../source/contributor-tutorial-contribute-on-github.rst:10 msgid "Setting up the repository" msgstr "Mise en place du référentiel" -#: ../../source/contributor-tutorial-contribute-on-github.rst:22 +#: ../../source/contributor-tutorial-contribute-on-github.rst:21 msgid "**Create a GitHub account and setup Git**" msgstr "**Créer un compte GitHub et configurer Git**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:14 +#: ../../source/contributor-tutorial-contribute-on-github.rst:13 +#, fuzzy msgid "" "Git is a distributed version control tool. This allows for an entire " "codebase's history to be stored and every developer's machine. It is a " "software that will need to be installed on your local machine, you can " -"follow this `guide `_ to set it up." +"follow this `guide `_ to set it up." msgstr "" "Git est un outil de contrôle de version distribué. Il permet de stocker " "l'historique d'une base de code entière sur la machine de chaque " @@ -1476,7 +1480,7 @@ msgstr "" "locale, tu peux suivre ce `guide `_ pour le mettre en place." -#: ../../source/contributor-tutorial-contribute-on-github.rst:17 +#: ../../source/contributor-tutorial-contribute-on-github.rst:16 msgid "" "GitHub, itself, is a code hosting platform for version control and " "collaboration. It allows for everyone to collaborate and work from " @@ -1486,7 +1490,7 @@ msgstr "" "contrôle des versions et la collaboration. Il permet à chacun de " "collaborer et de travailler de n'importe où sur des dépôts à distance." -#: ../../source/contributor-tutorial-contribute-on-github.rst:19 +#: ../../source/contributor-tutorial-contribute-on-github.rst:18 msgid "" "If you haven't already, you will need to create an account on `GitHub " "`_." @@ -1494,7 +1498,7 @@ msgstr "" "Si ce n'est pas déjà fait, tu devras créer un compte sur `GitHub " "`_." -#: ../../source/contributor-tutorial-contribute-on-github.rst:21 +#: ../../source/contributor-tutorial-contribute-on-github.rst:20 msgid "" "The idea behind the generic Git and GitHub workflow boils down to this: " "you download code from a remote repository on GitHub, make changes " @@ -1506,14 +1510,15 @@ msgstr "" " des modifications localement et tu en gardes une trace à l'aide de Git, " "puis tu télécharges ton nouvel historique à nouveau sur GitHub." 
-#: ../../source/contributor-tutorial-contribute-on-github.rst:33 +#: ../../source/contributor-tutorial-contribute-on-github.rst:32 msgid "**Forking the Flower repository**" msgstr "**Fourche le dépôt de Flower**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:25 +#: ../../source/contributor-tutorial-contribute-on-github.rst:24 +#, fuzzy msgid "" "A fork is a personal copy of a GitHub repository. To create one for " -"Flower, you must navigate to https://github.com/adap/flower (while " +"Flower, you must navigate to ``_ (while " "connected to your GitHub account) and click the ``Fork`` button situated " "on the top right of the page." msgstr "" @@ -1522,7 +1527,7 @@ msgstr "" "étant connecté à ton compte GitHub) et cliquer sur le bouton ``Fork`` " "situé en haut à droite de la page." -#: ../../source/contributor-tutorial-contribute-on-github.rst:30 +#: ../../source/contributor-tutorial-contribute-on-github.rst:29 msgid "" "You can change the name if you want, but this is not necessary as this " "version of Flower will be yours and will sit inside your own account " @@ -1535,11 +1540,11 @@ msgstr "" " devrais voir dans le coin supérieur gauche que tu es en train de " "regarder ta propre version de Flower." -#: ../../source/contributor-tutorial-contribute-on-github.rst:48 +#: ../../source/contributor-tutorial-contribute-on-github.rst:47 msgid "**Cloning your forked repository**" msgstr "**Clonage de ton dépôt forké**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:36 +#: ../../source/contributor-tutorial-contribute-on-github.rst:35 msgid "" "The next step is to download the forked repository on your machine to be " "able to make changes to it. On your forked repository page, you should " @@ -1551,7 +1556,7 @@ msgstr "" "forké, tu dois d'abord cliquer sur le bouton ``Code`` à droite, ce qui te" " permettra de copier le lien HTTPS du dépôt." -#: ../../source/contributor-tutorial-contribute-on-github.rst:42 +#: ../../source/contributor-tutorial-contribute-on-github.rst:41 msgid "" "Once you copied the \\, you can open a terminal on your machine, " "navigate to the place you want to download the repository to and type:" @@ -1560,7 +1565,7 @@ msgstr "" "machine, naviguer jusqu'à l'endroit où tu veux télécharger le référentiel" " et taper :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:48 +#: ../../source/contributor-tutorial-contribute-on-github.rst:47 #, fuzzy msgid "" "This will create a ``flower/`` (or the name of your fork if you renamed " @@ -1569,15 +1574,15 @@ msgstr "" "Cela créera un dossier `flower/` (ou le nom de ta fourche si tu l'as " "renommée) dans le répertoire de travail actuel." -#: ../../source/contributor-tutorial-contribute-on-github.rst:67 +#: ../../source/contributor-tutorial-contribute-on-github.rst:66 msgid "**Add origin**" msgstr "**Ajouter l'origine**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:51 +#: ../../source/contributor-tutorial-contribute-on-github.rst:50 msgid "You can then go into the repository folder:" msgstr "Tu peux ensuite aller dans le dossier du référentiel :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:57 +#: ../../source/contributor-tutorial-contribute-on-github.rst:56 msgid "" "And here we will need to add an origin to our repository. The origin is " "the \\ of the remote fork repository. To obtain it, we can do as " @@ -1589,7 +1594,7 @@ msgstr "" "indiqué précédemment en allant sur notre dépôt fork sur notre compte " "GitHub et en copiant le lien." 
-#: ../../source/contributor-tutorial-contribute-on-github.rst:62 +#: ../../source/contributor-tutorial-contribute-on-github.rst:61 msgid "" "Once the \\ is copied, we can type the following command in our " "terminal:" @@ -1597,26 +1602,27 @@ msgstr "" "Une fois que le \\ est copié, nous pouvons taper la commande " "suivante dans notre terminal :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:91 +#: ../../source/contributor-tutorial-contribute-on-github.rst:90 msgid "**Add upstream**" msgstr "**Ajouter en amont**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:70 +#: ../../source/contributor-tutorial-contribute-on-github.rst:69 +#, fuzzy msgid "" "Now we will add an upstream address to our repository. Still in the same " -"directroy, we must run the following command:" +"directory, we must run the following command:" msgstr "" "Nous allons maintenant ajouter une adresse en amont à notre dépôt. " "Toujours dans le même directroy, nous devons exécuter la commande " "suivante :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:77 +#: ../../source/contributor-tutorial-contribute-on-github.rst:76 msgid "The following diagram visually explains what we did in the previous steps:" msgstr "" "Le schéma suivant explique visuellement ce que nous avons fait dans les " "étapes précédentes :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:81 +#: ../../source/contributor-tutorial-contribute-on-github.rst:80 msgid "" "The upstream is the GitHub remote address of the parent repository (in " "this case Flower), i.e. the one we eventually want to contribute to and " @@ -1630,7 +1636,7 @@ msgstr "" "simplement l'adresse distante GitHub du dépôt forké que nous avons créé, " "c'est-à-dire la copie (fork) dans notre propre compte." -#: ../../source/contributor-tutorial-contribute-on-github.rst:85 +#: ../../source/contributor-tutorial-contribute-on-github.rst:84 msgid "" "To make sure our local version of the fork is up-to-date with the latest " "changes from the Flower repository, we can execute the following command:" @@ -1639,27 +1645,28 @@ msgstr "" "dernières modifications du dépôt Flower, nous pouvons exécuter la " "commande suivante :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:94 +#: ../../source/contributor-tutorial-contribute-on-github.rst:93 msgid "Setting up the coding environment" msgstr "Mise en place de l'environnement de codage" -#: ../../source/contributor-tutorial-contribute-on-github.rst:96 +#: ../../source/contributor-tutorial-contribute-on-github.rst:95 +#, fuzzy msgid "" -"This can be achieved by following this `getting started guide for " -"contributors`_ (note that you won't need to clone the repository). Once " -"you are able to write code and test it, you can finally start making " -"changes!" +"This can be achieved by following this :doc:`getting started guide for " +"contributors ` (note " +"that you won't need to clone the repository). Once you are able to write " +"code and test it, you can finally start making changes!" msgstr "" "Pour ce faire, tu peux suivre ce `guide de démarrage pour les " "contributeurs`_ (note que tu n'auras pas besoin de cloner le dépôt). Une " "fois que tu es capable d'écrire du code et de le tester, tu peux enfin " "commencer à faire des changements !" 
-#: ../../source/contributor-tutorial-contribute-on-github.rst:101 +#: ../../source/contributor-tutorial-contribute-on-github.rst:100 msgid "Making changes" msgstr "Apporter des changements" -#: ../../source/contributor-tutorial-contribute-on-github.rst:103 +#: ../../source/contributor-tutorial-contribute-on-github.rst:102 msgid "" "Before making any changes make sure you are up-to-date with your " "repository:" @@ -1667,15 +1674,15 @@ msgstr "" "Avant de faire des changements, assure-toi que tu es à jour avec ton " "référentiel :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:109 +#: ../../source/contributor-tutorial-contribute-on-github.rst:108 msgid "And with Flower's repository:" msgstr "Et avec le référentiel de Flower :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:123 +#: ../../source/contributor-tutorial-contribute-on-github.rst:122 msgid "**Create a new branch**" msgstr "**Créer une nouvelle branche**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:116 +#: ../../source/contributor-tutorial-contribute-on-github.rst:115 msgid "" "To make the history cleaner and easier to work with, it is good practice " "to create a new branch for each feature/project that needs to be " @@ -1685,7 +1692,7 @@ msgstr "" "une bonne pratique de créer une nouvelle branche pour chaque " "fonctionnalité/projet qui doit être mis en œuvre." -#: ../../source/contributor-tutorial-contribute-on-github.rst:119 +#: ../../source/contributor-tutorial-contribute-on-github.rst:118 msgid "" "To do so, just run the following command inside the repository's " "directory:" @@ -1693,21 +1700,21 @@ msgstr "" "Pour ce faire, il suffit d'exécuter la commande suivante dans le " "répertoire du référentiel :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:126 +#: ../../source/contributor-tutorial-contribute-on-github.rst:125 msgid "**Make changes**" msgstr "**Apporter des modifications**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:126 +#: ../../source/contributor-tutorial-contribute-on-github.rst:125 msgid "Write great code and create wonderful changes using your favorite editor!" msgstr "" "Écris du bon code et crée de merveilleuses modifications à l'aide de ton " "éditeur préféré !" -#: ../../source/contributor-tutorial-contribute-on-github.rst:139 +#: ../../source/contributor-tutorial-contribute-on-github.rst:138 msgid "**Test and format your code**" msgstr "**Teste et mets en forme ton code**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:129 +#: ../../source/contributor-tutorial-contribute-on-github.rst:128 msgid "" "Don't forget to test and format your code! Otherwise your code won't be " "able to be merged into the Flower repository. This is done so the " @@ -1717,15 +1724,15 @@ msgstr "" "pourra pas être fusionné dans le dépôt Flower, et ce, afin que la base de" " code reste cohérente et facile à comprendre." 
-#: ../../source/contributor-tutorial-contribute-on-github.rst:132 +#: ../../source/contributor-tutorial-contribute-on-github.rst:131 msgid "To do so, we have written a few scripts that you can execute:" msgstr "Pour ce faire, nous avons écrit quelques scripts que tu peux exécuter :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:151 +#: ../../source/contributor-tutorial-contribute-on-github.rst:150 msgid "**Stage changes**" msgstr "**Changements de scène**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:142 +#: ../../source/contributor-tutorial-contribute-on-github.rst:141 msgid "" "Before creating a commit that will update your history, you must specify " "to Git which files it needs to take into account." @@ -1733,11 +1740,11 @@ msgstr "" "Avant de créer un commit qui mettra à jour ton historique, tu dois " "spécifier à Git les fichiers qu'il doit prendre en compte." -#: ../../source/contributor-tutorial-contribute-on-github.rst:144 +#: ../../source/contributor-tutorial-contribute-on-github.rst:143 msgid "This can be done with:" msgstr "Cela peut se faire avec :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:150 +#: ../../source/contributor-tutorial-contribute-on-github.rst:149 msgid "" "To check which files have been modified compared to the last version " "(last commit) and to see which files are staged for commit, you can use " @@ -1747,11 +1754,11 @@ msgstr "" "version (last commit) et pour voir quels fichiers sont mis à disposition " "pour le commit, tu peux utiliser la commande :code:`git status`." -#: ../../source/contributor-tutorial-contribute-on-github.rst:161 +#: ../../source/contributor-tutorial-contribute-on-github.rst:160 msgid "**Commit changes**" msgstr "**Commit changes**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:154 +#: ../../source/contributor-tutorial-contribute-on-github.rst:153 msgid "" "Once you have added all the files you wanted to commit using :code:`git " "add`, you can finally create your commit using this command:" @@ -1760,7 +1767,7 @@ msgstr "" "l'aide de :code:`git add`, tu peux enfin créer ta livraison à l'aide de " "cette commande :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:160 +#: ../../source/contributor-tutorial-contribute-on-github.rst:159 msgid "" "The \\ is there to explain to others what the commit " "does. It should be written in an imperative style and be concise. An " @@ -1770,11 +1777,11 @@ msgstr "" "commit. Il doit être écrit dans un style impératif et être concis. Un " "exemple serait :code:`git commit -m \"Ajouter des images au README\"`." -#: ../../source/contributor-tutorial-contribute-on-github.rst:172 +#: ../../source/contributor-tutorial-contribute-on-github.rst:171 msgid "**Push the changes to the fork**" msgstr "**Pousser les changements vers la fourche**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:164 +#: ../../source/contributor-tutorial-contribute-on-github.rst:163 msgid "" "Once we have committed our changes, we have effectively updated our local" " history, but GitHub has no way of knowing this unless we push our " @@ -1785,7 +1792,7 @@ msgstr "" "moyen de le savoir à moins que nous ne poussions nos modifications vers " "l'adresse distante de notre origine :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:171 +#: ../../source/contributor-tutorial-contribute-on-github.rst:170 msgid "" "Once this is done, you will see on the GitHub that your forked repo was " "updated with the changes you have made." 
@@ -1793,15 +1800,15 @@ msgstr "" "Une fois que c'est fait, tu verras sur GitHub que ton repo forké a été " "mis à jour avec les modifications que tu as apportées." -#: ../../source/contributor-tutorial-contribute-on-github.rst:175 +#: ../../source/contributor-tutorial-contribute-on-github.rst:174 msgid "Creating and merging a pull request (PR)" msgstr "Créer et fusionner une pull request (PR)" -#: ../../source/contributor-tutorial-contribute-on-github.rst:206 +#: ../../source/contributor-tutorial-contribute-on-github.rst:205 msgid "**Create the PR**" msgstr "**Créer le PR**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:178 +#: ../../source/contributor-tutorial-contribute-on-github.rst:177 msgid "" "Once you have pushed changes, on the GitHub webpage of your repository " "you should see the following message:" @@ -1809,12 +1816,12 @@ msgstr "" "Une fois que tu as poussé les modifications, sur la page web GitHub de " "ton dépôt, tu devrais voir le message suivant :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:182 +#: ../../source/contributor-tutorial-contribute-on-github.rst:181 #, fuzzy msgid "Otherwise you can always find this option in the ``Branches`` page." msgstr "Sinon, tu peux toujours trouver cette option dans la page `Branches`." -#: ../../source/contributor-tutorial-contribute-on-github.rst:184 +#: ../../source/contributor-tutorial-contribute-on-github.rst:183 #, fuzzy msgid "" "Once you click the ``Compare & pull request`` button, you should see " @@ -1823,13 +1830,13 @@ msgstr "" "Une fois que tu as cliqué sur le bouton `Compare & pull request`, tu " "devrais voir quelque chose de similaire à ceci :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:188 +#: ../../source/contributor-tutorial-contribute-on-github.rst:187 msgid "At the top you have an explanation of which branch will be merged where:" msgstr "" "En haut, tu as une explication de quelle branche sera fusionnée à quel " "endroit :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:192 +#: ../../source/contributor-tutorial-contribute-on-github.rst:191 msgid "" "In this example you can see that the request is to merge the branch " "``doc-fixes`` from my forked repository to branch ``main`` from the " @@ -1839,7 +1846,7 @@ msgstr "" "branche ``doc-fixes`` de mon dépôt forké à la branche ``main`` du dépôt " "Flower." -#: ../../source/contributor-tutorial-contribute-on-github.rst:194 +#: ../../source/contributor-tutorial-contribute-on-github.rst:193 msgid "" "The input box in the middle is there for you to describe what your PR " "does and to link it to existing issues. We have placed comments (that " @@ -1851,7 +1858,7 @@ msgstr "" "commentaires (qui ne seront pas rendus une fois le PR ouvert) pour te " "guider tout au long du processus." -#: ../../source/contributor-tutorial-contribute-on-github.rst:197 +#: ../../source/contributor-tutorial-contribute-on-github.rst:196 msgid "" "It is important to follow the instructions described in comments. For " "instance, in order to not break how our changelog system works, you " @@ -1860,7 +1867,7 @@ msgid "" ":ref:`changelogentry` appendix." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:201 +#: ../../source/contributor-tutorial-contribute-on-github.rst:200 msgid "" "At the bottom you will find the button to open the PR. 
This will notify " "reviewers that a new PR has been opened and that they should look over it" @@ -1870,7 +1877,7 @@ msgstr "" "qui informera les réviseurs qu'un nouveau PR a été ouvert et qu'ils " "doivent le consulter pour le fusionner ou demander des modifications." -#: ../../source/contributor-tutorial-contribute-on-github.rst:204 +#: ../../source/contributor-tutorial-contribute-on-github.rst:203 msgid "" "If your PR is not yet ready for review, and you don't want to notify " "anyone, you have the option to create a draft pull request:" @@ -1879,11 +1886,11 @@ msgstr "" " personne, tu as la possibilité de créer un brouillon de demande de " "traction :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:209 +#: ../../source/contributor-tutorial-contribute-on-github.rst:208 msgid "**Making new changes**" msgstr "**Faire de nouveaux changements**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:209 +#: ../../source/contributor-tutorial-contribute-on-github.rst:208 msgid "" "Once the PR has been opened (as draft or not), you can still push new " "commits to it the same way we did before, by making changes to the branch" @@ -1893,11 +1900,11 @@ msgstr "" "toujours y pousser de nouveaux commits de la même manière qu'auparavant, " "en apportant des modifications à la branche associée au PR." -#: ../../source/contributor-tutorial-contribute-on-github.rst:231 +#: ../../source/contributor-tutorial-contribute-on-github.rst:230 msgid "**Review the PR**" msgstr "**Review the PR**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:212 +#: ../../source/contributor-tutorial-contribute-on-github.rst:211 msgid "" "Once the PR has been opened or once the draft PR has been marked as " "ready, a review from code owners will be automatically requested:" @@ -1906,7 +1913,7 @@ msgstr "" " étant prêt, une révision des propriétaires de code sera automatiquement " "demandée :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:216 +#: ../../source/contributor-tutorial-contribute-on-github.rst:215 msgid "" "Code owners will then look into the code, ask questions, request changes " "or validate the PR." @@ -1914,11 +1921,11 @@ msgstr "" "Les propriétaires du code vont alors se pencher sur le code, poser des " "questions, demander des modifications ou valider le RP." -#: ../../source/contributor-tutorial-contribute-on-github.rst:218 +#: ../../source/contributor-tutorial-contribute-on-github.rst:217 msgid "Merging will be blocked if there are ongoing requested changes." msgstr "La fusion sera bloquée s'il y a des changements demandés en cours." -#: ../../source/contributor-tutorial-contribute-on-github.rst:222 +#: ../../source/contributor-tutorial-contribute-on-github.rst:221 msgid "" "To resolve them, just push the necessary changes to the branch associated" " with the PR:" @@ -1926,11 +1933,11 @@ msgstr "" "Pour les résoudre, il suffit de pousser les changements nécessaires vers " "la branche associée au PR :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:226 +#: ../../source/contributor-tutorial-contribute-on-github.rst:225 msgid "And resolve the conversation:" msgstr "Et résous la conversation :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:230 +#: ../../source/contributor-tutorial-contribute-on-github.rst:229 msgid "" "Once all the conversations have been resolved, you can re-request a " "review." @@ -1938,11 +1945,11 @@ msgstr "" "Une fois que toutes les conversations ont été résolues, tu peux " "redemander un examen." 
-#: ../../source/contributor-tutorial-contribute-on-github.rst:251 +#: ../../source/contributor-tutorial-contribute-on-github.rst:250 msgid "**Once the PR is merged**" msgstr "**Une fois que le PR est fusionné**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:234 +#: ../../source/contributor-tutorial-contribute-on-github.rst:233 msgid "" "If all the automatic tests have passed and reviewers have no more changes" " to request, they can approve the PR and merge it." @@ -1951,7 +1958,7 @@ msgstr "" " de modifications à demander, ils peuvent approuver le PR et le " "fusionner." -#: ../../source/contributor-tutorial-contribute-on-github.rst:238 +#: ../../source/contributor-tutorial-contribute-on-github.rst:237 msgid "" "Once it is merged, you can delete the branch on GitHub (a button should " "appear to do so) and also delete it locally by doing:" @@ -1960,36 +1967,38 @@ msgstr "" "(un bouton devrait apparaître pour le faire) et aussi la supprimer " "localement en faisant :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:245 +#: ../../source/contributor-tutorial-contribute-on-github.rst:244 msgid "Then you should update your forked repository by doing:" msgstr "Ensuite, tu dois mettre à jour ton dépôt forké en faisant :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:254 +#: ../../source/contributor-tutorial-contribute-on-github.rst:253 msgid "Example of first contribution" msgstr "Exemple de première contribution" -#: ../../source/contributor-tutorial-contribute-on-github.rst:257 +#: ../../source/contributor-tutorial-contribute-on-github.rst:256 msgid "Problem" msgstr "Problème" -#: ../../source/contributor-tutorial-contribute-on-github.rst:259 +#: ../../source/contributor-tutorial-contribute-on-github.rst:258 +#, fuzzy msgid "" -"For our documentation, we’ve started to use the `Diàtaxis framework " +"For our documentation, we've started to use the `Diàtaxis framework " "`_." msgstr "" "Pour notre documentation, nous avons commencé à utiliser le cadre " "`Diàtaxis `_." -#: ../../source/contributor-tutorial-contribute-on-github.rst:261 +#: ../../source/contributor-tutorial-contribute-on-github.rst:260 +#, fuzzy msgid "" -"Our “How to” guides should have titles that continue the sencence “How to" -" …”, for example, “How to upgrade to Flower 1.0”." +"Our \"How to\" guides should have titles that continue the sentence \"How" +" to …\", for example, \"How to upgrade to Flower 1.0\"." msgstr "" "Nos guides \"Comment faire\" devraient avoir des titres qui poursuivent " "la phrase \"Comment faire pour...\", par exemple, \"Comment passer à " "Flower 1.0\"." -#: ../../source/contributor-tutorial-contribute-on-github.rst:263 +#: ../../source/contributor-tutorial-contribute-on-github.rst:262 msgid "" "Most of our guides do not follow this new format yet, and changing their " "title is (unfortunately) more involved than one might think." @@ -1998,50 +2007,55 @@ msgstr "" "changer leur titre est (malheureusement) plus compliqué qu'on ne le " "pense." -#: ../../source/contributor-tutorial-contribute-on-github.rst:265 +#: ../../source/contributor-tutorial-contribute-on-github.rst:264 +#, fuzzy msgid "" -"This issue is about changing the title of a doc from present continious " +"This issue is about changing the title of a doc from present continuous " "to present simple." msgstr "" "Cette question porte sur le changement du titre d'un document du présent " "continu au présent simple." 
-#: ../../source/contributor-tutorial-contribute-on-github.rst:267 +#: ../../source/contributor-tutorial-contribute-on-github.rst:266 +#, fuzzy msgid "" -"Let's take the example of “Saving Progress” which we changed to “Save " -"Progress”. Does this pass our check?" +"Let's take the example of \"Saving Progress\" which we changed to \"Save " +"Progress\". Does this pass our check?" msgstr "" "Prenons l'exemple de \"Sauvegarder la progression\" que nous avons " "remplacé par \"Sauvegarder la progression\". Est-ce que cela passe notre " "contrôle ?" -#: ../../source/contributor-tutorial-contribute-on-github.rst:269 -msgid "Before: ”How to saving progress” ❌" +#: ../../source/contributor-tutorial-contribute-on-github.rst:268 +#, fuzzy +msgid "Before: \"How to saving progress\" ❌" msgstr "Avant : \"Comment sauvegarder les progrès\" ❌" -#: ../../source/contributor-tutorial-contribute-on-github.rst:271 -msgid "After: ”How to save progress” ✅" +#: ../../source/contributor-tutorial-contribute-on-github.rst:270 +#, fuzzy +msgid "After: \"How to save progress\" ✅" msgstr "Après : \"Comment sauvegarder la progression\" ✅" -#: ../../source/contributor-tutorial-contribute-on-github.rst:274 +#: ../../source/contributor-tutorial-contribute-on-github.rst:273 msgid "Solution" msgstr "Solution" -#: ../../source/contributor-tutorial-contribute-on-github.rst:276 +#: ../../source/contributor-tutorial-contribute-on-github.rst:275 +#, fuzzy msgid "" -"This is a tiny change, but it’ll allow us to test your end-to-end setup. " -"After cloning and setting up the Flower repo, here’s what you should do:" +"This is a tiny change, but it'll allow us to test your end-to-end setup. " +"After cloning and setting up the Flower repo, here's what you should do:" msgstr "" "C'est un tout petit changement, mais il nous permettra de tester ta " "configuration de bout en bout. Après avoir cloné et configuré le repo " "Flower, voici ce que tu dois faire :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:278 +#: ../../source/contributor-tutorial-contribute-on-github.rst:277 #, fuzzy msgid "Find the source file in ``doc/source``" msgstr "Trouve le fichier source dans `doc/source`" -#: ../../source/contributor-tutorial-contribute-on-github.rst:279 +#: ../../source/contributor-tutorial-contribute-on-github.rst:278 #, fuzzy msgid "" "Make the change in the ``.rst`` file (beware, the dashes under the title " @@ -2050,20 +2064,20 @@ msgstr "" "Effectue la modification dans le fichier `.rst` (attention, les tirets " "sous le titre doivent être de la même longueur que le titre lui-même)" -#: ../../source/contributor-tutorial-contribute-on-github.rst:280 +#: ../../source/contributor-tutorial-contribute-on-github.rst:279 +#, fuzzy msgid "" -"Build the docs and check the result: ``_" msgstr "" -"Construis les documents et vérifie le résultat : " -"``_" +"Construis les documents et vérifie le résultat : ``_" -#: ../../source/contributor-tutorial-contribute-on-github.rst:283 +#: ../../source/contributor-tutorial-contribute-on-github.rst:282 msgid "Rename file" msgstr "Renommer le fichier" -#: ../../source/contributor-tutorial-contribute-on-github.rst:285 +#: ../../source/contributor-tutorial-contribute-on-github.rst:284 msgid "" "You might have noticed that the file name still reflects the old wording." " If we just change the file, then we break all existing links to it - it " @@ -2076,21 +2090,22 @@ msgstr "" "important** d'éviter cela, car briser des liens peut nuire à notre " "classement dans les moteurs de recherche." 
-#: ../../source/contributor-tutorial-contribute-on-github.rst:288 -msgid "Here’s how to change the file name:" +#: ../../source/contributor-tutorial-contribute-on-github.rst:287 +#, fuzzy +msgid "Here's how to change the file name:" msgstr "Voici comment changer le nom du fichier :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:290 +#: ../../source/contributor-tutorial-contribute-on-github.rst:289 #, fuzzy msgid "Change the file name to ``save-progress.rst``" msgstr "Change le nom du fichier en `save-progress.rst`" -#: ../../source/contributor-tutorial-contribute-on-github.rst:291 +#: ../../source/contributor-tutorial-contribute-on-github.rst:290 #, fuzzy msgid "Add a redirect rule to ``doc/source/conf.py``" msgstr "Ajouter une règle de redirection à `doc/source/conf.py`" -#: ../../source/contributor-tutorial-contribute-on-github.rst:293 +#: ../../source/contributor-tutorial-contribute-on-github.rst:292 #, fuzzy msgid "" "This will cause a redirect from ``saving-progress.html`` to ``save-" @@ -2099,11 +2114,11 @@ msgstr "" "Cela entraînera une redirection de `saving-progress.html` vers `save-" "progress.html`, les anciens liens continueront à fonctionner." -#: ../../source/contributor-tutorial-contribute-on-github.rst:296 +#: ../../source/contributor-tutorial-contribute-on-github.rst:295 msgid "Apply changes in the index file" msgstr "Applique les changements dans le fichier d'index" -#: ../../source/contributor-tutorial-contribute-on-github.rst:298 +#: ../../source/contributor-tutorial-contribute-on-github.rst:297 #, fuzzy msgid "" "For the lateral navigation bar to work properly, it is very important to " @@ -2114,46 +2129,47 @@ msgstr "" "très important de mettre également à jour le fichier `index.rst`. C'est " "là que nous définissons toute l'arborescence de la barre de navigation." -#: ../../source/contributor-tutorial-contribute-on-github.rst:301 +#: ../../source/contributor-tutorial-contribute-on-github.rst:300 #, fuzzy msgid "Find and modify the file name in ``index.rst``" msgstr "Trouve et modifie le nom du fichier dans `index.rst`" -#: ../../source/contributor-tutorial-contribute-on-github.rst:304 +#: ../../source/contributor-tutorial-contribute-on-github.rst:303 msgid "Open PR" msgstr "Open PR" -#: ../../source/contributor-tutorial-contribute-on-github.rst:306 +#: ../../source/contributor-tutorial-contribute-on-github.rst:305 +#, fuzzy msgid "" -"Commit the changes (commit messages are always imperative: “Do " -"something”, in this case “Change …”)" +"Commit the changes (commit messages are always imperative: \"Do " +"something\", in this case \"Change …\")" msgstr "" "Valide les modifications (les messages de validation sont toujours " "impératifs : \"Fais quelque chose\", dans ce cas \"Modifie...\")" -#: ../../source/contributor-tutorial-contribute-on-github.rst:307 +#: ../../source/contributor-tutorial-contribute-on-github.rst:306 msgid "Push the changes to your fork" msgstr "Transmets les changements à ta fourchette" -#: ../../source/contributor-tutorial-contribute-on-github.rst:308 +#: ../../source/contributor-tutorial-contribute-on-github.rst:307 msgid "Open a PR (as shown above)" msgstr "Ouvre un RP (comme indiqué ci-dessus)" -#: ../../source/contributor-tutorial-contribute-on-github.rst:309 +#: ../../source/contributor-tutorial-contribute-on-github.rst:308 msgid "Wait for it to be approved!" msgstr "Attends qu'elle soit approuvée !" 
-#: ../../source/contributor-tutorial-contribute-on-github.rst:310 +#: ../../source/contributor-tutorial-contribute-on-github.rst:309 msgid "Congrats! 🥳 You're now officially a Flower contributor!" msgstr "" "Félicitations 🥳 Tu es désormais officiellement une contributrice de " "Flower !" -#: ../../source/contributor-tutorial-contribute-on-github.rst:314 +#: ../../source/contributor-tutorial-contribute-on-github.rst:313 msgid "How to write a good PR title" msgstr "Comment écrire un bon titre de PR" -#: ../../source/contributor-tutorial-contribute-on-github.rst:316 +#: ../../source/contributor-tutorial-contribute-on-github.rst:315 msgid "" "A well-crafted PR title helps team members quickly understand the purpose" " and scope of the changes being proposed. Here's a guide to help you " @@ -2163,7 +2179,7 @@ msgstr "" "comprendre l'intérêt et le scope des changements proposés. Voici un guide" " pour vous aider à écrire des bons titres de PR :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:318 +#: ../../source/contributor-tutorial-contribute-on-github.rst:317 msgid "" "1. Be Clear and Concise: Provide a clear summary of the changes in a " "concise manner. 1. Use Actionable Verbs: Start with verbs like \"Add,\" " @@ -2181,7 +2197,7 @@ msgstr "" "capitalisation et une ponctuation : Suivre les règles de grammaire pour " "la clarté." -#: ../../source/contributor-tutorial-contribute-on-github.rst:324 +#: ../../source/contributor-tutorial-contribute-on-github.rst:323 msgid "" "Let's start with a few examples for titles that should be avoided because" " they do not provide meaningful information:" @@ -2189,27 +2205,27 @@ msgstr "" "Commençons par quelques exemples de titres qui devraient être évités " "parce qu'ils ne fournissent pas d'information significative :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:326 +#: ../../source/contributor-tutorial-contribute-on-github.rst:325 msgid "Implement Algorithm" msgstr "Implement Algorithm" -#: ../../source/contributor-tutorial-contribute-on-github.rst:327 +#: ../../source/contributor-tutorial-contribute-on-github.rst:326 msgid "Database" msgstr "Database" -#: ../../source/contributor-tutorial-contribute-on-github.rst:328 +#: ../../source/contributor-tutorial-contribute-on-github.rst:327 msgid "Add my_new_file.py to codebase" msgstr "Add my_new_file.py to codebase" -#: ../../source/contributor-tutorial-contribute-on-github.rst:329 +#: ../../source/contributor-tutorial-contribute-on-github.rst:328 msgid "Improve code in module" msgstr "Improve code in module" -#: ../../source/contributor-tutorial-contribute-on-github.rst:330 +#: ../../source/contributor-tutorial-contribute-on-github.rst:329 msgid "Change SomeModule" msgstr "Change SomeModule" -#: ../../source/contributor-tutorial-contribute-on-github.rst:332 +#: ../../source/contributor-tutorial-contribute-on-github.rst:331 msgid "" "Here are a few positive examples which provide helpful information " "without repeating how they do it, as that is already visible in the " @@ -2219,24 +2235,24 @@ msgstr "" "répéter comment ils le font, comme cela est déjà visible dans la section " "\"Files changed\" de la PR :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:334 +#: ../../source/contributor-tutorial-contribute-on-github.rst:333 msgid "Update docs banner to mention Flower Summit 2023" msgstr "Update docs banner to mention Flower Summit 2023" -#: ../../source/contributor-tutorial-contribute-on-github.rst:335 +#: 
../../source/contributor-tutorial-contribute-on-github.rst:334 msgid "Remove unnecessary XGBoost dependency" msgstr "Remove unnecessary XGBoost dependency" -#: ../../source/contributor-tutorial-contribute-on-github.rst:336 +#: ../../source/contributor-tutorial-contribute-on-github.rst:335 msgid "Remove redundant attributes in strategies subclassing FedAvg" msgstr "Remove redundant attributes in strategies subclassing FedAvg" -#: ../../source/contributor-tutorial-contribute-on-github.rst:337 +#: ../../source/contributor-tutorial-contribute-on-github.rst:336 #, fuzzy msgid "Add CI job to deploy the staging system when the ``main`` branch changes" msgstr "Add CI job to deploy the staging system when the `main` branch changes" -#: ../../source/contributor-tutorial-contribute-on-github.rst:338 +#: ../../source/contributor-tutorial-contribute-on-github.rst:337 msgid "" "Add new amazing library which will be used to improve the simulation " "engine" @@ -2244,7 +2260,7 @@ msgstr "" "Add new amazing library which will be used to improve the simulation " "engine" -#: ../../source/contributor-tutorial-contribute-on-github.rst:342 +#: ../../source/contributor-tutorial-contribute-on-github.rst:341 #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:548 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:946 #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:727 @@ -2253,7 +2269,7 @@ msgstr "" msgid "Next steps" msgstr "Prochaines étapes" -#: ../../source/contributor-tutorial-contribute-on-github.rst:344 +#: ../../source/contributor-tutorial-contribute-on-github.rst:343 msgid "" "Once you have made your first PR, and want to contribute more, be sure to" " check out the following :" @@ -2261,148 +2277,149 @@ msgstr "" "Une fois que tu auras fait ton premier RP, et que tu voudras contribuer " "davantage, ne manque pas de consulter les sites suivants :" -#: ../../source/contributor-tutorial-contribute-on-github.rst:346 +#: ../../source/contributor-tutorial-contribute-on-github.rst:345 +#, fuzzy msgid "" -"`Good first contributions `_, where you should particularly look " -"into the :code:`baselines` contributions." +":doc:`Good first contributions `, where you should particularly look into the " +":code:`baselines` contributions." msgstr "" "`Bonnes premières contributions `_, où vous devriez " "particulièrement regarder les contributions :code:`baselines`." -#: ../../source/contributor-tutorial-contribute-on-github.rst:350 +#: ../../source/contributor-tutorial-contribute-on-github.rst:349 #: ../../source/fed/0000-20200102-fed-template.md:60 msgid "Appendix" msgstr "Annexe" -#: ../../source/contributor-tutorial-contribute-on-github.rst:355 +#: ../../source/contributor-tutorial-contribute-on-github.rst:354 #, fuzzy msgid "Changelog entry" msgstr "Changelog" -#: ../../source/contributor-tutorial-contribute-on-github.rst:357 +#: ../../source/contributor-tutorial-contribute-on-github.rst:356 msgid "" "When opening a new PR, inside its description, there should be a " "``Changelog entry`` header." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:359 +#: ../../source/contributor-tutorial-contribute-on-github.rst:358 msgid "" "Above this header you should see the following comment that explains how " "to write your changelog entry:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:361 +#: ../../source/contributor-tutorial-contribute-on-github.rst:360 msgid "" "Inside the following 'Changelog entry' section, you should put the " "description of your changes that will be added to the changelog alongside" " your PR title." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:364 +#: ../../source/contributor-tutorial-contribute-on-github.rst:363 msgid "" -"If the section is completely empty (without any token) or non-existant, " +"If the section is completely empty (without any token) or non-existent, " "the changelog will just contain the title of the PR for the changelog " "entry, without any description." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:367 +#: ../../source/contributor-tutorial-contribute-on-github.rst:366 msgid "" "If the section contains some text other than tokens, it will use it to " "add a description to the change." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:369 +#: ../../source/contributor-tutorial-contribute-on-github.rst:368 msgid "" "If the section contains one of the following tokens it will ignore any " "other text and put the PR under the corresponding section of the " "changelog:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:371 +#: ../../source/contributor-tutorial-contribute-on-github.rst:370 msgid " is for classifying a PR as a general improvement." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:373 +#: ../../source/contributor-tutorial-contribute-on-github.rst:372 msgid " is to not add the PR to the changelog" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:375 +#: ../../source/contributor-tutorial-contribute-on-github.rst:374 msgid " is to add a general baselines change to the PR" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:377 +#: ../../source/contributor-tutorial-contribute-on-github.rst:376 msgid " is to add a general examples change to the PR" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:379 +#: ../../source/contributor-tutorial-contribute-on-github.rst:378 msgid " is to add a general sdk change to the PR" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:381 +#: ../../source/contributor-tutorial-contribute-on-github.rst:380 msgid " is to add a general simulations change to the PR" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:383 +#: ../../source/contributor-tutorial-contribute-on-github.rst:382 msgid "Note that only one token should be used." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:385 +#: ../../source/contributor-tutorial-contribute-on-github.rst:384 msgid "" "Its content must have a specific format. 
We will break down what each " "possibility does:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:387 +#: ../../source/contributor-tutorial-contribute-on-github.rst:386 msgid "" "If the ``### Changelog entry`` section contains nothing or doesn't exist," " the following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:391 +#: ../../source/contributor-tutorial-contribute-on-github.rst:390 msgid "" "If the ``### Changelog entry`` section contains a description (and no " "token), the following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:397 +#: ../../source/contributor-tutorial-contribute-on-github.rst:396 msgid "" "If the ``### Changelog entry`` section contains ````, nothing will " "change in the changelog." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:399 +#: ../../source/contributor-tutorial-contribute-on-github.rst:398 msgid "" "If the ``### Changelog entry`` section contains ````, the " "following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:403 +#: ../../source/contributor-tutorial-contribute-on-github.rst:402 msgid "" "If the ``### Changelog entry`` section contains ````, the " "following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:407 +#: ../../source/contributor-tutorial-contribute-on-github.rst:406 msgid "" "If the ``### Changelog entry`` section contains ````, the " "following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:411 +#: ../../source/contributor-tutorial-contribute-on-github.rst:410 msgid "" "If the ``### Changelog entry`` section contains ````, the following " "text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:415 +#: ../../source/contributor-tutorial-contribute-on-github.rst:414 msgid "" "If the ``### Changelog entry`` section contains ````, the " "following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:419 +#: ../../source/contributor-tutorial-contribute-on-github.rst:418 msgid "" "Note that only one token must be provided, otherwise, only the first " "action (in the order listed above), will be performed." @@ -2436,10 +2453,11 @@ msgstr "" "virtualenv>`_" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:12 +#, fuzzy msgid "" "Flower uses :code:`pyproject.toml` to manage dependencies and configure " "development tools (the ones which support it). Poetry is a build tool " -"which supports `PEP 517 `_." +"which supports `PEP 517 `_." msgstr "" "Flower utilise un fichier :code:`pyproject.toml` pour gérer les " "dependences et configurer les outils de développement (du moins ceux qui " @@ -2645,9 +2663,9 @@ msgid "" "`_, a federated training strategy " "designed for non-iid data. We are using PyTorch to train a Convolutional " "Neural Network(with Batch Normalization layers) on the CIFAR-10 dataset. " -"When applying FedBN, only few changes needed compared to `Example: " -"PyTorch - From Centralized To Federated `_." +"When applying FedBN, only few changes needed compared to :doc:`Example: " +"PyTorch - From Centralized To Federated `." 
msgstr "" "Ce tutoriel te montrera comment utiliser Flower pour construire une " "version fédérée d'une charge de travail d'apprentissage automatique " @@ -2668,10 +2686,10 @@ msgstr "Formation centralisée" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:10 #, fuzzy msgid "" -"All files are revised based on `Example: PyTorch - From Centralized To " -"Federated `_. The only thing to do is modifying the file called " -":code:`cifar.py`, revised part is shown below:" +"All files are revised based on :doc:`Example: PyTorch - From Centralized " +"To Federated `. The only " +"thing to do is modifying the file called :code:`cifar.py`, revised part " +"is shown below:" msgstr "" "Tous les fichiers sont révisés sur la base de `Exemple : PyTorch - From " "Centralized To Federated `_, the following parts are easy to follow, onyl " -":code:`get_parameters` and :code:`set_parameters` function in " -":code:`client.py` needed to revise. If not, please read the `Example: " -"PyTorch - From Centralized To Federated `_. first." +"If you have read :doc:`Example: PyTorch - From Centralized To Federated " +"`, the following parts are" +" easy to follow, only :code:`get_parameters` and :code:`set_parameters` " +"function in :code:`client.py` needed to revise. If not, please read the " +":doc:`Example: PyTorch - From Centralized To Federated `. first." msgstr "" "Si vous avez lu `Exemple : PyTorch - From Centralized To Federated " "` function and " -"leave all the configuration possibilities at their default values, as " -"seen below." +"the `start_server `_ function " +"and leave all the configuration possibilities at their default values, as" +" seen below." msgstr "" "Nous pouvons aller un peu plus loin et voir que :code:`server.py` lance " "simplement un serveur qui coordonnera trois tours de formation. Flower " @@ -4168,477 +4189,301 @@ msgid "You are ready now. Enjoy learning in a federated way!" msgstr "Tu es prêt maintenant. Profite de l'apprentissage de manière fédérée !" #: ../../source/explanation-differential-privacy.rst:2 +#: ../../source/explanation-differential-privacy.rst:11 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:303 #, fuzzy -msgid "Differential privacy" +msgid "Differential Privacy" msgstr "Confidentialité différentielle" -#: ../../source/explanation-differential-privacy.rst:4 +#: ../../source/explanation-differential-privacy.rst:3 msgid "" -"Flower provides differential privacy (DP) wrapper classes for the easy " -"integration of the central DP guarantees provided by DP-FedAvg into " -"training pipelines defined in any of the various ML frameworks that " -"Flower is compatible with." +"The information in datasets like healthcare, financial transactions, user" +" preferences, etc., is valuable and has the potential for scientific " +"breakthroughs and provides important business insights. However, such " +"data is also sensitive and there is a risk of compromising individual " +"privacy." msgstr "" -"Flower fournit des classes d'enveloppe de confidentialité différentielle " -"(DP) pour l'intégration facile des garanties centrales de DP fournies par" -" DP-FedAvg dans les pipelines de formation définis dans n'importe lequel " -"des divers cadres de ML avec lesquels Flower est compatible." 
-#: ../../source/explanation-differential-privacy.rst:7 -#, fuzzy +#: ../../source/explanation-differential-privacy.rst:6 msgid "" -"Please note that these components are still experimental; the correct " -"configuration of DP for a specific task is still an unsolved problem." +"Traditional methods like anonymization alone would not work because of " +"attacks like Re-identification and Data Linkage. That's where " +"differential privacy comes in. It provides the possibility of analyzing " +"data while ensuring the privacy of individuals." msgstr "" -"Note que ces composants sont encore expérimentaux, la configuration " -"correcte du DP pour une tâche spécifique est encore un problème non " -"résolu." -#: ../../source/explanation-differential-privacy.rst:10 +#: ../../source/explanation-differential-privacy.rst:12 msgid "" -"The name DP-FedAvg is misleading since it can be applied on top of any FL" -" algorithm that conforms to the general structure prescribed by the " -"FedOpt family of algorithms." +"Imagine two datasets that are identical except for a single record (for " +"instance, Alice's data). Differential Privacy (DP) guarantees that any " +"analysis (M), like calculating the average income, will produce nearly " +"identical results for both datasets (O and O' would be similar). This " +"preserves group patterns while obscuring individual details, ensuring the" +" individual's information remains hidden in the crowd." msgstr "" -"Le nom DP-FedAvg est trompeur car il peut être appliqué à n'importe quel " -"algorithme FL qui se conforme à la structure générale prescrite par la " -"famille d'algorithmes FedOpt." -#: ../../source/explanation-differential-privacy.rst:13 -msgid "DP-FedAvg" -msgstr "DP-FedAvg" +#: ../../source/explanation-differential-privacy.rst:-1 +msgid "DP Intro" +msgstr "" + +#: ../../source/explanation-differential-privacy.rst:22 +msgid "" +"One of the most commonly used mechanisms to achieve DP is adding enough " +"noise to the output of the analysis to mask the contribution of each " +"individual in the data while preserving the overall accuracy of the " +"analysis." +msgstr "" + +#: ../../source/explanation-differential-privacy.rst:25 +#, fuzzy +msgid "Formal Definition" +msgstr "Compiler les définitions ProtoBuf" -#: ../../source/explanation-differential-privacy.rst:15 +#: ../../source/explanation-differential-privacy.rst:26 msgid "" -"DP-FedAvg, originally proposed by McMahan et al. [mcmahan]_ and extended " -"by Andrew et al. [andrew]_, is essentially FedAvg with the following " -"modifications." +"Differential Privacy (DP) provides statistical guarantees against the " +"information an adversary can infer through the output of a randomized " +"algorithm. It provides an unconditional upper bound on the influence of a" +" single individual on the output of the algorithm by adding noise [1]. A " +"randomized mechanism M provides (:math:`\\epsilon`, " +":math:`\\delta`)-differential privacy if for any two neighboring " +"databases, D :sub:`1` and D :sub:`2`, that differ in only a single " +"record, and for all possible outputs S ⊆ Range(A):" msgstr "" -"DP-FedAvg, proposé à l'origine par McMahan et al. [mcmahan]_ et étendu " -"par Andrew et al. [andrew]_, est essentiellement FedAvg avec les " -"modifications suivantes." -#: ../../source/explanation-differential-privacy.rst:17 +#: ../../source/explanation-differential-privacy.rst:32 msgid "" -"**Clipping** : The influence of each client's update is bounded by " -"clipping it. 
This is achieved by enforcing a cap on the L2 norm of the " -"update, scaling it down if needed." +"\\small\n" +"P[M(D_{1} \\in A)] \\leq e^{\\delta} P[M(D_{2} \\in A)] + \\delta" msgstr "" -"**Clipping** : L'influence de la mise à jour de chaque client est limitée" -" en l'écrêtant. Ceci est réalisé en imposant un plafond à la norme L2 de " -"la mise à jour, en la réduisant si nécessaire." -#: ../../source/explanation-differential-privacy.rst:18 +#: ../../source/explanation-differential-privacy.rst:38 msgid "" -"**Noising** : Gaussian noise, calibrated to the clipping threshold, is " -"added to the average computed at the server." +"The :math:`\\epsilon` parameter, also known as the privacy budget, is a " +"metric of privacy loss. It also controls the privacy-utility trade-off; " +"lower :math:`\\epsilon` values indicate higher levels of privacy but are " +"likely to reduce utility as well. The :math:`\\delta` parameter accounts " +"for a small probability on which the upper bound :math:`\\epsilon` does " +"not hold. The amount of noise needed to achieve differential privacy is " +"proportional to the sensitivity of the output, which measures the maximum" +" change in the output due to the inclusion or removal of a single record." msgstr "" -"**Bruit** : un bruit gaussien, calibré sur le seuil d'écrêtage, est " -"ajouté à la moyenne calculée au niveau du serveur." -#: ../../source/explanation-differential-privacy.rst:20 +#: ../../source/explanation-differential-privacy.rst:45 #, fuzzy +msgid "Differential Privacy in Machine Learning" +msgstr "Confidentialité différentielle" + +#: ../../source/explanation-differential-privacy.rst:46 msgid "" -"The distribution of the update norm has been shown to vary from task-to-" -"task and to evolve as training progresses. This variability is crucial in" -" understanding its impact on differential privacy guarantees, emphasizing" -" the need for an adaptive approach [andrew]_ that continuously adjusts " -"the clipping threshold to track a prespecified quantile of the update " -"norm distribution." +"DP can be utilized in machine learning to preserve the privacy of the " +"training data. Differentially private machine learning algorithms are " +"designed in a way to prevent the algorithm to learn any specific " +"information about any individual data points and subsequently prevent the" +" model from revealing sensitive information. Depending on the stage at " +"which noise is introduced, various methods exist for applying DP to " +"machine learning algorithms. One approach involves adding noise to the " +"training data (either to the features or labels), while another method " +"entails injecting noise into the gradients of the loss function during " +"model training. Additionally, such noise can be incorporated into the " +"model's output." msgstr "" -"Il a été démontré que la distribution de la norme de mise à jour varie " -"d'une tâche à l'autre et évolue au fur et à mesure de la formation. C'est" -" pourquoi nous utilisons une approche adaptative [andrew]_ qui ajuste " -"continuellement le seuil d'écrêtage pour suivre un quantile prédéfini de " -"la distribution de la norme de mise à jour." 
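For readers cross-checking the new differential-privacy strings above: the (ε, δ) guarantee is conventionally written with the privacy budget ε in the exponent and an output event S ⊆ Range(M). A standard reference formulation (given here only as a reading aid) is:

.. math::

   P[M(D_{1}) \in S] \leq e^{\epsilon} \, P[M(D_{2}) \in S] + \delta

A smaller ε forces the output distributions on neighboring databases to be closer (stronger privacy), while δ bounds the small probability with which this multiplicative guarantee may fail.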
-#: ../../source/explanation-differential-privacy.rst:23 -msgid "Simplifying Assumptions" -msgstr "Simplifier les hypothèses" - -#: ../../source/explanation-differential-privacy.rst:25 +#: ../../source/explanation-differential-privacy.rst:53 #, fuzzy +msgid "Differential Privacy in Federated Learning" +msgstr "Mise à l'échelle de l'apprentissage fédéré" + +#: ../../source/explanation-differential-privacy.rst:54 +msgid "" +"Federated learning is a data minimization approach that allows multiple " +"parties to collaboratively train a model without sharing their raw data. " +"However, federated learning also introduces new privacy challenges. The " +"model updates between parties and the central server can leak information" +" about the local data. These leaks can be exploited by attacks such as " +"membership inference and property inference attacks, or model inversion " +"attacks." +msgstr "" + +#: ../../source/explanation-differential-privacy.rst:58 msgid "" -"We make (and attempt to enforce) a number of assumptions that must be " -"satisfied to ensure that the training process actually realizes the " -":math:`(\\epsilon, \\delta)` guarantees the user has in mind when " -"configuring the setup." +"DP can play a crucial role in federated learning to provide privacy for " +"the clients' data." msgstr "" -"Nous formulons (et tentons d'appliquer) un certain nombre d'hypothèses " -"qui doivent être satisfaites pour que le processus de formation réalise " -"réellement les garanties :math:`(\\epsilon, \\delta)` que l'utilisateur a" -" à l'esprit lorsqu'il configure l'installation." -#: ../../source/explanation-differential-privacy.rst:27 +#: ../../source/explanation-differential-privacy.rst:60 msgid "" -"**Fixed-size subsampling** :Fixed-size subsamples of the clients must be " -"taken at each round, as opposed to variable-sized Poisson subsamples." +"Depending on the granularity of privacy provision or the location of " +"noise addition, different forms of DP exist in federated learning. In " +"this explainer, we focus on two approaches of DP utilization in federated" +" learning based on where the noise is added: at the server (also known as" +" the center) or at the client (also known as the local)." msgstr "" -"**Sous-échantillonnage de taille fixe** :Des sous-échantillons de taille " -"fixe des clients doivent être prélevés à chaque tour, par opposition aux " -"sous-échantillons de Poisson de taille variable." -#: ../../source/explanation-differential-privacy.rst:28 +#: ../../source/explanation-differential-privacy.rst:63 msgid "" -"**Unweighted averaging** : The contributions from all the clients must " -"weighted equally in the aggregate to eliminate the requirement for the " -"server to know in advance the sum of the weights of all clients available" -" for selection." +"**Central Differential Privacy**: DP is applied by the server and the " +"goal is to prevent the aggregated model from leaking information about " +"each client's data." msgstr "" -"**Moyenne non pondérée** : Les contributions de tous les clients doivent " -"être pondérées de façon égale dans l'ensemble afin que le serveur n'ait " -"pas à connaître à l'avance la somme des poids de tous les clients " -"disponibles pour la sélection." -#: ../../source/explanation-differential-privacy.rst:29 +#: ../../source/explanation-differential-privacy.rst:65 msgid "" -"**No client failures** : The set of available clients must stay constant " -"across all rounds of training. In other words, clients cannot drop out or" -" fail." 
+"**Local Differential Privacy**: DP is applied on the client side before " +"sending any information to the server and the goal is to prevent the " +"updates that are sent to the server from leaking any information about " +"the client's data." msgstr "" -"**Aucune défaillance de client** : L'ensemble des clients disponibles " -"doit rester constant pendant toutes les séries de formation. En d'autres " -"termes, les clients ne peuvent pas abandonner ou échouer." -#: ../../source/explanation-differential-privacy.rst:31 +#: ../../source/explanation-differential-privacy.rst:-1 +#: ../../source/explanation-differential-privacy.rst:68 +#: ../../source/how-to-use-differential-privacy.rst:11 #, fuzzy +msgid "Central Differential Privacy" +msgstr "Confidentialité différentielle" + +#: ../../source/explanation-differential-privacy.rst:69 msgid "" -"The first two are useful for eliminating a multitude of complications " -"associated with calibrating the noise to the clipping threshold, while " -"the third one is required to comply with the assumptions of the privacy " -"analysis." +"In this approach, which is also known as user-level DP, the central " +"server is responsible for adding noise to the globally aggregated " +"parameters. It should be noted that trust in the server is required." msgstr "" -"Les deux premiers sont utiles pour éliminer une multitude de " -"complications liées au calibrage du bruit en fonction du seuil " -"d'écrêtage, tandis que le troisième est nécessaire pour se conformer aux " -"hypothèses de l'analyse de la vie privée." -#: ../../source/explanation-differential-privacy.rst:34 +#: ../../source/explanation-differential-privacy.rst:76 msgid "" -"These restrictions are in line with constraints imposed by Andrew et al. " -"[andrew]_." +"While there are various ways to implement central DP in federated " +"learning, we concentrate on the algorithms proposed by [2] and [3]. The " +"overall approach is to clip the model updates sent by the clients and add" +" some amount of noise to the aggregated model. In each iteration, a " +"random set of clients is chosen with a specific probability for training." +" Each client performs local training on its own data. The update of each " +"client is then clipped by some value `S` (sensitivity `S`). This would " +"limit the impact of any individual client which is crucial for privacy " +"and often beneficial for robustness. A common approach to achieve this is" +" by restricting the `L2` norm of the clients' model updates, ensuring " +"that larger updates are scaled down to fit within the norm `S`." msgstr "" -"Ces restrictions sont conformes aux contraintes imposées par Andrew et " -"al. [andrew]_." -#: ../../source/explanation-differential-privacy.rst:37 -msgid "Customizable Responsibility for Noise injection" -msgstr "Responsabilité personnalisable pour l'injection de bruit" +#: ../../source/explanation-differential-privacy.rst:-1 +msgid "clipping" +msgstr "" -#: ../../source/explanation-differential-privacy.rst:38 +#: ../../source/explanation-differential-privacy.rst:89 msgid "" -"In contrast to other implementations where the addition of noise is " -"performed at the server, you can configure the site of noise injection to" -" better match your threat model. 
We provide users with the flexibility to" -" set up the training such that each client independently adds a small " -"amount of noise to the clipped update, with the result that simply " -"aggregating the noisy updates is equivalent to the explicit addition of " -"noise to the non-noisy aggregate at the server." -msgstr "" -"Contrairement à d'autres implémentations où l'ajout de bruit est effectué" -" au niveau du serveur, tu peux configurer le site d'injection de bruit " -"pour qu'il corresponde mieux à ton modèle de menace. Nous offrons aux " -"utilisateurs la possibilité de configurer l'entraînement de telle sorte " -"que chaque client ajoute indépendamment une petite quantité de bruit à la" -" mise à jour écrêtée, ce qui fait que le simple fait d'agréger les mises " -"à jour bruyantes équivaut à l'ajout explicite de bruit à l'agrégat non " -"bruyant au niveau du serveur." - -#: ../../source/explanation-differential-privacy.rst:41 -msgid "" -"To be precise, if we let :math:`m` be the number of clients sampled each " -"round and :math:`\\sigma_\\Delta` be the scale of the total Gaussian " -"noise that needs to be added to the sum of the model updates, we can use " -"simple maths to show that this is equivalent to each client adding noise " -"with scale :math:`\\sigma_\\Delta/\\sqrt{m}`." -msgstr "" -"Pour être précis, si nous laissons :math:`m` être le nombre de clients " -"échantillonnés à chaque tour et :math:\\sigma_\\Delta` être l'échelle du " -"bruit gaussien total qui doit être ajouté à la somme des mises à jour du " -"modèle, nous pouvons utiliser des mathématiques simples pour montrer que " -"cela équivaut à ce que chaque client ajoute du bruit avec l'échelle " -":math:\\sigma_\\Delta/\\sqrt{m}`." - -#: ../../source/explanation-differential-privacy.rst:44 -msgid "Wrapper-based approach" -msgstr "Approche basée sur l'enveloppe" +"Afterwards, the Gaussian mechanism is used to add noise in order to " +"distort the sum of all clients' updates. The amount of noise is scaled to" +" the sensitivity value to obtain a privacy guarantee. The Gaussian " +"mechanism is used with a noise sampled from `N (0, σ²)` where `σ = ( " +"noise_scale * S ) / (number of sampled clients)`." +msgstr "" -#: ../../source/explanation-differential-privacy.rst:46 +#: ../../source/explanation-differential-privacy.rst:94 +msgid "Clipping" +msgstr "" + +#: ../../source/explanation-differential-privacy.rst:96 msgid "" -"Introducing DP to an existing workload can be thought of as adding an " -"extra layer of security around it. This inspired us to provide the " -"additional server and client-side logic needed to make the training " -"process differentially private as wrappers for instances of the " -":code:`Strategy` and :code:`NumPyClient` abstract classes respectively. " -"This wrapper-based approach has the advantage of being easily composable " -"with other wrappers that someone might contribute to the Flower library " -"in the future, e.g., for secure aggregation. Using Inheritance instead " -"can be tedious because that would require the creation of new sub- " -"classes every time a new class implementing :code:`Strategy` or " -":code:`NumPyClient` is defined." -msgstr "" -"L'introduction du DP dans une charge de travail existante peut être " -"considérée comme l'ajout d'une couche de sécurité supplémentaire autour " -"d'elle. 
Cela nous a incités à fournir la logique supplémentaire côté " -"serveur et côté client nécessaire pour rendre le processus de formation " -"différentiellement privé en tant qu'enveloppes pour les instances des " -"classes abstraites :code:`Strategy` et :code:`NumPyClient` " -"respectivement. Cette approche basée sur l'enveloppe a l'avantage d'être " -"facilement composable avec d'autres enveloppes que quelqu'un pourrait " -"contribuer à la bibliothèque Flower à l'avenir, par exemple, pour " -"l'agrégation sécurisée. L'utilisation de l'héritage à la place peut être " -"fastidieuse car cela nécessiterait la création de nouvelles sous-classes " -"chaque fois qu'une nouvelle classe mettant en œuvre :code:`Strategy` ou " -":code:`NumPyClient` est définie." - -#: ../../source/explanation-differential-privacy.rst:49 -msgid "Server-side logic" -msgstr "Logique côté serveur" +"There are two forms of clipping commonly used in Central DP: Fixed " +"Clipping and Adaptive Clipping." +msgstr "" -#: ../../source/explanation-differential-privacy.rst:51 -#, fuzzy -msgid "" -"The first version of our solution was to define a decorator whose " -"constructor accepted, among other things, a boolean-valued variable " -"indicating whether adaptive clipping was to be enabled or not. We quickly" -" realized that this would clutter its :code:`__init__()` function with " -"variables corresponding to hyperparameters of adaptive clipping that " -"would remain unused when it was disabled. A cleaner implementation could " -"be achieved by splitting the functionality into two decorators, " -":code:`DPFedAvgFixed` and :code:`DPFedAvgAdaptive`, with the latter sub- " -"classing the former. The constructors for both classes accept a boolean " -"parameter :code:`server_side_noising`, which, as the name suggests, " -"determines where noising is to be performed." -msgstr "" -"La première version de notre solution consistait à définir un décorateur " -"dont le constructeur acceptait, entre autres, une variable à valeur " -"booléenne indiquant si l'écrêtage adaptatif devait être activé ou non. " -"Nous nous sommes rapidement rendu compte que cela encombrerait sa " -"fonction :code:`__init__()` avec des variables correspondant aux " -"hyperparamètres de l'écrêtage adaptatif qui resteraient inutilisées " -"lorsque celui-ci était désactivé. Une implémentation plus propre pourrait" -" être obtenue en divisant la fonctionnalité en deux décorateurs, " -":code:`DPFedAvgFixed` et :code:`DPFedAvgAdaptive`, le second sous-" -"classant le premier. Les constructeurs des deux classes acceptent un " -"paramètre booléen :code:`server_side_noising` qui, comme son nom " -"l'indique, détermine l'endroit où le noising doit être effectué." +#: ../../source/explanation-differential-privacy.rst:98 +msgid "" +"**Fixed Clipping** : A predefined fix threshold is set for the magnitude " +"of clients' updates. Any update exceeding this threshold is clipped back " +"to the threshold value." +msgstr "" -#: ../../source/explanation-differential-privacy.rst:54 -#: ../../source/ref-api/flwr.server.strategy.DPFedAvgFixed.rst:2 -msgid "DPFedAvgFixed" -msgstr "DPFedAvgFixed" +#: ../../source/explanation-differential-privacy.rst:100 +msgid "" +"**Adaptive Clipping** : The clipping threshold dynamically adjusts based " +"on the observed update distribution [4]. It means that the clipping value" +" is tuned during the rounds with respect to the quantile of the update " +"norm distribution." 
+msgstr "" -#: ../../source/explanation-differential-privacy.rst:56 +#: ../../source/explanation-differential-privacy.rst:102 msgid "" -"The server-side capabilities required for the original version of DP-" -"FedAvg, i.e., the one which performed fixed clipping, can be completely " -"captured with the help of wrapper logic for just the following two " -"methods of the :code:`Strategy` abstract class." +"The choice between fixed and adaptive clipping depends on various factors" +" such as privacy requirements, data distribution, model complexity, and " +"others." msgstr "" -"Les capacités côté serveur requises pour la version originale de DP-" -"FedAvg, c'est-à-dire celle qui effectue un écrêtage fixe, peuvent être " -"entièrement capturées à l'aide d'une logique d'enveloppement pour les " -"deux méthodes suivantes de la classe abstraite :code:`Strategy`." -#: ../../source/explanation-differential-privacy.rst:58 +#: ../../source/explanation-differential-privacy.rst:-1 +#: ../../source/explanation-differential-privacy.rst:105 +#: ../../source/how-to-use-differential-privacy.rst:96 +#, fuzzy +msgid "Local Differential Privacy" +msgstr "Confidentialité différentielle" + +#: ../../source/explanation-differential-privacy.rst:107 msgid "" -":code:`configure_fit()` : The config dictionary being sent by the wrapped" -" :code:`Strategy` to each client needs to be augmented with an additional" -" value equal to the clipping threshold (keyed under " -":code:`dpfedavg_clip_norm`) and, if :code:`server_side_noising=true`, " -"another one equal to the scale of the Gaussian noise that needs to be " -"added at the client (keyed under :code:`dpfedavg_noise_stddev`). This " -"entails *post*-processing of the results returned by the wrappee's " -"implementation of :code:`configure_fit()`." -msgstr "" -":code:`configure_fit()` : Le dictionnaire de configuration envoyé par la " -":code:`Strategy` enveloppée à chaque client doit être augmenté d'une " -"valeur supplémentaire égale au seuil d'écrêtage (indiqué sous " -":code:`dpfedavg_clip_norm`) et, si :code:`server_side_noising=true`, " -"d'une autre égale à l'échelle du bruit gaussien qui doit être ajouté au " -"client (indiqué sous :code:`dpfedavg_noise_stddev`)." - -#: ../../source/explanation-differential-privacy.rst:59 -#, fuzzy -msgid "" -":code:`aggregate_fit()`: We check whether any of the sampled clients " -"dropped out or failed to upload an update before the round timed out. In " -"that case, we need to abort the current round, discarding any successful " -"updates that were received, and move on to the next one. On the other " -"hand, if all clients responded successfully, we must force the averaging " -"of the updates to happen in an unweighted manner by intercepting the " -":code:`parameters` field of :code:`FitRes` for each received update and " -"setting it to 1. Furthermore, if :code:`server_side_noising=true`, each " -"update is perturbed with an amount of noise equal to what it would have " -"been subjected to had client-side noising being enabled. This entails " -"*pre*-processing of the arguments to this method before passing them on " -"to the wrappee's implementation of :code:`aggregate_fit()`." -msgstr "" -":code:`aggregate_fit()`: We check whether any of the sampled clients " -"dropped out or failed to upload an update before the round timed out. In " -"that case, we need to abort the current round, discarding any successful " -"updates that were received, and move on to the next one. 
On the other " -"hand, if all clients responded successfully, we must force the averaging " -"of the updates to happen in an unweighted manner by intercepting the " -":code:`parameters` field of :code:`FitRes` for each received update and " -"setting it to 1. Furthermore, if :code:`server_side_noising=true`, each " -"update is perturbed with an amount of noise equal to what it would have " -"been subjected to had client-side noising being enabled. This entails " -"*pre*-processing of the arguments to this method before passing them on " -"to the wrappee's implementation of :code:`aggregate_fit()`." - -#: ../../source/explanation-differential-privacy.rst:62 -msgid "" -"We can't directly change the aggregation function of the wrapped strategy" -" to force it to add noise to the aggregate, hence we simulate client-side" -" noising to implement server-side noising." -msgstr "" -"Nous ne pouvons pas modifier directement la fonction d'agrégation de la " -"stratégie enveloppée pour la forcer à ajouter du bruit à l'agrégat, c'est" -" pourquoi nous simulons le bruit côté client pour mettre en œuvre le " -"bruit côté serveur." - -#: ../../source/explanation-differential-privacy.rst:64 -msgid "" -"These changes have been put together into a class called " -":code:`DPFedAvgFixed`, whose constructor accepts the strategy being " -"decorated, the clipping threshold and the number of clients sampled every" -" round as compulsory arguments. The user is expected to specify the " -"clipping threshold since the order of magnitude of the update norms is " -"highly dependent on the model being trained and providing a default value" -" would be misleading. The number of clients sampled at every round is " -"required to calculate the amount of noise that must be added to each " -"individual update, either by the server or the clients." -msgstr "" -"Ces modifications ont été regroupées dans une classe appelée " -":code:`DPFedAvgFixed`, dont le constructeur accepte la stratégie décorée," -" le seuil d'écrêtage et le nombre de clients échantillonnés à chaque tour" -" comme arguments obligatoires. L'utilisateur est censé spécifier le seuil" -" d'écrêtage car l'ordre de grandeur des normes de mise à jour dépend " -"fortement du modèle formé et fournir une valeur par défaut serait " -"trompeur. Le nombre de clients échantillonnés à chaque tour est " -"nécessaire pour calculer la quantité de bruit qui doit être ajoutée à " -"chaque mise à jour individuelle, que ce soit par le serveur ou par les " -"clients." +"In this approach, each client is responsible for performing DP. Local DP " +"avoids the need for a fully trusted aggregator, but it should be noted " +"that local DP leads to a decrease in accuracy but better privacy in " +"comparison to central DP." +msgstr "" -#: ../../source/explanation-differential-privacy.rst:67 -#: ../../source/ref-api/flwr.server.strategy.DPFedAvgAdaptive.rst:2 -msgid "DPFedAvgAdaptive" -msgstr "DPFedAvgAdaptive" +#: ../../source/explanation-differential-privacy.rst:116 +msgid "In this explainer, we focus on two forms of achieving Local DP:" +msgstr "" -#: ../../source/explanation-differential-privacy.rst:69 +#: ../../source/explanation-differential-privacy.rst:118 msgid "" -"The additional functionality required to facilitate adaptive clipping has" -" been provided in :code:`DPFedAvgAdaptive`, a subclass of " -":code:`DPFedAvgFixed`. It overrides the above-mentioned methods to do the" -" following." 
-msgstr "" -"La fonctionnalité supplémentaire nécessaire pour faciliter l'écrêtage " -"adaptatif a été fournie dans :code:`DPFedAvgAdaptive`, une sous-classe de" -" :code:`DPFedAvgFixed`. Elle remplace les méthodes mentionnées ci-dessus " -"pour effectuer les opérations suivantes." - -#: ../../source/explanation-differential-privacy.rst:71 -msgid "" -":code:`configure_fit()` : It intercepts the config dict returned by " -":code:`super.configure_fit()` to add the key-value pair " -":code:`dpfedavg_adaptive_clip_enabled:True` to it, which the client " -"interprets as an instruction to include an indicator bit (1 if update " -"norm <= clipping threshold, 0 otherwise) in the results returned by it." -msgstr "" -":code:`configure_fit()` : Il intercepte le dict de configuration renvoyé " -"par :code:`super.configure_fit()` pour y ajouter la paire clé-valeur " -":code:`dpfedavg_adaptive_clip_enabled:True`, que le client interprète " -"comme une instruction d'inclure un bit indicateur (1 si la norme de mise " -"à jour <= seuil d'écrêtage, 0 sinon) dans les résultats qu'il renvoie." - -#: ../../source/explanation-differential-privacy.rst:73 -msgid "" -":code:`aggregate_fit()` : It follows a call to " -":code:`super.aggregate_fit()` with one to :code:`__update_clip_norm__()`," -" a procedure which adjusts the clipping threshold on the basis of the " -"indicator bits received from the sampled clients." -msgstr "" -":code:`aggregate_fit()` : Il fait suivre un appel à " -":code:`super.aggregate_fit()` d'un appel à " -":code:`__update_clip_norm__()`, une procédure qui ajuste le seuil " -"d'écrêtage sur la base des bits indicateurs reçus des clients " -"échantillonnés." - -#: ../../source/explanation-differential-privacy.rst:77 -msgid "Client-side logic" -msgstr "Logique côté client" +"Each client adds noise to the local updates before sending them to the " +"server. To achieve (:math:`\\epsilon`, :math:`\\delta`)-DP, considering " +"the sensitivity of the local model to be ∆, Gaussian noise is applied " +"with a noise scale of σ where:" +msgstr "" -#: ../../source/explanation-differential-privacy.rst:79 +#: ../../source/explanation-differential-privacy.rst:120 msgid "" -"The client-side capabilities required can be completely captured through " -"wrapper logic for just the :code:`fit()` method of the " -":code:`NumPyClient` abstract class. To be precise, we need to *post-" -"process* the update computed by the wrapped client to clip it, if " -"necessary, to the threshold value supplied by the server as part of the " -"config dictionary. In addition to this, it may need to perform some extra" -" work if either (or both) of the following keys are also present in the " -"dict." +"\\small\n" +"\\frac{∆ \\times \\sqrt{2 \\times " +"\\log\\left(\\frac{1.25}{\\delta}\\right)}}{\\epsilon}\n" +"\n" msgstr "" -"Les capacités requises côté client peuvent être entièrement capturées par" -" une logique de wrapper pour la seule méthode :code:`fit()` de la classe " -"abstraite :code:`NumPyClient`. Pour être précis, nous devons *post-" -"traiter* la mise à jour calculée par le client wrapped pour l'écrêter, si" -" nécessaire, à la valeur seuil fournie par le serveur dans le cadre du " -"dictionnaire de configuration. En plus de cela, il peut avoir besoin " -"d'effectuer un travail supplémentaire si l'une des clés suivantes (ou les" -" deux) est également présente dans le dict." 
-#: ../../source/explanation-differential-privacy.rst:81 +#: ../../source/explanation-differential-privacy.rst:125 msgid "" -":code:`dpfedavg_noise_stddev` : Generate and add the specified amount of " -"noise to the clipped update." +"Each client adds noise to the gradients of the model during the local " +"training (DP-SGD). More specifically, in this approach, gradients are " +"clipped and an amount of calibrated noise is injected into the gradients." msgstr "" -":code:`dpfedavg_noise_stddev` : Génère et ajoute la quantité de bruit " -"spécifiée à la mise à jour de l'écrêtage." -#: ../../source/explanation-differential-privacy.rst:82 +#: ../../source/explanation-differential-privacy.rst:128 msgid "" -":code:`dpfedavg_adaptive_clip_enabled` : Augment the metrics dict in the " -":code:`FitRes` object being returned to the server with an indicator bit," -" calculated as described earlier." +"Please note that these two approaches are providing privacy at different " +"levels." msgstr "" -":code:`dpfedavg_adaptive_clip_enabled` : Complète les métriques dict dans" -" l'objet :code:`FitRes` renvoyé au serveur avec un bit indicateur, " -"calculé comme décrit précédemment." -#: ../../source/explanation-differential-privacy.rst:86 -msgid "Performing the :math:`(\\epsilon, \\delta)` analysis" -msgstr "Effectuer l'analyse :math:`(\\epsilon, \\delta)`" +#: ../../source/explanation-differential-privacy.rst:131 +#, fuzzy +msgid "**References:**" +msgstr "Référence" -#: ../../source/explanation-differential-privacy.rst:88 -msgid "" -"Assume you have trained for :math:`n` rounds with sampling fraction " -":math:`q` and noise multiplier :math:`z`. In order to calculate the " -":math:`\\epsilon` value this would result in for a particular " -":math:`\\delta`, the following script may be used." +#: ../../source/explanation-differential-privacy.rst:133 +msgid "[1] Dwork et al. The Algorithmic Foundations of Differential Privacy." msgstr "" -"Supposons que tu te sois entraîné pendant :math:`n` tours avec la " -"fraction d'échantillonnage :math:`q` et le multiplicateur de bruit " -":math:`z`. Afin de calculer la valeur :math:`epsilon` qui en résulterait " -"pour un :math:`\\delta` particulier, le script suivant peut être utilisé." -#: ../../source/explanation-differential-privacy.rst:98 +#: ../../source/explanation-differential-privacy.rst:135 #, fuzzy msgid "" -"McMahan et al. \"Learning Differentially Private Recurrent Language " -"Models.\" International Conference on Learning Representations (ICLR), " -"2017." +"[2] McMahan et al. Learning Differentially Private Recurrent Language " +"Models." msgstr "" "McMahan, H. Brendan, et al. \"Learning differentially private recurrent " "language models\", arXiv preprint arXiv:1710.06963 (2017)." -#: ../../source/explanation-differential-privacy.rst:100 -#, fuzzy +#: ../../source/explanation-differential-privacy.rst:137 msgid "" -"Andrew, Galen, et al. \"Differentially Private Learning with Adaptive " -"Clipping.\" Advances in Neural Information Processing Systems (NeurIPS), " -"2021." +"[3] Geyer et al. Differentially Private Federated Learning: A Client " +"Level Perspective." +msgstr "" + +#: ../../source/explanation-differential-privacy.rst:139 +#, fuzzy +msgid "[4] Galen et al. Differentially Private Learning with Adaptive Clipping." msgstr "" "Andrew, Galen, et al. 
\"Differentially private learning with adaptive " "clipping\" Advances in Neural Information Processing Systems 34 (2021) : " @@ -5161,6 +5006,7 @@ msgid "As a reference, this document follows the above structure." msgstr "À titre de référence, ce document suit la structure ci-dessus." #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:90 +#: ../../source/ref-api/flwr.common.Metadata.rst:2 msgid "Metadata" msgstr "Métadonnées" @@ -5598,13 +5444,12 @@ msgstr "" #, fuzzy msgid "" "This can be achieved by customizing an existing strategy or by " -"`implementing a custom strategy from scratch " -"`_. " -"Here's a nonsensical example that customizes :code:`FedAvg` by adding a " -"custom ``\"hello\": \"world\"`` configuration key/value pair to the " -"config dict of a *single client* (only the first client in the list, the " -"other clients in this round to not receive this \"special\" config " -"value):" +":doc:`implementing a custom strategy from scratch `. Here's a nonsensical example that customizes :code:`FedAvg`" +" by adding a custom ``\"hello\": \"world\"`` configuration key/value pair" +" to the config dict of a *single client* (only the first client in the " +"list, the other clients in this round to not receive this \"special\" " +"config value):" msgstr "" "Ceci peut être réalisé en personnalisant une stratégie existante ou en " "`mettant en œuvre une stratégie personnalisée à partir de zéro " @@ -6048,11 +5893,12 @@ msgstr "" "modèle global actuel :code:`parameters` et :code:`config` dict" #: ../../source/how-to-implement-strategies.rst:236 +#, fuzzy msgid "" "More sophisticated implementations can use :code:`configure_fit` to " "implement custom client selection logic. A client will only participate " "in a round if the corresponding :code:`ClientProxy` is included in the " -"the list returned from :code:`configure_fit`." +"list returned from :code:`configure_fit`." msgstr "" "Les implémentations plus sophistiquées peuvent utiliser " ":code:`configure_fit` pour mettre en œuvre une logique de sélection des " @@ -6154,11 +6000,12 @@ msgstr "" "le modèle global actuel :code:`parameters` et :code:`config` dict" #: ../../source/how-to-implement-strategies.rst:283 +#, fuzzy msgid "" "More sophisticated implementations can use :code:`configure_evaluate` to " "implement custom client selection logic. A client will only participate " "in a round if the corresponding :code:`ClientProxy` is included in the " -"the list returned from :code:`configure_evaluate`." +"list returned from :code:`configure_evaluate`." msgstr "" "Les implémentations plus sophistiquées peuvent utiliser " ":code:`configure_evaluate` pour mettre en œuvre une logique de sélection " @@ -6334,9 +6181,7 @@ msgid "Install via Docker" msgstr "Installer Flower" #: ../../source/how-to-install-flower.rst:60 -msgid "" -"`How to run Flower using Docker `_" +msgid ":doc:`How to run Flower using Docker `" msgstr "" #: ../../source/how-to-install-flower.rst:63 @@ -6689,17 +6534,17 @@ msgid "Resources" msgstr "Ressources" #: ../../source/how-to-monitor-simulation.rst:234 +#, fuzzy msgid "" -"Ray Dashboard: ``_" +"Ray Dashboard: ``_" msgstr "" "Tableau de bord Ray : ``_" #: ../../source/how-to-monitor-simulation.rst:236 -msgid "" -"Ray Metrics: ``_" +#, fuzzy +msgid "Ray Metrics: ``_" msgstr "" "Ray Metrics : ``_" @@ -7695,7 +7540,8 @@ msgstr "" msgid "" "Remove \"placeholder\" methods from subclasses of ``Client`` or " "``NumPyClient``. 
If you, for example, use server-side evaluation, then " -"empty placeholder implementations of ``evaluate`` are no longer necessary." +"empty placeholder implementations of ``evaluate`` are no longer " +"necessary." msgstr "" "Supprime les méthodes \"placeholder\" des sous-classes de ``Client`` ou " "de ``NumPyClient``. Si tu utilises, par exemple, l'évaluation côté " @@ -7848,7 +7694,157 @@ msgid "" msgstr "" #: ../../source/how-to-use-built-in-mods.rst:89 -msgid "Enjoy building more robust and flexible ``ClientApp``s with mods!" +msgid "Enjoy building a more robust and flexible ``ClientApp`` with mods!" +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:2 +#, fuzzy +msgid "Use Differential Privacy" +msgstr "Confidentialité différentielle" + +#: ../../source/how-to-use-differential-privacy.rst:3 +msgid "" +"This guide explains how you can utilize differential privacy in the " +"Flower framework. If you are not yet familiar with differential privacy, " +"you can refer to :doc:`explanation-differential-privacy`." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:7 +msgid "" +"Differential Privacy in Flower is in a preview phase. If you plan to use " +"these features in a production environment with sensitive data, feel free" +" contact us to discuss your requirements and to receive guidance on how " +"to best use these features." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:12 +msgid "" +"This approach consists of two seprate phases: clipping of the updates and" +" adding noise to the aggregated model. For the clipping phase, Flower " +"framework has made it possible to decide whether to perform clipping on " +"the server side or the client side." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:15 +msgid "" +"**Server-side Clipping**: This approach has the advantage of the server " +"enforcing uniform clipping across all clients' updates and reducing the " +"communication overhead for clipping values. However, it also has the " +"disadvantage of increasing the computational load on the server due to " +"the need to perform the clipping operation for all clients." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:16 +msgid "" +"**Client-side Clipping**: This approach has the advantage of reducing the" +" computational overhead on the server. However, it also has the " +"disadvantage of lacking centralized control, as the server has less " +"control over the clipping process." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:21 +#, fuzzy +msgid "Server-side Clipping" +msgstr "Logique côté serveur" + +#: ../../source/how-to-use-differential-privacy.rst:22 +msgid "" +"For central DP with server-side clipping, there are two :code:`Strategy` " +"classes that act as wrappers around the actual :code:`Strategy` instance " +"(for example, :code:`FedAvg`). The two wrapper classes are " +":code:`DifferentialPrivacyServerSideFixedClipping` and " +":code:`DifferentialPrivacyServerSideAdaptiveClipping` for fixed and " +"adaptive clipping." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:-1 +#, fuzzy +msgid "server side clipping" +msgstr "Logique côté serveur" + +#: ../../source/how-to-use-differential-privacy.rst:31 +msgid "" +"The code sample below enables the :code:`FedAvg` strategy to use server-" +"side fixed clipping using the " +":code:`DifferentialPrivacyServerSideFixedClipping` wrapper class. 
The " +"same approach can be used with " +":code:`DifferentialPrivacyServerSideAdaptiveClipping` by adjusting the " +"corresponding input parameters." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:52 +#, fuzzy +msgid "Client-side Clipping" +msgstr "Logique côté client" + +#: ../../source/how-to-use-differential-privacy.rst:53 +msgid "" +"For central DP with client-side clipping, the server sends the clipping " +"value to selected clients on each round. Clients can use existing Flower " +":code:`Mods` to perform the clipping. Two mods are available for fixed " +"and adaptive client-side clipping: :code:`fixedclipping_mod` and " +":code:`adaptiveclipping_mod` with corresponding server-side wrappers " +":code:`DifferentialPrivacyClientSideFixedClipping` and " +":code:`DifferentialPrivacyClientSideAdaptiveClipping`." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:-1 +#, fuzzy +msgid "client side clipping" +msgstr "Logique côté client" + +#: ../../source/how-to-use-differential-privacy.rst:63 +msgid "" +"The code sample below enables the :code:`FedAvg` strategy to use " +"differential privacy with client-side fixed clipping using both the " +":code:`DifferentialPrivacyClientSideFixedClipping` wrapper class and, on " +"the client, :code:`fixedclipping_mod`:" +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:80 +msgid "" +"In addition to the server-side strategy wrapper, the :code:`ClientApp` " +"needs to configure the matching :code:`fixedclipping_mod` to perform the " +"client-side clipping:" +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:97 +msgid "" +"To utilize local differential privacy (DP) and add noise to the client " +"model parameters before transmitting them to the server in Flower, you " +"can use the `LocalDpMod`. The following hyperparameters need to be set: " +"clipping norm value, sensitivity, epsilon, and delta." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:-1 +msgid "local DP mod" +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:104 +msgid "Below is a code example that shows how to use :code:`LocalDpMod`:" +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:122 +msgid "" +"Please note that the order of mods, especially those that modify " +"parameters, is important when using multiple modifiers. Typically, " +"differential privacy (DP) modifiers should be the last to operate on " +"parameters." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:125 +msgid "Local Training using Privacy Engines" +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:126 +msgid "" +"For ensuring data instance-level privacy during local model training on " +"the client side, consider leveraging privacy engines such as Opacus and " +"TensorFlow Privacy. For examples of using Flower with these engines, " +"please refer to the Flower examples directory (`Opacus " +"`_, `Tensorflow" +" Privacy `_)." 
msgstr "" #: ../../source/how-to-use-strategies.rst:2 @@ -8004,11 +8000,11 @@ msgstr "Quickstart tutorials" msgid "How-to guides" msgstr "Guides" -#: ../../source/index.rst:97 +#: ../../source/index.rst:98 msgid "Legacy example guides" msgstr "" -#: ../../source/index.rst:108 ../../source/index.rst:112 +#: ../../source/index.rst:109 ../../source/index.rst:113 msgid "Explanations" msgstr "Explications" @@ -8016,26 +8012,26 @@ msgstr "Explications" msgid "API reference" msgstr "Référence pour l'API" -#: ../../source/index.rst:137 +#: ../../source/index.rst:138 msgid "Reference docs" msgstr "Référence pour la documentation" -#: ../../source/index.rst:153 +#: ../../source/index.rst:154 #, fuzzy msgid "Contributor tutorials" msgstr "Configuration du contributeur" -#: ../../source/index.rst:160 +#: ../../source/index.rst:161 #, fuzzy msgid "Contributor how-to guides" msgstr "Guide pour les contributeurs" -#: ../../source/index.rst:173 +#: ../../source/index.rst:174 #, fuzzy msgid "Contributor explanations" msgstr "Explications" -#: ../../source/index.rst:179 +#: ../../source/index.rst:180 #, fuzzy msgid "Contributor references" msgstr "Configuration du contributeur" @@ -8144,7 +8140,7 @@ msgstr "" "Guides orientés sur la résolutions étapes par étapes de problèmes ou " "objectifs specifiques." -#: ../../source/index.rst:110 +#: ../../source/index.rst:111 msgid "" "Understanding-oriented concept guides explain and discuss key topics and " "underlying ideas behind Flower and collaborative AI." @@ -8152,29 +8148,29 @@ msgstr "" "Guides orientés sur la compréhension et l'explication des sujets et idées" " de fonds sur lesquels sont construits Flower et l'IA collaborative." -#: ../../source/index.rst:120 +#: ../../source/index.rst:121 #, fuzzy msgid "References" msgstr "Référence" -#: ../../source/index.rst:122 +#: ../../source/index.rst:123 msgid "Information-oriented API reference and other reference material." msgstr "Référence de l'API orientée sur l'information pure." -#: ../../source/index.rst:131::1 +#: ../../source/index.rst:132::1 msgid ":py:obj:`flwr `\\" msgstr "" -#: ../../source/index.rst:131::1 flwr:1 of +#: ../../source/index.rst:132::1 flwr:1 of msgid "Flower main package." msgstr "" -#: ../../source/index.rst:148 +#: ../../source/index.rst:149 #, fuzzy msgid "Contributor docs" msgstr "Configuration du contributeur" -#: ../../source/index.rst:150 +#: ../../source/index.rst:151 #, fuzzy msgid "" "The Flower community welcomes contributions. The following docs are " @@ -8201,12 +8197,22 @@ msgstr "flower-driver-api" msgid "flower-fleet-api" msgstr "flower-fleet-api" +#: ../../source/ref-api-cli.rst:37 +#, fuzzy +msgid "flower-client-app" +msgstr "Flower ClientApp." + +#: ../../source/ref-api-cli.rst:47 +#, fuzzy +msgid "flower-server-app" +msgstr "flower-driver-api" + #: ../../source/ref-api/flwr.rst:2 #, fuzzy msgid "flwr" msgstr "Fleur" -#: ../../source/ref-api/flwr.rst:25 ../../source/ref-api/flwr.server.rst:48 +#: ../../source/ref-api/flwr.rst:25 ../../source/ref-api/flwr.server.rst:52 msgid "Modules" msgstr "" @@ -8232,7 +8238,7 @@ msgid ":py:obj:`flwr.server `\\" msgstr "" #: ../../source/ref-api/flwr.rst:35::1 -#: ../../source/ref-api/flwr.server.rst:37::1 flwr.server:1 +#: ../../source/ref-api/flwr.server.rst:41::1 flwr.server:1 #: flwr.server.server.Server:1 of #, fuzzy msgid "Flower server." 
@@ -8253,7 +8259,6 @@ msgstr "client" #: ../../source/ref-api/flwr.client.rst:13 #: ../../source/ref-api/flwr.common.rst:13 -#: ../../source/ref-api/flwr.server.driver.rst:13 #: ../../source/ref-api/flwr.server.rst:13 #: ../../source/ref-api/flwr.simulation.rst:13 #, fuzzy @@ -8293,10 +8298,10 @@ msgid "Start a Flower NumPyClient which connects to a gRPC server." msgstr "" #: ../../source/ref-api/flwr.client.rst:26 -#: ../../source/ref-api/flwr.common.rst:31 -#: ../../source/ref-api/flwr.server.driver.rst:24 -#: ../../source/ref-api/flwr.server.rst:28 +#: ../../source/ref-api/flwr.common.rst:32 +#: ../../source/ref-api/flwr.server.rst:29 #: ../../source/ref-api/flwr.server.strategy.rst:17 +#: ../../source/ref-api/flwr.server.workflow.rst:17 msgid "Classes" msgstr "" @@ -8311,7 +8316,7 @@ msgstr "" #: ../../source/ref-api/flwr.client.rst:33::1 msgid "" -":py:obj:`ClientApp `\\ \\(client\\_fn\\[\\, " +":py:obj:`ClientApp `\\ \\(\\[client\\_fn\\, " "mods\\]\\)" msgstr "" @@ -8339,8 +8344,12 @@ msgstr "" #: ../../source/ref-api/flwr.client.Client.rst:15 #: ../../source/ref-api/flwr.client.ClientApp.rst:15 #: ../../source/ref-api/flwr.client.NumPyClient.rst:15 +#: ../../source/ref-api/flwr.common.Array.rst:15 #: ../../source/ref-api/flwr.common.ClientMessage.rst:15 +#: ../../source/ref-api/flwr.common.ConfigsRecord.rst:15 +#: ../../source/ref-api/flwr.common.Context.rst:15 #: ../../source/ref-api/flwr.common.DisconnectRes.rst:15 +#: ../../source/ref-api/flwr.common.Error.rst:15 #: ../../source/ref-api/flwr.common.EvaluateIns.rst:15 #: ../../source/ref-api/flwr.common.EvaluateRes.rst:15 #: ../../source/ref-api/flwr.common.FitIns.rst:15 @@ -8349,20 +8358,32 @@ msgstr "" #: ../../source/ref-api/flwr.common.GetParametersRes.rst:15 #: ../../source/ref-api/flwr.common.GetPropertiesIns.rst:15 #: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:15 +#: ../../source/ref-api/flwr.common.Message.rst:15 +#: ../../source/ref-api/flwr.common.MessageType.rst:15 +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:15 +#: ../../source/ref-api/flwr.common.Metadata.rst:15 +#: ../../source/ref-api/flwr.common.MetricsRecord.rst:15 #: ../../source/ref-api/flwr.common.Parameters.rst:15 +#: ../../source/ref-api/flwr.common.ParametersRecord.rst:15 #: ../../source/ref-api/flwr.common.ReconnectIns.rst:15 +#: ../../source/ref-api/flwr.common.RecordSet.rst:15 #: ../../source/ref-api/flwr.common.ServerMessage.rst:15 #: ../../source/ref-api/flwr.common.Status.rst:15 #: ../../source/ref-api/flwr.server.ClientManager.rst:15 +#: ../../source/ref-api/flwr.server.Driver.rst:15 #: ../../source/ref-api/flwr.server.History.rst:15 +#: ../../source/ref-api/flwr.server.LegacyContext.rst:15 #: ../../source/ref-api/flwr.server.Server.rst:15 +#: ../../source/ref-api/flwr.server.ServerApp.rst:15 #: ../../source/ref-api/flwr.server.ServerConfig.rst:15 #: ../../source/ref-api/flwr.server.SimpleClientManager.rst:15 -#: ../../source/ref-api/flwr.server.driver.Driver.rst:15 -#: ../../source/ref-api/flwr.server.driver.GrpcDriver.rst:15 #: ../../source/ref-api/flwr.server.strategy.Bulyan.rst:15 #: ../../source/ref-api/flwr.server.strategy.DPFedAvgAdaptive.rst:15 #: ../../source/ref-api/flwr.server.strategy.DPFedAvgFixed.rst:15 +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideAdaptiveClipping.rst:15 +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideFixedClipping.rst:15 +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideAdaptiveClipping.rst:15 +#: 
../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideFixedClipping.rst:15 #: ../../source/ref-api/flwr.server.strategy.FaultTolerantFedAvg.rst:15 #: ../../source/ref-api/flwr.server.strategy.FedAdagrad.rst:15 #: ../../source/ref-api/flwr.server.strategy.FedAdam.rst:15 @@ -8380,6 +8401,9 @@ msgstr "" #: ../../source/ref-api/flwr.server.strategy.Krum.rst:15 #: ../../source/ref-api/flwr.server.strategy.QFedAvg.rst:15 #: ../../source/ref-api/flwr.server.strategy.Strategy.rst:15 +#: ../../source/ref-api/flwr.server.workflow.DefaultWorkflow.rst:15 +#: ../../source/ref-api/flwr.server.workflow.SecAggPlusWorkflow.rst:15 +#: ../../source/ref-api/flwr.server.workflow.SecAggWorkflow.rst:15 msgid "Methods" msgstr "" @@ -8459,9 +8483,12 @@ msgstr "" #: ../../source/ref-api/flwr.client.Client.rst:46 #: ../../source/ref-api/flwr.client.NumPyClient.rst:46 +#: ../../source/ref-api/flwr.common.Array.rst:28 #: ../../source/ref-api/flwr.common.ClientMessage.rst:25 #: ../../source/ref-api/flwr.common.Code.rst:19 +#: ../../source/ref-api/flwr.common.Context.rst:25 #: ../../source/ref-api/flwr.common.DisconnectRes.rst:25 +#: ../../source/ref-api/flwr.common.Error.rst:25 #: ../../source/ref-api/flwr.common.EvaluateIns.rst:25 #: ../../source/ref-api/flwr.common.EvaluateRes.rst:25 #: ../../source/ref-api/flwr.common.EventType.rst:19 @@ -8471,10 +8498,16 @@ msgstr "" #: ../../source/ref-api/flwr.common.GetParametersRes.rst:25 #: ../../source/ref-api/flwr.common.GetPropertiesIns.rst:25 #: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:25 +#: ../../source/ref-api/flwr.common.Message.rst:37 +#: ../../source/ref-api/flwr.common.MessageType.rst:25 +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:25 +#: ../../source/ref-api/flwr.common.Metadata.rst:25 #: ../../source/ref-api/flwr.common.Parameters.rst:25 #: ../../source/ref-api/flwr.common.ReconnectIns.rst:25 +#: ../../source/ref-api/flwr.common.RecordSet.rst:25 #: ../../source/ref-api/flwr.common.ServerMessage.rst:25 #: ../../source/ref-api/flwr.common.Status.rst:25 +#: ../../source/ref-api/flwr.server.LegacyContext.rst:25 #: ../../source/ref-api/flwr.server.ServerConfig.rst:25 msgid "Attributes" msgstr "" @@ -8492,14 +8525,25 @@ msgstr "" #: flwr.client.numpy_client.NumPyClient.fit #: flwr.client.numpy_client.NumPyClient.get_parameters #: flwr.client.numpy_client.NumPyClient.get_properties -#: flwr.server.app.start_server +#: flwr.common.context.Context flwr.common.message.Error +#: flwr.common.message.Message flwr.common.message.Message.create_error_reply +#: flwr.common.message.Message.create_reply flwr.common.message.Metadata +#: flwr.common.record.parametersrecord.Array flwr.server.app.start_server #: flwr.server.client_manager.ClientManager.register #: flwr.server.client_manager.ClientManager.unregister #: flwr.server.client_manager.SimpleClientManager.register #: flwr.server.client_manager.SimpleClientManager.unregister #: flwr.server.client_manager.SimpleClientManager.wait_for -#: flwr.server.driver.app.start_driver flwr.server.driver.driver.Driver +#: flwr.server.compat.app.start_driver flwr.server.driver.driver.Driver +#: flwr.server.driver.driver.Driver.create_message +#: flwr.server.driver.driver.Driver.pull_messages +#: flwr.server.driver.driver.Driver.push_messages +#: flwr.server.driver.driver.Driver.send_and_receive #: flwr.server.strategy.bulyan.Bulyan +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping +#: 
flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit #: flwr.server.strategy.fedadagrad.FedAdagrad @@ -8515,7 +8559,10 @@ msgstr "" #: flwr.server.strategy.strategy.Strategy.configure_fit #: flwr.server.strategy.strategy.Strategy.evaluate #: flwr.server.strategy.strategy.Strategy.initialize_parameters -#: flwr.simulation.app.start_simulation of +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow +#: flwr.simulation.app.start_simulation +#: flwr.simulation.run_simulation.run_simulation of #, fuzzy msgid "Parameters" msgstr "Paramètres du modèle." @@ -8534,13 +8581,17 @@ msgstr "" #: flwr.client.numpy_client.NumPyClient.fit #: flwr.client.numpy_client.NumPyClient.get_parameters #: flwr.client.numpy_client.NumPyClient.get_properties -#: flwr.server.app.start_server +#: flwr.common.message.Message.create_reply flwr.server.app.start_server #: flwr.server.client_manager.ClientManager.num_available #: flwr.server.client_manager.ClientManager.register #: flwr.server.client_manager.SimpleClientManager.num_available #: flwr.server.client_manager.SimpleClientManager.register #: flwr.server.client_manager.SimpleClientManager.wait_for -#: flwr.server.driver.app.start_driver +#: flwr.server.compat.app.start_driver +#: flwr.server.driver.driver.Driver.create_message +#: flwr.server.driver.driver.Driver.pull_messages +#: flwr.server.driver.driver.Driver.push_messages +#: flwr.server.driver.driver.Driver.send_and_receive #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit #: flwr.server.strategy.strategy.Strategy.aggregate_evaluate @@ -8565,13 +8616,17 @@ msgstr "" #: flwr.client.client.Client.get_properties #: flwr.client.numpy_client.NumPyClient.get_parameters #: flwr.client.numpy_client.NumPyClient.get_properties -#: flwr.server.app.start_server +#: flwr.common.message.Message.create_reply flwr.server.app.start_server #: flwr.server.client_manager.ClientManager.num_available #: flwr.server.client_manager.ClientManager.register #: flwr.server.client_manager.SimpleClientManager.num_available #: flwr.server.client_manager.SimpleClientManager.register #: flwr.server.client_manager.SimpleClientManager.wait_for -#: flwr.server.driver.app.start_driver +#: flwr.server.compat.app.start_driver +#: flwr.server.driver.driver.Driver.create_message +#: flwr.server.driver.driver.Driver.pull_messages +#: flwr.server.driver.driver.Driver.push_messages +#: flwr.server.driver.driver.Driver.send_and_receive #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit #: flwr.server.strategy.strategy.Strategy.aggregate_evaluate @@ -8623,23 +8678,38 @@ msgstr "" msgid "ClientApp" msgstr "client" -#: flwr.client.client_app.ClientApp:1 flwr.common.typing.ClientMessage:1 +#: flwr.client.client_app.ClientApp:1 flwr.common.constant.MessageType:1 +#: flwr.common.constant.MessageTypeLegacy:1 flwr.common.context.Context:1 +#: flwr.common.message.Error:1 flwr.common.message.Message:1 +#: flwr.common.message.Metadata:1 
flwr.common.record.parametersrecord.Array:1 +#: flwr.common.record.recordset.RecordSet:1 flwr.common.typing.ClientMessage:1 #: flwr.common.typing.DisconnectRes:1 flwr.common.typing.EvaluateIns:1 #: flwr.common.typing.EvaluateRes:1 flwr.common.typing.FitIns:1 #: flwr.common.typing.FitRes:1 flwr.common.typing.GetParametersIns:1 #: flwr.common.typing.GetParametersRes:1 flwr.common.typing.GetPropertiesIns:1 #: flwr.common.typing.GetPropertiesRes:1 flwr.common.typing.Parameters:1 #: flwr.common.typing.ReconnectIns:1 flwr.common.typing.ServerMessage:1 -#: flwr.common.typing.Status:1 flwr.server.app.ServerConfig:1 -#: flwr.server.driver.driver.Driver:1 -#: flwr.server.driver.grpc_driver.GrpcDriver:1 flwr.server.history.History:1 -#: flwr.server.server.Server:1 of +#: flwr.common.typing.Status:1 flwr.server.driver.driver.Driver:1 +#: flwr.server.history.History:1 flwr.server.server.Server:1 +#: flwr.server.server_app.ServerApp:1 flwr.server.server_config.ServerConfig:1 +#: flwr.server.workflow.default_workflows.DefaultWorkflow:1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:1 +#: of msgid "Bases: :py:class:`object`" msgstr "" -#: flwr.client.app.start_client:33 flwr.client.app.start_numpy_client:36 -#: flwr.client.client_app.ClientApp:4 flwr.server.app.start_server:41 -#: flwr.server.driver.app.start_driver:30 of +#: flwr.client.app.start_client:41 flwr.client.app.start_numpy_client:36 +#: flwr.client.client_app.ClientApp:4 +#: flwr.client.client_app.ClientApp.evaluate:4 +#: flwr.client.client_app.ClientApp.query:4 +#: flwr.client.client_app.ClientApp.train:4 flwr.server.app.start_server:41 +#: flwr.server.compat.app.start_driver:32 flwr.server.server_app.ServerApp:4 +#: flwr.server.server_app.ServerApp.main:4 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:29 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:22 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:21 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:14 +#: of #, fuzzy msgid "Examples" msgstr "Exemples de PyTorch" @@ -8663,6 +8733,34 @@ msgid "" "global attribute `app` that points to an object of type `ClientApp`." msgstr "" +#: flwr.client.client_app.ClientApp.evaluate:1::1 of +msgid ":py:obj:`evaluate `\\ \\(\\)" +msgstr "" + +#: flwr.client.client_app.ClientApp.evaluate:1 +#: flwr.client.client_app.ClientApp.evaluate:1::1 of +msgid "Return a decorator that registers the evaluate fn with the client app." +msgstr "" + +#: flwr.client.client_app.ClientApp.evaluate:1::1 of +msgid ":py:obj:`query `\\ \\(\\)" +msgstr "" + +#: flwr.client.client_app.ClientApp.evaluate:1::1 +#: flwr.client.client_app.ClientApp.query:1 of +msgid "Return a decorator that registers the query fn with the client app." +msgstr "" + +#: flwr.client.client_app.ClientApp.evaluate:1::1 of +#, fuzzy +msgid ":py:obj:`train `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: flwr.client.client_app.ClientApp.evaluate:1::1 +#: flwr.client.client_app.ClientApp.train:1 of +msgid "Return a decorator that registers the train fn with the client app." 
+msgstr "" + #: ../../source/ref-api/flwr.client.NumPyClient.rst:2 msgid "NumPyClient" msgstr "NumPyClient" @@ -8866,7 +8964,7 @@ msgid "" msgstr "" #: flwr.client.app.start_client:19 flwr.client.app.start_numpy_client:22 -#: flwr.server.driver.app.start_driver:21 of +#: flwr.server.compat.app.start_driver:21 of msgid "" "The PEM-encoded root certificates as a byte string or a path string. If " "provided, a secure connection using the certificates will be established " @@ -8886,15 +8984,29 @@ msgid "" "(experimental) - 'rest': HTTP (experimental)" msgstr "" -#: flwr.client.app.start_client:34 flwr.client.app.start_numpy_client:37 of +#: flwr.client.app.start_client:31 of +msgid "" +"The maximum number of times the client will try to connect to the server " +"before giving up in case of a connection error. If set to None, there is " +"no limit to the number of tries." +msgstr "" + +#: flwr.client.app.start_client:35 of +msgid "" +"The maximum duration before the client stops trying to connect to the " +"server in case of connection error. If set to None, there is no limit to " +"the total time." +msgstr "" + +#: flwr.client.app.start_client:42 flwr.client.app.start_numpy_client:37 of msgid "Starting a gRPC client with an insecure server connection:" msgstr "" -#: flwr.client.app.start_client:41 flwr.client.app.start_numpy_client:44 of +#: flwr.client.app.start_client:49 flwr.client.app.start_numpy_client:44 of msgid "Starting an SSL-enabled gRPC client using system certificates:" msgstr "" -#: flwr.client.app.start_client:52 flwr.client.app.start_numpy_client:52 of +#: flwr.client.app.start_client:60 flwr.client.app.start_numpy_client:52 of msgid "Starting an SSL-enabled gRPC client using provided certificates:" msgstr "" @@ -8919,77 +9031,87 @@ msgstr "" msgid "common" msgstr "commun" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 +msgid ":py:obj:`array_from_numpy `\\ \\(ndarray\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:30::1 +#: flwr.common.record.conversion_utils.array_from_numpy:1 of +#, fuzzy +msgid "Create Array from NumPy ndarray." +msgstr "Convertit l'objet des paramètres en ndarrays NumPy." + +#: ../../source/ref-api/flwr.common.rst:30::1 msgid ":py:obj:`bytes_to_ndarray `\\ \\(tensor\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.parameter.bytes_to_ndarray:1 of msgid "Deserialize NumPy ndarray from bytes." msgstr "Désérialise le tableau numérique NumPy à partir d'octets." -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 msgid "" ":py:obj:`configure `\\ \\(identifier\\[\\, " "filename\\, host\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.logger.configure:1 of msgid "Configure logging to file and/or remote log server." msgstr "" "Configure la journalisation vers un fichier et/ou un serveur de " "journalisation distant." -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 msgid "" ":py:obj:`event `\\ \\(event\\_type\\[\\, " "event\\_details\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.telemetry.event:1 of msgid "Submit create_event to ThreadPoolExecutor to avoid blocking." 
msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 msgid "" ":py:obj:`log `\\ \\(level\\, msg\\, \\*args\\, " "\\*\\*kwargs\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 logging.Logger.log:1 +#: ../../source/ref-api/flwr.common.rst:30::1 logging.Logger.log:1 #: of msgid "Log 'msg % args' with the integer severity 'level'." msgstr "Enregistre 'msg % args' avec le niveau de sévérité entier 'level'." -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 msgid ":py:obj:`ndarray_to_bytes `\\ \\(ndarray\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.parameter.ndarray_to_bytes:1 of msgid "Serialize NumPy ndarray to bytes." msgstr "Sérialise le tableau numérique NumPy en octets." -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 msgid ":py:obj:`now `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.date.now:1 of msgid "Construct a datetime from time.time() with time zone set to UTC." msgstr "" "Construit une date à partir de time.time() avec le fuseau horaire réglé " "sur UTC." -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 msgid "" ":py:obj:`ndarrays_to_parameters `\\ " "\\(ndarrays\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.parameter.ndarrays_to_parameters:1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid.ndarrays_to_parameters:1 @@ -8997,191 +9119,372 @@ msgstr "" msgid "Convert NumPy ndarrays to parameters object." msgstr "Convertit les ndarrays NumPy en objets de paramètres." -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 msgid "" ":py:obj:`parameters_to_ndarrays `\\ " "\\(parameters\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.parameter.parameters_to_ndarrays:1 of msgid "Convert parameters object to NumPy ndarrays." msgstr "Convertit l'objet des paramètres en ndarrays NumPy." -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid "" +":py:obj:`Array `\\ \\(dtype\\, shape\\, stype\\, " +"data\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.record.parametersrecord.Array:1 of +msgid "Array type." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" ":py:obj:`ClientMessage `\\ " "\\(\\[get\\_properties\\_res\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.ClientMessage:1 of msgid "ClientMessage is a container used to hold one result message." msgstr "" "ClientMessage est un conteneur utilisé pour contenir un message de " "résultat." -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`Code `\\ \\(value\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.Code:1 of msgid "Client status codes." msgstr "Codes d'état du client." 
-#: ../../source/ref-api/flwr.common.rst:52::1 -msgid ":py:obj:`DisconnectRes `\\ \\(reason\\)" +#: ../../source/ref-api/flwr.common.rst:64::1 +#, fuzzy +msgid "" +":py:obj:`ConfigsRecord `\\ " +"\\(\\[configs\\_dict\\, keep\\_input\\]\\)" msgstr "" - -#: ../../source/ref-api/flwr.common.rst:52::1 -#: flwr.common.typing.DisconnectRes:1 of +"Flower 1.0 : ``start_server(..., " +"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " +"...)``" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.record.configsrecord.ConfigsRecord:1 of +#, fuzzy +msgid "Configs record." +msgstr "Configurer les clients" + +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid ":py:obj:`Context `\\ \\(state\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.context.Context:1 of +msgid "State of your run." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid ":py:obj:`DisconnectRes `\\ \\(reason\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.typing.DisconnectRes:1 of msgid "DisconnectRes message from client to server." msgstr "Message DisconnectRes envoyé par le client au serveur." -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" ":py:obj:`EvaluateIns `\\ \\(parameters\\, " "config\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.EvaluateIns:1 of msgid "Evaluate instructions for a client." msgstr "Évaluer les instructions pour un client." -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" ":py:obj:`EvaluateRes `\\ \\(status\\, loss\\, " "num\\_examples\\, metrics\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.EvaluateRes:1 of msgid "Evaluate response from a client." msgstr "Évaluer la réponse d'un client." -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`EventType `\\ \\(value\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.telemetry.EventType:1 of msgid "Types of telemetry events." msgstr "Types d'événements télémétriques." -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`FitIns `\\ \\(parameters\\, config\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.FitIns:1 of msgid "Fit instructions for a client." msgstr "Instructions d'ajustement pour un client." -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" ":py:obj:`FitRes `\\ \\(status\\, parameters\\, " "num\\_examples\\, metrics\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.FitRes:1 of msgid "Fit response from a client." msgstr "Réponse adaptée d'un client." -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid ":py:obj:`Error `\\ \\(code\\[\\, reason\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.message.Error:1 of +msgid "A dataclass that stores information about an error that occurred." 
+msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`GetParametersIns `\\ \\(config\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.GetParametersIns:1 of msgid "Parameters request for a client." msgstr "Demande de paramètres pour un client." -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" ":py:obj:`GetParametersRes `\\ \\(status\\, " "parameters\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.GetParametersRes:1 of msgid "Response when asked to return parameters." msgstr "Réponse lorsqu'on te demande de renvoyer des paramètres." -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`GetPropertiesIns `\\ \\(config\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.GetPropertiesIns:1 of msgid "Properties request for a client." msgstr "Demande de propriétés pour un client." -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" ":py:obj:`GetPropertiesRes `\\ \\(status\\, " "properties\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.GetPropertiesRes:1 of msgid "Properties response from a client." msgstr "Réponse des propriétés d'un client." -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid "" +":py:obj:`Message `\\ \\(metadata\\[\\, content\\, " +"error\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.message.Message:1 of +msgid "State of your application from the viewpoint of the entity using it." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid ":py:obj:`MessageType `\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.constant.MessageType:1 of +msgid "Message type." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid ":py:obj:`MessageTypeLegacy `\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.constant.MessageTypeLegacy:1 of +msgid "Legacy message type." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid "" +":py:obj:`Metadata `\\ \\(run\\_id\\, " +"message\\_id\\, src\\_node\\_id\\, ...\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.message.Metadata:1 of +msgid "A dataclass holding metadata associated with the current message." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid "" +":py:obj:`MetricsRecord `\\ " +"\\(\\[metrics\\_dict\\, keep\\_input\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.record.metricsrecord.MetricsRecord:1 of +msgid "Metrics record." 
+msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`NDArray `\\" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" "alias of :py:class:`~numpy.ndarray`\\ [:py:obj:`~typing.Any`, " ":py:class:`~numpy.dtype`\\ [:py:obj:`~typing.Any`]]" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" ":py:obj:`Parameters `\\ \\(tensors\\, " "tensor\\_type\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.Parameters:1 of msgid "Model parameters." msgstr "Paramètres du modèle." -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid "" +":py:obj:`ParametersRecord `\\ " +"\\(\\[array\\_dict\\, keep\\_input\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.record.parametersrecord.ParametersRecord:1 of +#, fuzzy +msgid "Parameters record." +msgstr "Paramètres du modèle." + +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`ReconnectIns `\\ \\(seconds\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.ReconnectIns:1 of msgid "ReconnectIns message from server to client." msgstr "Message de reconnexion du serveur au client." -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid "" +":py:obj:`RecordSet `\\ " +"\\(\\[parameters\\_records\\, ...\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.record.recordset.RecordSet:1 of +msgid "RecordSet stores groups of parameters, metrics and configs." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" ":py:obj:`ServerMessage `\\ " "\\(\\[get\\_properties\\_ins\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.ServerMessage:1 of msgid "ServerMessage is a container used to hold one instruction message." msgstr "" "ServerMessage est un conteneur utilisé pour contenir un message " "d'instruction." -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`Status `\\ \\(code\\, message\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.Status:1 of msgid "Client status." msgstr "Statut du client." +#: ../../source/ref-api/flwr.common.Array.rst:2 +msgid "Array" +msgstr "" + +#: flwr.common.record.parametersrecord.Array:3 of +msgid "" +"A dataclass containing serialized data from an array-like or tensor-like " +"object along with some metadata about it." +msgstr "" + +#: flwr.common.record.parametersrecord.Array:6 of +msgid "" +"A string representing the data type of the serialised object (e.g. " +"`np.float32`)" +msgstr "" + +#: flwr.common.record.parametersrecord.Array:8 of +msgid "" +"A list representing the shape of the unserialized array-like object. This" +" is used to deserialize the data (depending on the serialization method) " +"or simply as a metadata field." +msgstr "" + +#: flwr.common.record.parametersrecord.Array:12 of +msgid "" +"A string indicating the type of serialisation mechanism used to generate " +"the bytes in `data` from an array-like or tensor-like object." 
+msgstr "" + +#: flwr.common.record.parametersrecord.Array:15 of +msgid "A buffer of bytes containing the data." +msgstr "" + +#: ../../source/ref-api/flwr.common.Array.rst:26::1 +#, fuzzy +msgid ":py:obj:`numpy `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.common.Array.rst:26::1 +#: flwr.common.record.parametersrecord.Array.numpy:1 of +#, fuzzy +msgid "Return the array as a NumPy array." +msgstr "renvoie le poids du modèle sous la forme d'une liste de ndarrays NumPy" + +#: flwr.common.record.parametersrecord.Array.numpy:1::1 of +msgid ":py:obj:`dtype `\\" +msgstr "" + +#: flwr.common.record.parametersrecord.Array.numpy:1::1 of +#, fuzzy +msgid ":py:obj:`shape `\\" +msgstr "serveur.stratégie.Stratégie" + +#: flwr.common.record.parametersrecord.Array.numpy:1::1 of +#, fuzzy +msgid ":py:obj:`stype `\\" +msgstr "serveur.stratégie.Stratégie" + +#: flwr.common.record.parametersrecord.Array.numpy:1::1 of +msgid ":py:obj:`data `\\" +msgstr "" + #: ../../source/ref-api/flwr.common.ClientMessage.rst:2 #, fuzzy msgid "ClientMessage" @@ -9241,6 +9544,106 @@ msgid "" "`\\" msgstr "" +#: ../../source/ref-api/flwr.common.ConfigsRecord.rst:2 +#, fuzzy +msgid "ConfigsRecord" +msgstr "Configurer les clients" + +#: flwr.common.record.configsrecord.ConfigsRecord:1 of +msgid "" +"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +"[:py:class:`str`, :py:obj:`~typing.Union`\\ [:py:class:`int`, " +":py:class:`float`, :py:class:`str`, :py:class:`bytes`, :py:class:`bool`, " +":py:class:`~typing.List`\\ [:py:class:`int`], :py:class:`~typing.List`\\ " +"[:py:class:`float`], :py:class:`~typing.List`\\ [:py:class:`str`], " +":py:class:`~typing.List`\\ [:py:class:`bytes`], " +":py:class:`~typing.List`\\ [:py:class:`bool`]]]" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`clear `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1 +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid "Remove all items from R." +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`count_bytes `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.configsrecord.ConfigsRecord.count_bytes:1 +#: flwr.common.record.metricsrecord.MetricsRecord.count_bytes:1 +#: flwr.common.record.parametersrecord.ParametersRecord.count_bytes:1 +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid "Return number of Bytes stored in this object." +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 +#: flwr.common.record.typeddict.TypedDict.get:1 of +msgid "d defaults to None." +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`items `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`keys `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 +#: flwr.common.record.typeddict.TypedDict.pop:1 of +msgid "If key is not found, d is returned if given, otherwise KeyError is raised." 
+msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid "" +":py:obj:`update `\\ \\(\\[E\\, " +"\\]\\*\\*F\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 +#: flwr.common.record.typeddict.TypedDict.update:1 of +msgid "Update R from dict/iterable E and F." +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`values `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.configsrecord.ConfigsRecord.count_bytes:3 of +msgid "This function counts booleans as occupying 1 Byte." +msgstr "" + +#: ../../source/ref-api/flwr.common.Context.rst:2 +msgid "Context" +msgstr "" + +#: flwr.common.context.Context:3 of +msgid "" +"Holds records added by the entity in a given run and that will stay " +"local. This means that the data it holds will never leave the system it's" +" running from. This can be used as an intermediate storage or scratchpad " +"when executing mods. It can also be used as a memory to access at " +"different points during the lifecycle of this entity (e.g. across " +"multiple rounds)" +msgstr "" + +#: ../../source/ref-api/flwr.common.Context.rst:28::1 +#, fuzzy +msgid ":py:obj:`state `\\" +msgstr "serveur.stratégie.Stratégie" + #: ../../source/ref-api/flwr.common.DisconnectRes.rst:2 msgid "DisconnectRes" msgstr "" @@ -9249,6 +9652,34 @@ msgstr "" msgid ":py:obj:`reason `\\" msgstr "" +#: ../../source/ref-api/flwr.common.Error.rst:2 +msgid "Error" +msgstr "" + +#: flwr.common.message.Error:3 of +msgid "An identifier for the error." +msgstr "" + +#: flwr.common.message.Error:5 of +msgid "A reason for why the error arose (e.g. an exception stack-trace)" +msgstr "" + +#: flwr.common.Error.code:1::1 of +msgid ":py:obj:`code `\\" +msgstr "" + +#: flwr.common.Error.code:1 flwr.common.Error.code:1::1 of +msgid "Error code." +msgstr "" + +#: flwr.common.Error.code:1::1 of +msgid ":py:obj:`reason `\\" +msgstr "" + +#: flwr.common.Error.code:1::1 flwr.common.Error.reason:1 of +msgid "Reason reported about the error." +msgstr "" + #: ../../source/ref-api/flwr.common.EvaluateIns.rst:2 #, fuzzy msgid "EvaluateIns" @@ -9472,325 +9903,960 @@ msgstr "" msgid ":py:obj:`properties `\\" msgstr "" -#: ../../source/ref-api/flwr.common.NDArray.rst:2 -msgid "NDArray" +#: ../../source/ref-api/flwr.common.Message.rst:2 +#, fuzzy +msgid "Message" +msgstr "Côté serveur" + +#: flwr.common.Message.content:1::1 flwr.common.Message.metadata:1 +#: flwr.common.message.Message:3 of +msgid "A dataclass including information about the message to be executed." msgstr "" -#: ../../source/ref-api/flwr.common.Parameters.rst:29::1 -msgid ":py:obj:`tensors `\\" +#: flwr.common.message.Message:5 of +msgid "" +"Holds records either sent by another entity (e.g. sent by the server-side" +" logic to a client, or vice-versa) or that will be sent to it." msgstr "" -#: ../../source/ref-api/flwr.common.Parameters.rst:29::1 -msgid ":py:obj:`tensor_type `\\" +#: flwr.common.message.Message:8 of +msgid "" +"A dataclass that captures information about an error that took place when" +" processing another message." 
msgstr "" -#: ../../source/ref-api/flwr.common.ReconnectIns.rst:2 -#, fuzzy -msgid "ReconnectIns" -msgstr "Collecte centralisée des données" +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +msgid "" +":py:obj:`create_error_reply `\\ " +"\\(error\\, ttl\\)" +msgstr "" -#: ../../source/ref-api/flwr.common.ReconnectIns.rst:28::1 -msgid ":py:obj:`seconds `\\" +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.create_error_reply:1 of +msgid "Construct a reply message indicating an error happened." msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:2 -#, fuzzy -msgid "ServerMessage" -msgstr "Côté serveur" +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +msgid "" +":py:obj:`create_reply `\\ \\(content\\," +" ttl\\)" +msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 -msgid ":py:obj:`evaluate_ins `\\" +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.create_reply:1 of +msgid "Create a reply to this message with specified content and TTL." msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 -msgid ":py:obj:`fit_ins `\\" +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +msgid ":py:obj:`has_content `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 -msgid "" -":py:obj:`get_parameters_ins " -"`\\" +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.has_content:1 of +msgid "Return True if message has content, else False." msgstr "" -#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 -msgid "" -":py:obj:`get_properties_ins " -"`\\" +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +msgid ":py:obj:`has_error `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.common.Status.rst:2 -#, fuzzy -msgid "Status" -msgstr "Statut du client." +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.has_error:1 of +msgid "Return True if message has an error, else False." +msgstr "" -#: ../../source/ref-api/flwr.common.Status.rst:29::1 -msgid ":py:obj:`code `\\" +#: flwr.common.Message.content:1::1 of +msgid ":py:obj:`content `\\" msgstr "" -#: ../../source/ref-api/flwr.common.Status.rst:29::1 -msgid ":py:obj:`message `\\" +#: flwr.common.Message.content:1 flwr.common.Message.content:1::1 +#: of +#, fuzzy +msgid "The content of this message." +msgstr "Évaluer la réponse d'un client." + +#: flwr.common.Message.content:1::1 of +msgid ":py:obj:`error `\\" msgstr "" -#: ../../source/ref-api/flwr.common.bytes_to_ndarray.rst:2 -msgid "bytes\\_to\\_ndarray" +#: flwr.common.Message.content:1::1 flwr.common.Message.error:1 of +msgid "Error captured by this message." msgstr "" -#: ../../source/ref-api/flwr.common.configure.rst:2 -#, fuzzy -msgid "configure" -msgstr "Configurer les clients" +#: flwr.common.Message.content:1::1 of +msgid ":py:obj:`metadata `\\" +msgstr "" -#: ../../source/ref-api/flwr.common.event.rst:2 -msgid "event" +#: flwr.common.message.Message.create_error_reply:3 of +msgid "The error that was encountered." msgstr "" -#: ../../source/ref-api/flwr.common.log.rst:2 -msgid "log" +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.Metadata.ttl:1 flwr.common.message.Message.create_error_reply:5 +#: flwr.common.message.Message.create_reply:9 flwr.common.message.Metadata:16 +#: of +msgid "Time-to-live for this message." 
msgstr "" -#: logging.Logger.log:3 of +#: flwr.common.message.Message.create_reply:3 of msgid "" -"To pass exception information, use the keyword argument exc_info with a " -"true value, e.g." +"The method generates a new `Message` as a reply to this message. It " +"inherits 'run_id', 'src_node_id', 'dst_node_id', and 'message_type' from " +"this message and sets 'reply_to_message' to the ID of this message." msgstr "" -"Pour transmettre des informations sur les exceptions, utilise l'argument " -"mot-clé exc_info avec une valeur vraie, par ex." -#: logging.Logger.log:6 of -#, python-format -msgid "logger.log(level, \"We have a %s\", \"mysterious problem\", exc_info=1)" -msgstr "logger.log(level, \"We have a %s\", \"mysterious problem\", exc_info=1)" +#: flwr.common.message.Message.create_reply:7 of +msgid "The content for the reply message." +msgstr "" -#: ../../source/ref-api/flwr.common.ndarray_to_bytes.rst:2 -msgid "ndarray\\_to\\_bytes" +#: flwr.common.message.Message.create_reply:12 of +msgid "A new `Message` instance representing the reply." msgstr "" -#: ../../source/ref-api/flwr.common.ndarrays_to_parameters.rst:2 -msgid "ndarrays\\_to\\_parameters" +#: ../../source/ref-api/flwr.common.MessageType.rst:2 +msgid "MessageType" msgstr "" -#: ../../source/ref-api/flwr.common.now.rst:2 -msgid "now" +#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 +msgid ":py:obj:`EVALUATE `\\" msgstr "" -#: ../../source/ref-api/flwr.common.parameters_to_ndarrays.rst:2 -msgid "parameters\\_to\\_ndarrays" +#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 +msgid ":py:obj:`QUERY `\\" msgstr "" -#: ../../source/ref-api/flwr.server.rst:2 -msgid "server" -msgstr "serveur" +#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 +msgid ":py:obj:`TRAIN `\\" +msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 -msgid ":py:obj:`run_driver_api `\\ \\(\\)" +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:2 +msgid "MessageTypeLegacy" msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 -#: flwr.server.app.run_driver_api:1 of -#, fuzzy -msgid "Run Flower server (Driver API)." -msgstr "flower-driver-api" +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:29::1 +msgid ":py:obj:`GET_PARAMETERS `\\" +msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 -msgid ":py:obj:`run_fleet_api `\\ \\(\\)" +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:29::1 +msgid ":py:obj:`GET_PROPERTIES `\\" msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 -#: flwr.server.app.run_fleet_api:1 of -#, fuzzy -msgid "Run Flower server (Fleet API)." -msgstr "flower-fleet-api" +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.Metadata.run_id:1 flwr.common.message.Metadata:3 of +msgid "An identifier for the current run." +msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 -msgid ":py:obj:`run_server_app `\\ \\(\\)" +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.Metadata.message_id:1 flwr.common.message.Metadata:5 of +msgid "An identifier for the current message." msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 -#: flwr.server.app.run_server_app:1 of -#, fuzzy -msgid "Run Flower server app." -msgstr "Serveur de Flower" +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.Metadata.src_node_id:1 flwr.common.message.Metadata:7 of +msgid "An identifier for the node sending this message." 
+msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 -msgid ":py:obj:`run_superlink `\\ \\(\\)" +#: flwr.common.Metadata.dst_node_id:1 +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.message.Metadata:9 of +msgid "An identifier for the node receiving this message." msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 -#: flwr.server.app.run_superlink:1 of -msgid "Run Flower server (Driver API and Fleet API)." +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.Metadata.reply_to_message:1 flwr.common.message.Metadata:11 of +msgid "An identifier for the message this message replies to." msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 +#: flwr.common.message.Metadata:13 of msgid "" -":py:obj:`start_server `\\ \\(\\*\\[\\, " -"server\\_address\\, server\\, ...\\]\\)" +"An identifier for grouping messages. In some settings, this is used as " +"the FL round." msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 -#: flwr.server.app.start_server:1 of -msgid "Start a Flower server using the gRPC transport layer." +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.Metadata.message_type:1 flwr.common.message.Metadata:18 of +msgid "A string that encodes the action to be executed on the receiving end." msgstr "" -#: ../../source/ref-api/flwr.server.rst:37::1 -msgid ":py:obj:`ClientManager `\\ \\(\\)" +#: flwr.common.message.Metadata:21 of +msgid "" +"An identifier that can be used when loading a particular data partition " +"for a ClientApp. Making use of this identifier is more relevant when " +"conducting simulations." msgstr "" -#: ../../source/ref-api/flwr.server.rst:37::1 -#: flwr.server.client_manager.ClientManager:1 of -msgid "Abstract base class for managing Flower clients." +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`dst_node_id `\\" msgstr "" -#: ../../source/ref-api/flwr.server.rst:37::1 -msgid ":py:obj:`History `\\ \\(\\)" +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`group_id `\\" msgstr "" -#: ../../source/ref-api/flwr.server.rst:37::1 -#: flwr.server.history.History:1 of +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.Metadata.group_id:1 of +msgid "An identifier for grouping messages." +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`message_id `\\" +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`message_type `\\" +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`partition_id `\\" +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.Metadata.partition_id:1 of +msgid "An identifier telling which data partition a ClientApp should use." 
+msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`reply_to_message `\\" +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`run_id `\\" +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`src_node_id `\\" +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`ttl `\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.MetricsRecord.rst:2 +msgid "MetricsRecord" +msgstr "" + +#: flwr.common.record.metricsrecord.MetricsRecord:1 of +msgid "" +"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +"[:py:class:`str`, :py:obj:`~typing.Union`\\ [:py:class:`int`, " +":py:class:`float`, :py:class:`~typing.List`\\ [:py:class:`int`], " +":py:class:`~typing.List`\\ [:py:class:`float`]]]" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`clear `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`count_bytes `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`items `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`keys `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid "" +":py:obj:`update `\\ \\(\\[E\\, " +"\\]\\*\\*F\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`values `\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.NDArray.rst:2 +msgid "NDArray" +msgstr "" + +#: ../../source/ref-api/flwr.common.Parameters.rst:29::1 +msgid ":py:obj:`tensors `\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.Parameters.rst:29::1 +msgid ":py:obj:`tensor_type `\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.ParametersRecord.rst:2 +#, fuzzy +msgid "ParametersRecord" +msgstr "Paramètres du modèle." + +#: flwr.common.record.parametersrecord.ParametersRecord:1 of +msgid "" +"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +"[:py:class:`str`, :py:class:`~flwr.common.record.parametersrecord.Array`]" +msgstr "" + +#: flwr.common.record.parametersrecord.ParametersRecord:3 of +msgid "" +"A dataclass storing named Arrays in order. This means that it holds " +"entries as an OrderedDict[str, Array]. ParametersRecord objects can be " +"viewed as an equivalent to PyTorch's state_dict, but holding serialised " +"tensors instead." 
+msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`clear `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`count_bytes `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`items `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`keys `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid "" +":py:obj:`update `\\ \\(\\[E\\, " +"\\]\\*\\*F\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`values `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.parametersrecord.ParametersRecord.count_bytes:3 of +msgid "" +"Note that a small amount of Bytes might also be included in this counting" +" that correspond to metadata of the serialized object (e.g. of NumPy " +"array) needed for deseralization." +msgstr "" + +#: ../../source/ref-api/flwr.common.ReconnectIns.rst:2 +#, fuzzy +msgid "ReconnectIns" +msgstr "Collecte centralisée des données" + +#: ../../source/ref-api/flwr.common.ReconnectIns.rst:28::1 +msgid ":py:obj:`seconds `\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.RecordSet.rst:2 +msgid "RecordSet" +msgstr "" + +#: flwr.common.RecordSet.configs_records:1::1 of +msgid ":py:obj:`configs_records `\\" +msgstr "" + +#: flwr.common.RecordSet.configs_records:1 +#: flwr.common.RecordSet.configs_records:1::1 of +msgid "Dictionary holding ConfigsRecord instances." +msgstr "" + +#: flwr.common.RecordSet.configs_records:1::1 of +msgid ":py:obj:`metrics_records `\\" +msgstr "" + +#: flwr.common.RecordSet.configs_records:1::1 +#: flwr.common.RecordSet.metrics_records:1 of +msgid "Dictionary holding MetricsRecord instances." +msgstr "" + +#: flwr.common.RecordSet.configs_records:1::1 of +msgid ":py:obj:`parameters_records `\\" +msgstr "" + +#: flwr.common.RecordSet.configs_records:1::1 +#: flwr.common.RecordSet.parameters_records:1 of +msgid "Dictionary holding ParametersRecord instances." +msgstr "" + +#: ../../source/ref-api/flwr.common.ServerMessage.rst:2 +#, fuzzy +msgid "ServerMessage" +msgstr "Côté serveur" + +#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 +msgid ":py:obj:`evaluate_ins `\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 +msgid ":py:obj:`fit_ins `\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 +msgid "" +":py:obj:`get_parameters_ins " +"`\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.ServerMessage.rst:31::1 +msgid "" +":py:obj:`get_properties_ins " +"`\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.Status.rst:2 +#, fuzzy +msgid "Status" +msgstr "Statut du client." 
+ +#: ../../source/ref-api/flwr.common.Status.rst:29::1 +msgid ":py:obj:`code `\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.Status.rst:29::1 +msgid ":py:obj:`message `\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.array_from_numpy.rst:2 +msgid "array\\_from\\_numpy" +msgstr "" + +#: ../../source/ref-api/flwr.common.bytes_to_ndarray.rst:2 +msgid "bytes\\_to\\_ndarray" +msgstr "" + +#: ../../source/ref-api/flwr.common.configure.rst:2 +#, fuzzy +msgid "configure" +msgstr "Configurer les clients" + +#: ../../source/ref-api/flwr.common.event.rst:2 +msgid "event" +msgstr "" + +#: ../../source/ref-api/flwr.common.log.rst:2 +msgid "log" +msgstr "" + +#: logging.Logger.log:3 of +msgid "" +"To pass exception information, use the keyword argument exc_info with a " +"true value, e.g." +msgstr "" +"Pour transmettre des informations sur les exceptions, utilise l'argument " +"mot-clé exc_info avec une valeur vraie, par ex." + +#: logging.Logger.log:6 of +#, python-format +msgid "logger.log(level, \"We have a %s\", \"mysterious problem\", exc_info=1)" +msgstr "logger.log(level, \"We have a %s\", \"mysterious problem\", exc_info=1)" + +#: ../../source/ref-api/flwr.common.ndarray_to_bytes.rst:2 +msgid "ndarray\\_to\\_bytes" +msgstr "" + +#: ../../source/ref-api/flwr.common.ndarrays_to_parameters.rst:2 +msgid "ndarrays\\_to\\_parameters" +msgstr "" + +#: ../../source/ref-api/flwr.common.now.rst:2 +msgid "now" +msgstr "" + +#: ../../source/ref-api/flwr.common.parameters_to_ndarrays.rst:2 +msgid "parameters\\_to\\_ndarrays" +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:2 +msgid "server" +msgstr "serveur" + +#: ../../source/ref-api/flwr.server.rst:27::1 +msgid ":py:obj:`run_driver_api `\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:27::1 +#: flwr.server.app.run_driver_api:1 of +#, fuzzy +msgid "Run Flower server (Driver API)." +msgstr "flower-driver-api" + +#: ../../source/ref-api/flwr.server.rst:27::1 +msgid ":py:obj:`run_fleet_api `\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:27::1 +#: flwr.server.app.run_fleet_api:1 of +#, fuzzy +msgid "Run Flower server (Fleet API)." +msgstr "flower-fleet-api" + +#: ../../source/ref-api/flwr.server.rst:27::1 +msgid ":py:obj:`run_server_app `\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:27::1 +#: flwr.server.run_serverapp.run_server_app:1 of +#, fuzzy +msgid "Run Flower server app." +msgstr "Serveur de Flower" + +#: ../../source/ref-api/flwr.server.rst:27::1 +msgid ":py:obj:`run_superlink `\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:27::1 +#: flwr.server.app.run_superlink:1 of +msgid "Run Flower server (Driver API and Fleet API)." +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:27::1 +msgid "" +":py:obj:`start_driver `\\ \\(\\*\\[\\, " +"server\\_address\\, server\\, ...\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:27::1 +#: flwr.server.compat.app.start_driver:1 of +#, fuzzy +msgid "Start a Flower Driver API server." +msgstr "Tout d'abord, démarre un serveur Flower :" + +#: ../../source/ref-api/flwr.server.rst:27::1 +msgid "" +":py:obj:`start_server `\\ \\(\\*\\[\\, " +"server\\_address\\, server\\, ...\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:27::1 +#: flwr.server.app.start_server:1 of +msgid "Start a Flower server using the gRPC transport layer." 
+msgstr "" + +#: ../../source/ref-api/flwr.server.rst:41::1 +msgid ":py:obj:`ClientManager `\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.server.client_manager.ClientManager:1 of +msgid "Abstract base class for managing Flower clients." +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:41::1 +#, fuzzy +msgid "" +":py:obj:`Driver `\\ " +"\\(\\[driver\\_service\\_address\\, ...\\]\\)" +msgstr "" +"Flower 1.0 : ``start_server(..., " +"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " +"...)``" + +#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.server.driver.driver.Driver:1 of +msgid "`Driver` class provides an interface to the Driver API." +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:41::1 +msgid ":py:obj:`History `\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.server.history.History:1 of msgid "History class for training and/or evaluation metrics collection." msgstr "" -#: ../../source/ref-api/flwr.server.rst:37::1 -msgid "" -":py:obj:`Server `\\ \\(\\*\\, client\\_manager\\[\\, " -"strategy\\]\\)" -msgstr "" +#: ../../source/ref-api/flwr.server.rst:41::1 +msgid "" +":py:obj:`LegacyContext `\\ \\(state\\[\\, " +"config\\, strategy\\, ...\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.server.compat.legacy_context.LegacyContext:1 of +msgid "Legacy Context." +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:41::1 +msgid "" +":py:obj:`Server `\\ \\(\\*\\, client\\_manager\\[\\, " +"strategy\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:41::1 +#, fuzzy +msgid "" +":py:obj:`ServerApp `\\ \\(\\[server\\, config\\, " +"strategy\\, ...\\]\\)" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.server.server_app.ServerApp:1 of +#, fuzzy +msgid "Flower ServerApp." +msgstr "Serveur de Flower" + +#: ../../source/ref-api/flwr.server.rst:41::1 +#, fuzzy +msgid "" +":py:obj:`ServerConfig `\\ \\(\\[num\\_rounds\\," +" round\\_timeout\\]\\)" +msgstr "" +"Flower 1.0 : ``start_server(..., " +"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " +"...)``" + +#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.server.server_config.ServerConfig:1 of +#, fuzzy +msgid "Flower server config." +msgstr "Serveur de Flower" + +#: ../../source/ref-api/flwr.server.rst:41::1 +msgid ":py:obj:`SimpleClientManager `\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.server.client_manager.SimpleClientManager:1 of +msgid "Provides a pool of available clients." +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:60::1 +#, fuzzy +msgid ":py:obj:`flwr.server.strategy `\\" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.server.rst:60::1 +#: flwr.server.strategy:1 of +msgid "Contains the strategy abstraction and different implementations." +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:60::1 +#, fuzzy +msgid ":py:obj:`flwr.server.workflow `\\" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.server.rst:60::1 +#: flwr.server.workflow:1 of +#, fuzzy +msgid "Workflows." 
+msgstr "Flux de travail" + +#: ../../source/ref-api/flwr.server.ClientManager.rst:2 +#, fuzzy +msgid "ClientManager" +msgstr "client" + +#: flwr.server.client_manager.ClientManager.all:1::1 of +msgid ":py:obj:`all `\\ \\(\\)" +msgstr "" + +#: flwr.server.client_manager.ClientManager.all:1 +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.all:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +msgid "Return all available clients." +msgstr "" + +#: flwr.server.client_manager.ClientManager.all:1::1 of +msgid ":py:obj:`num_available `\\ \\(\\)" +msgstr "" + +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.num_available:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.num_available:1 of +msgid "Return the number of available clients." +msgstr "" + +#: flwr.server.client_manager.ClientManager.all:1::1 of +msgid ":py:obj:`register `\\ \\(client\\)" +msgstr "" + +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.register:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.register:1 of +msgid "Register Flower ClientProxy instance." +msgstr "" + +#: flwr.server.client_manager.ClientManager.all:1::1 of +msgid "" +":py:obj:`sample `\\ " +"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" +msgstr "" + +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.sample:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.sample:1 of +msgid "Sample a number of Flower ClientProxy instances." +msgstr "" + +#: flwr.server.client_manager.ClientManager.all:1::1 of +msgid ":py:obj:`unregister `\\ \\(client\\)" +msgstr "" + +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.unregister:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.unregister:1 of +msgid "Unregister Flower ClientProxy instance." +msgstr "" + +#: flwr.server.client_manager.ClientManager.all:1::1 of +msgid "" +":py:obj:`wait_for `\\ " +"\\(num\\_clients\\, timeout\\)" +msgstr "" + +#: flwr.server.client_manager.ClientManager.all:1::1 +#: flwr.server.client_manager.ClientManager.wait_for:1 +#: flwr.server.client_manager.SimpleClientManager.all:1::1 +#: flwr.server.client_manager.SimpleClientManager.wait_for:1 of +msgid "Wait until at least `num_clients` are available." +msgstr "" + +#: flwr.server.client_manager.ClientManager.num_available:3 +#: flwr.server.client_manager.SimpleClientManager.num_available:3 of +msgid "**num_available** -- The number of currently available clients." +msgstr "" + +#: flwr.server.client_manager.ClientManager.register:6 +#: flwr.server.client_manager.SimpleClientManager.register:6 of +msgid "" +"**success** -- Indicating if registration was successful. False if " +"ClientProxy is already registered or can not be registered for any " +"reason." +msgstr "" + +#: flwr.server.client_manager.ClientManager.unregister:3 +#: flwr.server.client_manager.SimpleClientManager.unregister:3 of +msgid "This method is idempotent." 
+msgstr "" + +#: ../../source/ref-api/flwr.server.Driver.rst:2 +#, fuzzy +msgid "Driver" +msgstr "serveur" + +#: flwr.server.driver.driver.Driver:3 of +msgid "" +"The IPv4 or IPv6 address of the Driver API server. Defaults to " +"`\"[::]:9091\"`." +msgstr "" + +#: flwr.server.app.start_server:28 flwr.server.driver.driver.Driver:6 of +msgid "" +"Tuple containing root certificate, server certificate, and private key to" +" start a secure SSL-enabled server. The tuple is expected to have three " +"bytes elements in the following order: * CA certificate. * " +"server certificate. * server private key." +msgstr "" + +#: flwr.server.app.start_server:28 flwr.server.driver.driver.Driver:6 of +msgid "" +"Tuple containing root certificate, server certificate, and private key to" +" start a secure SSL-enabled server. The tuple is expected to have three " +"bytes elements in the following order:" +msgstr "" + +#: flwr.server.app.start_server:32 flwr.server.driver.driver.Driver:10 of +#, fuzzy +msgid "CA certificate." +msgstr "Certificats" + +#: flwr.server.app.start_server:33 flwr.server.driver.driver.Driver:11 of +#, fuzzy +msgid "server certificate." +msgstr "Certificats" + +#: flwr.server.app.start_server:34 flwr.server.driver.driver.Driver:12 of +#, fuzzy +msgid "server private key." +msgstr "stratégie.du.serveur" -#: ../../source/ref-api/flwr.server.rst:37::1 +#: flwr.server.driver.driver.Driver.close:1::1 of #, fuzzy +msgid ":py:obj:`close `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" + +#: flwr.server.driver.driver.Driver.close:1 +#: flwr.server.driver.driver.Driver.close:1::1 of +msgid "Disconnect from the SuperLink if connected." +msgstr "" + +#: flwr.server.driver.driver.Driver.close:1::1 of msgid "" -":py:obj:`ServerConfig `\\ \\(\\[num\\_rounds\\," -" round\\_timeout\\]\\)" +":py:obj:`create_message `\\ " +"\\(content\\, message\\_type\\, ...\\)" msgstr "" -"Flower 1.0 : ``start_server(..., " -"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " -"...)``" -#: ../../source/ref-api/flwr.server.rst:37::1 -#: flwr.server.app.ServerConfig:1 of -#, fuzzy -msgid "Flower server config." -msgstr "Serveur de Flower" +#: flwr.server.driver.driver.Driver.close:1::1 +#: flwr.server.driver.driver.Driver.create_message:1 of +msgid "Create a new message with specified parameters." +msgstr "" -#: ../../source/ref-api/flwr.server.rst:37::1 -msgid ":py:obj:`SimpleClientManager `\\ \\(\\)" +#: flwr.server.driver.driver.Driver.close:1::1 of +msgid ":py:obj:`get_node_ids `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.rst:37::1 -#: flwr.server.client_manager.SimpleClientManager:1 of -msgid "Provides a pool of available clients." +#: flwr.server.driver.driver.Driver.close:1::1 +#: flwr.server.driver.driver.Driver.get_node_ids:1 of +msgid "Get node IDs." msgstr "" -#: ../../source/ref-api/flwr.server.rst:56::1 -msgid ":py:obj:`flwr.server.driver `\\" +#: flwr.server.driver.driver.Driver.close:1::1 of +msgid "" +":py:obj:`pull_messages `\\ " +"\\(message\\_ids\\)" msgstr "" -#: ../../source/ref-api/flwr.server.rst:56::1 flwr.server.driver:1 -#: of -#, fuzzy -msgid "Flower driver SDK." -msgstr "Serveur de Flower" +#: flwr.server.driver.driver.Driver.close:1::1 +#: flwr.server.driver.driver.Driver.pull_messages:1 of +msgid "Pull messages based on message IDs." 
+msgstr "" -#: ../../source/ref-api/flwr.server.rst:56::1 -#, fuzzy -msgid ":py:obj:`flwr.server.strategy `\\" -msgstr "serveur.stratégie.Stratégie" +#: flwr.server.driver.driver.Driver.close:1::1 of +msgid "" +":py:obj:`push_messages `\\ " +"\\(messages\\)" +msgstr "" -#: ../../source/ref-api/flwr.server.rst:56::1 -#: flwr.server.strategy:1 of -msgid "Contains the strategy abstraction and different implementations." +#: flwr.server.driver.driver.Driver.close:1::1 +#: flwr.server.driver.driver.Driver.push_messages:1 of +msgid "Push messages to specified node IDs." msgstr "" -#: ../../source/ref-api/flwr.server.ClientManager.rst:2 +#: flwr.server.driver.driver.Driver.close:1::1 of #, fuzzy -msgid "ClientManager" -msgstr "client" +msgid "" +":py:obj:`send_and_receive `\\ " +"\\(messages\\, \\*\\[\\, timeout\\]\\)" +msgstr "" +"Flower 1.0 : ``start_server(..., " +"config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " +"...)``" -#: flwr.server.client_manager.ClientManager.all:1::1 of -msgid ":py:obj:`all `\\ \\(\\)" +#: flwr.server.driver.driver.Driver.close:1::1 +#: flwr.server.driver.driver.Driver.send_and_receive:1 of +msgid "Push messages to specified node IDs and pull the reply messages." msgstr "" -#: flwr.server.client_manager.ClientManager.all:1 -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.all:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "Return all available clients." +#: flwr.server.driver.driver.Driver.create_message:3 of +msgid "" +"This method constructs a new `Message` with given content and metadata. " +"The `run_id` and `src_node_id` will be set automatically." msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of -msgid ":py:obj:`num_available `\\ \\(\\)" +#: flwr.server.driver.driver.Driver.create_message:6 of +msgid "" +"The content for the new message. This holds records that are to be sent " +"to the destination node." msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.num_available:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.num_available:1 of -msgid "Return the number of available clients." +#: flwr.server.driver.driver.Driver.create_message:9 of +msgid "" +"The type of the message, defining the action to be executed on the " +"receiving end." msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of -msgid ":py:obj:`register `\\ \\(client\\)" +#: flwr.server.driver.driver.Driver.create_message:12 of +msgid "The ID of the destination node to which the message is being sent." msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.register:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.register:1 of -msgid "Register Flower ClientProxy instance." +#: flwr.server.driver.driver.Driver.create_message:14 of +msgid "" +"The ID of the group to which this message is associated. In some " +"settings, this is used as the FL round." msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of +#: flwr.server.driver.driver.Driver.create_message:17 of msgid "" -":py:obj:`sample `\\ " -"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" +"Time-to-live for the round trip of this message, i.e., the time from " +"sending this message to receiving a reply. 
It specifies the duration for " +"which the message and its potential reply are considered valid." msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.sample:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.sample:1 of -msgid "Sample a number of Flower ClientProxy instances." +#: flwr.server.driver.driver.Driver.create_message:22 of +msgid "" +"**message** -- A new `Message` instance with the specified content and " +"metadata." msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of -msgid ":py:obj:`unregister `\\ \\(client\\)" +#: flwr.server.driver.driver.Driver.pull_messages:3 of +msgid "" +"This method is used to collect messages from the SuperLink that " +"correspond to a set of given message IDs." msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.unregister:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.unregister:1 of -msgid "Unregister Flower ClientProxy instance." +#: flwr.server.driver.driver.Driver.pull_messages:6 of +msgid "An iterable of message IDs for which reply messages are to be retrieved." msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 of +#: flwr.server.driver.driver.Driver.pull_messages:9 of +msgid "**messages** -- An iterable of messages received." +msgstr "" + +#: flwr.server.driver.driver.Driver.push_messages:3 of msgid "" -":py:obj:`wait_for `\\ " -"\\(num\\_clients\\, timeout\\)" +"This method takes an iterable of messages and sends each message to the " +"node specified in `dst_node_id`." msgstr "" -#: flwr.server.client_manager.ClientManager.all:1::1 -#: flwr.server.client_manager.ClientManager.wait_for:1 -#: flwr.server.client_manager.SimpleClientManager.all:1::1 -#: flwr.server.client_manager.SimpleClientManager.wait_for:1 of -msgid "Wait until at least `num_clients` are available." +#: flwr.server.driver.driver.Driver.push_messages:6 +#: flwr.server.driver.driver.Driver.send_and_receive:7 of +msgid "An iterable of messages to be sent." msgstr "" -#: flwr.server.client_manager.ClientManager.num_available:3 -#: flwr.server.client_manager.SimpleClientManager.num_available:3 of -msgid "**num_available** -- The number of currently available clients." +#: flwr.server.driver.driver.Driver.push_messages:9 of +msgid "" +"**message_ids** -- An iterable of IDs for the messages that were sent, " +"which can be used to pull replies." msgstr "" -#: flwr.server.client_manager.ClientManager.register:6 -#: flwr.server.client_manager.SimpleClientManager.register:6 of +#: flwr.server.driver.driver.Driver.send_and_receive:3 of msgid "" -"**success** -- Indicating if registration was successful. False if " -"ClientProxy is already registered or can not be registered for any " -"reason." +"This method sends a list of messages to their destination node IDs and " +"then waits for the replies. It continues to pull replies until either all" +" replies are received or the specified timeout duration is exceeded." msgstr "" -#: flwr.server.client_manager.ClientManager.unregister:3 -#: flwr.server.client_manager.SimpleClientManager.unregister:3 of -msgid "This method is idempotent." +#: flwr.server.driver.driver.Driver.send_and_receive:9 of +msgid "" +"The timeout duration in seconds. If specified, the method will wait for " +"replies for this duration. 
If `None`, there is no time limit and the " +"method will wait until replies for all messages are received." +msgstr "" + +#: flwr.server.driver.driver.Driver.send_and_receive:14 of +msgid "**replies** -- An iterable of reply messages received from the SuperLink." +msgstr "" + +#: flwr.server.driver.driver.Driver.send_and_receive:18 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:53 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:60 +#: of +#, fuzzy +msgid "Notes" +msgstr "Aucun" + +#: flwr.server.driver.driver.Driver.send_and_receive:19 of +msgid "" +"This method uses `push_messages` to send the messages and `pull_messages`" +" to collect the replies. If `timeout` is set, the method may not return " +"replies for all sent messages. A message remains valid until its TTL, " +"which is not affected by `timeout`." msgstr "" #: ../../source/ref-api/flwr.server.History.rst:2 @@ -9859,6 +10925,38 @@ msgstr "" msgid "Add metrics entries (from distributed fit)." msgstr "" +#: ../../source/ref-api/flwr.server.LegacyContext.rst:2 +msgid "LegacyContext" +msgstr "" + +#: flwr.server.compat.legacy_context.LegacyContext:1 of +msgid "Bases: :py:class:`~flwr.common.context.Context`" +msgstr "" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#, fuzzy +msgid ":py:obj:`config `\\" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#, fuzzy +msgid ":py:obj:`strategy `\\" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +msgid ":py:obj:`client_manager `\\" +msgstr "" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#, fuzzy +msgid ":py:obj:`history `\\" +msgstr "serveur.stratégie.Stratégie" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +#, fuzzy +msgid ":py:obj:`state `\\" +msgstr "serveur.stratégie.Stratégie" + #: flwr.server.server.Server.client_manager:1::1 of msgid ":py:obj:`client_manager `\\ \\(\\)" msgstr "" @@ -9931,324 +11029,181 @@ msgstr "" msgid "Replace server strategy." msgstr "stratégie.du.serveur" -#: ../../source/ref-api/flwr.server.ServerConfig.rst:2 -#, fuzzy -msgid "ServerConfig" -msgstr "serveur" - -#: flwr.server.app.ServerConfig:3 of -msgid "" -"All attributes have default values which allows users to configure just " -"the ones they care about." 
-msgstr "" - -#: ../../source/ref-api/flwr.server.ServerConfig.rst:29::1 -msgid ":py:obj:`num_rounds `\\" -msgstr "" - -#: ../../source/ref-api/flwr.server.ServerConfig.rst:29::1 -msgid ":py:obj:`round_timeout `\\" -msgstr "" - -#: ../../source/ref-api/flwr.server.SimpleClientManager.rst:2 -msgid "SimpleClientManager" -msgstr "" - -#: flwr.server.client_manager.SimpleClientManager:1 of -msgid "Bases: :py:class:`~flwr.server.client_manager.ClientManager`" -msgstr "" - -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid ":py:obj:`all `\\ \\(\\)" -msgstr "" - -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "" -":py:obj:`num_available `\\" -" \\(\\)" -msgstr "" - -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "" -":py:obj:`register `\\ " -"\\(client\\)" -msgstr "" - -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "" -":py:obj:`sample `\\ " -"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" -msgstr "" - -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "" -":py:obj:`unregister `\\ " -"\\(client\\)" -msgstr "" - -#: flwr.server.client_manager.SimpleClientManager.all:1::1 of -msgid "" -":py:obj:`wait_for `\\ " -"\\(num\\_clients\\[\\, timeout\\]\\)" -msgstr "" - -#: flwr.server.client_manager.SimpleClientManager.wait_for:3 of -msgid "" -"Blocks until the requested number of clients is available or until a " -"timeout is reached. Current timeout default: 1 day." -msgstr "" - -#: flwr.server.client_manager.SimpleClientManager.wait_for:6 of -msgid "The number of clients to wait for." -msgstr "" - -#: flwr.server.client_manager.SimpleClientManager.wait_for:8 of -msgid "The time in seconds to wait for, defaults to 86400 (24h)." -msgstr "" - -#: flwr.server.client_manager.SimpleClientManager.wait_for:11 of -msgid "**success**" -msgstr "" - -#: ../../source/ref-api/flwr.server.driver.rst:2 +#: ../../source/ref-api/flwr.server.ServerApp.rst:2 #, fuzzy -msgid "driver" +msgid "ServerApp" msgstr "serveur" -#: ../../source/ref-api/flwr.server.driver.rst:22::1 -msgid "" -":py:obj:`start_driver `\\ \\(\\*\\[\\, " -"server\\_address\\, server\\, ...\\]\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.driver.rst:22::1 -#: flwr.server.driver.app.start_driver:1 of -#, fuzzy -msgid "Start a Flower Driver API server." -msgstr "Tout d'abord, démarre un serveur Flower :" - -#: ../../source/ref-api/flwr.server.driver.rst:30::1 -msgid "" -":py:obj:`Driver `\\ " -"\\(\\[driver\\_service\\_address\\, ...\\]\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.driver.rst:30::1 -#: flwr.server.driver.driver.Driver:1 of -msgid "`Driver` class provides an interface to the Driver API." -msgstr "" - -#: ../../source/ref-api/flwr.server.driver.rst:30::1 -msgid "" -":py:obj:`GrpcDriver `\\ " -"\\(\\[driver\\_service\\_address\\, ...\\]\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.driver.rst:30::1 -#: flwr.server.driver.grpc_driver.GrpcDriver:1 of -msgid "`GrpcDriver` provides access to the gRPC Driver API/service." -msgstr "" - -#: ../../source/ref-api/flwr.server.driver.Driver.rst:2 +#: flwr.server.server_app.ServerApp:5 of #, fuzzy -msgid "Driver" -msgstr "serveur" - -#: flwr.server.driver.driver.Driver:3 of -msgid "" -"The IPv4 or IPv6 address of the Driver API server. Defaults to " -"`\"[::]:9091\"`." 
-msgstr "" - -#: flwr.server.app.start_server:28 flwr.server.driver.driver.Driver:6 of -msgid "" -"Tuple containing root certificate, server certificate, and private key to" -" start a secure SSL-enabled server. The tuple is expected to have three " -"bytes elements in the following order: * CA certificate. * " -"server certificate. * server private key." -msgstr "" +msgid "Use the `ServerApp` with an existing `Strategy`:" +msgstr "Utilise une stratégie existante" -#: flwr.server.app.start_server:28 flwr.server.driver.driver.Driver:6 of -msgid "" -"Tuple containing root certificate, server certificate, and private key to" -" start a secure SSL-enabled server. The tuple is expected to have three " -"bytes elements in the following order:" +#: flwr.server.server_app.ServerApp:15 of +msgid "Use the `ServerApp` with a custom main function:" msgstr "" -#: flwr.server.app.start_server:32 flwr.server.driver.driver.Driver:10 of +#: flwr.server.server_app.ServerApp.main:1::1 of #, fuzzy -msgid "CA certificate." -msgstr "Certificats" +msgid ":py:obj:`main `\\ \\(\\)" +msgstr "serveur.stratégie.Stratégie" -#: flwr.server.app.start_server:33 flwr.server.driver.driver.Driver:11 of -#, fuzzy -msgid "server certificate." -msgstr "Certificats" +#: flwr.server.server_app.ServerApp.main:1 +#: flwr.server.server_app.ServerApp.main:1::1 of +msgid "Return a decorator that registers the main fn with the server app." +msgstr "" -#: flwr.server.app.start_server:34 flwr.server.driver.driver.Driver:12 of +#: ../../source/ref-api/flwr.server.ServerConfig.rst:2 #, fuzzy -msgid "server private key." -msgstr "stratégie.du.serveur" +msgid "ServerConfig" +msgstr "serveur" -#: flwr.server.driver.driver.Driver.get_nodes:1::1 of -msgid ":py:obj:`get_nodes `\\ \\(\\)" +#: flwr.server.server_config.ServerConfig:3 of +msgid "" +"All attributes have default values which allows users to configure just " +"the ones they care about." msgstr "" -#: flwr.server.driver.driver.Driver.get_nodes:1 -#: flwr.server.driver.driver.Driver.get_nodes:1::1 of -msgid "Get node IDs." +#: ../../source/ref-api/flwr.server.ServerConfig.rst:29::1 +msgid ":py:obj:`num_rounds `\\" msgstr "" -#: flwr.server.driver.driver.Driver.get_nodes:1::1 of -msgid "" -":py:obj:`pull_task_res `\\ " -"\\(task\\_ids\\)" +#: ../../source/ref-api/flwr.server.ServerConfig.rst:29::1 +msgid ":py:obj:`round_timeout `\\" msgstr "" -#: flwr.server.driver.driver.Driver.get_nodes:1::1 -#: flwr.server.driver.driver.Driver.pull_task_res:1 -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 -#: flwr.server.driver.grpc_driver.GrpcDriver.pull_task_res:1 of -msgid "Get task results." +#: ../../source/ref-api/flwr.server.SimpleClientManager.rst:2 +msgid "SimpleClientManager" msgstr "" -#: flwr.server.driver.driver.Driver.get_nodes:1::1 of -msgid "" -":py:obj:`push_task_ins `\\ " -"\\(task\\_ins\\_list\\)" +#: flwr.server.client_manager.SimpleClientManager:1 of +msgid "Bases: :py:class:`~flwr.server.client_manager.ClientManager`" +msgstr "" + +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +msgid ":py:obj:`all `\\ \\(\\)" msgstr "" -#: flwr.server.driver.driver.Driver.get_nodes:1::1 -#: flwr.server.driver.driver.Driver.push_task_ins:1 -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 -#: flwr.server.driver.grpc_driver.GrpcDriver.push_task_ins:1 of -msgid "Schedule tasks." 
+#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +msgid "" +":py:obj:`num_available `\\" +" \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.driver.GrpcDriver.rst:2 -msgid "GrpcDriver" +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +msgid "" +":py:obj:`register `\\ " +"\\(client\\)" msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 of -msgid ":py:obj:`connect `\\ \\(\\)" +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +msgid "" +":py:obj:`sample `\\ " +"\\(num\\_clients\\[\\, min\\_num\\_clients\\, criterion\\]\\)" msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1 -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 of -msgid "Connect to the Driver API." +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of +msgid "" +":py:obj:`unregister `\\ " +"\\(client\\)" msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 of +#: flwr.server.client_manager.SimpleClientManager.all:1::1 of msgid "" -":py:obj:`create_run `\\ " -"\\(req\\)" +":py:obj:`wait_for `\\ " +"\\(num\\_clients\\[\\, timeout\\]\\)" msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 -#: flwr.server.driver.grpc_driver.GrpcDriver.create_run:1 of -#, fuzzy -msgid "Request for run ID." -msgstr "Demande pour une nouvelle Flower Baseline" +#: flwr.server.client_manager.SimpleClientManager.wait_for:3 of +msgid "" +"Blocks until the requested number of clients is available or until a " +"timeout is reached. Current timeout default: 1 day." +msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 of -msgid ":py:obj:`disconnect `\\ \\(\\)" +#: flwr.server.client_manager.SimpleClientManager.wait_for:6 of +msgid "The number of clients to wait for." msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 -#: flwr.server.driver.grpc_driver.GrpcDriver.disconnect:1 of -msgid "Disconnect from the Driver API." +#: flwr.server.client_manager.SimpleClientManager.wait_for:8 of +msgid "The time in seconds to wait for, defaults to 86400 (24h)." msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 of -msgid ":py:obj:`get_nodes `\\ \\(req\\)" +#: flwr.server.client_manager.SimpleClientManager.wait_for:11 of +msgid "**success**" msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 -#: flwr.server.driver.grpc_driver.GrpcDriver.get_nodes:1 of +#: ../../source/ref-api/flwr.server.run_driver_api.rst:2 #, fuzzy -msgid "Get client IDs." -msgstr "Moteur client Edge" +msgid "run\\_driver\\_api" +msgstr "flower-driver-api" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 of -msgid "" -":py:obj:`pull_task_res `\\ " -"\\(req\\)" +#: ../../source/ref-api/flwr.server.run_fleet_api.rst:2 +msgid "run\\_fleet\\_api" msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 of -msgid "" -":py:obj:`push_task_ins `\\ " -"\\(req\\)" +#: ../../source/ref-api/flwr.server.run_server_app.rst:2 +msgid "run\\_server\\_app" msgstr "" -#: ../../source/ref-api/flwr.server.driver.start_driver.rst:2 +#: ../../source/ref-api/flwr.server.run_superlink.rst:2 +#, fuzzy +msgid "run\\_superlink" +msgstr "flower-superlink" + +#: ../../source/ref-api/flwr.server.start_driver.rst:2 #, fuzzy msgid "start\\_driver" msgstr "start_client" -#: flwr.server.driver.app.start_driver:3 of +#: flwr.server.compat.app.start_driver:3 of msgid "" "The IPv4 or IPv6 address of the Driver API server. Defaults to " "`\"[::]:8080\"`." 
msgstr "" -#: flwr.server.driver.app.start_driver:6 of +#: flwr.server.compat.app.start_driver:6 of msgid "" "A server implementation, either `flwr.server.Server` or a subclass " "thereof. If no instance is provided, then `start_driver` will create one." msgstr "" -#: flwr.server.app.start_server:9 flwr.server.driver.app.start_driver:10 +#: flwr.server.app.start_server:9 flwr.server.compat.app.start_driver:10 #: flwr.simulation.app.start_simulation:28 of msgid "" "Currently supported values are `num_rounds` (int, default: 1) and " "`round_timeout` in seconds (float, default: None)." msgstr "" -#: flwr.server.app.start_server:12 flwr.server.driver.app.start_driver:13 of +#: flwr.server.app.start_server:12 flwr.server.compat.app.start_driver:13 of msgid "" "An implementation of the abstract base class " "`flwr.server.strategy.Strategy`. If no strategy is provided, then " "`start_server` will use `flwr.server.strategy.FedAvg`." msgstr "" -#: flwr.server.driver.app.start_driver:17 of +#: flwr.server.compat.app.start_driver:17 of msgid "" "An implementation of the class `flwr.server.ClientManager`. If no " "implementation is provided, then `start_driver` will use " "`flwr.server.SimpleClientManager`." msgstr "" -#: flwr.server.app.start_server:37 flwr.server.driver.app.start_driver:26 of +#: flwr.server.compat.app.start_driver:25 of +msgid "The Driver object to use." +msgstr "" + +#: flwr.server.app.start_server:37 flwr.server.compat.app.start_driver:28 of msgid "**hist** -- Object containing training and evaluation metrics." msgstr "" -#: flwr.server.driver.app.start_driver:31 of +#: flwr.server.compat.app.start_driver:33 of msgid "Starting a driver that connects to an insecure server:" msgstr "" -#: flwr.server.driver.app.start_driver:35 of +#: flwr.server.compat.app.start_driver:37 of msgid "Starting a driver that connects to an SSL-enabled server:" msgstr "" -#: ../../source/ref-api/flwr.server.run_driver_api.rst:2 -#, fuzzy -msgid "run\\_driver\\_api" -msgstr "flower-driver-api" - -#: ../../source/ref-api/flwr.server.run_fleet_api.rst:2 -msgid "run\\_fleet\\_api" -msgstr "" - -#: ../../source/ref-api/flwr.server.run_server_app.rst:2 -msgid "run\\_server\\_app" -msgstr "" - -#: ../../source/ref-api/flwr.server.run_superlink.rst:2 -#, fuzzy -msgid "run\\_superlink" -msgstr "flower-superlink" - #: ../../source/ref-api/flwr.server.start_server.rst:2 #, fuzzy msgid "start\\_server" @@ -10296,25 +11251,99 @@ msgstr "Démarrer le serveur" msgid "strategy" msgstr "stratégie.du.serveur" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`FaultTolerantFedAvg " -"`\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +":py:obj:`Bulyan `\\ \\(\\*\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg:1 of -msgid "Configurable fault-tolerant FedAvg strategy implementation." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.bulyan.Bulyan:1 of +#, fuzzy +msgid "Bulyan strategy." +msgstr "Stratégies intégrées" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`DPFedAvgAdaptive `\\ " +"\\(strategy\\, num\\_sampled\\_clients\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:1 of +msgid "Wrapper for configuring a Strategy for DP with Adaptive Clipping." 
+msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`DPFedAvgFixed `\\ " +"\\(strategy\\, num\\_sampled\\_clients\\, ...\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:1 of +msgid "Wrapper for configuring a Strategy for DP with Fixed Clipping." +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`DifferentialPrivacyClientSideAdaptiveClipping " +"`\\ " +"\\(...\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:1 +#: of +msgid "Strategy wrapper for central DP with client-side adaptive clipping." +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`DifferentialPrivacyServerSideAdaptiveClipping " +"`\\ " +"\\(...\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:1 +#: of +msgid "Strategy wrapper for central DP with server-side adaptive clipping." +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`DifferentialPrivacyClientSideFixedClipping " +"`\\ " +"\\(...\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:1 +#: of +msgid "Strategy wrapper for central DP with client-side fixed clipping." +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`DifferentialPrivacyServerSideFixedClipping " +"`\\ " +"\\(...\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:1 +#: of +msgid "Strategy wrapper for central DP with server-side fixed clipping." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" ":py:obj:`FedAdagrad `\\ \\(\\*\\[\\, " "fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.fedadagrad.FedAdagrad:1 of #, fuzzy msgid "FedAdagrad strategy - Adaptive Federated Optimization using Adagrad." @@ -10322,201 +11351,179 @@ msgstr "" "`FedAdam` et `FedAdam` correspondent à la dernière version de l'article " "sur l'optimisation fédérée adaptative." -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" ":py:obj:`FedAdam `\\ \\(\\*\\[\\, " "fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.fedadam.FedAdam:1 of msgid "FedAdam - Adaptive Federated Optimization using Adam." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" ":py:obj:`FedAvg `\\ \\(\\*\\[\\, " "fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.fedavg.FedAvg:1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid:1 of #, fuzzy msgid "Federated Averaging strategy." msgstr "Stratégie de moyenne fédérée." 
-#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -msgid "" -":py:obj:`FedXgbNnAvg `\\ \\(\\*args\\, " -"\\*\\*kwargs\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:1 of -msgid "Configurable FedXgbNnAvg strategy implementation." -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -msgid "" -":py:obj:`FedXgbBagging `\\ " -"\\(\\[evaluate\\_function\\]\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging:1 of -msgid "Configurable FedXgbBagging strategy implementation." -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -msgid "" -":py:obj:`FedXgbCyclic `\\ " -"\\(\\*\\*kwargs\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic:1 of -msgid "Configurable FedXgbCyclic strategy implementation." -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" ":py:obj:`FedAvgAndroid `\\ " "\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" ":py:obj:`FedAvgM `\\ \\(\\*\\[\\, " "fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.fedavgm.FedAvgM:1 of #, fuzzy msgid "Federated Averaging with Momentum strategy." msgstr "Stratégie de moyenne fédérée." -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`FedMedian `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedmedian.FedMedian:1 of +#, fuzzy +msgid "Configurable FedMedian strategy implementation." +msgstr "Configuration de l'évaluation fédérée" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" ":py:obj:`FedOpt `\\ \\(\\*\\[\\, " "fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.fedopt.FedOpt:1 of #, fuzzy msgid "Federated Optim strategy." msgstr "Stratégie de moyenne fédérée." -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" ":py:obj:`FedProx `\\ \\(\\*\\[\\, " "fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.fedprox.FedProx:1 of #, fuzzy msgid "Federated Optimization strategy." msgstr "Stratégie de moyenne fédérée." -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`FedYogi `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +":py:obj:`FedTrimmedAvg `\\ " +"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.fedyogi.FedYogi:1 of -msgid "FedYogi [Reddi et al., 2020] strategy." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:1 of +msgid "Federated Averaging with Trimmed Mean [Dong Yin, et al., 2021]." 
msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`QFedAvg `\\ \\(\\*\\[\\, " -"q\\_param\\, qffl\\_learning\\_rate\\, ...\\]\\)" +":py:obj:`FedXgbBagging `\\ " +"\\(\\[evaluate\\_function\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.qfedavg.QFedAvg:1 of -msgid "Configurable QFedAvg strategy implementation." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging:1 of +msgid "Configurable FedXgbBagging strategy implementation." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`FedMedian `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +":py:obj:`FedXgbCyclic `\\ " +"\\(\\*\\*kwargs\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.fedmedian.FedMedian:1 of -#, fuzzy -msgid "Configurable FedMedian strategy implementation." -msgstr "Configuration de l'évaluation fédérée" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic:1 of +msgid "Configurable FedXgbCyclic strategy implementation." +msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`FedTrimmedAvg `\\ " -"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" +":py:obj:`FedXgbNnAvg `\\ \\(\\*args\\, " +"\\*\\*kwargs\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:1 of -msgid "Federated Averaging with Trimmed Mean [Dong Yin, et al., 2021]." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:1 of +msgid "Configurable FedXgbNnAvg strategy implementation." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`Krum `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" +":py:obj:`FedYogi `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.krum.Krum:1 of -msgid "Krum [Blanchard et al., 2017] strategy." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedyogi.FedYogi:1 of +msgid "FedYogi [Reddi et al., 2020] strategy." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`Bulyan `\\ \\(\\*\\, " -"fraction\\_fit\\, fraction\\_evaluate\\, ...\\)" +":py:obj:`FaultTolerantFedAvg " +"`\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.bulyan.Bulyan:1 of -#, fuzzy -msgid "Bulyan strategy." -msgstr "Stratégies intégrées" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg:1 of +msgid "Configurable fault-tolerant FedAvg strategy implementation." 
+msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`DPFedAvgAdaptive `\\ " -"\\(strategy\\, num\\_sampled\\_clients\\)" +":py:obj:`Krum `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:1 of -msgid "Wrapper for configuring a Strategy for DP with Adaptive Clipping." + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.krum.Krum:1 of +msgid "Krum [Blanchard et al., 2017] strategy." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`DPFedAvgFixed `\\ " -"\\(strategy\\, num\\_sampled\\_clients\\, ...\\)" +":py:obj:`QFedAvg `\\ \\(\\*\\[\\, " +"q\\_param\\, qffl\\_learning\\_rate\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:1 of -msgid "Wrapper for configuring a Strategy for DP with Fixed Clipping." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.qfedavg.QFedAvg:1 of +msgid "Configurable QFedAvg strategy implementation." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 #, fuzzy msgid ":py:obj:`Strategy `\\ \\(\\)" msgstr "serveur.stratégie.Stratégie" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.strategy.Strategy:1 of msgid "Abstract base class for server strategy implementations." msgstr "" @@ -10719,6 +11726,14 @@ msgid "" "parameters\\, ...\\)" msgstr "" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.configure_evaluate:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.configure_evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.configure_evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.configure_evaluate:1 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 #: flwr.server.strategy.fedavg.FedAvg.configure_evaluate:1 @@ -10741,6 +11756,14 @@ msgid "" "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.configure_fit:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.configure_fit:1 +#: 
flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.configure_fit:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.configure_fit:1 #: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive.configure_fit:1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 @@ -10835,6 +11858,10 @@ msgstr "" msgid "Return the sample size and the required number of available clients." msgstr "" +#: ../../source/ref-api/flwr.server.strategy.DPFedAvgAdaptive.rst:2 +msgid "DPFedAvgAdaptive" +msgstr "DPFedAvgAdaptive" + #: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:1 of msgid "Bases: :py:class:`~flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed`" msgstr "" @@ -10852,6 +11879,14 @@ msgid "" "\\(server\\_round\\, results\\, ...\\)" msgstr "" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 #: of @@ -10901,6 +11936,14 @@ msgid "" "\\(server\\_round\\, parameters\\)" msgstr "" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.evaluate:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.evaluate:1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.evaluate:1 of msgid "Evaluate model parameters using an evaluation function from the strategy." 
@@ -10914,6 +11957,14 @@ msgid "" "\\(client\\_manager\\)" msgstr "" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.initialize_parameters:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.initialize_parameters:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.initialize_parameters:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.initialize_parameters:1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.initialize_parameters:1 of msgid "Initialize global model parameters using given strategy." @@ -10948,6 +11999,14 @@ msgid "" "round of federated evaluation." msgstr "" +#: ../../source/ref-api/flwr.server.strategy.DPFedAvgFixed.rst:2 +msgid "DPFedAvgFixed" +msgstr "DPFedAvgFixed" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:1 #: flwr.server.strategy.fedavg.FedAvg:1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid:1 of @@ -11005,28 +12064,414 @@ msgid "" "\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" +msgstr "" + +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:3 of +msgid "" +"Configuration of the next training round includes information related to " +"DP, such as clip norm and noise stddev." +msgstr "" + +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:13 +#: flwr.server.strategy.strategy.Strategy.configure_fit:10 of +msgid "" +"**fit_configuration** -- A list of tuples. Each tuple in the list " +"identifies a `ClientProxy` and the `FitIns` for this particular " +"`ClientProxy`. If a particular `ClientProxy` is not included in this " +"list, it means that this `ClientProxy` will not participate in the next " +"round of federated learning." +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideAdaptiveClipping.rst:2 +msgid "DifferentialPrivacyClientSideAdaptiveClipping" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:3 +#: of +msgid "Use `adaptiveclipping_mod` modifier at the client side." 
+msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:5 +#: of +msgid "" +"In comparison to `DifferentialPrivacyServerSideAdaptiveClipping`, which " +"performs clipping on the server-side, " +"`DifferentialPrivacyClientSideAdaptiveClipping` expects clipping to " +"happen on the client-side, usually by using the built-in " +"`adaptiveclipping_mod`." +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:10 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:3 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:10 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:3 +#: of +msgid "The strategy to which DP functionalities will be added by this wrapper." +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:12 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:5 +#: of +msgid "The noise multiplier for the Gaussian mechanism for model updates." +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:14 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:7 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:17 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:10 +#: of +msgid "The number of clients that are sampled on each round." +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:16 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:9 +#: of +msgid "" +"The initial value of clipping norm. Defaults to 0.1. Andrew et al. " +"recommends to set to 0.1." +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:19 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:12 +#: of +msgid "The desired quantile of updates which should be clipped. Defaults to 0.5." +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:21 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:14 +#: of +msgid "" +"The learning rate for the clipping norm adaptation. Defaults to 0.2. " +"Andrew et al. recommends to set to 0.2." +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:24 +#: of +msgid "" +"The stddev of the noise added to the count of updates currently below the" +" estimate. Andrew et al. 
recommends to set to `expected_num_records/20`" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:30 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:23 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:22 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:15 +#: of +#, fuzzy +msgid "Create a strategy:" +msgstr "stratégie.du.serveur" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:34 +#: of +msgid "" +"Wrap the strategy with the " +"`DifferentialPrivacyClientSideAdaptiveClipping` wrapper:" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:40 +#: of +msgid "On the client, add the `adaptiveclipping_mod` to the client-side mods:" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_fit:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_fit:1 +#: of +#, fuzzy +msgid "Aggregate training results and update clip norms." +msgstr "Résultats globaux de l'évaluation." + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideFixedClipping.rst:2 +#, fuzzy +msgid "DifferentialPrivacyClientSideFixedClipping" +msgstr "Confidentialité différentielle" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:3 +#: of +msgid "Use `fixedclipping_mod` modifier at the client side." 
+msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:5 +#: of +msgid "" +"In comparison to `DifferentialPrivacyServerSideFixedClipping`, which " +"performs clipping on the server-side, " +"`DifferentialPrivacyClientSideFixedClipping` expects clipping to happen " +"on the client-side, usually by using the built-in `fixedclipping_mod`." +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:12 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:5 +#: of +msgid "" +"The noise multiplier for the Gaussian mechanism for model updates. A " +"value of 1.0 or higher is recommended for strong privacy." +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:15 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:8 +#: of +msgid "The value of the clipping norm." +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:26 +#: of +msgid "" +"Wrap the strategy with the `DifferentialPrivacyClientSideFixedClipping` " +"wrapper:" +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:32 +#: of +msgid "On the client, add the `fixedclipping_mod` to the client-side mods:" +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_fit:1 +#: of +#, fuzzy +msgid "Add noise to the aggregated parameters." +msgstr "Puis sérialise le résultat agrégé :" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideAdaptiveClipping.rst:2 +msgid "DifferentialPrivacyServerSideAdaptiveClipping" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:17 +#: of +msgid "" +"The standard deviation of the noise added to the count of updates below " +"the estimate. Andrew et al. 
recommends to set to " +"`expected_num_records/20`" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:27 +#: of +msgid "" +"Wrap the strategy with the DifferentialPrivacyServerSideAdaptiveClipping " +"wrapper" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideFixedClipping.rst:2 +#, fuzzy +msgid "DifferentialPrivacyServerSideFixedClipping" +msgstr "Confidentialité différentielle" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:19 +#: of +msgid "" +"Wrap the strategy with the DifferentialPrivacyServerSideFixedClipping " +"wrapper" +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_fit:1 +#: of +msgid "Compute the updates, clip, and pass them for aggregation." 
+msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" +msgstr "" + +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: of msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:3 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" -"Configuration of the next training round includes information related to " -"DP, such as clip norm and noise stddev." +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:13 -#: flwr.server.strategy.strategy.Strategy.configure_fit:10 of -msgid "" -"**fit_configuration** -- A list of tuples. Each tuple in the list " -"identifies a `ClientProxy` and the `FitIns` for this particular " -"`ClientProxy`. If a particular `ClientProxy` is not included in this " -"list, it means that this `ClientProxy` will not participate in the next " -"round of federated learning." +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_fit:3 +#: of +msgid "Afterward, add noise to the aggregated parameters." msgstr "" #: ../../source/ref-api/flwr.server.strategy.FaultTolerantFedAvg.rst:2 @@ -11312,6 +12757,10 @@ msgid "" "Defaults to 1.0." msgstr "" +#: flwr.server.strategy.fedavg.FedAvg:33 of +msgid "Enable (True) or disable (False) in-place aggregation of model updates." +msgstr "" + #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " @@ -12359,72 +13808,518 @@ msgid "" "update, there should be an `Exception` in `failures`." msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:13 -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:13 of -msgid "Exceptions that occurred while the server was waiting for client updates." +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:13 +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:13 of +msgid "Exceptions that occurred while the server was waiting for client updates." +msgstr "" + +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:16 of +msgid "" +"**aggregation_result** -- The aggregated evaluation result. Aggregation " +"typically uses some variant of a weighted average." +msgstr "" + +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:5 of +msgid "" +"Successful updates from the previously selected and configured clients. " +"Each pair of `(ClientProxy, FitRes)` constitutes a successful update from" +" one of the previously selected clients. Not that not all previously " +"selected clients are necessarily included in this list: a client might " +"drop out and not submit a result. For each client that did not submit an " +"update, there should be an `Exception` in `failures`." 
+msgstr "" + +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:17 of +msgid "" +"**parameters** -- If parameters are returned, then the server will treat " +"these as the new global model parameters (i.e., it will replace the " +"previous parameters with the ones returned from this method). If `None` " +"is returned (e.g., because there were only failures and no viable " +"results) then the server will no update the previous model parameters, " +"the updates received in this round are discarded, and the global model " +"parameters remain the same." +msgstr "" + +#: flwr.server.strategy.strategy.Strategy.evaluate:3 of +msgid "" +"This function can be used to perform centralized (i.e., server-side) " +"evaluation of model parameters." +msgstr "" + +#: flwr.server.strategy.strategy.Strategy.evaluate:11 of +msgid "" +"**evaluation_result** -- The evaluation result, usually a Tuple " +"containing loss and a dictionary containing task-specific metrics (e.g., " +"accuracy)." +msgstr "" + +#: flwr.server.strategy.strategy.Strategy.initialize_parameters:6 of +msgid "" +"**parameters** -- If parameters are returned, then the server will treat " +"these as the initial global model parameters." +msgstr "" + +#: ../../source/ref-api/flwr.server.workflow.rst:2 +#, fuzzy +msgid "workflow" +msgstr "Flux de travail" + +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +msgid "" +":py:obj:`DefaultWorkflow `\\ " +"\\(\\[fit\\_workflow\\, ...\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.workflow.default_workflows.DefaultWorkflow:1 of +msgid "Default workflow in Flower." +msgstr "" + +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +msgid "" +":py:obj:`SecAggPlusWorkflow `\\ " +"\\(num\\_shares\\, ...\\[\\, ...\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:1 +#: of +msgid "The workflow for the SecAgg+ protocol." +msgstr "" + +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +msgid "" +":py:obj:`SecAggWorkflow `\\ " +"\\(reconstruction\\_threshold\\, \\*\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:1 of +msgid "The workflow for the SecAgg protocol." +msgstr "" + +#: ../../source/ref-api/flwr.server.workflow.DefaultWorkflow.rst:2 +#, fuzzy +msgid "DefaultWorkflow" +msgstr "Flux de travail" + +#: ../../source/ref-api/flwr.server.workflow.SecAggPlusWorkflow.rst:2 +#, fuzzy +msgid "SecAggPlusWorkflow" +msgstr "Flux de travail" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:3 +#: of +msgid "" +"The SecAgg+ protocol ensures the secure summation of integer vectors " +"owned by multiple parties, without accessing any individual integer " +"vector. This workflow allows the server to compute the weighted average " +"of model parameters across all clients, ensuring individual contributions" +" remain private. This is achieved by clients sending both, a weighting " +"factor and a weighted version of the locally updated parameters, both of " +"which are masked for privacy. Specifically, each client uploads \"[w, w *" +" params]\" with masks, where weighting factor 'w' is the number of " +"examples ('num_examples') and 'params' represents the model parameters " +"('parameters') from the client's `FitRes`. 
The server then aggregates " +"these contributions to compute the weighted average of model parameters." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:14 +#: of +msgid "" +"The protocol involves four main stages: - 'setup': Send SecAgg+ " +"configuration to clients and collect their public keys. - 'share keys': " +"Broadcast public keys among clients and collect encrypted secret" +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:17 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:17 +#: of +msgid "key shares." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:18 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:18 +#: of +msgid "" +"'collect masked vectors': Forward encrypted secret key shares to target " +"clients and collect masked model parameters." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:20 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:20 +#: of +msgid "" +"'unmask': Collect secret key shares to decrypt and aggregate the model " +"parameters." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:22 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:22 +#: of +msgid "" +"Only the aggregated model parameters are exposed and passed to " +"`Strategy.aggregate_fit`, ensuring individual data privacy." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:25 +#: of +msgid "" +"The number of shares into which each client's private key is split under " +"the SecAgg+ protocol. If specified as a float, it represents the " +"proportion of all selected clients, and the number of shares will be set " +"dynamically in the run time. A private key can be reconstructed from " +"these shares, allowing for the secure aggregation of model updates. Each " +"client sends one share to each of its neighbors while retaining one." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:25 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:32 +#: of +msgid "" +"The minimum number of shares required to reconstruct a client's private " +"key, or, if specified as a float, it represents the proportion of the " +"total number of shares needed for reconstruction. This threshold ensures " +"privacy by allowing for the recovery of contributions from dropped " +"clients during aggregation, without compromising individual client data." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:31 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:38 +#: of +msgid "" +"The maximum value of the weight that can be assigned to any single " +"client's update during the weighted average calculation on the server " +"side, e.g., in the FedAvg algorithm." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:35 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:42 +#: of +msgid "" +"The range within which model parameters are clipped before quantization. " +"This parameter ensures each model parameter is bounded within " +"[-clipping_range, clipping_range], facilitating quantization." 
+msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:39 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:46 +#: of +msgid "" +"The size of the range into which floating-point model parameters are " +"quantized, mapping each parameter to an integer in [0, " +"quantization_range-1]. This facilitates cryptographic operations on the " +"model updates." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:43 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:50 +#: of +msgid "" +"The range of values from which random mask entries are uniformly sampled " +"([0, modulus_range-1]). `modulus_range` must be less than 4294967296. " +"Please use 2**n values for `modulus_range` to prevent overflow issues." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:47 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:54 +#: of +msgid "" +"The timeout duration in seconds. If specified, the workflow will wait for" +" replies for this duration each time. If `None`, there is no time limit " +"and the workflow will wait until replies for all messages are received." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:61 +#: of +msgid "" +"Generally, higher `num_shares` means more robust to dropouts while " +"increasing the computational costs; higher `reconstruction_threshold` " +"means better privacy guarantees but less tolerance to dropouts." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:58 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:64 +#: of +msgid "Too large `max_weight` may compromise the precision of the quantization." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:59 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:65 +#: of +msgid "`modulus_range` must be 2**n and larger than `quantization_range`." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:66 +#: of +msgid "" +"When `num_shares` is a float, it is interpreted as the proportion of all " +"selected clients, and hence the number of shares will be determined in " +"the runtime. This allows for dynamic adjustment based on the total number" +" of participating clients." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:69 +#: of +msgid "" +"Similarly, when `reconstruction_threshold` is a float, it is interpreted " +"as the proportion of the number of shares needed for the reconstruction " +"of a private key. This feature enables flexibility in setting the " +"security threshold relative to the number of distributed shares." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:73 +#: of +msgid "" +"`num_shares`, `reconstruction_threshold`, and the quantization parameters" +" (`clipping_range`, `quantization_range`, `modulus_range`) play critical " +"roles in balancing privacy, robustness, and efficiency within the SecAgg+" +" protocol." 
+msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of +msgid "" +":py:obj:`collect_masked_vectors_stage " +"`\\" +" \\(driver\\, ...\\)" +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of +msgid "Execute the 'collect masked vectors' stage." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of +msgid "" +":py:obj:`setup_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.setup_stage:1 +#: of +msgid "Execute the 'setup' stage." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of +msgid "" +":py:obj:`share_keys_stage " +"`\\ " +"\\(driver\\, context\\, state\\)" +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.share_keys_stage:1 +#: of +msgid "Execute the 'share keys' stage." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of +msgid "" +":py:obj:`unmask_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.unmask_stage:1 +#: of +msgid "Execute the 'unmask' stage." +msgstr "" + +#: ../../source/ref-api/flwr.server.workflow.SecAggWorkflow.rst:2 +#, fuzzy +msgid "SecAggWorkflow" +msgstr "Flux de travail" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:1 of +msgid "" +"Bases: " +":py:class:`~flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow`" +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:3 of +msgid "" +"The SecAgg protocol ensures the secure summation of integer vectors owned" +" by multiple parties, without accessing any individual integer vector. " +"This workflow allows the server to compute the weighted average of model " +"parameters across all clients, ensuring individual contributions remain " +"private. This is achieved by clients sending both, a weighting factor and" +" a weighted version of the locally updated parameters, both of which are " +"masked for privacy. Specifically, each client uploads \"[w, w * params]\"" +" with masks, where weighting factor 'w' is the number of examples " +"('num_examples') and 'params' represents the model parameters " +"('parameters') from the client's `FitRes`. The server then aggregates " +"these contributions to compute the weighted average of model parameters." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:14 of +msgid "" +"The protocol involves four main stages: - 'setup': Send SecAgg " +"configuration to clients and collect their public keys. 
- 'share keys': " +"Broadcast public keys among clients and collect encrypted secret" +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:54 of +msgid "" +"Each client's private key is split into N shares under the SecAgg " +"protocol, where N is the number of selected clients." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:56 of +msgid "" +"Generally, higher `reconstruction_threshold` means better privacy " +"guarantees but less tolerance to dropouts." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:60 of +msgid "" +"When `reconstruction_threshold` is a float, it is interpreted as the " +"proportion of the number of all selected clients needed for the " +"reconstruction of a private key. This feature enables flexibility in " +"setting the security threshold relative to the number of selected " +"clients." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:64 of +msgid "" +"`reconstruction_threshold`, and the quantization parameters " +"(`clipping_range`, `quantization_range`, `modulus_range`) play critical " +"roles in balancing privacy, robustness, and efficiency within the SecAgg " +"protocol." +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of +msgid "" +":py:obj:`collect_masked_vectors_stage " +"`\\ " +"\\(driver\\, ...\\)" +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of +msgid "" +":py:obj:`setup_stage `\\" +" \\(driver\\, context\\, state\\)" +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of +msgid "" +":py:obj:`share_keys_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" +msgstr "" + +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of +msgid "" +":py:obj:`unmask_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" +msgstr "" + +#: ../../source/ref-api/flwr.simulation.rst:2 +#, fuzzy +msgid "simulation" +msgstr "Simulation de moniteur" + +#: ../../source/ref-api/flwr.simulation.rst:19::1 +msgid "" +":py:obj:`start_simulation `\\ \\(\\*\\," +" client\\_fn\\[\\, ...\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.simulation.rst:19::1 +#: flwr.simulation.app.start_simulation:1 of +#, fuzzy +msgid "Start a Ray-based Flower simulation server." +msgstr "Simulation de moniteur" + +#: ../../source/ref-api/flwr.simulation.rst:19::1 +msgid "" +":py:obj:`run_simulation_from_cli " +"`\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.simulation.rst:19::1 +#: flwr.simulation.run_simulation.run_simulation_from_cli:1 of +msgid "Run Simulation Engine from the CLI." msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:16 of +#: ../../source/ref-api/flwr.simulation.rst:19::1 msgid "" -"**aggregation_result** -- The aggregated evaluation result. Aggregation " -"typically uses some variant of a weighted average." +":py:obj:`run_simulation `\\ " +"\\(server\\_app\\, client\\_app\\, ...\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:5 of -msgid "" -"Successful updates from the previously selected and configured clients. " -"Each pair of `(ClientProxy, FitRes)` constitutes a successful update from" -" one of the previously selected clients. 
Not that not all previously " -"selected clients are necessarily included in this list: a client might " -"drop out and not submit a result. For each client that did not submit an " -"update, there should be an `Exception` in `failures`." +#: ../../source/ref-api/flwr.simulation.rst:19::1 +#: flwr.simulation.run_simulation.run_simulation:1 of +msgid "Run a Flower App using the Simulation Engine." msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:17 of +#: ../../source/ref-api/flwr.simulation.run_simulation.rst:2 +#, fuzzy +msgid "run\\_simulation" +msgstr "Simulation de moniteur" + +#: flwr.simulation.run_simulation.run_simulation:3 of msgid "" -"**parameters** -- If parameters are returned, then the server will treat " -"these as the new global model parameters (i.e., it will replace the " -"previous parameters with the ones returned from this method). If `None` " -"is returned (e.g., because there were only failures and no viable " -"results) then the server will no update the previous model parameters, " -"the updates received in this round are discarded, and the global model " -"parameters remain the same." +"The `ServerApp` to be executed. It will send messages to different " +"`ClientApp` instances running on different (virtual) SuperNodes." msgstr "" -#: flwr.server.strategy.strategy.Strategy.evaluate:3 of +#: flwr.simulation.run_simulation.run_simulation:6 of msgid "" -"This function can be used to perform centralized (i.e., server-side) " -"evaluation of model parameters." +"The `ClientApp` to be executed by each of the SuperNodes. It will receive" +" messages sent by the `ServerApp`." msgstr "" -#: flwr.server.strategy.strategy.Strategy.evaluate:11 of +#: flwr.simulation.run_simulation.run_simulation:9 of msgid "" -"**evaluation_result** -- The evaluation result, usually a Tuple " -"containing loss and a dictionary containing task-specific metrics (e.g., " -"accuracy)." +"Number of nodes that run a ClientApp. They can be sampled by a Driver in " +"the ServerApp and receive a Message describing what the ClientApp should " +"perform." msgstr "" -#: flwr.server.strategy.strategy.Strategy.initialize_parameters:6 of +#: flwr.simulation.run_simulation.run_simulation:13 of +msgid "A simulation backend that runs `ClientApp`s." +msgstr "" + +#: flwr.simulation.run_simulation.run_simulation:15 of msgid "" -"**parameters** -- If parameters are returned, then the server will treat " -"these as the initial global model parameters." +"'A dictionary, e.g {\"\": , \"\": } to " +"configure a backend. Values supported in are those included by " +"`flwr.common.typing.ConfigsRecordValues`." msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:2 -#, fuzzy -msgid "simulation" -msgstr "Simulation de moniteur" +#: flwr.simulation.run_simulation.run_simulation:19 of +msgid "" +"A boolean to indicate whether to enable GPU growth on the main thread. " +"This is desirable if you make use of a TensorFlow model on your " +"`ServerApp` while having your `ClientApp` running on the same GPU. " +"Without enabling this, you might encounter an out-of-memory error because" +" TensorFlow, by default, allocates all GPU memory. Read more about how " +"`tf.config.experimental.set_memory_growth()` works in the TensorFlow " +"documentation: https://www.tensorflow.org/api/stable." 
+msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:17::1 +#: flwr.simulation.run_simulation.run_simulation:26 of msgid "" -":py:obj:`start_simulation `\\ \\(\\*\\," -" client\\_fn\\[\\, ...\\]\\)" +"When diabled, only INFO, WARNING and ERROR log messages will be shown. If" +" enabled, DEBUG-level logs will be displayed." msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:17::1 -#: flwr.simulation.app.start_simulation:1 of +#: ../../source/ref-api/flwr.simulation.run_simulation_from_cli.rst:2 #, fuzzy -msgid "Start a Ray-based Flower simulation server." +msgid "run\\_simulation\\_from\\_cli" msgstr "Simulation de moniteur" #: ../../source/ref-api/flwr.simulation.start_simulation.rst:2 @@ -12522,7 +14417,7 @@ msgstr "" msgid "" "Optionally specify the type of actor to use. The actor object, which " "persists throughout the simulation, will be the process in charge of " -"running the clients' jobs (i.e. their `fit()` method)." +"executing a ClientApp wrapping input argument `client_fn`." msgstr "" #: flwr.simulation.app.start_simulation:54 of @@ -13635,9 +15530,9 @@ msgstr "" #: ../../source/ref-changelog.md:220 msgid "" "Much effort went into a completely restructured Flower docs experience. " -"The documentation on [flower.ai/docs](flower.ai/docs) is now divided " -"into Flower Framework, Flower Baselines, Flower Android SDK, Flower iOS " -"SDK, and code example projects." +"The documentation on [flower.ai/docs](https://flower.ai/docs) is now " +"divided into Flower Framework, Flower Baselines, Flower Android SDK, " +"Flower iOS SDK, and code example projects." msgstr "" #: ../../source/ref-changelog.md:222 @@ -13975,15 +15870,15 @@ msgid "" "gradient boosting to improve model accuracy. We added a new `FedXgbNnAvg`" " " "[strategy](https://github.com/adap/flower/tree/main/src/py/flwr/server/strategy/fedxgb_nn_avg.py)," -" and a [code " -"example](https://github.com/adap/flower/tree/main/examples/xgboost-quickstart)" -" that demonstrates the usage of this new strategy in an XGBoost project." +" and a [code example](https://github.com/adap/flower/tree/main/examples" +"/xgboost-quickstart) that demonstrates the usage of this new strategy in " +"an XGBoost project." msgstr "" "Nous avons ajouté une nouvelle [stratégie] `FedXgbNnAvg` " "(https://github.com/adap/flower/tree/main/src/py/flwr/server/strategy/fedxgb_nn_avg.py)," " et un [exemple de code] " -"(https://github.com/adap/flower/tree/main/examples/xgboost-quickstart)" -" qui démontre l'utilisation de cette nouvelle stratégie dans un projet " +"(https://github.com/adap/flower/tree/main/examples/xgboost-quickstart) " +"qui démontre l'utilisation de cette nouvelle stratégie dans un projet " "XGBoost." #: ../../source/ref-changelog.md:300 @@ -14199,12 +16094,14 @@ msgstr "" msgid "" "TabNet is a powerful and flexible framework for training machine learning" " models on tabular data. We now have a federated example using Flower: " -"[quickstart-tabnet](https://github.com/adap/flower/tree/main/examples/quickstart-tabnet)." +"[quickstart-tabnet](https://github.com/adap/flower/tree/main/examples" +"/quickstart-tabnet)." msgstr "" "TabNet est un cadre puissant et flexible pour former des modèles " "d'apprentissage automatique sur des données tabulaires. Nous avons " -"maintenant un exemple fédéré utilisant Flower : " -"[quickstart-tabnet](https://github.com/adap/flower/tree/main/examples/quickstart-tabnet)." 
+"maintenant un exemple fédéré utilisant Flower : [quickstart-" +"tabnet](https://github.com/adap/flower/tree/main/examples/quickstart-" +"tabnet)." #: ../../source/ref-changelog.md:334 msgid "" @@ -14396,12 +16293,14 @@ msgstr "" msgid "" "A new code example (`quickstart-fastai`) demonstrates federated learning " "with [fastai](https://www.fast.ai/) and Flower. You can find it here: " -"[quickstart-fastai](https://github.com/adap/flower/tree/main/examples/quickstart-fastai)." +"[quickstart-fastai](https://github.com/adap/flower/tree/main/examples" +"/quickstart-fastai)." msgstr "" "Un nouvel exemple de code (`quickstart-fastai`) démontre l'apprentissage " "fédéré avec [fastai](https://www.fast.ai/) et Flower. Tu peux le trouver " -"ici : " -"[quickstart-fastai](https://github.com/adap/flower/tree/main/examples/quickstart-fastai)." +"ici : [quickstart-" +"fastai](https://github.com/adap/flower/tree/main/examples/quickstart-" +"fastai)." #: ../../source/ref-changelog.md:376 msgid "" @@ -14723,8 +16622,8 @@ msgid "" "[Client and NumPyClient](https://flower.ai/docs/framework/tutorial-" "customize-the-client-pytorch.html)" msgstr "" -"[Client et NumPyClient] (https://flower.ai/docs/tutorial/Flower-4" -"-Client-and-NumPyClient-PyTorch.html)" +"[Client et NumPyClient] (https://flower.ai/docs/tutorial/Flower-4-Client-" +"and-NumPyClient-PyTorch.html)" #: ../../source/ref-changelog.md:435 msgid "" @@ -14845,12 +16744,14 @@ msgstr "" #: ../../source/ref-changelog.md:453 msgid "" "A new code example (`quickstart-pandas`) demonstrates federated analytics" -" with Pandas and Flower. You can find it here: " -"[quickstart-pandas](https://github.com/adap/flower/tree/main/examples/quickstart-pandas)." +" with Pandas and Flower. You can find it here: [quickstart-" +"pandas](https://github.com/adap/flower/tree/main/examples/quickstart-" +"pandas)." msgstr "" "Un nouvel exemple de code (`quickstart-pandas`) démontre l'analyse " -"fédérée avec Pandas et Flower. Tu peux le trouver ici : " -"[quickstart-pandas](https://github.com/adap/flower/tree/main/examples/quickstart-pandas)." +"fédérée avec Pandas et Flower. Tu peux le trouver ici : [quickstart-" +"pandas](https://github.com/adap/flower/tree/main/examples/quickstart-" +"pandas)." #: ../../source/ref-changelog.md:455 msgid "" @@ -14949,9 +16850,8 @@ msgid "" "never contributed on GitHub before, this is the perfect place to start!" msgstr "" "L'un des points forts est le nouveau [guide du premier contributeur] " -"(https://flower.ai/docs/first-time-contributors.html) : si tu n'as " -"jamais contribué sur GitHub auparavant, c'est l'endroit idéal pour " -"commencer !" +"(https://flower.ai/docs/first-time-contributors.html) : si tu n'as jamais" +" contribué sur GitHub auparavant, c'est l'endroit idéal pour commencer !" #: ../../source/ref-changelog.md:477 msgid "v1.1.0 (2022-10-31)" @@ -15847,14 +17747,15 @@ msgstr "" "[#914](https://github.com/adap/flower/pull/914))" #: ../../source/ref-changelog.md:660 +#, fuzzy msgid "" "The first preview release of Flower Baselines has arrived! We're " "kickstarting Flower Baselines with implementations of FedOpt (FedYogi, " "FedAdam, FedAdagrad), FedBN, and FedAvgM. Check the documentation on how " "to use [Flower Baselines](https://flower.ai/docs/using-baselines.html). " "With this first preview release we're also inviting the community to " -"[contribute their own baselines](https://flower.ai/docs/contributing-" -"baselines.html)." 
+"[contribute their own baselines](https://flower.ai/docs/baselines/how-to-" +"contribute-baselines.html)." msgstr "" "La première version préliminaire de Flower Baselines est arrivée ! Nous " "démarrons Flower Baselines avec des implémentations de FedOpt (FedYogi, " @@ -16743,10 +18644,11 @@ msgstr "" "métriques spécifiques à une tâche sur le serveur." #: ../../source/ref-changelog.md:845 +#, fuzzy msgid "" "Custom metric dictionaries are now used in two user-facing APIs: they are" " returned from Strategy methods `aggregate_fit`/`aggregate_evaluate` and " -"they enable evaluation functions passed to build-in strategies (via " +"they enable evaluation functions passed to built-in strategies (via " "`eval_fn`) to return more than two evaluation metrics. Strategies can " "even return *aggregated* metrics dictionaries for the server to keep " "track of." @@ -16760,8 +18662,9 @@ msgstr "" "*agrégées* pour que le serveur puisse en garder la trace." #: ../../source/ref-changelog.md:847 +#, fuzzy msgid "" -"Stratey implementations should migrate their `aggregate_fit` and " +"Strategy implementations should migrate their `aggregate_fit` and " "`aggregate_evaluate` methods to the new return type (e.g., by simply " "returning an empty `{}`), server-side evaluation functions should migrate" " from `return loss, accuracy` to `return loss, {\"accuracy\": accuracy}`." @@ -17294,9 +19197,7 @@ msgstr "" #: ../../source/ref-example-projects.rst:26 #, fuzzy -msgid "" -"`Quickstart TensorFlow (Tutorial) `_" +msgid ":doc:`Quickstart TensorFlow (Tutorial) `" msgstr "" "`Quickstart TensorFlow (Tutorial) `_" @@ -17333,9 +19234,7 @@ msgstr "" #: ../../source/ref-example-projects.rst:37 #, fuzzy -msgid "" -"`Quickstart PyTorch (Tutorial) `_" +msgid ":doc:`Quickstart PyTorch (Tutorial) `" msgstr "" "`Quickstart PyTorch (Tutorial) `_" @@ -17366,9 +19265,8 @@ msgstr "" #: ../../source/ref-example-projects.rst:46 #, fuzzy msgid "" -"`PyTorch: From Centralized To Federated (Tutorial) " -"`_" +":doc:`PyTorch: From Centralized To Federated (Tutorial) `" msgstr "" "`PyTorch : De la centralisation à la fédération (Tutoriel) " "`_." msgstr "" @@ -17514,10 +19413,12 @@ msgid "ImageNet-2012 Image Classification" msgstr "ImageNet-2012 Classification des images" #: ../../source/ref-example-projects.rst:117 +#, fuzzy msgid "" -"`ImageNet-2012 `_ is one of the major computer" -" vision datasets. The Flower ImageNet example uses PyTorch to train a " -"ResNet-18 classifier in a federated learning setup with ten clients." +"`ImageNet-2012 `_ is one of the major " +"computer vision datasets. The Flower ImageNet example uses PyTorch to " +"train a ResNet-18 classifier in a federated learning setup with ten " +"clients." msgstr "" "`ImageNet-2012 `_ est l'un des principaux " "ensembles de données de vision par ordinateur. L'exemple Flower ImageNet " @@ -17589,7 +19490,8 @@ msgstr "" "posées sur l'apprentissage fédéré avec Flower." #: ../../source/ref-faq.rst -msgid ":fa:`eye,mr-1` Can Flower run on Juptyter Notebooks / Google Colab?" +#, fuzzy +msgid ":fa:`eye,mr-1` Can Flower run on Jupyter Notebooks / Google Colab?" msgstr "" ":fa:`eye,mr-1` Flower peut-il fonctionner sur les ordinateurs portables " "Juptyter / Google Colab ?" @@ -17652,13 +19554,13 @@ msgstr "" #, fuzzy msgid "" "Yes, it does. Please take a look at our `blog post " -"`_ or check out the code examples:" +"`_ or check out the code examples:" msgstr "" "Oui. Jetez un coup d'œil à notre `blog post " -"`_ ou consultez l'`exemple de code Android sur GitHub" -" `_." 
+"`_ ou consultez l'`exemple de code Android sur GitHub " +"`_." #: ../../source/ref-faq.rst:21 msgid "" @@ -17701,8 +19603,9 @@ msgstr "" "`_." #: ../../source/ref-faq.rst:30 +#, fuzzy msgid "" -"`Flower meets KOSMoS `_." msgstr "" "`Flower rencontre KOSMoS `_ ." msgstr "" "Si tu veux voir tout ce qui est mis ensemble, tu devrais consulter " "l'exemple de code complet : " @@ -18261,7 +20162,7 @@ msgstr "" "huggingface](https://github.com/adap/flower/tree/main/examples" "/quickstart-huggingface)." -#: ../../source/tutorial-quickstart-huggingface.rst:227 +#: ../../source/tutorial-quickstart-huggingface.rst:226 msgid "" "Of course, this is a very basic example, and a lot can be added or " "modified, it was just to showcase how simply we could federate a Hugging " @@ -18272,7 +20173,7 @@ msgstr "" "simplicité on pouvait fédérer un flux de travail Hugging Face à l'aide de" " Flower." -#: ../../source/tutorial-quickstart-huggingface.rst:230 +#: ../../source/tutorial-quickstart-huggingface.rst:229 msgid "" "Note that in this example we used :code:`PyTorch`, but we could have very" " well used :code:`TensorFlow`." @@ -18304,9 +20205,9 @@ msgstr "" #, fuzzy msgid "" "First of all, for running the Flower Python server, it is recommended to " -"create a virtual environment and run everything within a `virtualenv " -"`_. For the Flower " -"client implementation in iOS, it is recommended to use Xcode as our IDE." +"create a virtual environment and run everything within a :doc:`virtualenv" +" `. For the Flower client " +"implementation in iOS, it is recommended to use Xcode as our IDE." msgstr "" "Tout d'abord, il est recommandé de créer un environnement virtuel et de " "tout exécuter au sein d'un `virtualenv `_. As a result, we would " -"encourage you to use other ML frameworks alongise Flower, for example, " +"encourage you to use other ML frameworks alongside Flower, for example, " "PyTorch. This tutorial might be removed in future versions of Flower." msgstr "" @@ -18517,10 +20418,10 @@ msgstr "" #: ../../source/tutorial-quickstart-mxnet.rst:14 #: ../../source/tutorial-quickstart-scikitlearn.rst:12 +#, fuzzy msgid "" "It is recommended to create a virtual environment and run everything " -"within this `virtualenv `_." +"within this :doc:`virtualenv `." msgstr "" "Il est recommandé de créer un environnement virtuel et de tout exécuter " "dans ce `virtualenv `_." +"everything within a :doc:`virtualenv `." msgstr "" "Tout d'abord, il est recommandé de créer un environnement virtuel et de " "tout exécuter au sein d'un `virtualenv `_, " -"a popular image classification dataset of handwritten digits for machine " -"learning. The utility :code:`utils.load_mnist()` downloads the training " -"and test data. The training set is split afterwards into 10 partitions " -"with :code:`utils.partition()`." +"We load the MNIST dataset from `OpenML " +"`_, a popular " +"image classification dataset of handwritten digits for machine learning. " +"The utility :code:`utils.load_mnist()` downloads the training and test " +"data. The training set is split afterwards into 10 partitions with " +":code:`utils.partition()`." msgstr "" "Nous chargeons l'ensemble de données MNIST de `OpenML " "`_, un ensemble de données de " @@ -19757,10 +21662,9 @@ msgid "" "`_), we provide more options to define various experimental" " setups, including aggregation strategies, data partitioning and " -"centralised/distributed evaluation. 
We also support `Flower simulation " -"`_ making " -"it easy to simulate large client cohorts in a resource-aware manner. " -"Let's take a look!" +"centralised/distributed evaluation. We also support :doc:`Flower " +"simulation ` making it easy to simulate large " +"client cohorts in a resource-aware manner. Let's take a look!" msgstr "" #: ../../source/tutorial-quickstart-xgboost.rst:603 @@ -20256,8 +22160,8 @@ msgstr "" "Bienvenue dans la quatrième partie du tutoriel sur l'apprentissage fédéré" " Flower. Dans les parties précédentes de ce tutoriel, nous avons présenté" " l'apprentissage fédéré avec PyTorch et Flower (`partie 1 " -"`__)," -" nous avons appris comment les stratégies peuvent être utilisées pour " +"`__), " +"nous avons appris comment les stratégies peuvent être utilisées pour " "personnaliser l'exécution à la fois sur le serveur et les clients " "(`partie 2 `__), et nous avons construit notre propre stratégie " @@ -20567,8 +22471,8 @@ msgstr "Côté client" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:615 msgid "" -"To be able to serialize our ``ndarray``\\ s into sparse " -"parameters, we will just have to call our custom functions in our " +"To be able to serialize our ``ndarray``\\ s into sparse parameters, we " +"will just have to call our custom functions in our " "``flwr.client.Client``." msgstr "" "Pour pouvoir sérialiser nos ``ndarray`` en paramètres sparse, il nous " @@ -21419,8 +23323,8 @@ msgid "" msgstr "" "Dans ce carnet, nous allons commencer à personnaliser le système " "d'apprentissage fédéré que nous avons construit dans le carnet " -"d'introduction (toujours en utilisant `Flower `__ et" -" `PyTorch `__)." +"d'introduction (toujours en utilisant `Flower `__ et " +"`PyTorch `__)." #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:17 #, fuzzy @@ -21725,9 +23629,9 @@ msgstr "" #, fuzzy msgid "" "The `Flower Federated Learning Tutorial - Part 3 " -"`__ shows how to build a fully custom ``Strategy`` " -"from scratch." +"`__ shows how to build a fully custom ``Strategy`` from " +"scratch." msgstr "" "Le `Tutoriel d'apprentissage fédéré Flower - Partie 3 [WIP] " "`__ browser or the `Signal `__ " "messenger shows that users care about privacy. In fact, they choose the " -"privacy-enhancing version over other alternatives, if such an alternative " -"exists. But what can we do to apply machine learning and data science to " -"these cases to utilize private data? After all, these are all areas that " -"would benefit significantly from recent advances in AI." +"privacy-enhancing version over other alternatives, if such an alternative" +" exists. But what can we do to apply machine learning and data science to" +" these cases to utilize private data? After all, these are all areas that" +" would benefit significantly from recent advances in AI." msgstr "" "La popularité des systèmes améliorant la confidentialité comme le " "navigateur `Brave `__ ou le messager `Signal " @@ -22186,7 +24090,7 @@ msgstr "" "partir d'un point de contrôle précédemment sauvegardé." #: ../../source/tutorial-series-what-is-federated-learning.ipynb:210 -msgid "|ba47ffb421814b0f8f9fa5719093d839|" +msgid "|1d73c61ed0e34484bc5f4cb2b86996c1|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:307 @@ -22221,7 +24125,7 @@ msgstr "" "rendements décroissants." 
#: ../../source/tutorial-series-what-is-federated-learning.ipynb:225 -msgid "|aeac5bf79cbf497082e979834717e01b|" +msgid "|ecce7ba27b174ddf906ee9c12cc9c545|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:309 @@ -22254,7 +24158,7 @@ msgstr "" "données locales, ou même de quelques étapes (mini-batchs)." #: ../../source/tutorial-series-what-is-federated-learning.ipynb:240 -msgid "|ce27ed4bbe95459dba016afc42486ba2|" +msgid "|30eee0b0ca684a8d9187380a5f71d6af|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:311 @@ -22285,7 +24189,7 @@ msgstr "" " l'entraînement local." #: ../../source/tutorial-series-what-is-federated-learning.ipynb:255 -msgid "|ae94a7f71dda443cbec2385751427d41|" +msgid "|22e8fb88ba204b04b61212b2460e6b48|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:313 @@ -22344,7 +24248,7 @@ msgstr "" "times as much as each of the 100 examples." #: ../../source/tutorial-series-what-is-federated-learning.ipynb:273 -msgid "|e61fce4d43d243e7bb08bdde97d81ce6|" +msgid "|5d53a3f539644cd5a4ba28696421b01a|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:315 @@ -22451,11 +24355,6 @@ msgstr "" "empêcher le serveur de voir les résultats soumis par les nœuds clients " "individuels." -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:303 -#, fuzzy -msgid "Differential Privacy" -msgstr "Confidentialité différentielle" - #: ../../source/tutorial-series-what-is-federated-learning.ipynb:305 msgid "" "Differential privacy (DP) is often mentioned in the context of Federated " @@ -22492,7 +24391,7 @@ msgstr "" "quel cadre de ML et n'importe quel langage de programmation." #: ../../source/tutorial-series-what-is-federated-learning.ipynb:334 -msgid "|08cb60859b07461588fe44e55810b050|" +msgid "|6887fea9613d4dff8c9aae62a1f207e2|" msgstr "" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:340 @@ -24629,10 +26528,10 @@ msgstr "" #~ "Flower Python server, it is recommended" #~ " to create a virtual environment and" #~ " run everything within a `virtualenv " -#~ "`_." -#~ " For the Flower client implementation " -#~ "in iOS, it is recommended to use" -#~ " Xcode as our IDE." +#~ "`_. " +#~ "For the Flower client implementation in" +#~ " iOS, it is recommended to use " +#~ "Xcode as our IDE." #~ msgstr "" #~ "Tout d'abord, pour l'exécution du " #~ "serveur Flower Python, il est recommandé" @@ -24795,27 +26694,12 @@ msgstr "" #~ "The implementation can be seen in " #~ ":code:`MLModelInspect`." #~ msgstr "" -#~ "Comme CoreML ne permet pas de voir" -#~ " les paramètres du modèle avant la" -#~ " formation, et que l'accès aux " -#~ "paramètres du modèle pendant ou après" -#~ " la formation ne peut se faire " -#~ "qu'en spécifiant le nom de la " -#~ "couche, nous devons connaître ces " -#~ "informations à l'avance, en regardant " -#~ "les spécifications du modèle, qui sont" -#~ " écrites sous forme de fichiers " -#~ "proto. La mise en œuvre peut être" -#~ " vue dans :code:`MLModelInspect`." #~ msgid "" #~ "After we have all of the necessary" #~ " informations, let's create our Flower " #~ "client." #~ msgstr "" -#~ "Après avoir obtenu toutes les " -#~ "informations nécessaires, créons notre client" -#~ " Flower." #~ msgid "" #~ "Then start the Flower gRPC client " @@ -25474,8 +27358,8 @@ msgstr "" #~ " papers. If you want to add a" #~ " new baseline or experiment, please " #~ "check the `Contributing Baselines " -#~ "`_ " -#~ "section." +#~ "`_ section." 
#~ msgstr "" #~ msgid "Paper" @@ -25798,3 +27682,880 @@ msgstr "" #~ msgid "|c76452ae1ed84965be7ef23c72b95845|" #~ msgstr "" +#~ msgid "" +#~ "Please follow the first section on " +#~ "`Run Flower using Docker " +#~ "`_ which covers this" +#~ " step in more detail." +#~ msgstr "" + +#~ msgid "" +#~ "Since `Flower 1.5 `_ we have " +#~ "introduced translations to our doc " +#~ "pages, but, as you might have " +#~ "noticed, the translations are often " +#~ "imperfect. If you speak languages other" +#~ " than English, you might be able " +#~ "to help us in our effort to " +#~ "make Federated Learning accessible to as" +#~ " many people as possible by " +#~ "contributing to those translations! This " +#~ "might also be a great opportunity " +#~ "for those wanting to become open " +#~ "source contributors with little prerequistes." +#~ msgstr "" + +#~ msgid "" +#~ "You input your translation in the " +#~ "textbox at the top and then, once" +#~ " you are happy with it, you " +#~ "either press ``Save and continue`` (to" +#~ " save the translation and go to " +#~ "the next untranslated string), ``Save " +#~ "and stay`` (to save the translation " +#~ "and stay on the same page), " +#~ "``Suggest`` (to add your translation to" +#~ " suggestions for other users to " +#~ "view), or ``Skip`` (to go to the" +#~ " next untranslated string without saving" +#~ " anything)." +#~ msgstr "" + +#~ msgid "" +#~ "If the section is completely empty " +#~ "(without any token) or non-existant, " +#~ "the changelog will just contain the " +#~ "title of the PR for the changelog" +#~ " entry, without any description." +#~ msgstr "" + +#~ msgid "" +#~ "Flower provides differential privacy (DP) " +#~ "wrapper classes for the easy integration" +#~ " of the central DP guarantees " +#~ "provided by DP-FedAvg into training " +#~ "pipelines defined in any of the " +#~ "various ML frameworks that Flower is " +#~ "compatible with." +#~ msgstr "" +#~ "Flower fournit des classes d'enveloppe " +#~ "de confidentialité différentielle (DP) pour" +#~ " l'intégration facile des garanties " +#~ "centrales de DP fournies par DP-" +#~ "FedAvg dans les pipelines de formation" +#~ " définis dans n'importe lequel des " +#~ "divers cadres de ML avec lesquels " +#~ "Flower est compatible." + +#~ msgid "" +#~ "Please note that these components are" +#~ " still experimental; the correct " +#~ "configuration of DP for a specific " +#~ "task is still an unsolved problem." +#~ msgstr "" +#~ "Note que ces composants sont encore " +#~ "expérimentaux, la configuration correcte du" +#~ " DP pour une tâche spécifique est " +#~ "encore un problème non résolu." + +#~ msgid "" +#~ "The name DP-FedAvg is misleading " +#~ "since it can be applied on top " +#~ "of any FL algorithm that conforms " +#~ "to the general structure prescribed by" +#~ " the FedOpt family of algorithms." +#~ msgstr "" +#~ "Le nom DP-FedAvg est trompeur car" +#~ " il peut être appliqué à n'importe" +#~ " quel algorithme FL qui se conforme" +#~ " à la structure générale prescrite " +#~ "par la famille d'algorithmes FedOpt." + +#~ msgid "DP-FedAvg" +#~ msgstr "DP-FedAvg" + +#~ msgid "" +#~ "DP-FedAvg, originally proposed by " +#~ "McMahan et al. [mcmahan]_ and extended" +#~ " by Andrew et al. [andrew]_, is " +#~ "essentially FedAvg with the following " +#~ "modifications." +#~ msgstr "" +#~ "DP-FedAvg, proposé à l'origine par " +#~ "McMahan et al. [mcmahan]_ et étendu " +#~ "par Andrew et al. [andrew]_, est " +#~ "essentiellement FedAvg avec les modifications" +#~ " suivantes." 
+ +#~ msgid "" +#~ "**Clipping** : The influence of each " +#~ "client's update is bounded by clipping" +#~ " it. This is achieved by enforcing" +#~ " a cap on the L2 norm of " +#~ "the update, scaling it down if " +#~ "needed." +#~ msgstr "" +#~ "**Clipping** : L'influence de la mise" +#~ " à jour de chaque client est " +#~ "limitée en l'écrêtant. Ceci est réalisé" +#~ " en imposant un plafond à la " +#~ "norme L2 de la mise à jour, " +#~ "en la réduisant si nécessaire." + +#~ msgid "" +#~ "**Noising** : Gaussian noise, calibrated " +#~ "to the clipping threshold, is added " +#~ "to the average computed at the " +#~ "server." +#~ msgstr "" +#~ "**Bruit** : un bruit gaussien, calibré" +#~ " sur le seuil d'écrêtage, est ajouté" +#~ " à la moyenne calculée au niveau " +#~ "du serveur." + +#~ msgid "" +#~ "The distribution of the update norm " +#~ "has been shown to vary from " +#~ "task-to-task and to evolve as " +#~ "training progresses. This variability is " +#~ "crucial in understanding its impact on" +#~ " differential privacy guarantees, emphasizing " +#~ "the need for an adaptive approach " +#~ "[andrew]_ that continuously adjusts the " +#~ "clipping threshold to track a " +#~ "prespecified quantile of the update norm" +#~ " distribution." +#~ msgstr "" +#~ "Il a été démontré que la " +#~ "distribution de la norme de mise à" +#~ " jour varie d'une tâche à l'autre " +#~ "et évolue au fur et à mesure " +#~ "de la formation. C'est pourquoi nous " +#~ "utilisons une approche adaptative [andrew]_" +#~ " qui ajuste continuellement le seuil " +#~ "d'écrêtage pour suivre un quantile " +#~ "prédéfini de la distribution de la " +#~ "norme de mise à jour." + +#~ msgid "Simplifying Assumptions" +#~ msgstr "Simplifier les hypothèses" + +#~ msgid "" +#~ "We make (and attempt to enforce) a" +#~ " number of assumptions that must be" +#~ " satisfied to ensure that the " +#~ "training process actually realizes the " +#~ ":math:`(\\epsilon, \\delta)` guarantees the " +#~ "user has in mind when configuring " +#~ "the setup." +#~ msgstr "" +#~ "Nous formulons (et tentons d'appliquer) " +#~ "un certain nombre d'hypothèses qui " +#~ "doivent être satisfaites pour que le " +#~ "processus de formation réalise réellement " +#~ "les garanties :math:`(\\epsilon, \\delta)` que" +#~ " l'utilisateur a à l'esprit lorsqu'il " +#~ "configure l'installation." + +#~ msgid "" +#~ "**Fixed-size subsampling** :Fixed-size " +#~ "subsamples of the clients must be " +#~ "taken at each round, as opposed to" +#~ " variable-sized Poisson subsamples." +#~ msgstr "" +#~ "**Sous-échantillonnage de taille fixe** " +#~ ":Des sous-échantillons de taille fixe" +#~ " des clients doivent être prélevés à" +#~ " chaque tour, par opposition aux " +#~ "sous-échantillons de Poisson de taille " +#~ "variable." + +#~ msgid "" +#~ "**Unweighted averaging** : The contributions" +#~ " from all the clients must weighted" +#~ " equally in the aggregate to " +#~ "eliminate the requirement for the server" +#~ " to know in advance the sum of" +#~ " the weights of all clients available" +#~ " for selection." +#~ msgstr "" +#~ "**Moyenne non pondérée** : Les " +#~ "contributions de tous les clients " +#~ "doivent être pondérées de façon égale" +#~ " dans l'ensemble afin que le serveur" +#~ " n'ait pas à connaître à l'avance " +#~ "la somme des poids de tous les " +#~ "clients disponibles pour la sélection." + +#~ msgid "" +#~ "**No client failures** : The set " +#~ "of available clients must stay constant" +#~ " across all rounds of training. 
In" +#~ " other words, clients cannot drop out" +#~ " or fail." +#~ msgstr "" +#~ "**Aucune défaillance de client** : " +#~ "L'ensemble des clients disponibles doit " +#~ "rester constant pendant toutes les " +#~ "séries de formation. En d'autres termes," +#~ " les clients ne peuvent pas " +#~ "abandonner ou échouer." + +#~ msgid "" +#~ "The first two are useful for " +#~ "eliminating a multitude of complications " +#~ "associated with calibrating the noise to" +#~ " the clipping threshold, while the " +#~ "third one is required to comply " +#~ "with the assumptions of the privacy " +#~ "analysis." +#~ msgstr "" +#~ "Les deux premiers sont utiles pour " +#~ "éliminer une multitude de complications " +#~ "liées au calibrage du bruit en " +#~ "fonction du seuil d'écrêtage, tandis que" +#~ " le troisième est nécessaire pour se" +#~ " conformer aux hypothèses de l'analyse " +#~ "de la vie privée." + +#~ msgid "" +#~ "These restrictions are in line with " +#~ "constraints imposed by Andrew et al. " +#~ "[andrew]_." +#~ msgstr "" +#~ "Ces restrictions sont conformes aux " +#~ "contraintes imposées par Andrew et al." +#~ " [andrew]_." + +#~ msgid "Customizable Responsibility for Noise injection" +#~ msgstr "Responsabilité personnalisable pour l'injection de bruit" + +#~ msgid "" +#~ "In contrast to other implementations " +#~ "where the addition of noise is " +#~ "performed at the server, you can " +#~ "configure the site of noise injection" +#~ " to better match your threat model." +#~ " We provide users with the " +#~ "flexibility to set up the training " +#~ "such that each client independently adds" +#~ " a small amount of noise to the" +#~ " clipped update, with the result that" +#~ " simply aggregating the noisy updates " +#~ "is equivalent to the explicit addition" +#~ " of noise to the non-noisy " +#~ "aggregate at the server." +#~ msgstr "" +#~ "Contrairement à d'autres implémentations où" +#~ " l'ajout de bruit est effectué au " +#~ "niveau du serveur, tu peux configurer" +#~ " le site d'injection de bruit pour" +#~ " qu'il corresponde mieux à ton modèle" +#~ " de menace. Nous offrons aux " +#~ "utilisateurs la possibilité de configurer " +#~ "l'entraînement de telle sorte que chaque" +#~ " client ajoute indépendamment une petite" +#~ " quantité de bruit à la mise à" +#~ " jour écrêtée, ce qui fait que " +#~ "le simple fait d'agréger les mises " +#~ "à jour bruyantes équivaut à l'ajout " +#~ "explicite de bruit à l'agrégat non " +#~ "bruyant au niveau du serveur." + +#~ msgid "" +#~ "To be precise, if we let :math:`m`" +#~ " be the number of clients sampled " +#~ "each round and :math:`\\sigma_\\Delta` be " +#~ "the scale of the total Gaussian " +#~ "noise that needs to be added to" +#~ " the sum of the model updates, " +#~ "we can use simple maths to show" +#~ " that this is equivalent to each " +#~ "client adding noise with scale " +#~ ":math:`\\sigma_\\Delta/\\sqrt{m}`." +#~ msgstr "" +#~ "Pour être précis, si nous laissons " +#~ ":math:`m` être le nombre de clients " +#~ "échantillonnés à chaque tour et " +#~ ":math:\\sigma_\\Delta` être l'échelle du bruit" +#~ " gaussien total qui doit être ajouté" +#~ " à la somme des mises à jour" +#~ " du modèle, nous pouvons utiliser des" +#~ " mathématiques simples pour montrer que " +#~ "cela équivaut à ce que chaque " +#~ "client ajoute du bruit avec l'échelle" +#~ " :math:\\sigma_\\Delta/\\sqrt{m}`." 
+ +#~ msgid "Wrapper-based approach" +#~ msgstr "Approche basée sur l'enveloppe" + +#~ msgid "" +#~ "Introducing DP to an existing workload" +#~ " can be thought of as adding an" +#~ " extra layer of security around it." +#~ " This inspired us to provide the " +#~ "additional server and client-side logic" +#~ " needed to make the training process" +#~ " differentially private as wrappers for " +#~ "instances of the :code:`Strategy` and " +#~ ":code:`NumPyClient` abstract classes respectively." +#~ " This wrapper-based approach has the" +#~ " advantage of being easily composable " +#~ "with other wrappers that someone might" +#~ " contribute to the Flower library in" +#~ " the future, e.g., for secure " +#~ "aggregation. Using Inheritance instead can " +#~ "be tedious because that would require" +#~ " the creation of new sub- classes " +#~ "every time a new class implementing " +#~ ":code:`Strategy` or :code:`NumPyClient` is " +#~ "defined." +#~ msgstr "" +#~ "L'introduction du DP dans une charge " +#~ "de travail existante peut être " +#~ "considérée comme l'ajout d'une couche de" +#~ " sécurité supplémentaire autour d'elle. " +#~ "Cela nous a incités à fournir la" +#~ " logique supplémentaire côté serveur et " +#~ "côté client nécessaire pour rendre le" +#~ " processus de formation différentiellement " +#~ "privé en tant qu'enveloppes pour les " +#~ "instances des classes abstraites " +#~ ":code:`Strategy` et :code:`NumPyClient` " +#~ "respectivement. Cette approche basée sur " +#~ "l'enveloppe a l'avantage d'être facilement " +#~ "composable avec d'autres enveloppes que " +#~ "quelqu'un pourrait contribuer à la " +#~ "bibliothèque Flower à l'avenir, par " +#~ "exemple, pour l'agrégation sécurisée. " +#~ "L'utilisation de l'héritage à la place" +#~ " peut être fastidieuse car cela " +#~ "nécessiterait la création de nouvelles " +#~ "sous-classes chaque fois qu'une nouvelle" +#~ " classe mettant en œuvre :code:`Strategy`" +#~ " ou :code:`NumPyClient` est définie." + +#~ msgid "" +#~ "The first version of our solution " +#~ "was to define a decorator whose " +#~ "constructor accepted, among other things, " +#~ "a boolean-valued variable indicating " +#~ "whether adaptive clipping was to be " +#~ "enabled or not. We quickly realized " +#~ "that this would clutter its " +#~ ":code:`__init__()` function with variables " +#~ "corresponding to hyperparameters of adaptive" +#~ " clipping that would remain unused " +#~ "when it was disabled. A cleaner " +#~ "implementation could be achieved by " +#~ "splitting the functionality into two " +#~ "decorators, :code:`DPFedAvgFixed` and " +#~ ":code:`DPFedAvgAdaptive`, with the latter sub-" +#~ " classing the former. The constructors " +#~ "for both classes accept a boolean " +#~ "parameter :code:`server_side_noising`, which, as " +#~ "the name suggests, determines where " +#~ "noising is to be performed." +#~ msgstr "" +#~ "La première version de notre solution" +#~ " consistait à définir un décorateur " +#~ "dont le constructeur acceptait, entre " +#~ "autres, une variable à valeur booléenne" +#~ " indiquant si l'écrêtage adaptatif devait" +#~ " être activé ou non. Nous nous " +#~ "sommes rapidement rendu compte que cela" +#~ " encombrerait sa fonction :code:`__init__()` " +#~ "avec des variables correspondant aux " +#~ "hyperparamètres de l'écrêtage adaptatif qui" +#~ " resteraient inutilisées lorsque celui-ci" +#~ " était désactivé. 
Une implémentation plus" +#~ " propre pourrait être obtenue en " +#~ "divisant la fonctionnalité en deux " +#~ "décorateurs, :code:`DPFedAvgFixed` et " +#~ ":code:`DPFedAvgAdaptive`, le second sous-" +#~ "classant le premier. Les constructeurs " +#~ "des deux classes acceptent un paramètre" +#~ " booléen :code:`server_side_noising` qui, comme" +#~ " son nom l'indique, détermine l'endroit " +#~ "où le noising doit être effectué." + +#~ msgid "" +#~ "The server-side capabilities required " +#~ "for the original version of DP-" +#~ "FedAvg, i.e., the one which performed" +#~ " fixed clipping, can be completely " +#~ "captured with the help of wrapper " +#~ "logic for just the following two " +#~ "methods of the :code:`Strategy` abstract " +#~ "class." +#~ msgstr "" +#~ "Les capacités côté serveur requises pour" +#~ " la version originale de DP-FedAvg," +#~ " c'est-à-dire celle qui effectue un " +#~ "écrêtage fixe, peuvent être entièrement " +#~ "capturées à l'aide d'une logique " +#~ "d'enveloppement pour les deux méthodes " +#~ "suivantes de la classe abstraite " +#~ ":code:`Strategy`." + +#~ msgid "" +#~ ":code:`configure_fit()` : The config " +#~ "dictionary being sent by the wrapped " +#~ ":code:`Strategy` to each client needs to" +#~ " be augmented with an additional " +#~ "value equal to the clipping threshold" +#~ " (keyed under :code:`dpfedavg_clip_norm`) and," +#~ " if :code:`server_side_noising=true`, another one" +#~ " equal to the scale of the " +#~ "Gaussian noise that needs to be " +#~ "added at the client (keyed under " +#~ ":code:`dpfedavg_noise_stddev`). This entails " +#~ "*post*-processing of the results returned " +#~ "by the wrappee's implementation of " +#~ ":code:`configure_fit()`." +#~ msgstr "" +#~ ":code:`configure_fit()` : Le dictionnaire de" +#~ " configuration envoyé par la " +#~ ":code:`Strategy` enveloppée à chaque client" +#~ " doit être augmenté d'une valeur " +#~ "supplémentaire égale au seuil d'écrêtage " +#~ "(indiqué sous :code:`dpfedavg_clip_norm`) et, " +#~ "si :code:`server_side_noising=true`, d'une autre " +#~ "égale à l'échelle du bruit gaussien " +#~ "qui doit être ajouté au client " +#~ "(indiqué sous :code:`dpfedavg_noise_stddev`)." + +#~ msgid "" +#~ ":code:`aggregate_fit()`: We check whether any" +#~ " of the sampled clients dropped out" +#~ " or failed to upload an update " +#~ "before the round timed out. In " +#~ "that case, we need to abort the" +#~ " current round, discarding any successful" +#~ " updates that were received, and move" +#~ " on to the next one. On the " +#~ "other hand, if all clients responded " +#~ "successfully, we must force the " +#~ "averaging of the updates to happen " +#~ "in an unweighted manner by intercepting" +#~ " the :code:`parameters` field of " +#~ ":code:`FitRes` for each received update " +#~ "and setting it to 1. Furthermore, " +#~ "if :code:`server_side_noising=true`, each update " +#~ "is perturbed with an amount of " +#~ "noise equal to what it would have" +#~ " been subjected to had client-side" +#~ " noising being enabled. This entails " +#~ "*pre*-processing of the arguments to " +#~ "this method before passing them on " +#~ "to the wrappee's implementation of " +#~ ":code:`aggregate_fit()`." +#~ msgstr "" +#~ ":code:`aggregate_fit()`: We check whether any" +#~ " of the sampled clients dropped out" +#~ " or failed to upload an update " +#~ "before the round timed out. 
In " +#~ "that case, we need to abort the" +#~ " current round, discarding any successful" +#~ " updates that were received, and move" +#~ " on to the next one. On the " +#~ "other hand, if all clients responded " +#~ "successfully, we must force the " +#~ "averaging of the updates to happen " +#~ "in an unweighted manner by intercepting" +#~ " the :code:`parameters` field of " +#~ ":code:`FitRes` for each received update " +#~ "and setting it to 1. Furthermore, " +#~ "if :code:`server_side_noising=true`, each update " +#~ "is perturbed with an amount of " +#~ "noise equal to what it would have" +#~ " been subjected to had client-side" +#~ " noising being enabled. This entails " +#~ "*pre*-processing of the arguments to " +#~ "this method before passing them on " +#~ "to the wrappee's implementation of " +#~ ":code:`aggregate_fit()`." + +#~ msgid "" +#~ "We can't directly change the aggregation" +#~ " function of the wrapped strategy to" +#~ " force it to add noise to the" +#~ " aggregate, hence we simulate client-" +#~ "side noising to implement server-side" +#~ " noising." +#~ msgstr "" +#~ "Nous ne pouvons pas modifier directement" +#~ " la fonction d'agrégation de la " +#~ "stratégie enveloppée pour la forcer à" +#~ " ajouter du bruit à l'agrégat, c'est" +#~ " pourquoi nous simulons le bruit côté" +#~ " client pour mettre en œuvre le " +#~ "bruit côté serveur." + +#~ msgid "" +#~ "These changes have been put together " +#~ "into a class called :code:`DPFedAvgFixed`, " +#~ "whose constructor accepts the strategy " +#~ "being decorated, the clipping threshold " +#~ "and the number of clients sampled " +#~ "every round as compulsory arguments. The" +#~ " user is expected to specify the " +#~ "clipping threshold since the order of" +#~ " magnitude of the update norms is " +#~ "highly dependent on the model being " +#~ "trained and providing a default value" +#~ " would be misleading. The number of" +#~ " clients sampled at every round is" +#~ " required to calculate the amount of" +#~ " noise that must be added to " +#~ "each individual update, either by the" +#~ " server or the clients." +#~ msgstr "" +#~ "Ces modifications ont été regroupées " +#~ "dans une classe appelée :code:`DPFedAvgFixed`," +#~ " dont le constructeur accepte la " +#~ "stratégie décorée, le seuil d'écrêtage " +#~ "et le nombre de clients échantillonnés" +#~ " à chaque tour comme arguments " +#~ "obligatoires. L'utilisateur est censé " +#~ "spécifier le seuil d'écrêtage car " +#~ "l'ordre de grandeur des normes de " +#~ "mise à jour dépend fortement du " +#~ "modèle formé et fournir une valeur " +#~ "par défaut serait trompeur. Le nombre" +#~ " de clients échantillonnés à chaque " +#~ "tour est nécessaire pour calculer la " +#~ "quantité de bruit qui doit être " +#~ "ajoutée à chaque mise à jour " +#~ "individuelle, que ce soit par le " +#~ "serveur ou par les clients." + +#~ msgid "" +#~ "The additional functionality required to " +#~ "facilitate adaptive clipping has been " +#~ "provided in :code:`DPFedAvgAdaptive`, a " +#~ "subclass of :code:`DPFedAvgFixed`. It " +#~ "overrides the above-mentioned methods to" +#~ " do the following." +#~ msgstr "" +#~ "La fonctionnalité supplémentaire nécessaire " +#~ "pour faciliter l'écrêtage adaptatif a " +#~ "été fournie dans :code:`DPFedAvgAdaptive`, une" +#~ " sous-classe de :code:`DPFedAvgFixed`. Elle" +#~ " remplace les méthodes mentionnées ci-" +#~ "dessus pour effectuer les opérations " +#~ "suivantes." 
+ +#~ msgid "" +#~ ":code:`configure_fit()` : It intercepts the" +#~ " config dict returned by " +#~ ":code:`super.configure_fit()` to add the " +#~ "key-value pair " +#~ ":code:`dpfedavg_adaptive_clip_enabled:True` to it, " +#~ "which the client interprets as an " +#~ "instruction to include an indicator bit" +#~ " (1 if update norm <= clipping " +#~ "threshold, 0 otherwise) in the results" +#~ " returned by it." +#~ msgstr "" +#~ ":code:`configure_fit()` : Il intercepte le " +#~ "dict de configuration renvoyé par " +#~ ":code:`super.configure_fit()` pour y ajouter " +#~ "la paire clé-valeur " +#~ ":code:`dpfedavg_adaptive_clip_enabled:True`, que le " +#~ "client interprète comme une instruction " +#~ "d'inclure un bit indicateur (1 si " +#~ "la norme de mise à jour <= " +#~ "seuil d'écrêtage, 0 sinon) dans les " +#~ "résultats qu'il renvoie." + +#~ msgid "" +#~ ":code:`aggregate_fit()` : It follows a " +#~ "call to :code:`super.aggregate_fit()` with one" +#~ " to :code:`__update_clip_norm__()`, a procedure" +#~ " which adjusts the clipping threshold " +#~ "on the basis of the indicator bits" +#~ " received from the sampled clients." +#~ msgstr "" +#~ ":code:`aggregate_fit()` : Il fait suivre " +#~ "un appel à :code:`super.aggregate_fit()` d'un" +#~ " appel à :code:`__update_clip_norm__()`, une " +#~ "procédure qui ajuste le seuil d'écrêtage" +#~ " sur la base des bits indicateurs " +#~ "reçus des clients échantillonnés." + +#~ msgid "" +#~ "The client-side capabilities required " +#~ "can be completely captured through " +#~ "wrapper logic for just the :code:`fit()`" +#~ " method of the :code:`NumPyClient` abstract" +#~ " class. To be precise, we need " +#~ "to *post-process* the update computed" +#~ " by the wrapped client to clip " +#~ "it, if necessary, to the threshold " +#~ "value supplied by the server as " +#~ "part of the config dictionary. In " +#~ "addition to this, it may need to" +#~ " perform some extra work if either" +#~ " (or both) of the following keys " +#~ "are also present in the dict." +#~ msgstr "" +#~ "Les capacités requises côté client " +#~ "peuvent être entièrement capturées par " +#~ "une logique de wrapper pour la " +#~ "seule méthode :code:`fit()` de la classe" +#~ " abstraite :code:`NumPyClient`. Pour être " +#~ "précis, nous devons *post-traiter* la" +#~ " mise à jour calculée par le " +#~ "client wrapped pour l'écrêter, si " +#~ "nécessaire, à la valeur seuil fournie" +#~ " par le serveur dans le cadre " +#~ "du dictionnaire de configuration. En " +#~ "plus de cela, il peut avoir besoin" +#~ " d'effectuer un travail supplémentaire si" +#~ " l'une des clés suivantes (ou les " +#~ "deux) est également présente dans le " +#~ "dict." + +#~ msgid "" +#~ ":code:`dpfedavg_noise_stddev` : Generate and " +#~ "add the specified amount of noise " +#~ "to the clipped update." +#~ msgstr "" +#~ ":code:`dpfedavg_noise_stddev` : Génère et " +#~ "ajoute la quantité de bruit spécifiée" +#~ " à la mise à jour de " +#~ "l'écrêtage." + +#~ msgid "" +#~ ":code:`dpfedavg_adaptive_clip_enabled` : Augment the" +#~ " metrics dict in the :code:`FitRes` " +#~ "object being returned to the server " +#~ "with an indicator bit, calculated as " +#~ "described earlier." +#~ msgstr "" +#~ ":code:`dpfedavg_adaptive_clip_enabled` : Complète " +#~ "les métriques dict dans l'objet " +#~ ":code:`FitRes` renvoyé au serveur avec " +#~ "un bit indicateur, calculé comme décrit" +#~ " précédemment." 
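The client-side behaviour described above — clip the computed update to the server-supplied threshold, optionally add Gaussian noise, optionally report an indicator bit — can be sketched with plain NumPy. This is an illustration under an assumed metrics key name, not the wrapper shipped with Flower::

    import numpy as np

    def clip_by_l2(update, threshold):
        """Scale a list of NumPy arrays so that their joint L2 norm is at most `threshold`."""
        norm = float(np.sqrt(sum(np.sum(np.square(layer)) for layer in update)))
        scale = min(1.0, threshold / (norm + 1e-12))
        clipped = [layer * scale for layer in update]
        indicator = 1 if norm <= threshold else 0  # bit reported under adaptive clipping
        return clipped, indicator

    def add_gaussian_noise(update, stddev, seed=0):
        """Perturb every layer with independent Gaussian noise of the given stddev."""
        rng = np.random.default_rng(seed)
        return [layer + rng.normal(0.0, stddev, layer.shape) for layer in update]

    # Toy two-layer "model update" standing in for what the wrapped fit() returns.
    update = [np.ones((2, 2)), np.full((3,), 2.0)]
    config = {"dpfedavg_clip_norm": 1.0, "dpfedavg_noise_stddev": 0.1,
              "dpfedavg_adaptive_clip_enabled": True}

    clipped, bit = clip_by_l2(update, config["dpfedavg_clip_norm"])
    if "dpfedavg_noise_stddev" in config:
        clipped = add_gaussian_noise(clipped, config["dpfedavg_noise_stddev"])
    # "dpfedavg_norm_bit" is a made-up key for this sketch; the text above does not name it.
    metrics = {"dpfedavg_norm_bit": bit} if config.get("dpfedavg_adaptive_clip_enabled") else {}
    print(metrics, [float(np.linalg.norm(layer)) for layer in clipped])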
+ +#~ msgid "Performing the :math:`(\\epsilon, \\delta)` analysis" +#~ msgstr "Effectuer l'analyse :math:`(\\epsilon, \\delta)`" + +#~ msgid "" +#~ "Assume you have trained for :math:`n`" +#~ " rounds with sampling fraction :math:`q`" +#~ " and noise multiplier :math:`z`. In " +#~ "order to calculate the :math:`\\epsilon` " +#~ "value this would result in for a" +#~ " particular :math:`\\delta`, the following " +#~ "script may be used." +#~ msgstr "" +#~ "Supposons que tu te sois entraîné " +#~ "pendant :math:`n` tours avec la fraction" +#~ " d'échantillonnage :math:`q` et le " +#~ "multiplicateur de bruit :math:`z`. Afin " +#~ "de calculer la valeur :math:`epsilon` " +#~ "qui en résulterait pour un " +#~ ":math:`\\delta` particulier, le script suivant" +#~ " peut être utilisé." + +#~ msgid "" +#~ "`How to run Flower using Docker " +#~ "`_" +#~ msgstr "" + +#~ msgid "Enjoy building more robust and flexible ``ClientApp``s with mods!" +#~ msgstr "" + +#~ msgid "" +#~ ":py:obj:`ClientApp `\\ " +#~ "\\(client\\_fn\\[\\, mods\\]\\)" +#~ msgstr "" + +#~ msgid ":py:obj:`flwr.server.driver `\\" +#~ msgstr "" + +#~ msgid "Flower driver SDK." +#~ msgstr "Serveur de Flower" + +#~ msgid "driver" +#~ msgstr "serveur" + +#~ msgid "" +#~ ":py:obj:`start_driver `\\ " +#~ "\\(\\*\\[\\, server\\_address\\, server\\, ...\\]\\)" +#~ msgstr "" + +#~ msgid "" +#~ ":py:obj:`Driver `\\ " +#~ "\\(\\[driver\\_service\\_address\\, ...\\]\\)" +#~ msgstr "" + +#~ msgid "" +#~ ":py:obj:`GrpcDriver `\\ " +#~ "\\(\\[driver\\_service\\_address\\, ...\\]\\)" +#~ msgstr "" + +#~ msgid "`GrpcDriver` provides access to the gRPC Driver API/service." +#~ msgstr "" + +#~ msgid ":py:obj:`get_nodes `\\ \\(\\)" +#~ msgstr "" + +#~ msgid "" +#~ ":py:obj:`pull_task_res " +#~ "`\\ \\(task\\_ids\\)" +#~ msgstr "" + +#~ msgid "Get task results." +#~ msgstr "" + +#~ msgid "" +#~ ":py:obj:`push_task_ins " +#~ "`\\ " +#~ "\\(task\\_ins\\_list\\)" +#~ msgstr "" + +#~ msgid "Schedule tasks." +#~ msgstr "" + +#~ msgid "GrpcDriver" +#~ msgstr "" + +#~ msgid ":py:obj:`connect `\\ \\(\\)" +#~ msgstr "" + +#~ msgid "Connect to the Driver API." +#~ msgstr "" + +#~ msgid "" +#~ ":py:obj:`create_run " +#~ "`\\ \\(req\\)" +#~ msgstr "" + +#~ msgid "Request for run ID." +#~ msgstr "Demande pour une nouvelle Flower Baseline" + +#~ msgid "" +#~ ":py:obj:`disconnect " +#~ "`\\ \\(\\)" +#~ msgstr "" + +#~ msgid "Disconnect from the Driver API." +#~ msgstr "" + +#~ msgid "" +#~ ":py:obj:`get_nodes `\\" +#~ " \\(req\\)" +#~ msgstr "" + +#~ msgid "Get client IDs." +#~ msgstr "Moteur client Edge" + +#~ msgid "" +#~ ":py:obj:`pull_task_res " +#~ "`\\ \\(req\\)" +#~ msgstr "" + +#~ msgid "" +#~ ":py:obj:`push_task_ins " +#~ "`\\ \\(req\\)" +#~ msgstr "" + +#~ msgid "" +#~ "Optionally specify the type of actor " +#~ "to use. The actor object, which " +#~ "persists throughout the simulation, will " +#~ "be the process in charge of " +#~ "running the clients' jobs (i.e. their" +#~ " `fit()` method)." +#~ msgstr "" + +#~ msgid "" +#~ "Much effort went into a completely " +#~ "restructured Flower docs experience. The " +#~ "documentation on [flower.ai/docs](flower.ai/docs) is" +#~ " now divided into Flower Framework, " +#~ "Flower Baselines, Flower Android SDK, " +#~ "Flower iOS SDK, and code example " +#~ "projects." +#~ msgstr "" + +#~ msgid "" +#~ "MXNet is no longer maintained and " +#~ "has been moved into `Attic " +#~ "`_. As a " +#~ "result, we would encourage you to " +#~ "use other ML frameworks alongise Flower," +#~ " for example, PyTorch. 
This tutorial " +#~ "might be removed in future versions " +#~ "of Flower." +#~ msgstr "" + +#~ msgid "" +#~ "Now that you have known how " +#~ "federated XGBoost work with Flower, it's" +#~ " time to run some more comprehensive" +#~ " experiments by customising the " +#~ "experimental settings. In the xgboost-" +#~ "comprehensive example (`full code " +#~ "`_), we provide more options " +#~ "to define various experimental setups, " +#~ "including aggregation strategies, data " +#~ "partitioning and centralised/distributed evaluation." +#~ " We also support `Flower simulation " +#~ "`_ making it easy to " +#~ "simulate large client cohorts in a " +#~ "resource-aware manner. Let's take a " +#~ "look!" +#~ msgstr "" + +#~ msgid "|31e4b1afa87c4b968327bbeafbf184d4|" +#~ msgstr "" + +#~ msgid "|c9d935b4284e4c389a33d86b33e07c0a|" +#~ msgstr "" + +#~ msgid "|00727b5faffb468f84dd1b03ded88638|" +#~ msgstr "" + +#~ msgid "|daf0cf0ff4c24fd29439af78416cf47b|" +#~ msgstr "" + +#~ msgid "|9f093007080d471d94ca90d3e9fde9b6|" +#~ msgstr "" + +#~ msgid "|46a26e6150e0479fbd3dfd655f36eb13|" +#~ msgstr "" + +#~ msgid "|3daba297595c4c7fb845d90404a6179a|" +#~ msgstr "" + +#~ msgid "|5769874fa9c4455b80b2efda850d39d7|" +#~ msgstr "" + +#~ msgid "|ba47ffb421814b0f8f9fa5719093d839|" +#~ msgstr "" + +#~ msgid "|aeac5bf79cbf497082e979834717e01b|" +#~ msgstr "" + +#~ msgid "|ce27ed4bbe95459dba016afc42486ba2|" +#~ msgstr "" + +#~ msgid "|ae94a7f71dda443cbec2385751427d41|" +#~ msgstr "" + +#~ msgid "|e61fce4d43d243e7bb08bdde97d81ce6|" +#~ msgstr "" + +#~ msgid "|08cb60859b07461588fe44e55810b050|" +#~ msgstr "" + diff --git a/doc/locales/pt_BR/LC_MESSAGES/framework-docs.po b/doc/locales/pt_BR/LC_MESSAGES/framework-docs.po index 5a5d736ece38..ea3cbc414d3b 100644 --- a/doc/locales/pt_BR/LC_MESSAGES/framework-docs.po +++ b/doc/locales/pt_BR/LC_MESSAGES/framework-docs.po @@ -8,7 +8,7 @@ msgid "" msgstr "" "Project-Id-Version: Flower main\n" "Report-Msgid-Bugs-To: \n" -"POT-Creation-Date: 2024-02-13 11:23+0100\n" +"POT-Creation-Date: 2024-03-15 14:32+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language: pt_BR\n" @@ -17,7 +17,7 @@ msgstr "" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" -"Generated-By: Babel 2.13.1\n" +"Generated-By: Babel 2.14.0\n" #: ../../source/contributor-explanation-architecture.rst:2 msgid "Flower Architecture" @@ -83,9 +83,8 @@ msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:19 msgid "" -"Please follow the first section on `Run Flower using Docker " -"`_ " -"which covers this step in more detail." +"Please follow the first section on :doc:`Run Flower using Docker ` which covers this step in more detail." msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:23 @@ -287,7 +286,7 @@ msgid "" "to help us in our effort to make Federated Learning accessible to as many" " people as possible by contributing to those translations! This might " "also be a great opportunity for those wanting to become open source " -"contributors with little prerequistes." +"contributors with little prerequisites." 
msgstr "" #: ../../source/contributor-how-to-contribute-translations.rst:13 @@ -338,7 +337,7 @@ msgstr "" #: ../../source/contributor-how-to-contribute-translations.rst:47 msgid "" -"You input your translation in the textbox at the top and then, once you " +"You input your translation in the text box at the top and then, once you " "are happy with it, you either press ``Save and continue`` (to save the " "translation and go to the next untranslated string), ``Save and stay`` " "(to save the translation and stay on the same page), ``Suggest`` (to add " @@ -376,8 +375,8 @@ msgstr "" #: ../../source/contributor-how-to-contribute-translations.rst:69 msgid "" "If you want to add a new language, you will first have to contact us, " -"either on `Slack `_, or by opening an " -"issue on our `GitHub repo `_." +"either on `Slack `_, or by opening an issue" +" on our `GitHub repo `_." msgstr "" #: ../../source/contributor-how-to-create-new-messages.rst:2 @@ -419,8 +418,8 @@ msgid "" "The first thing we need to do is to define a message type for the RPC " "system in :code:`transport.proto`. Note that we have to do it for both " "the request and response messages. For more details on the syntax of " -"proto3, please see the `official documentation " -"`_." +"proto3, please see the `official documentation `_." msgstr "" #: ../../source/contributor-how-to-create-new-messages.rst:35 @@ -530,7 +529,7 @@ msgstr "" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:11 msgid "" "Source: `Official VSCode documentation " -"`_" +"`_" msgstr "" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:15 @@ -567,14 +566,14 @@ msgstr "" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:23 msgid "" "`Developing inside a Container " -"`_" msgstr "" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:24 msgid "" "`Remote development in Containers " -"`_" +"`_" msgstr "" #: ../../source/contributor-how-to-install-development-versions.rst:2 @@ -823,8 +822,8 @@ msgstr "" #: ../../source/contributor-how-to-release-flower.rst:25 msgid "" -"Merge the pull request on the same day (i.e., before a new nightly release" -" gets published to PyPI)." +"Merge the pull request on the same day (i.e., before a new nightly " +"release gets published to PyPI)." msgstr "" #: ../../source/contributor-how-to-release-flower.rst:28 @@ -837,8 +836,8 @@ msgstr "" #: ../../source/contributor-how-to-release-flower.rst:33 msgid "" -"PyPI supports pre-releases (alpha, beta, release candidate). Pre-releases " -"MUST use one of the following naming patterns:" +"PyPI supports pre-releases (alpha, beta, release candidate). Pre-releases" +" MUST use one of the following naming patterns:" msgstr "" #: ../../source/contributor-how-to-release-flower.rst:35 @@ -1114,8 +1113,8 @@ msgstr "" #: ../../source/contributor-ref-good-first-contributions.rst:25 msgid "" "If you are not familiar with Flower Baselines, you should probably check-" -"out our `contributing guide for baselines `_." +"out our `contributing guide for baselines " +"`_." msgstr "" #: ../../source/contributor-ref-good-first-contributions.rst:27 @@ -1123,7 +1122,7 @@ msgid "" "You should then check out the open `issues " "`_" " for baseline requests. If you find a baseline that you'd like to work on" -" and that has no assignes, feel free to assign it to yourself and start " +" and that has no assignees, feel free to assign it to yourself and start " "working on it!" 
msgstr "" @@ -1208,42 +1207,41 @@ msgstr "" #: ../../source/contributor-tutorial-contribute-on-github.rst:6 msgid "" "If you're familiar with how contributing on GitHub works, you can " -"directly checkout our `getting started guide for contributors " -"`_." +"directly checkout our :doc:`getting started guide for contributors " +"`." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:11 +#: ../../source/contributor-tutorial-contribute-on-github.rst:10 msgid "Setting up the repository" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:22 +#: ../../source/contributor-tutorial-contribute-on-github.rst:21 msgid "**Create a GitHub account and setup Git**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:14 +#: ../../source/contributor-tutorial-contribute-on-github.rst:13 msgid "" "Git is a distributed version control tool. This allows for an entire " "codebase's history to be stored and every developer's machine. It is a " "software that will need to be installed on your local machine, you can " -"follow this `guide `_ to set it up." +"follow this `guide `_ to set it up." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:17 +#: ../../source/contributor-tutorial-contribute-on-github.rst:16 msgid "" "GitHub, itself, is a code hosting platform for version control and " "collaboration. It allows for everyone to collaborate and work from " "anywhere on remote repositories." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:19 +#: ../../source/contributor-tutorial-contribute-on-github.rst:18 msgid "" "If you haven't already, you will need to create an account on `GitHub " "`_." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:21 +#: ../../source/contributor-tutorial-contribute-on-github.rst:20 msgid "" "The idea behind the generic Git and GitHub workflow boils down to this: " "you download code from a remote repository on GitHub, make changes " @@ -1251,19 +1249,19 @@ msgid "" "history back to GitHub." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:33 +#: ../../source/contributor-tutorial-contribute-on-github.rst:32 msgid "**Forking the Flower repository**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:25 +#: ../../source/contributor-tutorial-contribute-on-github.rst:24 msgid "" "A fork is a personal copy of a GitHub repository. To create one for " -"Flower, you must navigate to https://github.com/adap/flower (while " +"Flower, you must navigate to ``_ (while " "connected to your GitHub account) and click the ``Fork`` button situated " "on the top right of the page." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:30 +#: ../../source/contributor-tutorial-contribute-on-github.rst:29 msgid "" "You can change the name if you want, but this is not necessary as this " "version of Flower will be yours and will sit inside your own account " @@ -1271,11 +1269,11 @@ msgid "" " the top left corner that you are looking at your own version of Flower." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:48 +#: ../../source/contributor-tutorial-contribute-on-github.rst:47 msgid "**Cloning your forked repository**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:36 +#: ../../source/contributor-tutorial-contribute-on-github.rst:35 msgid "" "The next step is to download the forked repository on your machine to be " "able to make changes to it. 
On your forked repository page, you should " @@ -1283,27 +1281,27 @@ msgid "" "ability to copy the HTTPS link of the repository." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:42 +#: ../../source/contributor-tutorial-contribute-on-github.rst:41 msgid "" "Once you copied the \\, you can open a terminal on your machine, " "navigate to the place you want to download the repository to and type:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:48 +#: ../../source/contributor-tutorial-contribute-on-github.rst:47 msgid "" "This will create a ``flower/`` (or the name of your fork if you renamed " "it) folder in the current working directory." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:67 +#: ../../source/contributor-tutorial-contribute-on-github.rst:66 msgid "**Add origin**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:51 +#: ../../source/contributor-tutorial-contribute-on-github.rst:50 msgid "You can then go into the repository folder:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:57 +#: ../../source/contributor-tutorial-contribute-on-github.rst:56 msgid "" "And here we will need to add an origin to our repository. The origin is " "the \\ of the remote fork repository. To obtain it, we can do as " @@ -1311,27 +1309,27 @@ msgid "" "account and copying the link." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:62 +#: ../../source/contributor-tutorial-contribute-on-github.rst:61 msgid "" "Once the \\ is copied, we can type the following command in our " "terminal:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:91 +#: ../../source/contributor-tutorial-contribute-on-github.rst:90 msgid "**Add upstream**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:70 +#: ../../source/contributor-tutorial-contribute-on-github.rst:69 msgid "" "Now we will add an upstream address to our repository. Still in the same " -"directroy, we must run the following command:" +"directory, we must run the following command:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:77 +#: ../../source/contributor-tutorial-contribute-on-github.rst:76 msgid "The following diagram visually explains what we did in the previous steps:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:81 +#: ../../source/contributor-tutorial-contribute-on-github.rst:80 msgid "" "The upstream is the GitHub remote address of the parent repository (in " "this case Flower), i.e. the one we eventually want to contribute to and " @@ -1340,169 +1338,169 @@ msgid "" "in our own account." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:85 +#: ../../source/contributor-tutorial-contribute-on-github.rst:84 msgid "" "To make sure our local version of the fork is up-to-date with the latest " "changes from the Flower repository, we can execute the following command:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:94 +#: ../../source/contributor-tutorial-contribute-on-github.rst:93 msgid "Setting up the coding environment" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:96 +#: ../../source/contributor-tutorial-contribute-on-github.rst:95 msgid "" -"This can be achieved by following this `getting started guide for " -"contributors`_ (note that you won't need to clone the repository). 
Once " -"you are able to write code and test it, you can finally start making " -"changes!" +"This can be achieved by following this :doc:`getting started guide for " +"contributors ` (note " +"that you won't need to clone the repository). Once you are able to write " +"code and test it, you can finally start making changes!" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:101 +#: ../../source/contributor-tutorial-contribute-on-github.rst:100 msgid "Making changes" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:103 +#: ../../source/contributor-tutorial-contribute-on-github.rst:102 msgid "" "Before making any changes make sure you are up-to-date with your " "repository:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:109 +#: ../../source/contributor-tutorial-contribute-on-github.rst:108 msgid "And with Flower's repository:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:123 +#: ../../source/contributor-tutorial-contribute-on-github.rst:122 msgid "**Create a new branch**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:116 +#: ../../source/contributor-tutorial-contribute-on-github.rst:115 msgid "" "To make the history cleaner and easier to work with, it is good practice " "to create a new branch for each feature/project that needs to be " "implemented." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:119 +#: ../../source/contributor-tutorial-contribute-on-github.rst:118 msgid "" "To do so, just run the following command inside the repository's " "directory:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:126 +#: ../../source/contributor-tutorial-contribute-on-github.rst:125 msgid "**Make changes**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:126 +#: ../../source/contributor-tutorial-contribute-on-github.rst:125 msgid "Write great code and create wonderful changes using your favorite editor!" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:139 +#: ../../source/contributor-tutorial-contribute-on-github.rst:138 msgid "**Test and format your code**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:129 +#: ../../source/contributor-tutorial-contribute-on-github.rst:128 msgid "" "Don't forget to test and format your code! Otherwise your code won't be " "able to be merged into the Flower repository. This is done so the " "codebase stays consistent and easy to understand." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:132 +#: ../../source/contributor-tutorial-contribute-on-github.rst:131 msgid "To do so, we have written a few scripts that you can execute:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:151 +#: ../../source/contributor-tutorial-contribute-on-github.rst:150 msgid "**Stage changes**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:142 +#: ../../source/contributor-tutorial-contribute-on-github.rst:141 msgid "" "Before creating a commit that will update your history, you must specify " "to Git which files it needs to take into account." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:144 +#: ../../source/contributor-tutorial-contribute-on-github.rst:143 msgid "This can be done with:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:150 +#: ../../source/contributor-tutorial-contribute-on-github.rst:149 msgid "" "To check which files have been modified compared to the last version " "(last commit) and to see which files are staged for commit, you can use " "the :code:`git status` command." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:161 +#: ../../source/contributor-tutorial-contribute-on-github.rst:160 msgid "**Commit changes**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:154 +#: ../../source/contributor-tutorial-contribute-on-github.rst:153 msgid "" "Once you have added all the files you wanted to commit using :code:`git " "add`, you can finally create your commit using this command:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:160 +#: ../../source/contributor-tutorial-contribute-on-github.rst:159 msgid "" "The \\ is there to explain to others what the commit " "does. It should be written in an imperative style and be concise. An " "example would be :code:`git commit -m \"Add images to README\"`." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:172 +#: ../../source/contributor-tutorial-contribute-on-github.rst:171 msgid "**Push the changes to the fork**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:164 +#: ../../source/contributor-tutorial-contribute-on-github.rst:163 msgid "" "Once we have committed our changes, we have effectively updated our local" " history, but GitHub has no way of knowing this unless we push our " "changes to our origin's remote address:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:171 +#: ../../source/contributor-tutorial-contribute-on-github.rst:170 msgid "" "Once this is done, you will see on the GitHub that your forked repo was " "updated with the changes you have made." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:175 +#: ../../source/contributor-tutorial-contribute-on-github.rst:174 msgid "Creating and merging a pull request (PR)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:206 +#: ../../source/contributor-tutorial-contribute-on-github.rst:205 msgid "**Create the PR**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:178 +#: ../../source/contributor-tutorial-contribute-on-github.rst:177 msgid "" "Once you have pushed changes, on the GitHub webpage of your repository " "you should see the following message:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:182 +#: ../../source/contributor-tutorial-contribute-on-github.rst:181 msgid "Otherwise you can always find this option in the ``Branches`` page." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:184 +#: ../../source/contributor-tutorial-contribute-on-github.rst:183 msgid "" "Once you click the ``Compare & pull request`` button, you should see " "something similar to this:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:188 +#: ../../source/contributor-tutorial-contribute-on-github.rst:187 msgid "At the top you have an explanation of which branch will be merged where:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:192 +#: ../../source/contributor-tutorial-contribute-on-github.rst:191 msgid "" "In this example you can see that the request is to merge the branch " "``doc-fixes`` from my forked repository to branch ``main`` from the " "Flower repository." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:194 +#: ../../source/contributor-tutorial-contribute-on-github.rst:193 msgid "" "The input box in the middle is there for you to describe what your PR " "does and to link it to existing issues. We have placed comments (that " @@ -1510,7 +1508,7 @@ msgid "" "process." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:197 +#: ../../source/contributor-tutorial-contribute-on-github.rst:196 msgid "" "It is important to follow the instructions described in comments. For " "instance, in order to not break how our changelog system works, you " @@ -1519,163 +1517,163 @@ msgid "" ":ref:`changelogentry` appendix." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:201 +#: ../../source/contributor-tutorial-contribute-on-github.rst:200 msgid "" "At the bottom you will find the button to open the PR. This will notify " "reviewers that a new PR has been opened and that they should look over it" " to merge or to request changes." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:204 +#: ../../source/contributor-tutorial-contribute-on-github.rst:203 msgid "" "If your PR is not yet ready for review, and you don't want to notify " "anyone, you have the option to create a draft pull request:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:209 +#: ../../source/contributor-tutorial-contribute-on-github.rst:208 msgid "**Making new changes**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:209 +#: ../../source/contributor-tutorial-contribute-on-github.rst:208 msgid "" "Once the PR has been opened (as draft or not), you can still push new " "commits to it the same way we did before, by making changes to the branch" " associated with the PR." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:231 +#: ../../source/contributor-tutorial-contribute-on-github.rst:230 msgid "**Review the PR**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:212 +#: ../../source/contributor-tutorial-contribute-on-github.rst:211 msgid "" "Once the PR has been opened or once the draft PR has been marked as " "ready, a review from code owners will be automatically requested:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:216 +#: ../../source/contributor-tutorial-contribute-on-github.rst:215 msgid "" "Code owners will then look into the code, ask questions, request changes " "or validate the PR." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:218 +#: ../../source/contributor-tutorial-contribute-on-github.rst:217 msgid "Merging will be blocked if there are ongoing requested changes." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:222 +#: ../../source/contributor-tutorial-contribute-on-github.rst:221 msgid "" "To resolve them, just push the necessary changes to the branch associated" " with the PR:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:226 +#: ../../source/contributor-tutorial-contribute-on-github.rst:225 msgid "And resolve the conversation:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:230 +#: ../../source/contributor-tutorial-contribute-on-github.rst:229 msgid "" "Once all the conversations have been resolved, you can re-request a " "review." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:251 +#: ../../source/contributor-tutorial-contribute-on-github.rst:250 msgid "**Once the PR is merged**" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:234 +#: ../../source/contributor-tutorial-contribute-on-github.rst:233 msgid "" "If all the automatic tests have passed and reviewers have no more changes" " to request, they can approve the PR and merge it." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:238 +#: ../../source/contributor-tutorial-contribute-on-github.rst:237 msgid "" "Once it is merged, you can delete the branch on GitHub (a button should " "appear to do so) and also delete it locally by doing:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:245 +#: ../../source/contributor-tutorial-contribute-on-github.rst:244 msgid "Then you should update your forked repository by doing:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:254 +#: ../../source/contributor-tutorial-contribute-on-github.rst:253 msgid "Example of first contribution" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:257 +#: ../../source/contributor-tutorial-contribute-on-github.rst:256 msgid "Problem" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:259 +#: ../../source/contributor-tutorial-contribute-on-github.rst:258 msgid "" -"For our documentation, we’ve started to use the `Diàtaxis framework " +"For our documentation, we've started to use the `Diàtaxis framework " "`_." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:261 +#: ../../source/contributor-tutorial-contribute-on-github.rst:260 msgid "" -"Our “How to” guides should have titles that continue the sencence “How to" -" …”, for example, “How to upgrade to Flower 1.0”." +"Our \"How to\" guides should have titles that continue the sentence \"How" +" to …\", for example, \"How to upgrade to Flower 1.0\"." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:263 +#: ../../source/contributor-tutorial-contribute-on-github.rst:262 msgid "" "Most of our guides do not follow this new format yet, and changing their " "title is (unfortunately) more involved than one might think." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:265 +#: ../../source/contributor-tutorial-contribute-on-github.rst:264 msgid "" -"This issue is about changing the title of a doc from present continious " +"This issue is about changing the title of a doc from present continuous " "to present simple." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:267 +#: ../../source/contributor-tutorial-contribute-on-github.rst:266 msgid "" -"Let's take the example of “Saving Progress” which we changed to “Save " -"Progress”. Does this pass our check?" +"Let's take the example of \"Saving Progress\" which we changed to \"Save " +"Progress\". Does this pass our check?" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:269 -msgid "Before: ”How to saving progress” ❌" +#: ../../source/contributor-tutorial-contribute-on-github.rst:268 +msgid "Before: \"How to saving progress\" ❌" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:271 -msgid "After: ”How to save progress” ✅" +#: ../../source/contributor-tutorial-contribute-on-github.rst:270 +msgid "After: \"How to save progress\" ✅" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:274 +#: ../../source/contributor-tutorial-contribute-on-github.rst:273 msgid "Solution" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:276 +#: ../../source/contributor-tutorial-contribute-on-github.rst:275 msgid "" -"This is a tiny change, but it’ll allow us to test your end-to-end setup. " -"After cloning and setting up the Flower repo, here’s what you should do:" +"This is a tiny change, but it'll allow us to test your end-to-end setup. " +"After cloning and setting up the Flower repo, here's what you should do:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:278 +#: ../../source/contributor-tutorial-contribute-on-github.rst:277 msgid "Find the source file in ``doc/source``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:279 +#: ../../source/contributor-tutorial-contribute-on-github.rst:278 msgid "" "Make the change in the ``.rst`` file (beware, the dashes under the title " "should be the same length as the title itself)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:280 +#: ../../source/contributor-tutorial-contribute-on-github.rst:279 msgid "" -"Build the docs and check the result: ``_" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:283 +#: ../../source/contributor-tutorial-contribute-on-github.rst:282 msgid "Rename file" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:285 +#: ../../source/contributor-tutorial-contribute-on-github.rst:284 msgid "" "You might have noticed that the file name still reflects the old wording." " If we just change the file, then we break all existing links to it - it " @@ -1683,77 +1681,77 @@ msgid "" "engine ranking." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:288 -msgid "Here’s how to change the file name:" +#: ../../source/contributor-tutorial-contribute-on-github.rst:287 +msgid "Here's how to change the file name:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:290 +#: ../../source/contributor-tutorial-contribute-on-github.rst:289 msgid "Change the file name to ``save-progress.rst``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:291 +#: ../../source/contributor-tutorial-contribute-on-github.rst:290 msgid "Add a redirect rule to ``doc/source/conf.py``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:293 +#: ../../source/contributor-tutorial-contribute-on-github.rst:292 msgid "" "This will cause a redirect from ``saving-progress.html`` to ``save-" "progress.html``, old links will continue to work." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:296 +#: ../../source/contributor-tutorial-contribute-on-github.rst:295 msgid "Apply changes in the index file" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:298 +#: ../../source/contributor-tutorial-contribute-on-github.rst:297 msgid "" "For the lateral navigation bar to work properly, it is very important to " "update the ``index.rst`` file as well. This is where we define the whole " "arborescence of the navbar." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:301 +#: ../../source/contributor-tutorial-contribute-on-github.rst:300 msgid "Find and modify the file name in ``index.rst``" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:304 +#: ../../source/contributor-tutorial-contribute-on-github.rst:303 msgid "Open PR" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:306 +#: ../../source/contributor-tutorial-contribute-on-github.rst:305 msgid "" -"Commit the changes (commit messages are always imperative: “Do " -"something”, in this case “Change …”)" +"Commit the changes (commit messages are always imperative: \"Do " +"something\", in this case \"Change …\")" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:307 +#: ../../source/contributor-tutorial-contribute-on-github.rst:306 msgid "Push the changes to your fork" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:308 +#: ../../source/contributor-tutorial-contribute-on-github.rst:307 msgid "Open a PR (as shown above)" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:309 +#: ../../source/contributor-tutorial-contribute-on-github.rst:308 msgid "Wait for it to be approved!" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:310 +#: ../../source/contributor-tutorial-contribute-on-github.rst:309 msgid "Congrats! 🥳 You're now officially a Flower contributor!" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:314 +#: ../../source/contributor-tutorial-contribute-on-github.rst:313 msgid "How to write a good PR title" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:316 +#: ../../source/contributor-tutorial-contribute-on-github.rst:315 msgid "" "A well-crafted PR title helps team members quickly understand the purpose" " and scope of the changes being proposed. 
Here's a guide to help you " "write a good GitHub PR title:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:318 +#: ../../source/contributor-tutorial-contribute-on-github.rst:317 msgid "" "1. Be Clear and Concise: Provide a clear summary of the changes in a " "concise manner. 1. Use Actionable Verbs: Start with verbs like \"Add,\" " @@ -1763,62 +1761,62 @@ msgid "" "Capitalization and Punctuation: Follow grammar rules for clarity." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:324 +#: ../../source/contributor-tutorial-contribute-on-github.rst:323 msgid "" "Let's start with a few examples for titles that should be avoided because" " they do not provide meaningful information:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:326 +#: ../../source/contributor-tutorial-contribute-on-github.rst:325 msgid "Implement Algorithm" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:327 +#: ../../source/contributor-tutorial-contribute-on-github.rst:326 msgid "Database" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:328 +#: ../../source/contributor-tutorial-contribute-on-github.rst:327 msgid "Add my_new_file.py to codebase" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:329 +#: ../../source/contributor-tutorial-contribute-on-github.rst:328 msgid "Improve code in module" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:330 +#: ../../source/contributor-tutorial-contribute-on-github.rst:329 msgid "Change SomeModule" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:332 +#: ../../source/contributor-tutorial-contribute-on-github.rst:331 msgid "" "Here are a few positive examples which provide helpful information " "without repeating how they do it, as that is already visible in the " "\"Files changed\" section of the PR:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:334 +#: ../../source/contributor-tutorial-contribute-on-github.rst:333 msgid "Update docs banner to mention Flower Summit 2023" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:335 +#: ../../source/contributor-tutorial-contribute-on-github.rst:334 msgid "Remove unnecessary XGBoost dependency" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:336 +#: ../../source/contributor-tutorial-contribute-on-github.rst:335 msgid "Remove redundant attributes in strategies subclassing FedAvg" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:337 +#: ../../source/contributor-tutorial-contribute-on-github.rst:336 msgid "Add CI job to deploy the staging system when the ``main`` branch changes" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:338 +#: ../../source/contributor-tutorial-contribute-on-github.rst:337 msgid "" "Add new amazing library which will be used to improve the simulation " "engine" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:342 +#: ../../source/contributor-tutorial-contribute-on-github.rst:341 #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:548 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:946 #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:727 @@ -1827,150 +1825,150 @@ msgstr "" msgid "Next steps" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:344 +#: 
../../source/contributor-tutorial-contribute-on-github.rst:343 msgid "" "Once you have made your first PR, and want to contribute more, be sure to" " check out the following :" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:346 +#: ../../source/contributor-tutorial-contribute-on-github.rst:345 msgid "" -"`Good first contributions `_, where you should particularly look " -"into the :code:`baselines` contributions." +":doc:`Good first contributions `, where you should particularly look into the " +":code:`baselines` contributions." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:350 +#: ../../source/contributor-tutorial-contribute-on-github.rst:349 #: ../../source/fed/0000-20200102-fed-template.md:60 msgid "Appendix" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:355 +#: ../../source/contributor-tutorial-contribute-on-github.rst:354 msgid "Changelog entry" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:357 +#: ../../source/contributor-tutorial-contribute-on-github.rst:356 msgid "" "When opening a new PR, inside its description, there should be a " "``Changelog entry`` header." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:359 +#: ../../source/contributor-tutorial-contribute-on-github.rst:358 msgid "" "Above this header you should see the following comment that explains how " "to write your changelog entry:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:361 +#: ../../source/contributor-tutorial-contribute-on-github.rst:360 msgid "" "Inside the following 'Changelog entry' section, you should put the " "description of your changes that will be added to the changelog alongside" " your PR title." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:364 +#: ../../source/contributor-tutorial-contribute-on-github.rst:363 msgid "" -"If the section is completely empty (without any token) or non-existant, " +"If the section is completely empty (without any token) or non-existent, " "the changelog will just contain the title of the PR for the changelog " "entry, without any description." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:367 +#: ../../source/contributor-tutorial-contribute-on-github.rst:366 msgid "" "If the section contains some text other than tokens, it will use it to " "add a description to the change." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:369 +#: ../../source/contributor-tutorial-contribute-on-github.rst:368 msgid "" "If the section contains one of the following tokens it will ignore any " "other text and put the PR under the corresponding section of the " "changelog:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:371 +#: ../../source/contributor-tutorial-contribute-on-github.rst:370 msgid " is for classifying a PR as a general improvement." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:373 +#: ../../source/contributor-tutorial-contribute-on-github.rst:372 msgid " is to not add the PR to the changelog" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:375 +#: ../../source/contributor-tutorial-contribute-on-github.rst:374 msgid " is to add a general baselines change to the PR" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:377 +#: ../../source/contributor-tutorial-contribute-on-github.rst:376 msgid " is to add a general examples change to the PR" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:379 +#: ../../source/contributor-tutorial-contribute-on-github.rst:378 msgid " is to add a general sdk change to the PR" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:381 +#: ../../source/contributor-tutorial-contribute-on-github.rst:380 msgid " is to add a general simulations change to the PR" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:383 +#: ../../source/contributor-tutorial-contribute-on-github.rst:382 msgid "Note that only one token should be used." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:385 +#: ../../source/contributor-tutorial-contribute-on-github.rst:384 msgid "" "Its content must have a specific format. We will break down what each " "possibility does:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:387 +#: ../../source/contributor-tutorial-contribute-on-github.rst:386 msgid "" "If the ``### Changelog entry`` section contains nothing or doesn't exist," " the following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:391 +#: ../../source/contributor-tutorial-contribute-on-github.rst:390 msgid "" "If the ``### Changelog entry`` section contains a description (and no " "token), the following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:397 +#: ../../source/contributor-tutorial-contribute-on-github.rst:396 msgid "" "If the ``### Changelog entry`` section contains ````, nothing will " "change in the changelog." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:399 +#: ../../source/contributor-tutorial-contribute-on-github.rst:398 msgid "" "If the ``### Changelog entry`` section contains ````, the " "following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:403 +#: ../../source/contributor-tutorial-contribute-on-github.rst:402 msgid "" "If the ``### Changelog entry`` section contains ````, the " "following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:407 +#: ../../source/contributor-tutorial-contribute-on-github.rst:406 msgid "" "If the ``### Changelog entry`` section contains ````, the " "following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:411 +#: ../../source/contributor-tutorial-contribute-on-github.rst:410 msgid "" "If the ``### Changelog entry`` section contains ````, the following " "text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:415 +#: ../../source/contributor-tutorial-contribute-on-github.rst:414 msgid "" "If the ``### Changelog entry`` section contains ````, the " "following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:419 +#: ../../source/contributor-tutorial-contribute-on-github.rst:418 msgid "" "Note that only one token must be provided, otherwise, only the first " "action (in the order listed above), will be performed." @@ -2004,7 +2002,7 @@ msgstr "" msgid "" "Flower uses :code:`pyproject.toml` to manage dependencies and configure " "development tools (the ones which support it). Poetry is a build tool " -"which supports `PEP 517 `_." +"which supports `PEP 517 `_." msgstr "" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:18 @@ -2172,9 +2170,9 @@ msgid "" "`_, a federated training strategy " "designed for non-iid data. We are using PyTorch to train a Convolutional " "Neural Network(with Batch Normalization layers) on the CIFAR-10 dataset. " -"When applying FedBN, only few changes needed compared to `Example: " -"PyTorch - From Centralized To Federated `_." +"When applying FedBN, only few changes needed compared to :doc:`Example: " +"PyTorch - From Centralized To Federated `." msgstr "" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:9 @@ -2184,10 +2182,10 @@ msgstr "" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:10 msgid "" -"All files are revised based on `Example: PyTorch - From Centralized To " -"Federated `_. The only thing to do is modifying the file called " -":code:`cifar.py`, revised part is shown below:" +"All files are revised based on :doc:`Example: PyTorch - From Centralized " +"To Federated `. The only " +"thing to do is modifying the file called :code:`cifar.py`, revised part " +"is shown below:" msgstr "" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:13 @@ -2205,8 +2203,8 @@ msgstr "" msgid "" "So far this should all look fairly familiar if you've used PyTorch " "before. Let's take the next step and use what we've built to create a " -"federated learning system within FedBN, the sytstem consists of one " -"server and two clients." +"federated learning system within FedBN, the system consists of one server" +" and two clients." 
msgstr "" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:51 @@ -2216,13 +2214,12 @@ msgstr "" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:53 msgid "" -"If you have read `Example: PyTorch - From Centralized To Federated " -"`_, the following parts are easy to follow, onyl " -":code:`get_parameters` and :code:`set_parameters` function in " -":code:`client.py` needed to revise. If not, please read the `Example: " -"PyTorch - From Centralized To Federated `_. first." +"If you have read :doc:`Example: PyTorch - From Centralized To Federated " +"`, the following parts are" +" easy to follow, only :code:`get_parameters` and :code:`set_parameters` " +"function in :code:`client.py` needed to revise. If not, please read the " +":doc:`Example: PyTorch - From Centralized To Federated `. first." msgstr "" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:56 @@ -2730,8 +2727,8 @@ msgid "" "Implementing a Flower *client* basically means implementing a subclass of" " either :code:`flwr.client.Client` or :code:`flwr.client.NumPyClient`. " "Our implementation will be based on :code:`flwr.client.NumPyClient` and " -"we'll call it :code:`MNISTClient`. :code:`NumPyClient` is slightly easier " -"to implement than :code:`Client` if you use a framework with good NumPy " +"we'll call it :code:`MNISTClient`. :code:`NumPyClient` is slightly easier" +" to implement than :code:`Client` if you use a framework with good NumPy " "interoperability (like PyTorch or MXNet) because it avoids some of the " "boilerplate that would otherwise be necessary. :code:`MNISTClient` needs " "to implement four methods, two methods for getting/setting model " @@ -2911,8 +2908,8 @@ msgid "" "Implementing a Flower *client* basically means implementing a subclass of" " either :code:`flwr.client.Client` or :code:`flwr.client.NumPyClient`. " "Our implementation will be based on :code:`flwr.client.NumPyClient` and " -"we'll call it :code:`CifarClient`. :code:`NumPyClient` is slightly easier " -"to implement than :code:`Client` if you use a framework with good NumPy " +"we'll call it :code:`CifarClient`. :code:`NumPyClient` is slightly easier" +" to implement than :code:`Client` if you use a framework with good NumPy " "interoperability (like PyTorch or TensorFlow/Keras) because it avoids " "some of the boilerplate that would otherwise be necessary. " ":code:`CifarClient` needs to implement four methods, two methods for " @@ -3061,9 +3058,10 @@ msgid "" "We can go a bit deeper and see that :code:`server.py` simply launches a " "server that will coordinate three rounds of training. Flower Servers are " "very customizable, but for simple workloads, we can start a server using " -"the :ref:`start_server ` function and " -"leave all the configuration possibilities at their default values, as " -"seen below." +"the `start_server `_ function " +"and leave all the configuration possibilities at their default values, as" +" seen below." msgstr "" #: ../../source/example-walkthrough-pytorch-mnist.rst:89 @@ -3214,317 +3212,290 @@ msgid "You are ready now. Enjoy learning in a federated way!" 
msgstr "" #: ../../source/explanation-differential-privacy.rst:2 -msgid "Differential privacy" +#: ../../source/explanation-differential-privacy.rst:11 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:303 +msgid "Differential Privacy" msgstr "" -#: ../../source/explanation-differential-privacy.rst:4 +#: ../../source/explanation-differential-privacy.rst:3 msgid "" -"Flower provides differential privacy (DP) wrapper classes for the easy " -"integration of the central DP guarantees provided by DP-FedAvg into " -"training pipelines defined in any of the various ML frameworks that " -"Flower is compatible with." +"The information in datasets like healthcare, financial transactions, user" +" preferences, etc., is valuable and has the potential for scientific " +"breakthroughs and provides important business insights. However, such " +"data is also sensitive and there is a risk of compromising individual " +"privacy." msgstr "" -#: ../../source/explanation-differential-privacy.rst:7 +#: ../../source/explanation-differential-privacy.rst:6 msgid "" -"Please note that these components are still experimental; the correct " -"configuration of DP for a specific task is still an unsolved problem." +"Traditional methods like anonymization alone would not work because of " +"attacks like Re-identification and Data Linkage. That's where " +"differential privacy comes in. It provides the possibility of analyzing " +"data while ensuring the privacy of individuals." msgstr "" -#: ../../source/explanation-differential-privacy.rst:10 +#: ../../source/explanation-differential-privacy.rst:12 msgid "" -"The name DP-FedAvg is misleading since it can be applied on top of any FL" -" algorithm that conforms to the general structure prescribed by the " -"FedOpt family of algorithms." +"Imagine two datasets that are identical except for a single record (for " +"instance, Alice's data). Differential Privacy (DP) guarantees that any " +"analysis (M), like calculating the average income, will produce nearly " +"identical results for both datasets (O and O' would be similar). This " +"preserves group patterns while obscuring individual details, ensuring the" +" individual's information remains hidden in the crowd." msgstr "" -#: ../../source/explanation-differential-privacy.rst:13 -msgid "DP-FedAvg" +#: ../../source/explanation-differential-privacy.rst:-1 +msgid "DP Intro" msgstr "" -#: ../../source/explanation-differential-privacy.rst:15 +#: ../../source/explanation-differential-privacy.rst:22 msgid "" -"DP-FedAvg, originally proposed by McMahan et al. [mcmahan]_ and extended " -"by Andrew et al. [andrew]_, is essentially FedAvg with the following " -"modifications." +"One of the most commonly used mechanisms to achieve DP is adding enough " +"noise to the output of the analysis to mask the contribution of each " +"individual in the data while preserving the overall accuracy of the " +"analysis." msgstr "" -#: ../../source/explanation-differential-privacy.rst:17 -msgid "" -"**Clipping** : The influence of each client's update is bounded by " -"clipping it. This is achieved by enforcing a cap on the L2 norm of the " -"update, scaling it down if needed." +#: ../../source/explanation-differential-privacy.rst:25 +msgid "Formal Definition" msgstr "" -#: ../../source/explanation-differential-privacy.rst:18 +#: ../../source/explanation-differential-privacy.rst:26 msgid "" -"**Noising** : Gaussian noise, calibrated to the clipping threshold, is " -"added to the average computed at the server." 
+"Differential Privacy (DP) provides statistical guarantees against the " +"information an adversary can infer through the output of a randomized " +"algorithm. It provides an unconditional upper bound on the influence of a" +" single individual on the output of the algorithm by adding noise [1]. A " +"randomized mechanism M provides (:math:`\\epsilon`, " +":math:`\\delta`)-differential privacy if for any two neighboring " +"databases, D :sub:`1` and D :sub:`2`, that differ in only a single " +"record, and for all possible outputs S ⊆ Range(A):" msgstr "" -#: ../../source/explanation-differential-privacy.rst:20 +#: ../../source/explanation-differential-privacy.rst:32 msgid "" -"The distribution of the update norm has been shown to vary from task-to-" -"task and to evolve as training progresses. This variability is crucial in" -" understanding its impact on differential privacy guarantees, emphasizing" -" the need for an adaptive approach [andrew]_ that continuously adjusts " -"the clipping threshold to track a prespecified quantile of the update " -"norm distribution." -msgstr "" - -#: ../../source/explanation-differential-privacy.rst:23 -msgid "Simplifying Assumptions" +"\\small\n" +"P[M(D_{1} \\in A)] \\leq e^{\\delta} P[M(D_{2} \\in A)] + \\delta" msgstr "" -#: ../../source/explanation-differential-privacy.rst:25 +#: ../../source/explanation-differential-privacy.rst:38 msgid "" -"We make (and attempt to enforce) a number of assumptions that must be " -"satisfied to ensure that the training process actually realizes the " -":math:`(\\epsilon, \\delta)` guarantees the user has in mind when " -"configuring the setup." +"The :math:`\\epsilon` parameter, also known as the privacy budget, is a " +"metric of privacy loss. It also controls the privacy-utility trade-off; " +"lower :math:`\\epsilon` values indicate higher levels of privacy but are " +"likely to reduce utility as well. The :math:`\\delta` parameter accounts " +"for a small probability on which the upper bound :math:`\\epsilon` does " +"not hold. The amount of noise needed to achieve differential privacy is " +"proportional to the sensitivity of the output, which measures the maximum" +" change in the output due to the inclusion or removal of a single record." msgstr "" -#: ../../source/explanation-differential-privacy.rst:27 -msgid "" -"**Fixed-size subsampling** :Fixed-size subsamples of the clients must be " -"taken at each round, as opposed to variable-sized Poisson subsamples." +#: ../../source/explanation-differential-privacy.rst:45 +msgid "Differential Privacy in Machine Learning" msgstr "" -#: ../../source/explanation-differential-privacy.rst:28 +#: ../../source/explanation-differential-privacy.rst:46 msgid "" -"**Unweighted averaging** : The contributions from all the clients must " -"weighted equally in the aggregate to eliminate the requirement for the " -"server to know in advance the sum of the weights of all clients available" -" for selection." +"DP can be utilized in machine learning to preserve the privacy of the " +"training data. Differentially private machine learning algorithms are " +"designed in a way to prevent the algorithm to learn any specific " +"information about any individual data points and subsequently prevent the" +" model from revealing sensitive information. Depending on the stage at " +"which noise is introduced, various methods exist for applying DP to " +"machine learning algorithms. 
One approach involves adding noise to the " +"training data (either to the features or labels), while another method " +"entails injecting noise into the gradients of the loss function during " +"model training. Additionally, such noise can be incorporated into the " +"model's output." msgstr "" -#: ../../source/explanation-differential-privacy.rst:29 -msgid "" -"**No client failures** : The set of available clients must stay constant " -"across all rounds of training. In other words, clients cannot drop out or" -" fail." +#: ../../source/explanation-differential-privacy.rst:53 +msgid "Differential Privacy in Federated Learning" msgstr "" -#: ../../source/explanation-differential-privacy.rst:31 +#: ../../source/explanation-differential-privacy.rst:54 msgid "" -"The first two are useful for eliminating a multitude of complications " -"associated with calibrating the noise to the clipping threshold, while " -"the third one is required to comply with the assumptions of the privacy " -"analysis." +"Federated learning is a data minimization approach that allows multiple " +"parties to collaboratively train a model without sharing their raw data. " +"However, federated learning also introduces new privacy challenges. The " +"model updates between parties and the central server can leak information" +" about the local data. These leaks can be exploited by attacks such as " +"membership inference and property inference attacks, or model inversion " +"attacks." msgstr "" -#: ../../source/explanation-differential-privacy.rst:34 +#: ../../source/explanation-differential-privacy.rst:58 msgid "" -"These restrictions are in line with constraints imposed by Andrew et al. " -"[andrew]_." +"DP can play a crucial role in federated learning to provide privacy for " +"the clients' data." msgstr "" -#: ../../source/explanation-differential-privacy.rst:37 -msgid "Customizable Responsibility for Noise injection" +#: ../../source/explanation-differential-privacy.rst:60 +msgid "" +"Depending on the granularity of privacy provision or the location of " +"noise addition, different forms of DP exist in federated learning. In " +"this explainer, we focus on two approaches of DP utilization in federated" +" learning based on where the noise is added: at the server (also known as" +" the center) or at the client (also known as the local)." msgstr "" -#: ../../source/explanation-differential-privacy.rst:38 +#: ../../source/explanation-differential-privacy.rst:63 msgid "" -"In contrast to other implementations where the addition of noise is " -"performed at the server, you can configure the site of noise injection to" -" better match your threat model. We provide users with the flexibility to" -" set up the training such that each client independently adds a small " -"amount of noise to the clipped update, with the result that simply " -"aggregating the noisy updates is equivalent to the explicit addition of " -"noise to the non-noisy aggregate at the server." +"**Central Differential Privacy**: DP is applied by the server and the " +"goal is to prevent the aggregated model from leaking information about " +"each client's data." 
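The role of :math:`\epsilon`, :math:`\delta`, and the sensitivity can be illustrated with the standard Gaussian-mechanism calibration, the same formula quoted later in this text for local DP. This is a standalone sketch, not part of the Flower API, and the parameter values are arbitrary.

.. code-block:: python

    import numpy as np


    def gaussian_noise_scale(sensitivity: float, epsilon: float, delta: float) -> float:
        # sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon
        return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon


    # A smaller privacy budget (lower epsilon) means stronger privacy but more noise
    for eps in (8.0, 1.0, 0.1):
        print(eps, gaussian_noise_scale(sensitivity=1.0, epsilon=eps, delta=1e-5))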
msgstr "" -#: ../../source/explanation-differential-privacy.rst:41 +#: ../../source/explanation-differential-privacy.rst:65 msgid "" -"To be precise, if we let :math:`m` be the number of clients sampled each " -"round and :math:`\\sigma_\\Delta` be the scale of the total Gaussian " -"noise that needs to be added to the sum of the model updates, we can use " -"simple maths to show that this is equivalent to each client adding noise " -"with scale :math:`\\sigma_\\Delta/\\sqrt{m}`." +"**Local Differential Privacy**: DP is applied on the client side before " +"sending any information to the server and the goal is to prevent the " +"updates that are sent to the server from leaking any information about " +"the client's data." msgstr "" -#: ../../source/explanation-differential-privacy.rst:44 -msgid "Wrapper-based approach" +#: ../../source/explanation-differential-privacy.rst:-1 +#: ../../source/explanation-differential-privacy.rst:68 +#: ../../source/how-to-use-differential-privacy.rst:11 +msgid "Central Differential Privacy" msgstr "" -#: ../../source/explanation-differential-privacy.rst:46 +#: ../../source/explanation-differential-privacy.rst:69 msgid "" -"Introducing DP to an existing workload can be thought of as adding an " -"extra layer of security around it. This inspired us to provide the " -"additional server and client-side logic needed to make the training " -"process differentially private as wrappers for instances of the " -":code:`Strategy` and :code:`NumPyClient` abstract classes respectively. " -"This wrapper-based approach has the advantage of being easily composable " -"with other wrappers that someone might contribute to the Flower library " -"in the future, e.g., for secure aggregation. Using Inheritance instead " -"can be tedious because that would require the creation of new sub- " -"classes every time a new class implementing :code:`Strategy` or " -":code:`NumPyClient` is defined." +"In this approach, which is also known as user-level DP, the central " +"server is responsible for adding noise to the globally aggregated " +"parameters. It should be noted that trust in the server is required." msgstr "" -#: ../../source/explanation-differential-privacy.rst:49 -msgid "Server-side logic" -msgstr "" - -#: ../../source/explanation-differential-privacy.rst:51 +#: ../../source/explanation-differential-privacy.rst:76 msgid "" -"The first version of our solution was to define a decorator whose " -"constructor accepted, among other things, a boolean-valued variable " -"indicating whether adaptive clipping was to be enabled or not. We quickly" -" realized that this would clutter its :code:`__init__()` function with " -"variables corresponding to hyperparameters of adaptive clipping that " -"would remain unused when it was disabled. A cleaner implementation could " -"be achieved by splitting the functionality into two decorators, " -":code:`DPFedAvgFixed` and :code:`DPFedAvgAdaptive`, with the latter sub- " -"classing the former. The constructors for both classes accept a boolean " -"parameter :code:`server_side_noising`, which, as the name suggests, " -"determines where noising is to be performed." +"While there are various ways to implement central DP in federated " +"learning, we concentrate on the algorithms proposed by [2] and [3]. The " +"overall approach is to clip the model updates sent by the clients and add" +" some amount of noise to the aggregated model. In each iteration, a " +"random set of clients is chosen with a specific probability for training." 
+" Each client performs local training on its own data. The update of each " +"client is then clipped by some value `S` (sensitivity `S`). This would " +"limit the impact of any individual client which is crucial for privacy " +"and often beneficial for robustness. A common approach to achieve this is" +" by restricting the `L2` norm of the clients' model updates, ensuring " +"that larger updates are scaled down to fit within the norm `S`." msgstr "" -#: ../../source/explanation-differential-privacy.rst:54 -#: ../../source/ref-api/flwr.server.strategy.DPFedAvgFixed.rst:2 -msgid "DPFedAvgFixed" +#: ../../source/explanation-differential-privacy.rst:-1 +msgid "clipping" msgstr "" -#: ../../source/explanation-differential-privacy.rst:56 +#: ../../source/explanation-differential-privacy.rst:89 msgid "" -"The server-side capabilities required for the original version of DP-" -"FedAvg, i.e., the one which performed fixed clipping, can be completely " -"captured with the help of wrapper logic for just the following two " -"methods of the :code:`Strategy` abstract class." +"Afterwards, the Gaussian mechanism is used to add noise in order to " +"distort the sum of all clients' updates. The amount of noise is scaled to" +" the sensitivity value to obtain a privacy guarantee. The Gaussian " +"mechanism is used with a noise sampled from `N (0, σ²)` where `σ = ( " +"noise_scale * S ) / (number of sampled clients)`." msgstr "" -#: ../../source/explanation-differential-privacy.rst:58 +#: ../../source/explanation-differential-privacy.rst:94 +msgid "Clipping" +msgstr "" + +#: ../../source/explanation-differential-privacy.rst:96 msgid "" -":code:`configure_fit()` : The config dictionary being sent by the wrapped" -" :code:`Strategy` to each client needs to be augmented with an additional" -" value equal to the clipping threshold (keyed under " -":code:`dpfedavg_clip_norm`) and, if :code:`server_side_noising=true`, " -"another one equal to the scale of the Gaussian noise that needs to be " -"added at the client (keyed under :code:`dpfedavg_noise_stddev`). This " -"entails *post*-processing of the results returned by the wrappee's " -"implementation of :code:`configure_fit()`." +"There are two forms of clipping commonly used in Central DP: Fixed " +"Clipping and Adaptive Clipping." msgstr "" -#: ../../source/explanation-differential-privacy.rst:59 +#: ../../source/explanation-differential-privacy.rst:98 msgid "" -":code:`aggregate_fit()`: We check whether any of the sampled clients " -"dropped out or failed to upload an update before the round timed out. In " -"that case, we need to abort the current round, discarding any successful " -"updates that were received, and move on to the next one. On the other " -"hand, if all clients responded successfully, we must force the averaging " -"of the updates to happen in an unweighted manner by intercepting the " -":code:`parameters` field of :code:`FitRes` for each received update and " -"setting it to 1. Furthermore, if :code:`server_side_noising=true`, each " -"update is perturbed with an amount of noise equal to what it would have " -"been subjected to had client-side noising being enabled. This entails " -"*pre*-processing of the arguments to this method before passing them on " -"to the wrappee's implementation of :code:`aggregate_fit()`." +"**Fixed Clipping** : A predefined fix threshold is set for the magnitude " +"of clients' updates. Any update exceeding this threshold is clipped back " +"to the threshold value." 
msgstr "" -#: ../../source/explanation-differential-privacy.rst:62 +#: ../../source/explanation-differential-privacy.rst:100 msgid "" -"We can't directly change the aggregation function of the wrapped strategy" -" to force it to add noise to the aggregate, hence we simulate client-side" -" noising to implement server-side noising." +"**Adaptive Clipping** : The clipping threshold dynamically adjusts based " +"on the observed update distribution [4]. It means that the clipping value" +" is tuned during the rounds with respect to the quantile of the update " +"norm distribution." msgstr "" -#: ../../source/explanation-differential-privacy.rst:64 +#: ../../source/explanation-differential-privacy.rst:102 msgid "" -"These changes have been put together into a class called " -":code:`DPFedAvgFixed`, whose constructor accepts the strategy being " -"decorated, the clipping threshold and the number of clients sampled every" -" round as compulsory arguments. The user is expected to specify the " -"clipping threshold since the order of magnitude of the update norms is " -"highly dependent on the model being trained and providing a default value" -" would be misleading. The number of clients sampled at every round is " -"required to calculate the amount of noise that must be added to each " -"individual update, either by the server or the clients." +"The choice between fixed and adaptive clipping depends on various factors" +" such as privacy requirements, data distribution, model complexity, and " +"others." msgstr "" -#: ../../source/explanation-differential-privacy.rst:67 -#: ../../source/ref-api/flwr.server.strategy.DPFedAvgAdaptive.rst:2 -msgid "DPFedAvgAdaptive" +#: ../../source/explanation-differential-privacy.rst:-1 +#: ../../source/explanation-differential-privacy.rst:105 +#: ../../source/how-to-use-differential-privacy.rst:96 +msgid "Local Differential Privacy" msgstr "" -#: ../../source/explanation-differential-privacy.rst:69 +#: ../../source/explanation-differential-privacy.rst:107 msgid "" -"The additional functionality required to facilitate adaptive clipping has" -" been provided in :code:`DPFedAvgAdaptive`, a subclass of " -":code:`DPFedAvgFixed`. It overrides the above-mentioned methods to do the" -" following." +"In this approach, each client is responsible for performing DP. Local DP " +"avoids the need for a fully trusted aggregator, but it should be noted " +"that local DP leads to a decrease in accuracy but better privacy in " +"comparison to central DP." msgstr "" -#: ../../source/explanation-differential-privacy.rst:71 -msgid "" -":code:`configure_fit()` : It intercepts the config dict returned by " -":code:`super.configure_fit()` to add the key-value pair " -":code:`dpfedavg_adaptive_clip_enabled:True` to it, which the client " -"interprets as an instruction to include an indicator bit (1 if update " -"norm <= clipping threshold, 0 otherwise) in the results returned by it." +#: ../../source/explanation-differential-privacy.rst:116 +msgid "In this explainer, we focus on two forms of achieving Local DP:" msgstr "" -#: ../../source/explanation-differential-privacy.rst:73 +#: ../../source/explanation-differential-privacy.rst:118 msgid "" -":code:`aggregate_fit()` : It follows a call to " -":code:`super.aggregate_fit()` with one to :code:`__update_clip_norm__()`," -" a procedure which adjusts the clipping threshold on the basis of the " -"indicator bits received from the sampled clients." +"Each client adds noise to the local updates before sending them to the " +"server. 
To achieve (:math:`\\epsilon`, :math:`\\delta`)-DP, considering " +"the sensitivity of the local model to be ∆, Gaussian noise is applied " +"with a noise scale of σ where:" msgstr "" -#: ../../source/explanation-differential-privacy.rst:77 -msgid "Client-side logic" +#: ../../source/explanation-differential-privacy.rst:120 +msgid "" +"\\small\n" +"\\frac{∆ \\times \\sqrt{2 \\times " +"\\log\\left(\\frac{1.25}{\\delta}\\right)}}{\\epsilon}\n" +"\n" msgstr "" -#: ../../source/explanation-differential-privacy.rst:79 +#: ../../source/explanation-differential-privacy.rst:125 msgid "" -"The client-side capabilities required can be completely captured through " -"wrapper logic for just the :code:`fit()` method of the " -":code:`NumPyClient` abstract class. To be precise, we need to *post-" -"process* the update computed by the wrapped client to clip it, if " -"necessary, to the threshold value supplied by the server as part of the " -"config dictionary. In addition to this, it may need to perform some extra" -" work if either (or both) of the following keys are also present in the " -"dict." +"Each client adds noise to the gradients of the model during the local " +"training (DP-SGD). More specifically, in this approach, gradients are " +"clipped and an amount of calibrated noise is injected into the gradients." msgstr "" -#: ../../source/explanation-differential-privacy.rst:81 +#: ../../source/explanation-differential-privacy.rst:128 msgid "" -":code:`dpfedavg_noise_stddev` : Generate and add the specified amount of " -"noise to the clipped update." +"Please note that these two approaches are providing privacy at different " +"levels." msgstr "" -#: ../../source/explanation-differential-privacy.rst:82 -msgid "" -":code:`dpfedavg_adaptive_clip_enabled` : Augment the metrics dict in the " -":code:`FitRes` object being returned to the server with an indicator bit," -" calculated as described earlier." +#: ../../source/explanation-differential-privacy.rst:131 +msgid "**References:**" msgstr "" -#: ../../source/explanation-differential-privacy.rst:86 -msgid "Performing the :math:`(\\epsilon, \\delta)` analysis" +#: ../../source/explanation-differential-privacy.rst:133 +msgid "[1] Dwork et al. The Algorithmic Foundations of Differential Privacy." msgstr "" -#: ../../source/explanation-differential-privacy.rst:88 +#: ../../source/explanation-differential-privacy.rst:135 msgid "" -"Assume you have trained for :math:`n` rounds with sampling fraction " -":math:`q` and noise multiplier :math:`z`. In order to calculate the " -":math:`\\epsilon` value this would result in for a particular " -":math:`\\delta`, the following script may be used." +"[2] McMahan et al. Learning Differentially Private Recurrent Language " +"Models." msgstr "" -#: ../../source/explanation-differential-privacy.rst:98 +#: ../../source/explanation-differential-privacy.rst:137 msgid "" -"McMahan et al. \"Learning Differentially Private Recurrent Language " -"Models.\" International Conference on Learning Representations (ICLR), " -"2017." +"[3] Geyer et al. Differentially Private Federated Learning: A Client " +"Level Perspective." msgstr "" -#: ../../source/explanation-differential-privacy.rst:100 -msgid "" -"Andrew, Galen, et al. \"Differentially Private Learning with Adaptive " -"Clipping.\" Advances in Neural Information Processing Systems (NeurIPS), " -"2021." +#: ../../source/explanation-differential-privacy.rst:139 +msgid "[4] Galen et al. Differentially Private Learning with Adaptive Clipping." 
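For the first local-DP variant above (noise added to the whole update before it leaves the client), a minimal sketch looks as follows. Again, this is an illustration rather than the Flower implementation, and the hyperparameter values are placeholders.

.. code-block:: python

    import numpy as np

    rng = np.random.default_rng(0)


    def local_dp_update(update, sensitivity=1.0, epsilon=1.0, delta=1e-5):
        # Clip so the client's contribution is bounded by the sensitivity
        norm = float(np.linalg.norm(update))
        clipped = update * min(1.0, sensitivity / max(norm, 1e-12))
        # sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon
        sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
        return clipped + rng.normal(0.0, sigma, size=clipped.shape)


    print(local_dp_update(rng.normal(size=10)))  # what the client would send to the server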
msgstr "" #: ../../source/explanation-federated-evaluation.rst:2 @@ -3947,6 +3918,7 @@ msgid "As a reference, this document follows the above structure." msgstr "" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:90 +#: ../../source/ref-api/flwr.common.Metadata.rst:2 msgid "Metadata" msgstr "" @@ -4259,13 +4231,12 @@ msgstr "" #: ../../source/how-to-configure-clients.rst:89 msgid "" "This can be achieved by customizing an existing strategy or by " -"`implementing a custom strategy from scratch " -"`_. " -"Here's a nonsensical example that customizes :code:`FedAvg` by adding a " -"custom ``\"hello\": \"world\"`` configuration key/value pair to the " -"config dict of a *single client* (only the first client in the list, the " -"other clients in this round to not receive this \"special\" config " -"value):" +":doc:`implementing a custom strategy from scratch `. Here's a nonsensical example that customizes :code:`FedAvg`" +" by adding a custom ``\"hello\": \"world\"`` configuration key/value pair" +" to the config dict of a *single client* (only the first client in the " +"list, the other clients in this round to not receive this \"special\" " +"config value):" msgstr "" #: ../../source/how-to-configure-logging.rst:2 @@ -4602,7 +4573,7 @@ msgid "" "More sophisticated implementations can use :code:`configure_fit` to " "implement custom client selection logic. A client will only participate " "in a round if the corresponding :code:`ClientProxy` is included in the " -"the list returned from :code:`configure_fit`." +"list returned from :code:`configure_fit`." msgstr "" #: ../../source/how-to-implement-strategies.rst:240 @@ -4673,7 +4644,7 @@ msgid "" "More sophisticated implementations can use :code:`configure_evaluate` to " "implement custom client selection logic. A client will only participate " "in a round if the corresponding :code:`ClientProxy` is included in the " -"the list returned from :code:`configure_evaluate`." +"list returned from :code:`configure_evaluate`." msgstr "" #: ../../source/how-to-implement-strategies.rst:287 @@ -4805,9 +4776,7 @@ msgid "Install via Docker" msgstr "" #: ../../source/how-to-install-flower.rst:60 -msgid "" -"`How to run Flower using Docker `_" +msgid ":doc:`How to run Flower using Docker `" msgstr "" #: ../../source/how-to-install-flower.rst:63 @@ -5069,14 +5038,12 @@ msgstr "" #: ../../source/how-to-monitor-simulation.rst:234 msgid "" -"Ray Dashboard: ``_" +"Ray Dashboard: ``_" msgstr "" #: ../../source/how-to-monitor-simulation.rst:236 -msgid "" -"Ray Metrics: ``_" +msgid "Ray Metrics: ``_" msgstr "" #: ../../source/how-to-run-flower-using-docker.rst:2 @@ -5954,7 +5921,8 @@ msgstr "" msgid "" "Remove \"placeholder\" methods from subclasses of ``Client`` or " "``NumPyClient``. If you, for example, use server-side evaluation, then " -"empty placeholder implementations of ``evaluate`` are no longer necessary." +"empty placeholder implementations of ``evaluate`` are no longer " +"necessary." msgstr "" #: ../../source/how-to-upgrade-to-flower-1.0.rst:85 @@ -6093,7 +6061,152 @@ msgid "" msgstr "" #: ../../source/how-to-use-built-in-mods.rst:89 -msgid "Enjoy building more robust and flexible ``ClientApp``s with mods!" +msgid "Enjoy building a more robust and flexible ``ClientApp`` with mods!" 
+msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:2 +msgid "Use Differential Privacy" +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:3 +msgid "" +"This guide explains how you can utilize differential privacy in the " +"Flower framework. If you are not yet familiar with differential privacy, " +"you can refer to :doc:`explanation-differential-privacy`." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:7 +msgid "" +"Differential Privacy in Flower is in a preview phase. If you plan to use " +"these features in a production environment with sensitive data, feel free" +" contact us to discuss your requirements and to receive guidance on how " +"to best use these features." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:12 +msgid "" +"This approach consists of two seprate phases: clipping of the updates and" +" adding noise to the aggregated model. For the clipping phase, Flower " +"framework has made it possible to decide whether to perform clipping on " +"the server side or the client side." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:15 +msgid "" +"**Server-side Clipping**: This approach has the advantage of the server " +"enforcing uniform clipping across all clients' updates and reducing the " +"communication overhead for clipping values. However, it also has the " +"disadvantage of increasing the computational load on the server due to " +"the need to perform the clipping operation for all clients." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:16 +msgid "" +"**Client-side Clipping**: This approach has the advantage of reducing the" +" computational overhead on the server. However, it also has the " +"disadvantage of lacking centralized control, as the server has less " +"control over the clipping process." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:21 +msgid "Server-side Clipping" +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:22 +msgid "" +"For central DP with server-side clipping, there are two :code:`Strategy` " +"classes that act as wrappers around the actual :code:`Strategy` instance " +"(for example, :code:`FedAvg`). The two wrapper classes are " +":code:`DifferentialPrivacyServerSideFixedClipping` and " +":code:`DifferentialPrivacyServerSideAdaptiveClipping` for fixed and " +"adaptive clipping." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:-1 +msgid "server side clipping" +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:31 +msgid "" +"The code sample below enables the :code:`FedAvg` strategy to use server-" +"side fixed clipping using the " +":code:`DifferentialPrivacyServerSideFixedClipping` wrapper class. The " +"same approach can be used with " +":code:`DifferentialPrivacyServerSideAdaptiveClipping` by adjusting the " +"corresponding input parameters." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:52 +msgid "Client-side Clipping" +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:53 +msgid "" +"For central DP with client-side clipping, the server sends the clipping " +"value to selected clients on each round. Clients can use existing Flower " +":code:`Mods` to perform the clipping. 
Two mods are available for fixed " +"and adaptive client-side clipping: :code:`fixedclipping_mod` and " +":code:`adaptiveclipping_mod` with corresponding server-side wrappers " +":code:`DifferentialPrivacyClientSideFixedClipping` and " +":code:`DifferentialPrivacyClientSideAdaptiveClipping`." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:-1 +msgid "client side clipping" +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:63 +msgid "" +"The code sample below enables the :code:`FedAvg` strategy to use " +"differential privacy with client-side fixed clipping using both the " +":code:`DifferentialPrivacyClientSideFixedClipping` wrapper class and, on " +"the client, :code:`fixedclipping_mod`:" +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:80 +msgid "" +"In addition to the server-side strategy wrapper, the :code:`ClientApp` " +"needs to configure the matching :code:`fixedclipping_mod` to perform the " +"client-side clipping:" +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:97 +msgid "" +"To utilize local differential privacy (DP) and add noise to the client " +"model parameters before transmitting them to the server in Flower, you " +"can use the `LocalDpMod`. The following hyperparameters need to be set: " +"clipping norm value, sensitivity, epsilon, and delta." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:-1 +msgid "local DP mod" +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:104 +msgid "Below is a code example that shows how to use :code:`LocalDpMod`:" +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:122 +msgid "" +"Please note that the order of mods, especially those that modify " +"parameters, is important when using multiple modifiers. Typically, " +"differential privacy (DP) modifiers should be the last to operate on " +"parameters." +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:125 +msgid "Local Training using Privacy Engines" +msgstr "" + +#: ../../source/how-to-use-differential-privacy.rst:126 +msgid "" +"For ensuring data instance-level privacy during local model training on " +"the client side, consider leveraging privacy engines such as Opacus and " +"TensorFlow Privacy. For examples of using Flower with these engines, " +"please refer to the Flower examples directory (`Opacus " +"`_, `Tensorflow" +" Privacy `_)." msgstr "" #: ../../source/how-to-use-strategies.rst:2 @@ -6211,11 +6324,11 @@ msgstr "" msgid "How-to guides" msgstr "" -#: ../../source/index.rst:97 +#: ../../source/index.rst:98 msgid "Legacy example guides" msgstr "" -#: ../../source/index.rst:108 ../../source/index.rst:112 +#: ../../source/index.rst:109 ../../source/index.rst:113 msgid "Explanations" msgstr "" @@ -6223,23 +6336,23 @@ msgstr "" msgid "API reference" msgstr "" -#: ../../source/index.rst:137 +#: ../../source/index.rst:138 msgid "Reference docs" msgstr "" -#: ../../source/index.rst:153 +#: ../../source/index.rst:154 msgid "Contributor tutorials" msgstr "" -#: ../../source/index.rst:160 +#: ../../source/index.rst:161 msgid "Contributor how-to guides" msgstr "" -#: ../../source/index.rst:173 +#: ../../source/index.rst:174 msgid "Contributor explanations" msgstr "" -#: ../../source/index.rst:179 +#: ../../source/index.rst:180 msgid "Contributor references" msgstr "" @@ -6323,33 +6436,33 @@ msgid "" "specific goal." 
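The wrapper classes and mods named in the strings above fit together roughly as sketched below. The constructor keyword names, the import path for the mods, and the :code:`client_fn`/:code:`to_client` plumbing are assumptions to be checked against the flwr API reference; the numeric values are placeholders.

.. code-block:: python

    from flwr.client import ClientApp, NumPyClient
    from flwr.client.mod import LocalDpMod, fixedclipping_mod  # import path assumed
    from flwr.server.strategy import DifferentialPrivacyClientSideFixedClipping, FedAvg


    class FlowerClient(NumPyClient):
        def fit(self, parameters, config):
            # ... local training; fixedclipping_mod clips the update afterwards ...
            return parameters, 1, {}


    def client_fn(cid: str):
        return FlowerClient().to_client()


    # Central DP with client-side fixed clipping: wrap the base strategy ...
    dp_strategy = DifferentialPrivacyClientSideFixedClipping(
        FedAvg(),               # the strategy being wrapped
        noise_multiplier=1.0,   # keyword names assumed
        clipping_norm=1.0,
        num_sampled_clients=10,
    )

    # ... and register the matching mod on the ClientApp
    app = ClientApp(client_fn=client_fn, mods=[fixedclipping_mod])

    # Local DP instead: clipping norm, sensitivity, epsilon, delta (order as documented above)
    local_dp_mod = LocalDpMod(1.0, 1.0, 1.0, 1e-6)
    app_local_dp = ClientApp(client_fn=client_fn, mods=[local_dp_mod])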
msgstr "" -#: ../../source/index.rst:110 +#: ../../source/index.rst:111 msgid "" "Understanding-oriented concept guides explain and discuss key topics and " "underlying ideas behind Flower and collaborative AI." msgstr "" -#: ../../source/index.rst:120 +#: ../../source/index.rst:121 msgid "References" msgstr "" -#: ../../source/index.rst:122 +#: ../../source/index.rst:123 msgid "Information-oriented API reference and other reference material." msgstr "" -#: ../../source/index.rst:131::1 +#: ../../source/index.rst:132::1 msgid ":py:obj:`flwr `\\" msgstr "" -#: ../../source/index.rst:131::1 flwr:1 of +#: ../../source/index.rst:132::1 flwr:1 of msgid "Flower main package." msgstr "" -#: ../../source/index.rst:148 +#: ../../source/index.rst:149 msgid "Contributor docs" msgstr "" -#: ../../source/index.rst:150 +#: ../../source/index.rst:151 msgid "" "The Flower community welcomes contributions. The following docs are " "intended to help along the way." @@ -6371,11 +6484,19 @@ msgstr "" msgid "flower-fleet-api" msgstr "" +#: ../../source/ref-api-cli.rst:37 +msgid "flower-client-app" +msgstr "" + +#: ../../source/ref-api-cli.rst:47 +msgid "flower-server-app" +msgstr "" + #: ../../source/ref-api/flwr.rst:2 msgid "flwr" msgstr "" -#: ../../source/ref-api/flwr.rst:25 ../../source/ref-api/flwr.server.rst:48 +#: ../../source/ref-api/flwr.rst:25 ../../source/ref-api/flwr.server.rst:52 msgid "Modules" msgstr "" @@ -6400,7 +6521,7 @@ msgid ":py:obj:`flwr.server `\\" msgstr "" #: ../../source/ref-api/flwr.rst:35::1 -#: ../../source/ref-api/flwr.server.rst:37::1 flwr.server:1 +#: ../../source/ref-api/flwr.server.rst:41::1 flwr.server:1 #: flwr.server.server.Server:1 of msgid "Flower server." msgstr "" @@ -6419,7 +6540,6 @@ msgstr "" #: ../../source/ref-api/flwr.client.rst:13 #: ../../source/ref-api/flwr.common.rst:13 -#: ../../source/ref-api/flwr.server.driver.rst:13 #: ../../source/ref-api/flwr.server.rst:13 #: ../../source/ref-api/flwr.simulation.rst:13 msgid "Functions" @@ -6457,10 +6577,10 @@ msgid "Start a Flower NumPyClient which connects to a gRPC server." 
msgstr "" #: ../../source/ref-api/flwr.client.rst:26 -#: ../../source/ref-api/flwr.common.rst:31 -#: ../../source/ref-api/flwr.server.driver.rst:24 -#: ../../source/ref-api/flwr.server.rst:28 +#: ../../source/ref-api/flwr.common.rst:32 +#: ../../source/ref-api/flwr.server.rst:29 #: ../../source/ref-api/flwr.server.strategy.rst:17 +#: ../../source/ref-api/flwr.server.workflow.rst:17 msgid "Classes" msgstr "" @@ -6475,7 +6595,7 @@ msgstr "" #: ../../source/ref-api/flwr.client.rst:33::1 msgid "" -":py:obj:`ClientApp `\\ \\(client\\_fn\\[\\, " +":py:obj:`ClientApp `\\ \\(\\[client\\_fn\\, " "mods\\]\\)" msgstr "" @@ -6502,8 +6622,12 @@ msgstr "" #: ../../source/ref-api/flwr.client.Client.rst:15 #: ../../source/ref-api/flwr.client.ClientApp.rst:15 #: ../../source/ref-api/flwr.client.NumPyClient.rst:15 +#: ../../source/ref-api/flwr.common.Array.rst:15 #: ../../source/ref-api/flwr.common.ClientMessage.rst:15 +#: ../../source/ref-api/flwr.common.ConfigsRecord.rst:15 +#: ../../source/ref-api/flwr.common.Context.rst:15 #: ../../source/ref-api/flwr.common.DisconnectRes.rst:15 +#: ../../source/ref-api/flwr.common.Error.rst:15 #: ../../source/ref-api/flwr.common.EvaluateIns.rst:15 #: ../../source/ref-api/flwr.common.EvaluateRes.rst:15 #: ../../source/ref-api/flwr.common.FitIns.rst:15 @@ -6512,20 +6636,32 @@ msgstr "" #: ../../source/ref-api/flwr.common.GetParametersRes.rst:15 #: ../../source/ref-api/flwr.common.GetPropertiesIns.rst:15 #: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:15 +#: ../../source/ref-api/flwr.common.Message.rst:15 +#: ../../source/ref-api/flwr.common.MessageType.rst:15 +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:15 +#: ../../source/ref-api/flwr.common.Metadata.rst:15 +#: ../../source/ref-api/flwr.common.MetricsRecord.rst:15 #: ../../source/ref-api/flwr.common.Parameters.rst:15 +#: ../../source/ref-api/flwr.common.ParametersRecord.rst:15 #: ../../source/ref-api/flwr.common.ReconnectIns.rst:15 +#: ../../source/ref-api/flwr.common.RecordSet.rst:15 #: ../../source/ref-api/flwr.common.ServerMessage.rst:15 #: ../../source/ref-api/flwr.common.Status.rst:15 #: ../../source/ref-api/flwr.server.ClientManager.rst:15 +#: ../../source/ref-api/flwr.server.Driver.rst:15 #: ../../source/ref-api/flwr.server.History.rst:15 +#: ../../source/ref-api/flwr.server.LegacyContext.rst:15 #: ../../source/ref-api/flwr.server.Server.rst:15 +#: ../../source/ref-api/flwr.server.ServerApp.rst:15 #: ../../source/ref-api/flwr.server.ServerConfig.rst:15 #: ../../source/ref-api/flwr.server.SimpleClientManager.rst:15 -#: ../../source/ref-api/flwr.server.driver.Driver.rst:15 -#: ../../source/ref-api/flwr.server.driver.GrpcDriver.rst:15 #: ../../source/ref-api/flwr.server.strategy.Bulyan.rst:15 #: ../../source/ref-api/flwr.server.strategy.DPFedAvgAdaptive.rst:15 #: ../../source/ref-api/flwr.server.strategy.DPFedAvgFixed.rst:15 +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideAdaptiveClipping.rst:15 +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideFixedClipping.rst:15 +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideAdaptiveClipping.rst:15 +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideFixedClipping.rst:15 #: ../../source/ref-api/flwr.server.strategy.FaultTolerantFedAvg.rst:15 #: ../../source/ref-api/flwr.server.strategy.FedAdagrad.rst:15 #: ../../source/ref-api/flwr.server.strategy.FedAdam.rst:15 @@ -6543,6 +6679,9 @@ msgstr "" #: ../../source/ref-api/flwr.server.strategy.Krum.rst:15 #: 
../../source/ref-api/flwr.server.strategy.QFedAvg.rst:15 #: ../../source/ref-api/flwr.server.strategy.Strategy.rst:15 +#: ../../source/ref-api/flwr.server.workflow.DefaultWorkflow.rst:15 +#: ../../source/ref-api/flwr.server.workflow.SecAggPlusWorkflow.rst:15 +#: ../../source/ref-api/flwr.server.workflow.SecAggWorkflow.rst:15 msgid "Methods" msgstr "" @@ -6619,9 +6758,12 @@ msgstr "" #: ../../source/ref-api/flwr.client.Client.rst:46 #: ../../source/ref-api/flwr.client.NumPyClient.rst:46 +#: ../../source/ref-api/flwr.common.Array.rst:28 #: ../../source/ref-api/flwr.common.ClientMessage.rst:25 #: ../../source/ref-api/flwr.common.Code.rst:19 +#: ../../source/ref-api/flwr.common.Context.rst:25 #: ../../source/ref-api/flwr.common.DisconnectRes.rst:25 +#: ../../source/ref-api/flwr.common.Error.rst:25 #: ../../source/ref-api/flwr.common.EvaluateIns.rst:25 #: ../../source/ref-api/flwr.common.EvaluateRes.rst:25 #: ../../source/ref-api/flwr.common.EventType.rst:19 @@ -6631,10 +6773,16 @@ msgstr "" #: ../../source/ref-api/flwr.common.GetParametersRes.rst:25 #: ../../source/ref-api/flwr.common.GetPropertiesIns.rst:25 #: ../../source/ref-api/flwr.common.GetPropertiesRes.rst:25 +#: ../../source/ref-api/flwr.common.Message.rst:37 +#: ../../source/ref-api/flwr.common.MessageType.rst:25 +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:25 +#: ../../source/ref-api/flwr.common.Metadata.rst:25 #: ../../source/ref-api/flwr.common.Parameters.rst:25 #: ../../source/ref-api/flwr.common.ReconnectIns.rst:25 +#: ../../source/ref-api/flwr.common.RecordSet.rst:25 #: ../../source/ref-api/flwr.common.ServerMessage.rst:25 #: ../../source/ref-api/flwr.common.Status.rst:25 +#: ../../source/ref-api/flwr.server.LegacyContext.rst:25 #: ../../source/ref-api/flwr.server.ServerConfig.rst:25 msgid "Attributes" msgstr "" @@ -6652,14 +6800,25 @@ msgstr "" #: flwr.client.numpy_client.NumPyClient.fit #: flwr.client.numpy_client.NumPyClient.get_parameters #: flwr.client.numpy_client.NumPyClient.get_properties -#: flwr.server.app.start_server +#: flwr.common.context.Context flwr.common.message.Error +#: flwr.common.message.Message flwr.common.message.Message.create_error_reply +#: flwr.common.message.Message.create_reply flwr.common.message.Metadata +#: flwr.common.record.parametersrecord.Array flwr.server.app.start_server #: flwr.server.client_manager.ClientManager.register #: flwr.server.client_manager.ClientManager.unregister #: flwr.server.client_manager.SimpleClientManager.register #: flwr.server.client_manager.SimpleClientManager.unregister #: flwr.server.client_manager.SimpleClientManager.wait_for -#: flwr.server.driver.app.start_driver flwr.server.driver.driver.Driver +#: flwr.server.compat.app.start_driver flwr.server.driver.driver.Driver +#: flwr.server.driver.driver.Driver.create_message +#: flwr.server.driver.driver.Driver.pull_messages +#: flwr.server.driver.driver.Driver.push_messages +#: flwr.server.driver.driver.Driver.send_and_receive #: flwr.server.strategy.bulyan.Bulyan +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit #: flwr.server.strategy.fedadagrad.FedAdagrad @@ 
-6675,7 +6834,10 @@ msgstr "" #: flwr.server.strategy.strategy.Strategy.configure_fit #: flwr.server.strategy.strategy.Strategy.evaluate #: flwr.server.strategy.strategy.Strategy.initialize_parameters -#: flwr.simulation.app.start_simulation of +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow +#: flwr.simulation.app.start_simulation +#: flwr.simulation.run_simulation.run_simulation of msgid "Parameters" msgstr "" @@ -6693,13 +6855,17 @@ msgstr "" #: flwr.client.numpy_client.NumPyClient.fit #: flwr.client.numpy_client.NumPyClient.get_parameters #: flwr.client.numpy_client.NumPyClient.get_properties -#: flwr.server.app.start_server +#: flwr.common.message.Message.create_reply flwr.server.app.start_server #: flwr.server.client_manager.ClientManager.num_available #: flwr.server.client_manager.ClientManager.register #: flwr.server.client_manager.SimpleClientManager.num_available #: flwr.server.client_manager.SimpleClientManager.register #: flwr.server.client_manager.SimpleClientManager.wait_for -#: flwr.server.driver.app.start_driver +#: flwr.server.compat.app.start_driver +#: flwr.server.driver.driver.Driver.create_message +#: flwr.server.driver.driver.Driver.pull_messages +#: flwr.server.driver.driver.Driver.push_messages +#: flwr.server.driver.driver.Driver.send_and_receive #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit #: flwr.server.strategy.strategy.Strategy.aggregate_evaluate @@ -6723,13 +6889,17 @@ msgstr "" #: flwr.client.client.Client.get_properties #: flwr.client.numpy_client.NumPyClient.get_parameters #: flwr.client.numpy_client.NumPyClient.get_properties -#: flwr.server.app.start_server +#: flwr.common.message.Message.create_reply flwr.server.app.start_server #: flwr.server.client_manager.ClientManager.num_available #: flwr.server.client_manager.ClientManager.register #: flwr.server.client_manager.SimpleClientManager.num_available #: flwr.server.client_manager.SimpleClientManager.register #: flwr.server.client_manager.SimpleClientManager.wait_for -#: flwr.server.driver.app.start_driver +#: flwr.server.compat.app.start_driver +#: flwr.server.driver.driver.Driver.create_message +#: flwr.server.driver.driver.Driver.pull_messages +#: flwr.server.driver.driver.Driver.push_messages +#: flwr.server.driver.driver.Driver.send_and_receive #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit #: flwr.server.strategy.strategy.Strategy.aggregate_evaluate @@ -6779,23 +6949,38 @@ msgstr "" msgid "ClientApp" msgstr "" -#: flwr.client.client_app.ClientApp:1 flwr.common.typing.ClientMessage:1 +#: flwr.client.client_app.ClientApp:1 flwr.common.constant.MessageType:1 +#: flwr.common.constant.MessageTypeLegacy:1 flwr.common.context.Context:1 +#: flwr.common.message.Error:1 flwr.common.message.Message:1 +#: flwr.common.message.Metadata:1 flwr.common.record.parametersrecord.Array:1 +#: flwr.common.record.recordset.RecordSet:1 flwr.common.typing.ClientMessage:1 #: flwr.common.typing.DisconnectRes:1 flwr.common.typing.EvaluateIns:1 #: flwr.common.typing.EvaluateRes:1 flwr.common.typing.FitIns:1 #: flwr.common.typing.FitRes:1 flwr.common.typing.GetParametersIns:1 #: flwr.common.typing.GetParametersRes:1 flwr.common.typing.GetPropertiesIns:1 #: flwr.common.typing.GetPropertiesRes:1 flwr.common.typing.Parameters:1 #: 
flwr.common.typing.ReconnectIns:1 flwr.common.typing.ServerMessage:1 -#: flwr.common.typing.Status:1 flwr.server.app.ServerConfig:1 -#: flwr.server.driver.driver.Driver:1 -#: flwr.server.driver.grpc_driver.GrpcDriver:1 flwr.server.history.History:1 -#: flwr.server.server.Server:1 of +#: flwr.common.typing.Status:1 flwr.server.driver.driver.Driver:1 +#: flwr.server.history.History:1 flwr.server.server.Server:1 +#: flwr.server.server_app.ServerApp:1 flwr.server.server_config.ServerConfig:1 +#: flwr.server.workflow.default_workflows.DefaultWorkflow:1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:1 +#: of msgid "Bases: :py:class:`object`" msgstr "" -#: flwr.client.app.start_client:33 flwr.client.app.start_numpy_client:36 -#: flwr.client.client_app.ClientApp:4 flwr.server.app.start_server:41 -#: flwr.server.driver.app.start_driver:30 of +#: flwr.client.app.start_client:41 flwr.client.app.start_numpy_client:36 +#: flwr.client.client_app.ClientApp:4 +#: flwr.client.client_app.ClientApp.evaluate:4 +#: flwr.client.client_app.ClientApp.query:4 +#: flwr.client.client_app.ClientApp.train:4 flwr.server.app.start_server:41 +#: flwr.server.compat.app.start_driver:32 flwr.server.server_app.ServerApp:4 +#: flwr.server.server_app.ServerApp.main:4 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:29 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:22 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:21 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:14 +#: of msgid "Examples" msgstr "" @@ -6818,6 +7003,33 @@ msgid "" "global attribute `app` that points to an object of type `ClientApp`." msgstr "" +#: flwr.client.client_app.ClientApp.evaluate:1::1 of +msgid ":py:obj:`evaluate `\\ \\(\\)" +msgstr "" + +#: flwr.client.client_app.ClientApp.evaluate:1 +#: flwr.client.client_app.ClientApp.evaluate:1::1 of +msgid "Return a decorator that registers the evaluate fn with the client app." +msgstr "" + +#: flwr.client.client_app.ClientApp.evaluate:1::1 of +msgid ":py:obj:`query `\\ \\(\\)" +msgstr "" + +#: flwr.client.client_app.ClientApp.evaluate:1::1 +#: flwr.client.client_app.ClientApp.query:1 of +msgid "Return a decorator that registers the query fn with the client app." +msgstr "" + +#: flwr.client.client_app.ClientApp.evaluate:1::1 of +msgid ":py:obj:`train `\\ \\(\\)" +msgstr "" + +#: flwr.client.client_app.ClientApp.evaluate:1::1 +#: flwr.client.client_app.ClientApp.train:1 of +msgid "Return a decorator that registers the train fn with the client app." +msgstr "" + #: ../../source/ref-api/flwr.client.NumPyClient.rst:2 msgid "NumPyClient" msgstr "" @@ -7015,7 +7227,7 @@ msgid "" msgstr "" #: flwr.client.app.start_client:19 flwr.client.app.start_numpy_client:22 -#: flwr.server.driver.app.start_driver:21 of +#: flwr.server.compat.app.start_driver:21 of msgid "" "The PEM-encoded root certificates as a byte string or a path string. If " "provided, a secure connection using the certificates will be established " @@ -7035,15 +7247,29 @@ msgid "" "(experimental) - 'rest': HTTP (experimental)" msgstr "" -#: flwr.client.app.start_client:34 flwr.client.app.start_numpy_client:37 of +#: flwr.client.app.start_client:31 of +msgid "" +"The maximum number of times the client will try to connect to the server " +"before giving up in case of a connection error. If set to None, there is " +"no limit to the number of tries." 
+msgstr "" + +#: flwr.client.app.start_client:35 of +msgid "" +"The maximum duration before the client stops trying to connect to the " +"server in case of connection error. If set to None, there is no limit to " +"the total time." +msgstr "" + +#: flwr.client.app.start_client:42 flwr.client.app.start_numpy_client:37 of msgid "Starting a gRPC client with an insecure server connection:" msgstr "" -#: flwr.client.app.start_client:41 flwr.client.app.start_numpy_client:44 of +#: flwr.client.app.start_client:49 flwr.client.app.start_numpy_client:44 of msgid "Starting an SSL-enabled gRPC client using system certificates:" msgstr "" -#: flwr.client.app.start_client:52 flwr.client.app.start_numpy_client:52 of +#: flwr.client.app.start_client:60 flwr.client.app.start_numpy_client:52 of msgid "Starting an SSL-enabled gRPC client using provided certificates:" msgstr "" @@ -7067,73 +7293,82 @@ msgstr "" msgid "common" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 +msgid ":py:obj:`array_from_numpy `\\ \\(ndarray\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:30::1 +#: flwr.common.record.conversion_utils.array_from_numpy:1 of +msgid "Create Array from NumPy ndarray." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:30::1 msgid ":py:obj:`bytes_to_ndarray `\\ \\(tensor\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.parameter.bytes_to_ndarray:1 of msgid "Deserialize NumPy ndarray from bytes." msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 msgid "" ":py:obj:`configure `\\ \\(identifier\\[\\, " "filename\\, host\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.logger.configure:1 of msgid "Configure logging to file and/or remote log server." msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 msgid "" ":py:obj:`event `\\ \\(event\\_type\\[\\, " "event\\_details\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.telemetry.event:1 of msgid "Submit create_event to ThreadPoolExecutor to avoid blocking." msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 msgid "" ":py:obj:`log `\\ \\(level\\, msg\\, \\*args\\, " "\\*\\*kwargs\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 logging.Logger.log:1 +#: ../../source/ref-api/flwr.common.rst:30::1 logging.Logger.log:1 #: of msgid "Log 'msg % args' with the integer severity 'level'." msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 msgid ":py:obj:`ndarray_to_bytes `\\ \\(ndarray\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.parameter.ndarray_to_bytes:1 of msgid "Serialize NumPy ndarray to bytes." msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 msgid ":py:obj:`now `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.date.now:1 of msgid "Construct a datetime from time.time() with time zone set to UTC." 
msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 msgid "" ":py:obj:`ndarrays_to_parameters `\\ " "\\(ndarrays\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.parameter.ndarrays_to_parameters:1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid.ndarrays_to_parameters:1 @@ -7141,187 +7376,358 @@ msgstr "" msgid "Convert NumPy ndarrays to parameters object." msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 msgid "" ":py:obj:`parameters_to_ndarrays `\\ " "\\(parameters\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:29::1 +#: ../../source/ref-api/flwr.common.rst:30::1 #: flwr.common.parameter.parameters_to_ndarrays:1 of msgid "Convert parameters object to NumPy ndarrays." msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid "" +":py:obj:`Array `\\ \\(dtype\\, shape\\, stype\\, " +"data\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.record.parametersrecord.Array:1 of +msgid "Array type." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" ":py:obj:`ClientMessage `\\ " "\\(\\[get\\_properties\\_res\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.ClientMessage:1 of msgid "ClientMessage is a container used to hold one result message." msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`Code `\\ \\(value\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.Code:1 of msgid "Client status codes." msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid "" +":py:obj:`ConfigsRecord `\\ " +"\\(\\[configs\\_dict\\, keep\\_input\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.record.configsrecord.ConfigsRecord:1 of +msgid "Configs record." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid ":py:obj:`Context `\\ \\(state\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.context.Context:1 of +msgid "State of your run." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`DisconnectRes `\\ \\(reason\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.DisconnectRes:1 of msgid "DisconnectRes message from client to server." msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" ":py:obj:`EvaluateIns `\\ \\(parameters\\, " "config\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.EvaluateIns:1 of msgid "Evaluate instructions for a client." msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" ":py:obj:`EvaluateRes `\\ \\(status\\, loss\\, " "num\\_examples\\, metrics\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.EvaluateRes:1 of msgid "Evaluate response from a client." 
msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`EventType `\\ \\(value\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.telemetry.EventType:1 of msgid "Types of telemetry events." msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`FitIns `\\ \\(parameters\\, config\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.FitIns:1 of msgid "Fit instructions for a client." msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" ":py:obj:`FitRes `\\ \\(status\\, parameters\\, " "num\\_examples\\, metrics\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.FitRes:1 of msgid "Fit response from a client." msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid ":py:obj:`Error `\\ \\(code\\[\\, reason\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.message.Error:1 of +msgid "A dataclass that stores information about an error that occurred." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`GetParametersIns `\\ \\(config\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.GetParametersIns:1 of msgid "Parameters request for a client." msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" ":py:obj:`GetParametersRes `\\ \\(status\\, " "parameters\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.GetParametersRes:1 of msgid "Response when asked to return parameters." msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`GetPropertiesIns `\\ \\(config\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.GetPropertiesIns:1 of msgid "Properties request for a client." msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" ":py:obj:`GetPropertiesRes `\\ \\(status\\, " "properties\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.GetPropertiesRes:1 of msgid "Properties response from a client." msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid "" +":py:obj:`Message `\\ \\(metadata\\[\\, content\\, " +"error\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.message.Message:1 of +msgid "State of your application from the viewpoint of the entity using it." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid ":py:obj:`MessageType `\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.constant.MessageType:1 of +msgid "Message type." 
+msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid ":py:obj:`MessageTypeLegacy `\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.constant.MessageTypeLegacy:1 of +msgid "Legacy message type." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid "" +":py:obj:`Metadata `\\ \\(run\\_id\\, " +"message\\_id\\, src\\_node\\_id\\, ...\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.message.Metadata:1 of +msgid "A dataclass holding metadata associated with the current message." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid "" +":py:obj:`MetricsRecord `\\ " +"\\(\\[metrics\\_dict\\, keep\\_input\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.record.metricsrecord.MetricsRecord:1 of +msgid "Metrics record." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`NDArray `\\" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" "alias of :py:class:`~numpy.ndarray`\\ [:py:obj:`~typing.Any`, " ":py:class:`~numpy.dtype`\\ [:py:obj:`~typing.Any`]]" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" ":py:obj:`Parameters `\\ \\(tensors\\, " "tensor\\_type\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.Parameters:1 of msgid "Model parameters." msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid "" +":py:obj:`ParametersRecord `\\ " +"\\(\\[array\\_dict\\, keep\\_input\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.record.parametersrecord.ParametersRecord:1 of +msgid "Parameters record." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`ReconnectIns `\\ \\(seconds\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.ReconnectIns:1 of msgid "ReconnectIns message from server to client." msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 +msgid "" +":py:obj:`RecordSet `\\ " +"\\(\\[parameters\\_records\\, ...\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 +#: flwr.common.record.recordset.RecordSet:1 of +msgid "RecordSet stores groups of parameters, metrics and configs." +msgstr "" + +#: ../../source/ref-api/flwr.common.rst:64::1 msgid "" ":py:obj:`ServerMessage `\\ " "\\(\\[get\\_properties\\_ins\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.ServerMessage:1 of msgid "ServerMessage is a container used to hold one instruction message." msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 msgid ":py:obj:`Status `\\ \\(code\\, message\\)" msgstr "" -#: ../../source/ref-api/flwr.common.rst:52::1 +#: ../../source/ref-api/flwr.common.rst:64::1 #: flwr.common.typing.Status:1 of msgid "Client status." msgstr "" +#: ../../source/ref-api/flwr.common.Array.rst:2 +msgid "Array" +msgstr "" + +#: flwr.common.record.parametersrecord.Array:3 of +msgid "" +"A dataclass containing serialized data from an array-like or tensor-like " +"object along with some metadata about it." 
+msgstr "" + +#: flwr.common.record.parametersrecord.Array:6 of +msgid "" +"A string representing the data type of the serialised object (e.g. " +"`np.float32`)" +msgstr "" + +#: flwr.common.record.parametersrecord.Array:8 of +msgid "" +"A list representing the shape of the unserialized array-like object. This" +" is used to deserialize the data (depending on the serialization method) " +"or simply as a metadata field." +msgstr "" + +#: flwr.common.record.parametersrecord.Array:12 of +msgid "" +"A string indicating the type of serialisation mechanism used to generate " +"the bytes in `data` from an array-like or tensor-like object." +msgstr "" + +#: flwr.common.record.parametersrecord.Array:15 of +msgid "A buffer of bytes containing the data." +msgstr "" + +#: ../../source/ref-api/flwr.common.Array.rst:26::1 +msgid ":py:obj:`numpy `\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.Array.rst:26::1 +#: flwr.common.record.parametersrecord.Array.numpy:1 of +msgid "Return the array as a NumPy array." +msgstr "" + +#: flwr.common.record.parametersrecord.Array.numpy:1::1 of +msgid ":py:obj:`dtype `\\" +msgstr "" + +#: flwr.common.record.parametersrecord.Array.numpy:1::1 of +msgid ":py:obj:`shape `\\" +msgstr "" + +#: flwr.common.record.parametersrecord.Array.numpy:1::1 of +msgid ":py:obj:`stype `\\" +msgstr "" + +#: flwr.common.record.parametersrecord.Array.numpy:1::1 of +msgid ":py:obj:`data `\\" +msgstr "" + #: ../../source/ref-api/flwr.common.ClientMessage.rst:2 msgid "ClientMessage" msgstr "" @@ -7380,20 +7786,146 @@ msgid "" "`\\" msgstr "" -#: ../../source/ref-api/flwr.common.DisconnectRes.rst:2 -msgid "DisconnectRes" +#: ../../source/ref-api/flwr.common.ConfigsRecord.rst:2 +msgid "ConfigsRecord" msgstr "" -#: ../../source/ref-api/flwr.common.DisconnectRes.rst:28::1 -msgid ":py:obj:`reason `\\" +#: flwr.common.record.configsrecord.ConfigsRecord:1 of +msgid "" +"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +"[:py:class:`str`, :py:obj:`~typing.Union`\\ [:py:class:`int`, " +":py:class:`float`, :py:class:`str`, :py:class:`bytes`, :py:class:`bool`, " +":py:class:`~typing.List`\\ [:py:class:`int`], :py:class:`~typing.List`\\ " +"[:py:class:`float`], :py:class:`~typing.List`\\ [:py:class:`str`], " +":py:class:`~typing.List`\\ [:py:class:`bytes`], " +":py:class:`~typing.List`\\ [:py:class:`bool`]]]" msgstr "" -#: ../../source/ref-api/flwr.common.EvaluateIns.rst:2 -msgid "EvaluateIns" +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`clear `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.common.EvaluateIns.rst:29::1 -msgid ":py:obj:`parameters `\\" +#: flwr.common.record.typeddict.TypedDict.clear:1 +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid "Remove all items from R." +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`count_bytes `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.configsrecord.ConfigsRecord.count_bytes:1 +#: flwr.common.record.metricsrecord.MetricsRecord.count_bytes:1 +#: flwr.common.record.parametersrecord.ParametersRecord.count_bytes:1 +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid "Return number of Bytes stored in this object." +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 +#: flwr.common.record.typeddict.TypedDict.get:1 of +msgid "d defaults to None." 
+msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`items `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`keys `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 +#: flwr.common.record.typeddict.TypedDict.pop:1 of +msgid "If key is not found, d is returned if given, otherwise KeyError is raised." +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid "" +":py:obj:`update `\\ \\(\\[E\\, " +"\\]\\*\\*F\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 +#: flwr.common.record.typeddict.TypedDict.update:1 of +msgid "Update R from dict/iterable E and F." +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`values `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.configsrecord.ConfigsRecord.count_bytes:3 of +msgid "This function counts booleans as occupying 1 Byte." +msgstr "" + +#: ../../source/ref-api/flwr.common.Context.rst:2 +msgid "Context" +msgstr "" + +#: flwr.common.context.Context:3 of +msgid "" +"Holds records added by the entity in a given run and that will stay " +"local. This means that the data it holds will never leave the system it's" +" running from. This can be used as an intermediate storage or scratchpad " +"when executing mods. It can also be used as a memory to access at " +"different points during the lifecycle of this entity (e.g. across " +"multiple rounds)" +msgstr "" + +#: ../../source/ref-api/flwr.common.Context.rst:28::1 +msgid ":py:obj:`state `\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.DisconnectRes.rst:2 +msgid "DisconnectRes" +msgstr "" + +#: ../../source/ref-api/flwr.common.DisconnectRes.rst:28::1 +msgid ":py:obj:`reason `\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.Error.rst:2 +msgid "Error" +msgstr "" + +#: flwr.common.message.Error:3 of +msgid "An identifier for the error." +msgstr "" + +#: flwr.common.message.Error:5 of +msgid "A reason for why the error arose (e.g. an exception stack-trace)" +msgstr "" + +#: flwr.common.Error.code:1::1 of +msgid ":py:obj:`code `\\" +msgstr "" + +#: flwr.common.Error.code:1 flwr.common.Error.code:1::1 of +msgid "Error code." +msgstr "" + +#: flwr.common.Error.code:1::1 of +msgid ":py:obj:`reason `\\" +msgstr "" + +#: flwr.common.Error.code:1::1 flwr.common.Error.reason:1 of +msgid "Reason reported about the error." +msgstr "" + +#: ../../source/ref-api/flwr.common.EvaluateIns.rst:2 +msgid "EvaluateIns" +msgstr "" + +#: ../../source/ref-api/flwr.common.EvaluateIns.rst:29::1 +msgid ":py:obj:`parameters `\\" msgstr "" #: ../../source/ref-api/flwr.common.EvaluateIns.rst:29::1 @@ -7608,6 +8140,278 @@ msgstr "" msgid ":py:obj:`properties `\\" msgstr "" +#: ../../source/ref-api/flwr.common.Message.rst:2 +msgid "Message" +msgstr "" + +#: flwr.common.Message.content:1::1 flwr.common.Message.metadata:1 +#: flwr.common.message.Message:3 of +msgid "A dataclass including information about the message to be executed." +msgstr "" + +#: flwr.common.message.Message:5 of +msgid "" +"Holds records either sent by another entity (e.g. sent by the server-side" +" logic to a client, or vice-versa) or that will be sent to it." +msgstr "" + +#: flwr.common.message.Message:8 of +msgid "" +"A dataclass that captures information about an error that took place when" +" processing another message." 
+msgstr "" + +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +msgid "" +":py:obj:`create_error_reply `\\ " +"\\(error\\, ttl\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.create_error_reply:1 of +msgid "Construct a reply message indicating an error happened." +msgstr "" + +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +msgid "" +":py:obj:`create_reply `\\ \\(content\\," +" ttl\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.create_reply:1 of +msgid "Create a reply to this message with specified content and TTL." +msgstr "" + +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +msgid ":py:obj:`has_content `\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.has_content:1 of +msgid "Return True if message has content, else False." +msgstr "" + +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +msgid ":py:obj:`has_error `\\ \\(\\)" +msgstr "" + +#: ../../source/ref-api/flwr.common.Message.rst:35::1 +#: flwr.common.message.Message.has_error:1 of +msgid "Return True if message has an error, else False." +msgstr "" + +#: flwr.common.Message.content:1::1 of +msgid ":py:obj:`content `\\" +msgstr "" + +#: flwr.common.Message.content:1 flwr.common.Message.content:1::1 +#: of +msgid "The content of this message." +msgstr "" + +#: flwr.common.Message.content:1::1 of +msgid ":py:obj:`error `\\" +msgstr "" + +#: flwr.common.Message.content:1::1 flwr.common.Message.error:1 of +msgid "Error captured by this message." +msgstr "" + +#: flwr.common.Message.content:1::1 of +msgid ":py:obj:`metadata `\\" +msgstr "" + +#: flwr.common.message.Message.create_error_reply:3 of +msgid "The error that was encountered." +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.Metadata.ttl:1 flwr.common.message.Message.create_error_reply:5 +#: flwr.common.message.Message.create_reply:9 flwr.common.message.Metadata:16 +#: of +msgid "Time-to-live for this message." +msgstr "" + +#: flwr.common.message.Message.create_reply:3 of +msgid "" +"The method generates a new `Message` as a reply to this message. It " +"inherits 'run_id', 'src_node_id', 'dst_node_id', and 'message_type' from " +"this message and sets 'reply_to_message' to the ID of this message." +msgstr "" + +#: flwr.common.message.Message.create_reply:7 of +msgid "The content for the reply message." +msgstr "" + +#: flwr.common.message.Message.create_reply:12 of +msgid "A new `Message` instance representing the reply." +msgstr "" + +#: ../../source/ref-api/flwr.common.MessageType.rst:2 +msgid "MessageType" +msgstr "" + +#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 +msgid ":py:obj:`EVALUATE `\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 +msgid ":py:obj:`QUERY `\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.MessageType.rst:30::1 +msgid ":py:obj:`TRAIN `\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:2 +msgid "MessageTypeLegacy" +msgstr "" + +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:29::1 +msgid ":py:obj:`GET_PARAMETERS `\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.MessageTypeLegacy.rst:29::1 +msgid ":py:obj:`GET_PROPERTIES `\\" +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.Metadata.run_id:1 flwr.common.message.Metadata:3 of +msgid "An identifier for the current run." 
+msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.Metadata.message_id:1 flwr.common.message.Metadata:5 of +msgid "An identifier for the current message." +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.Metadata.src_node_id:1 flwr.common.message.Metadata:7 of +msgid "An identifier for the node sending this message." +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1 +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.message.Metadata:9 of +msgid "An identifier for the node receiving this message." +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.Metadata.reply_to_message:1 flwr.common.message.Metadata:11 of +msgid "An identifier for the message this message replies to." +msgstr "" + +#: flwr.common.message.Metadata:13 of +msgid "" +"An identifier for grouping messages. In some settings, this is used as " +"the FL round." +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.Metadata.message_type:1 flwr.common.message.Metadata:18 of +msgid "A string that encodes the action to be executed on the receiving end." +msgstr "" + +#: flwr.common.message.Metadata:21 of +msgid "" +"An identifier that can be used when loading a particular data partition " +"for a ClientApp. Making use of this identifier is more relevant when " +"conducting simulations." +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`dst_node_id `\\" +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`group_id `\\" +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.Metadata.group_id:1 of +msgid "An identifier for grouping messages." +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`message_id `\\" +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`message_type `\\" +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`partition_id `\\" +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 +#: flwr.common.Metadata.partition_id:1 of +msgid "An identifier telling which data partition a ClientApp should use." 
+msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`reply_to_message `\\" +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`run_id `\\" +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`src_node_id `\\" +msgstr "" + +#: flwr.common.Metadata.dst_node_id:1::1 of +msgid ":py:obj:`ttl `\\" +msgstr "" + +#: ../../source/ref-api/flwr.common.MetricsRecord.rst:2 +msgid "MetricsRecord" +msgstr "" + +#: flwr.common.record.metricsrecord.MetricsRecord:1 of +msgid "" +"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +"[:py:class:`str`, :py:obj:`~typing.Union`\\ [:py:class:`int`, " +":py:class:`float`, :py:class:`~typing.List`\\ [:py:class:`int`], " +":py:class:`~typing.List`\\ [:py:class:`float`]]]" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`clear `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`count_bytes `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`items `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`keys `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid "" +":py:obj:`update `\\ \\(\\[E\\, " +"\\]\\*\\*F\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`values `\\ \\(\\)" +msgstr "" + #: ../../source/ref-api/flwr.common.NDArray.rst:2 msgid "NDArray" msgstr "" @@ -7620,6 +8424,65 @@ msgstr "" msgid ":py:obj:`tensor_type `\\" msgstr "" +#: ../../source/ref-api/flwr.common.ParametersRecord.rst:2 +msgid "ParametersRecord" +msgstr "" + +#: flwr.common.record.parametersrecord.ParametersRecord:1 of +msgid "" +"Bases: :py:class:`~flwr.common.record.typeddict.TypedDict`\\ " +"[:py:class:`str`, :py:class:`~flwr.common.record.parametersrecord.Array`]" +msgstr "" + +#: flwr.common.record.parametersrecord.ParametersRecord:3 of +msgid "" +"A dataclass storing named Arrays in order. This means that it holds " +"entries as an OrderedDict[str, Array]. ParametersRecord objects can be " +"viewed as an equivalent to PyTorch's state_dict, but holding serialised " +"tensors instead." 
+msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`clear `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`count_bytes `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`get `\\ \\(k\\[\\,d\\]\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`items `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`keys `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`pop `\\ \\(k\\[\\,d\\]\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid "" +":py:obj:`update `\\ \\(\\[E\\, " +"\\]\\*\\*F\\)" +msgstr "" + +#: flwr.common.record.typeddict.TypedDict.clear:1::1 of +msgid ":py:obj:`values `\\ \\(\\)" +msgstr "" + +#: flwr.common.record.parametersrecord.ParametersRecord.count_bytes:3 of +msgid "" +"Note that a small amount of Bytes might also be included in this counting" +" that correspond to metadata of the serialized object (e.g. of NumPy " +"array) needed for deseralization." +msgstr "" + #: ../../source/ref-api/flwr.common.ReconnectIns.rst:2 msgid "ReconnectIns" msgstr "" @@ -7628,6 +8491,37 @@ msgstr "" msgid ":py:obj:`seconds `\\" msgstr "" +#: ../../source/ref-api/flwr.common.RecordSet.rst:2 +msgid "RecordSet" +msgstr "" + +#: flwr.common.RecordSet.configs_records:1::1 of +msgid ":py:obj:`configs_records `\\" +msgstr "" + +#: flwr.common.RecordSet.configs_records:1 +#: flwr.common.RecordSet.configs_records:1::1 of +msgid "Dictionary holding ConfigsRecord instances." +msgstr "" + +#: flwr.common.RecordSet.configs_records:1::1 of +msgid ":py:obj:`metrics_records `\\" +msgstr "" + +#: flwr.common.RecordSet.configs_records:1::1 +#: flwr.common.RecordSet.metrics_records:1 of +msgid "Dictionary holding MetricsRecord instances." +msgstr "" + +#: flwr.common.RecordSet.configs_records:1::1 of +msgid ":py:obj:`parameters_records `\\" +msgstr "" + +#: flwr.common.RecordSet.configs_records:1::1 +#: flwr.common.RecordSet.parameters_records:1 of +msgid "Dictionary holding ParametersRecord instances." +msgstr "" + #: ../../source/ref-api/flwr.common.ServerMessage.rst:2 msgid "ServerMessage" msgstr "" @@ -7664,6 +8558,10 @@ msgstr "" msgid ":py:obj:`message `\\" msgstr "" +#: ../../source/ref-api/flwr.common.array_from_numpy.rst:2 +msgid "array\\_from\\_numpy" +msgstr "" + #: ../../source/ref-api/flwr.common.bytes_to_ndarray.rst:2 msgid "bytes\\_to\\_ndarray" msgstr "" @@ -7711,115 +8609,159 @@ msgstr "" msgid "server" msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 +#: ../../source/ref-api/flwr.server.rst:27::1 msgid ":py:obj:`run_driver_api `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 +#: ../../source/ref-api/flwr.server.rst:27::1 #: flwr.server.app.run_driver_api:1 of msgid "Run Flower server (Driver API)." msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 +#: ../../source/ref-api/flwr.server.rst:27::1 msgid ":py:obj:`run_fleet_api `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 +#: ../../source/ref-api/flwr.server.rst:27::1 #: flwr.server.app.run_fleet_api:1 of msgid "Run Flower server (Fleet API)." 
msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 +#: ../../source/ref-api/flwr.server.rst:27::1 msgid ":py:obj:`run_server_app `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 -#: flwr.server.app.run_server_app:1 of +#: ../../source/ref-api/flwr.server.rst:27::1 +#: flwr.server.run_serverapp.run_server_app:1 of msgid "Run Flower server app." msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 +#: ../../source/ref-api/flwr.server.rst:27::1 msgid ":py:obj:`run_superlink `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 +#: ../../source/ref-api/flwr.server.rst:27::1 #: flwr.server.app.run_superlink:1 of msgid "Run Flower server (Driver API and Fleet API)." msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 +#: ../../source/ref-api/flwr.server.rst:27::1 +msgid "" +":py:obj:`start_driver `\\ \\(\\*\\[\\, " +"server\\_address\\, server\\, ...\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:27::1 +#: flwr.server.compat.app.start_driver:1 of +msgid "Start a Flower Driver API server." +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:27::1 msgid "" ":py:obj:`start_server `\\ \\(\\*\\[\\, " "server\\_address\\, server\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.rst:26::1 +#: ../../source/ref-api/flwr.server.rst:27::1 #: flwr.server.app.start_server:1 of msgid "Start a Flower server using the gRPC transport layer." msgstr "" -#: ../../source/ref-api/flwr.server.rst:37::1 +#: ../../source/ref-api/flwr.server.rst:41::1 msgid ":py:obj:`ClientManager `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.rst:37::1 +#: ../../source/ref-api/flwr.server.rst:41::1 #: flwr.server.client_manager.ClientManager:1 of msgid "Abstract base class for managing Flower clients." msgstr "" -#: ../../source/ref-api/flwr.server.rst:37::1 +#: ../../source/ref-api/flwr.server.rst:41::1 +msgid "" +":py:obj:`Driver `\\ " +"\\(\\[driver\\_service\\_address\\, ...\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.server.driver.driver.Driver:1 of +msgid "`Driver` class provides an interface to the Driver API." +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:41::1 msgid ":py:obj:`History `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.rst:37::1 +#: ../../source/ref-api/flwr.server.rst:41::1 #: flwr.server.history.History:1 of msgid "History class for training and/or evaluation metrics collection." msgstr "" -#: ../../source/ref-api/flwr.server.rst:37::1 +#: ../../source/ref-api/flwr.server.rst:41::1 +msgid "" +":py:obj:`LegacyContext `\\ \\(state\\[\\, " +"config\\, strategy\\, ...\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.server.compat.legacy_context.LegacyContext:1 of +msgid "Legacy Context." +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:41::1 msgid "" ":py:obj:`Server `\\ \\(\\*\\, client\\_manager\\[\\, " "strategy\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.rst:37::1 +#: ../../source/ref-api/flwr.server.rst:41::1 msgid "" -":py:obj:`ServerConfig `\\ \\(\\[num\\_rounds\\," -" round\\_timeout\\]\\)" +":py:obj:`ServerApp `\\ \\(\\[server\\, config\\, " +"strategy\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.rst:37::1 -#: flwr.server.app.ServerConfig:1 of -msgid "Flower server config." +#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.server.server_app.ServerApp:1 of +msgid "Flower ServerApp." 
msgstr "" -#: ../../source/ref-api/flwr.server.rst:37::1 -msgid ":py:obj:`SimpleClientManager `\\ \\(\\)" +#: ../../source/ref-api/flwr.server.rst:41::1 +msgid "" +":py:obj:`ServerConfig `\\ \\(\\[num\\_rounds\\," +" round\\_timeout\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.rst:37::1 -#: flwr.server.client_manager.SimpleClientManager:1 of -msgid "Provides a pool of available clients." +#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.server.server_config.ServerConfig:1 of +msgid "Flower server config." msgstr "" -#: ../../source/ref-api/flwr.server.rst:56::1 -msgid ":py:obj:`flwr.server.driver `\\" +#: ../../source/ref-api/flwr.server.rst:41::1 +msgid ":py:obj:`SimpleClientManager `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.rst:56::1 flwr.server.driver:1 -#: of -msgid "Flower driver SDK." +#: ../../source/ref-api/flwr.server.rst:41::1 +#: flwr.server.client_manager.SimpleClientManager:1 of +msgid "Provides a pool of available clients." msgstr "" -#: ../../source/ref-api/flwr.server.rst:56::1 +#: ../../source/ref-api/flwr.server.rst:60::1 msgid ":py:obj:`flwr.server.strategy `\\" msgstr "" -#: ../../source/ref-api/flwr.server.rst:56::1 +#: ../../source/ref-api/flwr.server.rst:60::1 #: flwr.server.strategy:1 of msgid "Contains the strategy abstraction and different implementations." msgstr "" +#: ../../source/ref-api/flwr.server.rst:60::1 +msgid ":py:obj:`flwr.server.workflow `\\" +msgstr "" + +#: ../../source/ref-api/flwr.server.rst:60::1 +#: flwr.server.workflow:1 of +msgid "Workflows." +msgstr "" + #: ../../source/ref-api/flwr.server.ClientManager.rst:2 msgid "ClientManager" msgstr "" @@ -7912,6 +8854,210 @@ msgstr "" msgid "This method is idempotent." msgstr "" +#: ../../source/ref-api/flwr.server.Driver.rst:2 +msgid "Driver" +msgstr "" + +#: flwr.server.driver.driver.Driver:3 of +msgid "" +"The IPv4 or IPv6 address of the Driver API server. Defaults to " +"`\"[::]:9091\"`." +msgstr "" + +#: flwr.server.app.start_server:28 flwr.server.driver.driver.Driver:6 of +msgid "" +"Tuple containing root certificate, server certificate, and private key to" +" start a secure SSL-enabled server. The tuple is expected to have three " +"bytes elements in the following order: * CA certificate. * " +"server certificate. * server private key." +msgstr "" + +#: flwr.server.app.start_server:28 flwr.server.driver.driver.Driver:6 of +msgid "" +"Tuple containing root certificate, server certificate, and private key to" +" start a secure SSL-enabled server. The tuple is expected to have three " +"bytes elements in the following order:" +msgstr "" + +#: flwr.server.app.start_server:32 flwr.server.driver.driver.Driver:10 of +msgid "CA certificate." +msgstr "" + +#: flwr.server.app.start_server:33 flwr.server.driver.driver.Driver:11 of +msgid "server certificate." +msgstr "" + +#: flwr.server.app.start_server:34 flwr.server.driver.driver.Driver:12 of +msgid "server private key." +msgstr "" + +#: flwr.server.driver.driver.Driver.close:1::1 of +msgid ":py:obj:`close `\\ \\(\\)" +msgstr "" + +#: flwr.server.driver.driver.Driver.close:1 +#: flwr.server.driver.driver.Driver.close:1::1 of +msgid "Disconnect from the SuperLink if connected." +msgstr "" + +#: flwr.server.driver.driver.Driver.close:1::1 of +msgid "" +":py:obj:`create_message `\\ " +"\\(content\\, message\\_type\\, ...\\)" +msgstr "" + +#: flwr.server.driver.driver.Driver.close:1::1 +#: flwr.server.driver.driver.Driver.create_message:1 of +msgid "Create a new message with specified parameters." 
+msgstr "" + +#: flwr.server.driver.driver.Driver.close:1::1 of +msgid ":py:obj:`get_node_ids `\\ \\(\\)" +msgstr "" + +#: flwr.server.driver.driver.Driver.close:1::1 +#: flwr.server.driver.driver.Driver.get_node_ids:1 of +msgid "Get node IDs." +msgstr "" + +#: flwr.server.driver.driver.Driver.close:1::1 of +msgid "" +":py:obj:`pull_messages `\\ " +"\\(message\\_ids\\)" +msgstr "" + +#: flwr.server.driver.driver.Driver.close:1::1 +#: flwr.server.driver.driver.Driver.pull_messages:1 of +msgid "Pull messages based on message IDs." +msgstr "" + +#: flwr.server.driver.driver.Driver.close:1::1 of +msgid "" +":py:obj:`push_messages `\\ " +"\\(messages\\)" +msgstr "" + +#: flwr.server.driver.driver.Driver.close:1::1 +#: flwr.server.driver.driver.Driver.push_messages:1 of +msgid "Push messages to specified node IDs." +msgstr "" + +#: flwr.server.driver.driver.Driver.close:1::1 of +msgid "" +":py:obj:`send_and_receive `\\ " +"\\(messages\\, \\*\\[\\, timeout\\]\\)" +msgstr "" + +#: flwr.server.driver.driver.Driver.close:1::1 +#: flwr.server.driver.driver.Driver.send_and_receive:1 of +msgid "Push messages to specified node IDs and pull the reply messages." +msgstr "" + +#: flwr.server.driver.driver.Driver.create_message:3 of +msgid "" +"This method constructs a new `Message` with given content and metadata. " +"The `run_id` and `src_node_id` will be set automatically." +msgstr "" + +#: flwr.server.driver.driver.Driver.create_message:6 of +msgid "" +"The content for the new message. This holds records that are to be sent " +"to the destination node." +msgstr "" + +#: flwr.server.driver.driver.Driver.create_message:9 of +msgid "" +"The type of the message, defining the action to be executed on the " +"receiving end." +msgstr "" + +#: flwr.server.driver.driver.Driver.create_message:12 of +msgid "The ID of the destination node to which the message is being sent." +msgstr "" + +#: flwr.server.driver.driver.Driver.create_message:14 of +msgid "" +"The ID of the group to which this message is associated. In some " +"settings, this is used as the FL round." +msgstr "" + +#: flwr.server.driver.driver.Driver.create_message:17 of +msgid "" +"Time-to-live for the round trip of this message, i.e., the time from " +"sending this message to receiving a reply. It specifies the duration for " +"which the message and its potential reply are considered valid." +msgstr "" + +#: flwr.server.driver.driver.Driver.create_message:22 of +msgid "" +"**message** -- A new `Message` instance with the specified content and " +"metadata." +msgstr "" + +#: flwr.server.driver.driver.Driver.pull_messages:3 of +msgid "" +"This method is used to collect messages from the SuperLink that " +"correspond to a set of given message IDs." +msgstr "" + +#: flwr.server.driver.driver.Driver.pull_messages:6 of +msgid "An iterable of message IDs for which reply messages are to be retrieved." +msgstr "" + +#: flwr.server.driver.driver.Driver.pull_messages:9 of +msgid "**messages** -- An iterable of messages received." +msgstr "" + +#: flwr.server.driver.driver.Driver.push_messages:3 of +msgid "" +"This method takes an iterable of messages and sends each message to the " +"node specified in `dst_node_id`." +msgstr "" + +#: flwr.server.driver.driver.Driver.push_messages:6 +#: flwr.server.driver.driver.Driver.send_and_receive:7 of +msgid "An iterable of messages to be sent." 
+msgstr "" + +#: flwr.server.driver.driver.Driver.push_messages:9 of +msgid "" +"**message_ids** -- An iterable of IDs for the messages that were sent, " +"which can be used to pull replies." +msgstr "" + +#: flwr.server.driver.driver.Driver.send_and_receive:3 of +msgid "" +"This method sends a list of messages to their destination node IDs and " +"then waits for the replies. It continues to pull replies until either all" +" replies are received or the specified timeout duration is exceeded." +msgstr "" + +#: flwr.server.driver.driver.Driver.send_and_receive:9 of +msgid "" +"The timeout duration in seconds. If specified, the method will wait for " +"replies for this duration. If `None`, there is no time limit and the " +"method will wait until replies for all messages are received." +msgstr "" + +#: flwr.server.driver.driver.Driver.send_and_receive:14 of +msgid "**replies** -- An iterable of reply messages received from the SuperLink." +msgstr "" + +#: flwr.server.driver.driver.Driver.send_and_receive:18 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:53 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:60 +#: of +msgid "Notes" +msgstr "" + +#: flwr.server.driver.driver.Driver.send_and_receive:19 of +msgid "" +"This method uses `push_messages` to send the messages and `pull_messages`" +" to collect the replies. If `timeout` is set, the method may not return " +"replies for all sent messages. A message remains valid until its TTL, " +"which is not affected by `timeout`." +msgstr "" + #: ../../source/ref-api/flwr.server.History.rst:2 msgid "History" msgstr "" @@ -7976,6 +9122,34 @@ msgstr "" msgid "Add metrics entries (from distributed fit)." msgstr "" +#: ../../source/ref-api/flwr.server.LegacyContext.rst:2 +msgid "LegacyContext" +msgstr "" + +#: flwr.server.compat.legacy_context.LegacyContext:1 of +msgid "Bases: :py:class:`~flwr.common.context.Context`" +msgstr "" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +msgid ":py:obj:`config `\\" +msgstr "" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +msgid ":py:obj:`strategy `\\" +msgstr "" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +msgid ":py:obj:`client_manager `\\" +msgstr "" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +msgid ":py:obj:`history `\\" +msgstr "" + +#: ../../source/ref-api/flwr.server.LegacyContext.rst:32::1 +msgid ":py:obj:`state `\\" +msgstr "" + #: flwr.server.server.Server.client_manager:1::1 of msgid ":py:obj:`client_manager `\\ \\(\\)" msgstr "" @@ -8047,11 +9221,32 @@ msgstr "" msgid "Replace server strategy." msgstr "" +#: ../../source/ref-api/flwr.server.ServerApp.rst:2 +msgid "ServerApp" +msgstr "" + +#: flwr.server.server_app.ServerApp:5 of +msgid "Use the `ServerApp` with an existing `Strategy`:" +msgstr "" + +#: flwr.server.server_app.ServerApp:15 of +msgid "Use the `ServerApp` with a custom main function:" +msgstr "" + +#: flwr.server.server_app.ServerApp.main:1::1 of +msgid ":py:obj:`main `\\ \\(\\)" +msgstr "" + +#: flwr.server.server_app.ServerApp.main:1 +#: flwr.server.server_app.ServerApp.main:1::1 of +msgid "Return a decorator that registers the main fn with the server app." +msgstr "" + #: ../../source/ref-api/flwr.server.ServerConfig.rst:2 msgid "ServerConfig" msgstr "" -#: flwr.server.app.ServerConfig:3 of +#: flwr.server.server_config.ServerConfig:3 of msgid "" "All attributes have default values which allows users to configure just " "the ones they care about." 
@@ -8125,488 +9320,381 @@ msgstr "" msgid "**success**" msgstr "" -#: ../../source/ref-api/flwr.server.driver.rst:2 -msgid "driver" +#: ../../source/ref-api/flwr.server.run_driver_api.rst:2 +msgid "run\\_driver\\_api" msgstr "" -#: ../../source/ref-api/flwr.server.driver.rst:22::1 -msgid "" -":py:obj:`start_driver `\\ \\(\\*\\[\\, " -"server\\_address\\, server\\, ...\\]\\)" +#: ../../source/ref-api/flwr.server.run_fleet_api.rst:2 +msgid "run\\_fleet\\_api" msgstr "" -#: ../../source/ref-api/flwr.server.driver.rst:22::1 -#: flwr.server.driver.app.start_driver:1 of -msgid "Start a Flower Driver API server." +#: ../../source/ref-api/flwr.server.run_server_app.rst:2 +msgid "run\\_server\\_app" msgstr "" -#: ../../source/ref-api/flwr.server.driver.rst:30::1 -msgid "" -":py:obj:`Driver `\\ " -"\\(\\[driver\\_service\\_address\\, ...\\]\\)" +#: ../../source/ref-api/flwr.server.run_superlink.rst:2 +msgid "run\\_superlink" msgstr "" -#: ../../source/ref-api/flwr.server.driver.rst:30::1 -#: flwr.server.driver.driver.Driver:1 of -msgid "`Driver` class provides an interface to the Driver API." +#: ../../source/ref-api/flwr.server.start_driver.rst:2 +msgid "start\\_driver" msgstr "" -#: ../../source/ref-api/flwr.server.driver.rst:30::1 +#: flwr.server.compat.app.start_driver:3 of msgid "" -":py:obj:`GrpcDriver `\\ " -"\\(\\[driver\\_service\\_address\\, ...\\]\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.driver.rst:30::1 -#: flwr.server.driver.grpc_driver.GrpcDriver:1 of -msgid "`GrpcDriver` provides access to the gRPC Driver API/service." +"The IPv4 or IPv6 address of the Driver API server. Defaults to " +"`\"[::]:8080\"`." msgstr "" -#: ../../source/ref-api/flwr.server.driver.Driver.rst:2 -msgid "Driver" +#: flwr.server.compat.app.start_driver:6 of +msgid "" +"A server implementation, either `flwr.server.Server` or a subclass " +"thereof. If no instance is provided, then `start_driver` will create one." msgstr "" -#: flwr.server.driver.driver.Driver:3 of +#: flwr.server.app.start_server:9 flwr.server.compat.app.start_driver:10 +#: flwr.simulation.app.start_simulation:28 of msgid "" -"The IPv4 or IPv6 address of the Driver API server. Defaults to " -"`\"[::]:9091\"`." +"Currently supported values are `num_rounds` (int, default: 1) and " +"`round_timeout` in seconds (float, default: None)." msgstr "" -#: flwr.server.app.start_server:28 flwr.server.driver.driver.Driver:6 of +#: flwr.server.app.start_server:12 flwr.server.compat.app.start_driver:13 of msgid "" -"Tuple containing root certificate, server certificate, and private key to" -" start a secure SSL-enabled server. The tuple is expected to have three " -"bytes elements in the following order: * CA certificate. * " -"server certificate. * server private key." +"An implementation of the abstract base class " +"`flwr.server.strategy.Strategy`. If no strategy is provided, then " +"`start_server` will use `flwr.server.strategy.FedAvg`." msgstr "" -#: flwr.server.app.start_server:28 flwr.server.driver.driver.Driver:6 of +#: flwr.server.compat.app.start_driver:17 of msgid "" -"Tuple containing root certificate, server certificate, and private key to" -" start a secure SSL-enabled server. The tuple is expected to have three " -"bytes elements in the following order:" +"An implementation of the class `flwr.server.ClientManager`. If no " +"implementation is provided, then `start_driver` will use " +"`flwr.server.SimpleClientManager`." msgstr "" -#: flwr.server.app.start_server:32 flwr.server.driver.driver.Driver:10 of -msgid "CA certificate." 
+#: flwr.server.compat.app.start_driver:25 of +msgid "The Driver object to use." msgstr "" -#: flwr.server.app.start_server:33 flwr.server.driver.driver.Driver:11 of -msgid "server certificate." +#: flwr.server.app.start_server:37 flwr.server.compat.app.start_driver:28 of +msgid "**hist** -- Object containing training and evaluation metrics." msgstr "" -#: flwr.server.app.start_server:34 flwr.server.driver.driver.Driver:12 of -msgid "server private key." +#: flwr.server.compat.app.start_driver:33 of +msgid "Starting a driver that connects to an insecure server:" msgstr "" -#: flwr.server.driver.driver.Driver.get_nodes:1::1 of -msgid ":py:obj:`get_nodes `\\ \\(\\)" +#: flwr.server.compat.app.start_driver:37 of +msgid "Starting a driver that connects to an SSL-enabled server:" msgstr "" -#: flwr.server.driver.driver.Driver.get_nodes:1 -#: flwr.server.driver.driver.Driver.get_nodes:1::1 of -msgid "Get node IDs." +#: ../../source/ref-api/flwr.server.start_server.rst:2 +msgid "start\\_server" msgstr "" -#: flwr.server.driver.driver.Driver.get_nodes:1::1 of -msgid "" -":py:obj:`pull_task_res `\\ " -"\\(task\\_ids\\)" +#: flwr.server.app.start_server:3 of +msgid "The IPv4 or IPv6 address of the server. Defaults to `\"[::]:8080\"`." msgstr "" -#: flwr.server.driver.driver.Driver.get_nodes:1::1 -#: flwr.server.driver.driver.Driver.pull_task_res:1 -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 -#: flwr.server.driver.grpc_driver.GrpcDriver.pull_task_res:1 of -msgid "Get task results." +#: flwr.server.app.start_server:5 of +msgid "" +"A server implementation, either `flwr.server.Server` or a subclass " +"thereof. If no instance is provided, then `start_server` will create one." msgstr "" -#: flwr.server.driver.driver.Driver.get_nodes:1::1 of +#: flwr.server.app.start_server:16 of msgid "" -":py:obj:`push_task_ins `\\ " -"\\(task\\_ins\\_list\\)" +"An implementation of the abstract base class `flwr.server.ClientManager`." +" If no implementation is provided, then `start_server` will use " +"`flwr.server.client_manager.SimpleClientManager`." msgstr "" -#: flwr.server.driver.driver.Driver.get_nodes:1::1 -#: flwr.server.driver.driver.Driver.push_task_ins:1 -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 -#: flwr.server.driver.grpc_driver.GrpcDriver.push_task_ins:1 of -msgid "Schedule tasks." +#: flwr.server.app.start_server:21 of +msgid "" +"The maximum length of gRPC messages that can be exchanged with the Flower" +" clients. The default should be sufficient for most models. Users who " +"train very large models might need to increase this value. Note that the " +"Flower clients need to be started with the same value (see " +"`flwr.client.start_client`), otherwise clients will not know about the " +"increased limit and block larger messages." msgstr "" -#: ../../source/ref-api/flwr.server.driver.GrpcDriver.rst:2 -msgid "GrpcDriver" +#: flwr.server.app.start_server:42 of +msgid "Starting an insecure server:" msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 of -msgid ":py:obj:`connect `\\ \\(\\)" +#: flwr.server.app.start_server:46 of +msgid "Starting an SSL-enabled server:" msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1 -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 of -msgid "Connect to the Driver API." 
+#: ../../source/ref-api/flwr.server.strategy.rst:2 +msgid "strategy" msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`create_run `\\ " -"\\(req\\)" +":py:obj:`Bulyan `\\ \\(\\*\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\)" msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 -#: flwr.server.driver.grpc_driver.GrpcDriver.create_run:1 of -msgid "Request for run ID." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.bulyan.Bulyan:1 of +msgid "Bulyan strategy." msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 of -msgid ":py:obj:`disconnect `\\ \\(\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`DPFedAvgAdaptive `\\ " +"\\(strategy\\, num\\_sampled\\_clients\\)" msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 -#: flwr.server.driver.grpc_driver.GrpcDriver.disconnect:1 of -msgid "Disconnect from the Driver API." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:1 of +msgid "Wrapper for configuring a Strategy for DP with Adaptive Clipping." msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 of -msgid ":py:obj:`get_nodes `\\ \\(req\\)" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`DPFedAvgFixed `\\ " +"\\(strategy\\, num\\_sampled\\_clients\\, ...\\)" msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 -#: flwr.server.driver.grpc_driver.GrpcDriver.get_nodes:1 of -msgid "Get client IDs." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:1 of +msgid "Wrapper for configuring a Strategy for DP with Fixed Clipping." msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`pull_task_res `\\ " -"\\(req\\)" +":py:obj:`DifferentialPrivacyClientSideAdaptiveClipping " +"`\\ " +"\\(...\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:1 +#: of +msgid "Strategy wrapper for central DP with client-side adaptive clipping." msgstr "" -#: flwr.server.driver.grpc_driver.GrpcDriver.connect:1::1 of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`push_task_ins `\\ " -"\\(req\\)" +":py:obj:`DifferentialPrivacyServerSideAdaptiveClipping " +"`\\ " +"\\(...\\)" msgstr "" -#: ../../source/ref-api/flwr.server.driver.start_driver.rst:2 -msgid "start\\_driver" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:1 +#: of +msgid "Strategy wrapper for central DP with server-side adaptive clipping." msgstr "" -#: flwr.server.driver.app.start_driver:3 of +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -"The IPv4 or IPv6 address of the Driver API server. Defaults to " -"`\"[::]:8080\"`." -msgstr "" - -#: flwr.server.driver.app.start_driver:6 of -msgid "" -"A server implementation, either `flwr.server.Server` or a subclass " -"thereof. If no instance is provided, then `start_driver` will create one." 
-msgstr "" - -#: flwr.server.app.start_server:9 flwr.server.driver.app.start_driver:10 -#: flwr.simulation.app.start_simulation:28 of -msgid "" -"Currently supported values are `num_rounds` (int, default: 1) and " -"`round_timeout` in seconds (float, default: None)." -msgstr "" - -#: flwr.server.app.start_server:12 flwr.server.driver.app.start_driver:13 of -msgid "" -"An implementation of the abstract base class " -"`flwr.server.strategy.Strategy`. If no strategy is provided, then " -"`start_server` will use `flwr.server.strategy.FedAvg`." -msgstr "" - -#: flwr.server.driver.app.start_driver:17 of -msgid "" -"An implementation of the class `flwr.server.ClientManager`. If no " -"implementation is provided, then `start_driver` will use " -"`flwr.server.SimpleClientManager`." -msgstr "" - -#: flwr.server.app.start_server:37 flwr.server.driver.app.start_driver:26 of -msgid "**hist** -- Object containing training and evaluation metrics." -msgstr "" - -#: flwr.server.driver.app.start_driver:31 of -msgid "Starting a driver that connects to an insecure server:" -msgstr "" - -#: flwr.server.driver.app.start_driver:35 of -msgid "Starting a driver that connects to an SSL-enabled server:" -msgstr "" - -#: ../../source/ref-api/flwr.server.run_driver_api.rst:2 -msgid "run\\_driver\\_api" -msgstr "" - -#: ../../source/ref-api/flwr.server.run_fleet_api.rst:2 -msgid "run\\_fleet\\_api" -msgstr "" - -#: ../../source/ref-api/flwr.server.run_server_app.rst:2 -msgid "run\\_server\\_app" -msgstr "" - -#: ../../source/ref-api/flwr.server.run_superlink.rst:2 -msgid "run\\_superlink" -msgstr "" - -#: ../../source/ref-api/flwr.server.start_server.rst:2 -msgid "start\\_server" -msgstr "" - -#: flwr.server.app.start_server:3 of -msgid "The IPv4 or IPv6 address of the server. Defaults to `\"[::]:8080\"`." -msgstr "" - -#: flwr.server.app.start_server:5 of -msgid "" -"A server implementation, either `flwr.server.Server` or a subclass " -"thereof. If no instance is provided, then `start_server` will create one." -msgstr "" - -#: flwr.server.app.start_server:16 of -msgid "" -"An implementation of the abstract base class `flwr.server.ClientManager`." -" If no implementation is provided, then `start_server` will use " -"`flwr.server.client_manager.SimpleClientManager`." -msgstr "" - -#: flwr.server.app.start_server:21 of -msgid "" -"The maximum length of gRPC messages that can be exchanged with the Flower" -" clients. The default should be sufficient for most models. Users who " -"train very large models might need to increase this value. Note that the " -"Flower clients need to be started with the same value (see " -"`flwr.client.start_client`), otherwise clients will not know about the " -"increased limit and block larger messages." +":py:obj:`DifferentialPrivacyClientSideFixedClipping " +"`\\ " +"\\(...\\)" msgstr "" -#: flwr.server.app.start_server:42 of -msgid "Starting an insecure server:" -msgstr "" - -#: flwr.server.app.start_server:46 of -msgid "Starting an SSL-enabled server:" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.rst:2 -msgid "strategy" +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:1 +#: of +msgid "Strategy wrapper for central DP with client-side fixed clipping." 
msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`FaultTolerantFedAvg " -"`\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +":py:obj:`DifferentialPrivacyServerSideFixedClipping " +"`\\ " +"\\(...\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg:1 of -msgid "Configurable fault-tolerant FedAvg strategy implementation." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:1 +#: of +msgid "Strategy wrapper for central DP with server-side fixed clipping." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" ":py:obj:`FedAdagrad `\\ \\(\\*\\[\\, " "fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.fedadagrad.FedAdagrad:1 of msgid "FedAdagrad strategy - Adaptive Federated Optimization using Adagrad." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" ":py:obj:`FedAdam `\\ \\(\\*\\[\\, " "fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.fedadam.FedAdam:1 of msgid "FedAdam - Adaptive Federated Optimization using Adam." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" ":py:obj:`FedAvg `\\ \\(\\*\\[\\, " "fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.fedavg.FedAvg:1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid:1 of msgid "Federated Averaging strategy." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -msgid "" -":py:obj:`FedXgbNnAvg `\\ \\(\\*args\\, " -"\\*\\*kwargs\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:1 of -msgid "Configurable FedXgbNnAvg strategy implementation." -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -msgid "" -":py:obj:`FedXgbBagging `\\ " -"\\(\\[evaluate\\_function\\]\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging:1 of -msgid "Configurable FedXgbBagging strategy implementation." -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -msgid "" -":py:obj:`FedXgbCyclic `\\ " -"\\(\\*\\*kwargs\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic:1 of -msgid "Configurable FedXgbCyclic strategy implementation." 
-msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" ":py:obj:`FedAvgAndroid `\\ " "\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" ":py:obj:`FedAvgM `\\ \\(\\*\\[\\, " "fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.fedavgm.FedAvgM:1 of msgid "Federated Averaging with Momentum strategy." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +msgid "" +":py:obj:`FedMedian `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedmedian.FedMedian:1 of +msgid "Configurable FedMedian strategy implementation." +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" ":py:obj:`FedOpt `\\ \\(\\*\\[\\, " "fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.fedopt.FedOpt:1 of msgid "Federated Optim strategy." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" ":py:obj:`FedProx `\\ \\(\\*\\[\\, " "fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.fedprox.FedProx:1 of msgid "Federated Optimization strategy." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`FedYogi `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +":py:obj:`FedTrimmedAvg `\\ " +"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.fedyogi.FedYogi:1 of -msgid "FedYogi [Reddi et al., 2020] strategy." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:1 of +msgid "Federated Averaging with Trimmed Mean [Dong Yin, et al., 2021]." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`QFedAvg `\\ \\(\\*\\[\\, " -"q\\_param\\, qffl\\_learning\\_rate\\, ...\\]\\)" +":py:obj:`FedXgbBagging `\\ " +"\\(\\[evaluate\\_function\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.qfedavg.QFedAvg:1 of -msgid "Configurable QFedAvg strategy implementation." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging:1 of +msgid "Configurable FedXgbBagging strategy implementation." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`FedMedian `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, ...\\]\\)" +":py:obj:`FedXgbCyclic `\\ " +"\\(\\*\\*kwargs\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.fedmedian.FedMedian:1 of -msgid "Configurable FedMedian strategy implementation." 
+#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic:1 of +msgid "Configurable FedXgbCyclic strategy implementation." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`FedTrimmedAvg `\\ " -"\\(\\*\\[\\, fraction\\_fit\\, ...\\]\\)" +":py:obj:`FedXgbNnAvg `\\ \\(\\*args\\, " +"\\*\\*kwargs\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:1 of -msgid "Federated Averaging with Trimmed Mean [Dong Yin, et al., 2021]." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:1 of +msgid "Configurable FedXgbNnAvg strategy implementation." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`Krum `\\ \\(\\*\\[\\, " -"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" +":py:obj:`FedYogi `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.krum.Krum:1 of -msgid "Krum [Blanchard et al., 2017] strategy." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fedyogi.FedYogi:1 of +msgid "FedYogi [Reddi et al., 2020] strategy." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`Bulyan `\\ \\(\\*\\, " -"fraction\\_fit\\, fraction\\_evaluate\\, ...\\)" +":py:obj:`FaultTolerantFedAvg " +"`\\ \\(\\*\\[\\, " +"fraction\\_fit\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.bulyan.Bulyan:1 of -msgid "Bulyan strategy." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg:1 of +msgid "Configurable fault-tolerant FedAvg strategy implementation." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`DPFedAvgAdaptive `\\ " -"\\(strategy\\, num\\_sampled\\_clients\\)" +":py:obj:`Krum `\\ \\(\\*\\[\\, " +"fraction\\_fit\\, fraction\\_evaluate\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:1 of -msgid "Wrapper for configuring a Strategy for DP with Adaptive Clipping." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.krum.Krum:1 of +msgid "Krum [Blanchard et al., 2017] strategy." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid "" -":py:obj:`DPFedAvgFixed `\\ " -"\\(strategy\\, num\\_sampled\\_clients\\, ...\\)" +":py:obj:`QFedAvg `\\ \\(\\*\\[\\, " +"q\\_param\\, qffl\\_learning\\_rate\\, ...\\]\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 -#: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:1 of -msgid "Wrapper for configuring a Strategy for DP with Fixed Clipping." +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 +#: flwr.server.strategy.qfedavg.QFedAvg:1 of +msgid "Configurable QFedAvg strategy implementation." 
msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 msgid ":py:obj:`Strategy `\\ \\(\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.rst:41::1 +#: ../../source/ref-api/flwr.server.strategy.rst:45::1 #: flwr.server.strategy.strategy.Strategy:1 of msgid "Abstract base class for server strategy implementations." msgstr "" @@ -8806,6 +9894,14 @@ msgid "" "parameters\\, ...\\)" msgstr "" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.configure_evaluate:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.configure_evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.configure_evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.configure_evaluate:1 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 #: flwr.server.strategy.fedavg.FedAvg.configure_evaluate:1 @@ -8827,6 +9923,14 @@ msgid "" "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.configure_fit:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.configure_fit:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.configure_fit:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.configure_fit:1 #: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive.configure_fit:1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 @@ -8920,6 +10024,10 @@ msgstr "" msgid "Return the sample size and the required number of available clients." 
msgstr "" +#: ../../source/ref-api/flwr.server.strategy.DPFedAvgAdaptive.rst:2 +msgid "DPFedAvgAdaptive" +msgstr "" + #: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive:1 of msgid "Bases: :py:class:`~flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed`" msgstr "" @@ -8937,6 +10045,14 @@ msgid "" "\\(server\\_round\\, results\\, ...\\)" msgstr "" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 #: of @@ -8985,6 +10101,14 @@ msgid "" "\\(server\\_round\\, parameters\\)" msgstr "" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.evaluate:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.evaluate:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.evaluate:1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.evaluate:1 of msgid "Evaluate model parameters using an evaluation function from the strategy." 
@@ -8998,6 +10122,14 @@ msgid "" "\\(client\\_manager\\)" msgstr "" +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.initialize_parameters:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.initialize_parameters:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.initialize_parameters:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.initialize_parameters:1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.aggregate_evaluate:1::1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.initialize_parameters:1 of msgid "Initialize global model parameters using given strategy." @@ -9031,6 +10163,14 @@ msgid "" "round of federated evaluation." msgstr "" +#: ../../source/ref-api/flwr.server.strategy.DPFedAvgFixed.rst:2 +msgid "DPFedAvgFixed" +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:1 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed:1 #: flwr.server.strategy.fedavg.FedAvg:1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid:1 of @@ -9112,8939 +10252,10559 @@ msgid "" "round of federated learning." msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FaultTolerantFedAvg.rst:2 -msgid "FaultTolerantFedAvg" +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideAdaptiveClipping.rst:2 +msgid "DifferentialPrivacyClientSideAdaptiveClipping" msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:3 #: of -msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +msgid "Use `adaptiveclipping_mod` modifier at the client side." msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:5 #: of msgid "" -":py:obj:`aggregate_fit " -"`\\ " -"\\(server\\_round\\, results\\, failures\\)" +"In comparison to `DifferentialPrivacyServerSideAdaptiveClipping`, which " +"performs clipping on the server-side, " +"`DifferentialPrivacyClientSideAdaptiveClipping` expects clipping to " +"happen on the client-side, usually by using the built-in " +"`adaptiveclipping_mod`." 
msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_fit:1 -#: flwr.server.strategy.fedadagrad.FedAdagrad.aggregate_fit:1 -#: flwr.server.strategy.fedadam.FedAdam.aggregate_fit:1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg.FedAvg.aggregate_fit:1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_fit:1 -#: flwr.server.strategy.fedavgm.FedAvgM.aggregate_fit:1 -#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg.aggregate_fit:1 -#: flwr.server.strategy.fedyogi.FedYogi.aggregate_fit:1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_fit:1 of -msgid "Aggregate fit results using weighted average." +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:10 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:3 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:10 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:3 +#: of +msgid "The strategy to which DP functionalities will be added by this wrapper." msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:12 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:5 #: of -msgid "" -":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +msgid "The noise multiplier for the Gaussian mechanism for model updates." msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:14 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:7 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:17 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:10 #: of -msgid "" -":py:obj:`configure_fit " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +msgid "The number of clients that are sampled on each round." msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:16 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:9 #: of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +"The initial value of clipping norm. Defaults to 0.1. Andrew et al. " +"recommends to set to 0.1." msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:19 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:12 #: of -msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +msgid "The desired quantile of updates which should be clipped. Defaults to 0.5." 
msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:21 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:14 #: of msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +"The learning rate for the clipping norm adaptation. Defaults to 0.2. " +"Andrew et al. recommends to set to 0.2." msgstr "" -#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:24 #: of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.FedAdagrad.rst:2 -#: ../../source/ref-changelog.md:839 -msgid "FedAdagrad" -msgstr "" - -#: flwr.server.strategy.fedadagrad.FedAdagrad:1 -#: flwr.server.strategy.fedadam.FedAdam:1 -#: flwr.server.strategy.fedyogi.FedYogi:1 of -msgid "Bases: :py:class:`~flwr.server.strategy.fedopt.FedOpt`" -msgstr "" - -#: flwr.server.strategy.fedadagrad.FedAdagrad:3 -#: flwr.server.strategy.fedadam.FedAdam:3 flwr.server.strategy.fedopt.FedOpt:3 -#: flwr.server.strategy.fedyogi.FedYogi:3 of -msgid "Implementation based on https://arxiv.org/abs/2003.00295v5" -msgstr "" - -#: flwr.server.strategy.fedadagrad.FedAdagrad:21 -#: flwr.server.strategy.fedadagrad.FedAdagrad:23 -#: flwr.server.strategy.fedadam.FedAdam:25 -#: flwr.server.strategy.fedadam.FedAdam:27 -#: flwr.server.strategy.fedavg.FedAvg:29 flwr.server.strategy.fedavg.FedAvg:31 -#: flwr.server.strategy.fedopt.FedOpt:25 flwr.server.strategy.fedopt.FedOpt:27 -#: flwr.server.strategy.fedprox.FedProx:61 -#: flwr.server.strategy.fedprox.FedProx:63 -#: flwr.server.strategy.fedyogi.FedYogi:28 -#: flwr.server.strategy.fedyogi.FedYogi:30 of -msgid "Metrics aggregation function, optional." +"The stddev of the noise added to the count of updates currently below the" +" estimate. Andrew et al. recommends to set to `expected_num_records/20`" msgstr "" -#: flwr.server.strategy.fedadagrad.FedAdagrad:29 -#: flwr.server.strategy.fedadam.FedAdam:29 -#: flwr.server.strategy.fedopt.FedOpt:29 of -msgid "Server-side learning rate. Defaults to 1e-1." +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:30 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:23 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:22 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:15 +#: of +msgid "Create a strategy:" msgstr "" -#: flwr.server.strategy.fedadagrad.FedAdagrad:31 -#: flwr.server.strategy.fedadam.FedAdam:31 -#: flwr.server.strategy.fedopt.FedOpt:31 of -msgid "Client-side learning rate. Defaults to 1e-1." +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:34 +#: of +msgid "" +"Wrap the strategy with the " +"`DifferentialPrivacyClientSideAdaptiveClipping` wrapper:" msgstr "" -#: flwr.server.strategy.fedadagrad.FedAdagrad:33 -#: flwr.server.strategy.fedadam.FedAdam:37 -#: flwr.server.strategy.fedopt.FedOpt:37 of -msgid "Controls the algorithm's degree of adaptability. Defaults to 1e-9." 
+#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping:40 +#: of +msgid "On the client, add the `adaptiveclipping_mod` to the client-side mods:" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +"`\\" +" \\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`aggregate_fit `\\" +":py:obj:`aggregate_fit " +"`\\" " \\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_fit:1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_fit:1 +#: of +msgid "Aggregate training results and update clip norms." +msgstr "" + +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`configure_fit `\\" +":py:obj:`configure_fit " +"`\\" " \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyClientSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +"`\\" +" \\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyClientSideFixedClipping.rst:2 +msgid "DifferentialPrivacyClientSideFixedClipping" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:3 +#: of +msgid "Use `fixedclipping_mod` modifier at the client side." 
msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedAdam.rst:2 -msgid "FedAdam" +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:5 +#: of +msgid "" +"In comparison to `DifferentialPrivacyServerSideFixedClipping`, which " +"performs clipping on the server-side, " +"`DifferentialPrivacyClientSideFixedClipping` expects clipping to happen " +"on the client-side, usually by using the built-in `fixedclipping_mod`." msgstr "" -#: flwr.server.strategy.fedadam.FedAdam:33 -#: flwr.server.strategy.fedyogi.FedYogi:36 of -msgid "Momentum parameter. Defaults to 0.9." +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:12 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:5 +#: of +msgid "" +"The noise multiplier for the Gaussian mechanism for model updates. A " +"value of 1.0 or higher is recommended for strong privacy." msgstr "" -#: flwr.server.strategy.fedadam.FedAdam:35 -#: flwr.server.strategy.fedyogi.FedYogi:38 of -msgid "Second moment parameter. Defaults to 0.99." +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:15 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:8 +#: of +msgid "The value of the clipping norm." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:26 +#: of msgid "" -":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\," -" results\\, ...\\)" +"Wrap the strategy with the `DifferentialPrivacyClientSideFixedClipping` " +"wrapper:" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping:32 +#: of +msgid "On the client, add the `fixedclipping_mod` to the client-side mods:" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\," -" parameters\\, ...\\)" +":py:obj:`aggregate_evaluate " +"`\\" +" \\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_fit:1 +#: of +msgid "Add noise to the aggregated parameters." 
msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +":py:obj:`configure_evaluate " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedAvg.rst:2 -msgid "FedAvg" +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyClientSideFixedClipping.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`initialize_parameters " +"`\\" +" \\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg:3 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid:3 of -msgid "Implementation based on https://arxiv.org/abs/1602.05629" +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideAdaptiveClipping.rst:2 +msgid "DifferentialPrivacyServerSideAdaptiveClipping" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg:5 flwr.server.strategy.fedprox.FedProx:37 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:17 #: of msgid "" -"Fraction of clients used during training. In case `min_fit_clients` is " -"larger than `fraction_fit * available_clients`, `min_fit_clients` will " -"still be sampled. Defaults to 1.0." +"The standard deviation of the noise added to the count of updates below " +"the estimate. Andrew et al. recommends to set to " +"`expected_num_records/20`" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg:9 flwr.server.strategy.fedprox.FedProx:41 +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping:27 #: of msgid "" -"Fraction of clients used during validation. In case " -"`min_evaluate_clients` is larger than `fraction_evaluate * " -"available_clients`, `min_evaluate_clients` will still be sampled. " -"Defaults to 1.0." 
+"Wrap the strategy with the DifferentialPrivacyServerSideAdaptiveClipping " +"wrapper" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\, " -"results\\, ...\\)" +"`\\" +" \\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\, " -"parameters\\, ...\\)" +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_adaptive_clipping.DifferentialPrivacyServerSideAdaptiveClipping.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +"`\\" +" \\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: ../../source/ref-api/flwr.server.strategy.DifferentialPrivacyServerSideFixedClipping.rst:2 +msgid "DifferentialPrivacyServerSideFixedClipping" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping:19 +#: of msgid "" -":py:obj:`num_fit_clients `\\" -" \\(num\\_available\\_clients\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.FedAvgAndroid.rst:2 -msgid "FedAvgAndroid" +"Wrap the strategy with the DifferentialPrivacyServerSideFixedClipping " +"wrapper" msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +"`\\" +" \\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: of msgid "" ":py:obj:`aggregate_fit " -"`\\ " -"\\(server\\_round\\, results\\, failures\\)" +"`\\" +" 
\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_fit:1 #: of -msgid "" -":py:obj:`bytes_to_ndarray " -"`\\ \\(tensor\\)" +msgid "Compute the updates, clip, and pass them for aggregation." msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.bytes_to_ndarray:1 of -msgid "Deserialize NumPy array from bytes." -msgstr "" - -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: of msgid "" ":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: of msgid "" ":py:obj:`configure_fit " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"`\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: of msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +":py:obj:`evaluate " +"`\\" +" \\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_evaluate:1::1 #: of msgid "" ":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +"`\\" +" \\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.dp_fixed_clipping.DifferentialPrivacyServerSideFixedClipping.aggregate_fit:3 #: of -msgid "" -":py:obj:`ndarray_to_bytes " -"`\\ \\(ndarray\\)" +msgid "Afterward, add noise to the aggregated parameters." msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.ndarray_to_bytes:1 of -msgid "Serialize NumPy array to bytes." 
+#: ../../source/ref-api/flwr.server.strategy.FaultTolerantFedAvg.rst:2 +msgid "FaultTolerantFedAvg" msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: of msgid "" -":py:obj:`ndarrays_to_parameters " -"`\\ " -"\\(ndarrays\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: of msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`aggregate_fit " +"`\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_fit:1 +#: flwr.server.strategy.fedadagrad.FedAdagrad.aggregate_fit:1 +#: flwr.server.strategy.fedadam.FedAdam.aggregate_fit:1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_fit:1 #: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_fit:1 +#: flwr.server.strategy.fedavgm.FedAvgM.aggregate_fit:1 +#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg.aggregate_fit:1 +#: flwr.server.strategy.fedyogi.FedYogi.aggregate_fit:1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_fit:1 of +msgid "Aggregate fit results using weighted average." +msgstr "" + +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: of msgid "" -":py:obj:`parameters_to_ndarrays " -"`\\ " -"\\(parameters\\)" +":py:obj:`configure_fit " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedavg_android.FedAvgAndroid.parameters_to_ndarrays:1 +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 #: of -msgid "Convert parameters object to NumPy weights." +msgid "" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedAvgM.rst:2 -msgid "FedAvgM" +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavgm.FedAvgM:3 of -msgid "Implementation based on https://arxiv.org/abs/1909.06335" +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.fedavgm.FedAvgM:25 of +#: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1::1 +#: of msgid "" -"Server-side learning rate used in server-side optimization. Defaults to " -"1.0." 
+":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.fedavgm.FedAvgM:28 of -msgid "Server-side momentum factor used for FedAvgM. Defaults to 0.0." +#: ../../source/ref-api/flwr.server.strategy.FedAdagrad.rst:2 +#: ../../source/ref-changelog.md:839 +msgid "FedAdagrad" +msgstr "" + +#: flwr.server.strategy.fedadagrad.FedAdagrad:1 +#: flwr.server.strategy.fedadam.FedAdam:1 +#: flwr.server.strategy.fedyogi.FedYogi:1 of +msgid "Bases: :py:class:`~flwr.server.strategy.fedopt.FedOpt`" +msgstr "" + +#: flwr.server.strategy.fedadagrad.FedAdagrad:3 +#: flwr.server.strategy.fedadam.FedAdam:3 flwr.server.strategy.fedopt.FedOpt:3 +#: flwr.server.strategy.fedyogi.FedYogi:3 of +msgid "Implementation based on https://arxiv.org/abs/2003.00295v5" +msgstr "" + +#: flwr.server.strategy.fedadagrad.FedAdagrad:21 +#: flwr.server.strategy.fedadagrad.FedAdagrad:23 +#: flwr.server.strategy.fedadam.FedAdam:25 +#: flwr.server.strategy.fedadam.FedAdam:27 +#: flwr.server.strategy.fedavg.FedAvg:29 flwr.server.strategy.fedavg.FedAvg:31 +#: flwr.server.strategy.fedopt.FedOpt:25 flwr.server.strategy.fedopt.FedOpt:27 +#: flwr.server.strategy.fedprox.FedProx:61 +#: flwr.server.strategy.fedprox.FedProx:63 +#: flwr.server.strategy.fedyogi.FedYogi:28 +#: flwr.server.strategy.fedyogi.FedYogi:30 of +msgid "Metrics aggregation function, optional." +msgstr "" + +#: flwr.server.strategy.fedadagrad.FedAdagrad:29 +#: flwr.server.strategy.fedadam.FedAdam:29 +#: flwr.server.strategy.fedopt.FedOpt:29 of +msgid "Server-side learning rate. Defaults to 1e-1." +msgstr "" + +#: flwr.server.strategy.fedadagrad.FedAdagrad:31 +#: flwr.server.strategy.fedadam.FedAdam:31 +#: flwr.server.strategy.fedopt.FedOpt:31 of +msgid "Client-side learning rate. Defaults to 1e-1." +msgstr "" + +#: flwr.server.strategy.fedadagrad.FedAdagrad:33 +#: flwr.server.strategy.fedadam.FedAdam:37 +#: flwr.server.strategy.fedopt.FedOpt:37 of +msgid "Controls the algorithm's degree of adaptability. Defaults to 1e-9." 
msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\," -" results\\, ...\\)" +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit `\\" +" \\(server\\_round\\, results\\, failures\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\," -" parameters\\, ...\\)" +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit `\\" +" \\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_fit_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedMedian.rst:2 -msgid "FedMedian" +#: ../../source/ref-api/flwr.server.strategy.FedAdam.rst:2 +msgid "FedAdam" +msgstr "" + +#: flwr.server.strategy.fedadam.FedAdam:33 +#: flwr.server.strategy.fedyogi.FedYogi:36 of +msgid "Momentum parameter. Defaults to 0.9." +msgstr "" + +#: flwr.server.strategy.fedadam.FedAdam:35 +#: flwr.server.strategy.fedyogi.FedYogi:38 of +msgid "Second moment parameter. Defaults to 0.99." msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +"`\\ \\(server\\_round\\," +" results\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit `\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedmedian.FedMedian.aggregate_fit:1 of -msgid "Aggregate fit results using median." 
-msgstr "" - #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit `\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_fit_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedOpt.rst:2 -msgid "FedOpt" +#: ../../source/ref-api/flwr.server.strategy.FedAvg.rst:2 +msgid "FedAvg" msgstr "" -#: flwr.server.strategy.fedopt.FedOpt:33 of -msgid "Momentum parameter. Defaults to 0.0." +#: flwr.server.strategy.fedavg.FedAvg:3 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid:3 of +msgid "Implementation based on https://arxiv.org/abs/1602.05629" msgstr "" -#: flwr.server.strategy.fedopt.FedOpt:35 of -msgid "Second moment parameter. Defaults to 0.0." +#: flwr.server.strategy.fedavg.FedAvg:5 flwr.server.strategy.fedprox.FedProx:37 +#: of +msgid "" +"Fraction of clients used during training. In case `min_fit_clients` is " +"larger than `fraction_fit * available_clients`, `min_fit_clients` will " +"still be sampled. Defaults to 1.0." +msgstr "" + +#: flwr.server.strategy.fedavg.FedAvg:9 flwr.server.strategy.fedprox.FedProx:41 +#: of +msgid "" +"Fraction of clients used during validation. In case " +"`min_evaluate_clients` is larger than `fraction_evaluate * " +"available_clients`, `min_evaluate_clients` will still be sampled. " +"Defaults to 1.0." +msgstr "" + +#: flwr.server.strategy.fedavg.FedAvg:33 of +msgid "Enable (True) or disable (False) in-place aggregation of model updates." 
msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\, " +"`\\ \\(server\\_round\\, " "results\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit `\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\, " +"`\\ \\(server\\_round\\, " "parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit `\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`num_fit_clients `\\" +":py:obj:`num_fit_clients `\\" " \\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedProx.rst:2 -msgid "FedProx" +#: ../../source/ref-api/flwr.server.strategy.FedAvgAndroid.rst:2 +msgid "FedAvgAndroid" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:3 of -msgid "Implementation based on https://arxiv.org/abs/1812.06127" +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:5 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" -"The strategy in itself will not be different than FedAvg, the client " -"needs to be adjusted. A proximal term needs to be added to the loss " -"function during the training:" +":py:obj:`aggregate_fit " +"`\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:9 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" -"\\\\frac{\\\\mu}{2} || w - w^t ||^2\n" -"\n" +":py:obj:`bytes_to_ndarray " +"`\\ \\(tensor\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:12 of -msgid "" -"Where $w^t$ are the global parameters and $w$ are the local weights the " -"function will be optimized with." +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.bytes_to_ndarray:1 of +msgid "Deserialize NumPy array from bytes." 
msgstr "" -#: flwr.server.strategy.fedprox.FedProx:15 of -msgid "In PyTorch, for example, the loss would go from:" +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:21 of -msgid "To:" +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`configure_fit " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:30 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" -"With `global_params` being a copy of the parameters before the training " -"takes place." +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx:65 of +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of msgid "" -"The weight of the proximal term used in the optimization. 0.0 makes this " -"strategy equivalent to FedAvg, and the higher the coefficient, the more " -"regularization will be used (that is, the client parameters will need to " -"be closer to the server parameters during training)." +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" +msgstr "" + +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`ndarray_to_bytes " +"`\\ \\(ndarray\\)" +msgstr "" + +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.ndarray_to_bytes:1 of +msgid "Serialize NumPy array to bytes." +msgstr "" + +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`ndarrays_to_parameters " +"`\\ " +"\\(ndarrays\\)" +msgstr "" + +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" +msgstr "" + +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" +msgstr "" + +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: of +msgid "" +":py:obj:`parameters_to_ndarrays " +"`\\ " +"\\(parameters\\)" +msgstr "" + +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedavg_android.FedAvgAndroid.parameters_to_ndarrays:1 +#: of +msgid "Convert parameters object to NumPy weights." +msgstr "" + +#: ../../source/ref-api/flwr.server.strategy.FedAvgM.rst:2 +msgid "FedAvgM" +msgstr "" + +#: flwr.server.strategy.fedavgm.FedAvgM:3 of +msgid "Implementation based on https://arxiv.org/abs/1909.06335" +msgstr "" + +#: flwr.server.strategy.fedavgm.FedAvgM:25 of +msgid "" +"Server-side learning rate used in server-side optimization. Defaults to " +"1.0." +msgstr "" + +#: flwr.server.strategy.fedavgm.FedAvgM:28 of +msgid "Server-side momentum factor used for FedAvgM. Defaults to 0.0." 
msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\," +"`\\ \\(server\\_round\\," " results\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit `\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\," +"`\\ \\(server\\_round\\," " parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit `\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_fit_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.fedprox.FedProx.configure_fit:3 of -msgid "Sends the proximal factor mu to the clients" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.FedTrimmedAvg.rst:2 -msgid "FedTrimmedAvg" -msgstr "" - -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:3 of -msgid "Implemented based on: https://arxiv.org/abs/1803.01498" -msgstr "" - -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:25 of -msgid "Fraction to cut off of both tails of the distribution. Defaults to 0.2." +#: ../../source/ref-api/flwr.server.strategy.FedMedian.rst:2 +msgid "FedMedian" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ " +"`\\ " "\\(server\\_round\\, results\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit " -"`\\ " +":py:obj:`aggregate_fit `\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg.aggregate_fit:1 of -msgid "Aggregate fit results using trimmed average." +#: flwr.server.strategy.fedmedian.FedMedian.aggregate_fit:1 of +msgid "Aggregate fit results using median." 
msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit " -"`\\ " +":py:obj:`configure_fit `\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_fit_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedXgbBagging.rst:2 -msgid "FedXgbBagging" +#: ../../source/ref-api/flwr.server.strategy.FedOpt.rst:2 +msgid "FedOpt" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +#: flwr.server.strategy.fedopt.FedOpt:33 of +msgid "Momentum parameter. Defaults to 0.0." msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: of -msgid "Aggregate evaluation metrics using average." +#: flwr.server.strategy.fedopt.FedOpt:35 of +msgid "Second moment parameter. Defaults to 0.0." msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit " -"`\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\, " +"results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_fit:1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_fit:1 of -msgid "Aggregate fit results using bagging." 
+#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit " -"`\\ " +":py:obj:`configure_fit `\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +":py:obj:`num_fit_clients `\\" +" \\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedXgbCyclic.rst:2 -msgid "FedXgbCyclic" +#: ../../source/ref-api/flwr.server.strategy.FedProx.rst:2 +msgid "FedProx" msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedprox.FedProx:3 of +msgid "Implementation based on https://arxiv.org/abs/1812.06127" +msgstr "" + +#: flwr.server.strategy.fedprox.FedProx:5 of msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +"The strategy in itself will not be different than FedAvg, the client " +"needs to be adjusted. A proximal term needs to be added to the loss " +"function during the training:" msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedprox.FedProx:9 of msgid "" -":py:obj:`aggregate_fit " -"`\\ \\(server\\_round\\," -" results\\, failures\\)" +"\\\\frac{\\\\mu}{2} || w - w^t ||^2\n" +"\n" msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedprox.FedProx:12 of msgid "" -":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" -msgstr "" - -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`configure_fit " -"`\\ \\(server\\_round\\," -" parameters\\, ...\\)" -msgstr "" - -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`evaluate `\\ " -"\\(server\\_round\\, parameters\\)" +"Where $w^t$ are the global parameters and $w$ are the local weights the " +"function will be optimized with." 
msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`initialize_parameters " -"`\\ " -"\\(client\\_manager\\)" +#: flwr.server.strategy.fedprox.FedProx:15 of +msgid "In PyTorch, for example, the loss would go from:" msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: of -msgid "" -":py:obj:`num_evaluation_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" +#: flwr.server.strategy.fedprox.FedProx:21 of +msgid "To:" msgstr "" -#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedprox.FedProx:30 of msgid "" -":py:obj:`num_fit_clients " -"`\\ " -"\\(num\\_available\\_clients\\)" -msgstr "" - -#: ../../source/ref-api/flwr.server.strategy.FedXgbNnAvg.rst:2 -msgid "FedXgbNnAvg" +"With `global_params` being a copy of the parameters before the training " +"takes place." msgstr "" -#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:5 of +#: flwr.server.strategy.fedprox.FedProx:65 of msgid "" -"This strategy is deprecated, but a copy of it is available in Flower " -"Baselines: " -"https://github.com/adap/flower/tree/main/baselines/hfedxgboost." +"The weight of the proximal term used in the optimization. 0.0 makes this " +"strategy equivalent to FedAvg, and the higher the coefficient, the more " +"regularization will be used (that is, the client parameters will need to " +"be closer to the server parameters during training)." msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" +"`\\ \\(server\\_round\\," +" results\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit " -"`\\ \\(server\\_round\\, " -"results\\, failures\\)" +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit " -"`\\ \\(server\\_round\\, " -"parameters\\, ...\\)" +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_fit_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.FedYogi.rst:2 -msgid "FedYogi" +#: flwr.server.strategy.fedprox.FedProx.configure_fit:3 of +msgid "Sends the proximal factor mu to the clients" msgstr "" -#: flwr.server.strategy.fedyogi.FedYogi:32 of -msgid "Server-side learning rate. Defaults to 1e-2." 
+#: ../../source/ref-api/flwr.server.strategy.FedTrimmedAvg.rst:2 +msgid "FedTrimmedAvg" msgstr "" -#: flwr.server.strategy.fedyogi.FedYogi:34 of -msgid "Client-side learning rate. Defaults to 0.0316." +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:3 of +msgid "Implemented based on: https://arxiv.org/abs/1803.01498" msgstr "" -#: flwr.server.strategy.fedyogi.FedYogi:40 of -msgid "Controls the algorithm's degree of adaptability. Defaults to 1e-3." +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg:25 of +msgid "Fraction to cut off of both tails of the distribution. Defaults to 0.2." msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\," -" results\\, ...\\)" +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit " +"`\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg.aggregate_fit:1 of +msgid "Aggregate fit results using trimmed average." +msgstr "" + #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\," -" parameters\\, ...\\)" +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`num_fit_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.Krum.rst:2 -msgid "Krum" -msgstr "" - -#: flwr.server.strategy.krum.Krum:3 of -msgid "Implementation based on https://arxiv.org/abs/1703.02757" +#: ../../source/ref-api/flwr.server.strategy.FedXgbBagging.rst:2 +msgid "FedXgbBagging" msgstr "" -#: flwr.server.strategy.krum.Krum:17 of +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: of msgid "" -"Number of clients to keep before averaging (MultiKrum). Defaults to 0, in" -" that case classical Krum is applied." +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of -msgid "" -":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\, " -"results\\, ...\\)" +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of +msgid "Aggregate evaluation metrics using average." 
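The FedTrimmedAvg entries above describe cutting a fraction (default 0.2) off both tails of the distribution before averaging. A small NumPy sketch of that trimming rule applied to one list of values; FedTrimmedAvg applies the same idea coordinate-wise across client updates, and the helper name `trimmed_mean` is purely illustrative:

```python
import numpy as np

def trimmed_mean(values, beta=0.2):
    """Drop the lowest and highest `beta` fraction of values, then average the rest."""
    values = np.sort(np.asarray(values, dtype=float))
    cut = int(beta * len(values))
    return values[cut: len(values) - cut].mean()

# The outlier 100.0 falls in the trimmed tail and does not skew the result.
print(trimmed_mean([1.0, 2.0, 2.5, 3.0, 100.0]))  # -> 2.5
```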
msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`aggregate_fit `\\ " +":py:obj:`aggregate_fit " +"`\\ " "\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 -#: flwr.server.strategy.krum.Krum.aggregate_fit:1 of -msgid "Aggregate fit results using Krum." +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_fit:1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_fit:1 of +msgid "Aggregate fit results using bagging." msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\, " -"parameters\\, ...\\)" +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`configure_fit `\\ " +":py:obj:`configure_fit " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedxgb_bagging.FedXgbBagging.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`num_fit_clients `\\ " +":py:obj:`num_fit_clients " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.QFedAvg.rst:2 -msgid "QFedAvg" +#: ../../source/ref-api/flwr.server.strategy.FedXgbCyclic.rst:2 +msgid "FedXgbCyclic" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`aggregate_evaluate " -"`\\ \\(server\\_round\\," -" results\\, ...\\)" +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_fit " +"`\\ \\(server\\_round\\," +" results\\, failures\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`configure_evaluate " -"`\\ \\(server\\_round\\," -" parameters\\, ...\\)" +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: 
flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`num_evaluation_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +#: flwr.server.strategy.fedxgb_cyclic.FedXgbCyclic.aggregate_evaluate:1::1 +#: of msgid "" ":py:obj:`num_fit_clients " -"`\\ " +"`\\ " "\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-api/flwr.server.strategy.Strategy.rst:2 -msgid "Strategy" +#: ../../source/ref-api/flwr.server.strategy.FedXgbNnAvg.rst:2 +msgid "FedXgbNnAvg" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg:5 of msgid "" -":py:obj:`aggregate_evaluate " -"`\\ " -"\\(server\\_round\\, results\\, ...\\)" -msgstr "" - -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1 -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: of -msgid "Aggregate evaluation results." +"This strategy is deprecated, but a copy of it is available in Flower " +"Baselines: " +"https://github.com/adap/flower/tree/main/baselines/hfedxgboost." msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`aggregate_fit `\\ " -"\\(server\\_round\\, results\\, failures\\)" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:1 of -msgid "Aggregate training results." 
+#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`aggregate_fit " +"`\\ \\(server\\_round\\, " +"results\\, failures\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`configure_evaluate " -"`\\ " +"`\\ " "\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`configure_fit `\\ " -"\\(server\\_round\\, parameters\\, ...\\)" +":py:obj:`configure_fit " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`evaluate `\\ " +":py:obj:`evaluate `\\ " "\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: flwr.server.strategy.strategy.Strategy.evaluate:1 of -msgid "Evaluate the current model parameters." -msgstr "" - -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" ":py:obj:`initialize_parameters " -"`\\ " +"`\\ " "\\(client\\_manager\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 -#: flwr.server.strategy.strategy.Strategy.initialize_parameters:1 of -msgid "Initialize the (global) model parameters." +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:5 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Successful updates from the previously selected and configured clients. " -"Each pair of `(ClientProxy, FitRes` constitutes a successful update from " -"one of the previously selected clients. Not that not all previously " -"selected clients are necessarily included in this list: a client might " -"drop out and not submit a result. For each client that did not submit an " -"update, there should be an `Exception` in `failures`." +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:13 -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:13 of -msgid "Exceptions that occurred while the server was waiting for client updates." +#: ../../source/ref-api/flwr.server.strategy.FedYogi.rst:2 +msgid "FedYogi" msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:16 of -msgid "" -"**aggregation_result** -- The aggregated evaluation result. Aggregation " -"typically uses some variant of a weighted average." +#: flwr.server.strategy.fedyogi.FedYogi:32 of +msgid "Server-side learning rate. Defaults to 1e-2." msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:5 of -msgid "" -"Successful updates from the previously selected and configured clients. " -"Each pair of `(ClientProxy, FitRes)` constitutes a successful update from" -" one of the previously selected clients. Not that not all previously " -"selected clients are necessarily included in this list: a client might " -"drop out and not submit a result. For each client that did not submit an " -"update, there should be an `Exception` in `failures`." 
+#: flwr.server.strategy.fedyogi.FedYogi:34 of +msgid "Client-side learning rate. Defaults to 0.0316." msgstr "" -#: flwr.server.strategy.strategy.Strategy.aggregate_fit:17 of -msgid "" -"**parameters** -- If parameters are returned, then the server will treat " -"these as the new global model parameters (i.e., it will replace the " -"previous parameters with the ones returned from this method). If `None` " -"is returned (e.g., because there were only failures and no viable " -"results) then the server will no update the previous model parameters, " -"the updates received in this round are discarded, and the global model " -"parameters remain the same." +#: flwr.server.strategy.fedyogi.FedYogi:40 of +msgid "Controls the algorithm's degree of adaptability. Defaults to 1e-3." msgstr "" -#: flwr.server.strategy.strategy.Strategy.evaluate:3 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"This function can be used to perform centralized (i.e., server-side) " -"evaluation of model parameters." +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.evaluate:11 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"**evaluation_result** -- The evaluation result, usually a Tuple " -"containing loss and a dictionary containing task-specific metrics (e.g., " -"accuracy)." +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.server.strategy.strategy.Strategy.initialize_parameters:6 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"**parameters** -- If parameters are returned, then the server will treat " -"these as the initial global model parameters." -msgstr "" - -#: ../../source/ref-api/flwr.simulation.rst:2 -msgid "simulation" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -#: ../../source/ref-api/flwr.simulation.rst:17::1 +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -":py:obj:`start_simulation `\\ \\(\\*\\," -" client\\_fn\\[\\, ...\\]\\)" -msgstr "" - -#: ../../source/ref-api/flwr.simulation.rst:17::1 -#: flwr.simulation.app.start_simulation:1 of -msgid "Start a Ray-based Flower simulation server." -msgstr "" - -#: ../../source/ref-api/flwr.simulation.start_simulation.rst:2 -msgid "start\\_simulation" +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.simulation.app.start_simulation:3 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"A function creating client instances. The function must take a single " -"`str` argument called `cid`. It should return a single client instance of" -" type Client. Note that the created client instances are ephemeral and " -"will often be destroyed after a single method invocation. Since client " -"instances are not long-lived, they should not attempt to carry state over" -" method invocations. Any state required by the instance (model, dataset, " -"hyperparameters, ...) should be (re-)created in either the call to " -"`client_fn` or the call to any of the client methods (e.g., load " -"evaluation data in the `evaluate` method itself)." +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.simulation.app.start_simulation:13 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"The total number of clients in this simulation. 
This must be set if " -"`clients_ids` is not set and vice-versa." +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: flwr.simulation.app.start_simulation:16 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"List `client_id`s for each client. This is only required if `num_clients`" -" is not set. Setting both `num_clients` and `clients_ids` with " -"`len(clients_ids)` not equal to `num_clients` generates an error." +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.simulation.app.start_simulation:20 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"CPU and GPU resources for a single client. Supported keys are `num_cpus` " -"and `num_gpus`. To understand the GPU utilization caused by `num_gpus`, " -"as well as using custom resources, please consult the Ray documentation." +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.simulation.app.start_simulation:25 of -msgid "" -"An implementation of the abstract base class `flwr.server.Server`. If no " -"instance is provided, then `start_server` will create one." +#: ../../source/ref-api/flwr.server.strategy.Krum.rst:2 +msgid "Krum" msgstr "" -#: flwr.simulation.app.start_simulation:31 of -msgid "" -"An implementation of the abstract base class `flwr.server.Strategy`. If " -"no strategy is provided, then `start_server` will use " -"`flwr.server.strategy.FedAvg`." +#: flwr.server.strategy.krum.Krum:3 of +msgid "Implementation based on https://arxiv.org/abs/1703.02757" msgstr "" -#: flwr.simulation.app.start_simulation:35 of +#: flwr.server.strategy.krum.Krum:17 of msgid "" -"An implementation of the abstract base class `flwr.server.ClientManager`." -" If no implementation is provided, then `start_simulation` will use " -"`flwr.server.client_manager.SimpleClientManager`." +"Number of clients to keep before averaging (MultiKrum). Defaults to 0, in" +" that case classical Krum is applied." msgstr "" -#: flwr.simulation.app.start_simulation:39 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Optional dictionary containing arguments for the call to `ray.init`. If " -"ray_init_args is None (the default), Ray will be initialized with the " -"following default args: { \"ignore_reinit_error\": True, " -"\"include_dashboard\": False } An empty dictionary can be used " -"(ray_init_args={}) to prevent any arguments from being passed to " -"ray.init." +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\, " +"results\\, ...\\)" msgstr "" -#: flwr.simulation.app.start_simulation:39 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Optional dictionary containing arguments for the call to `ray.init`. If " -"ray_init_args is None (the default), Ray will be initialized with the " -"following default args:" +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: flwr.simulation.app.start_simulation:43 of -msgid "{ \"ignore_reinit_error\": True, \"include_dashboard\": False }" +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 +#: flwr.server.strategy.krum.Krum.aggregate_fit:1 of +msgid "Aggregate fit results using Krum." msgstr "" -#: flwr.simulation.app.start_simulation:45 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"An empty dictionary can be used (ray_init_args={}) to prevent any " -"arguments from being passed to ray.init." 
+":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\, " +"parameters\\, ...\\)" msgstr "" -#: flwr.simulation.app.start_simulation:48 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Set to True to prevent `ray.shutdown()` in case " -"`ray.is_initialized()=True`." +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: flwr.simulation.app.start_simulation:50 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"Optionally specify the type of actor to use. The actor object, which " -"persists throughout the simulation, will be the process in charge of " -"running the clients' jobs (i.e. their `fit()` method)." +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: flwr.simulation.app.start_simulation:54 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"If you want to create your own Actor classes, you might need to pass some" -" input argument. You can use this dictionary for such purpose." +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: flwr.simulation.app.start_simulation:57 of +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of msgid "" -"(default: \"DEFAULT\") Optional string (\"DEFAULT\" or \"SPREAD\") for " -"the VCE to choose in which node the actor is placed. If you are an " -"advanced user needed more control you can use lower-level scheduling " -"strategies to pin actors to specific compute nodes (e.g. via " -"NodeAffinitySchedulingStrategy). Please note this is an advanced feature." -" For all details, please refer to the Ray documentation: " -"https://docs.ray.io/en/latest/ray-core/scheduling/index.html" +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: flwr.simulation.app.start_simulation:66 of -msgid "**hist** -- Object containing metrics from training." +#: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`num_fit_clients `\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-changelog.md:1 -msgid "Changelog" +#: ../../source/ref-api/flwr.server.strategy.QFedAvg.rst:2 +msgid "QFedAvg" msgstr "" -#: ../../source/ref-changelog.md:3 -msgid "Unreleased" +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`aggregate_evaluate " +"`\\ \\(server\\_round\\," +" results\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:5 ../../source/ref-changelog.md:17 -#: ../../source/ref-changelog.md:110 ../../source/ref-changelog.md:210 -#: ../../source/ref-changelog.md:294 ../../source/ref-changelog.md:358 -#: ../../source/ref-changelog.md:416 ../../source/ref-changelog.md:485 -#: ../../source/ref-changelog.md:614 ../../source/ref-changelog.md:656 -#: ../../source/ref-changelog.md:723 ../../source/ref-changelog.md:789 -#: ../../source/ref-changelog.md:834 ../../source/ref-changelog.md:873 -#: ../../source/ref-changelog.md:906 ../../source/ref-changelog.md:956 -msgid "What's new?" 
+#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: ../../source/ref-changelog.md:7 ../../source/ref-changelog.md:80 -#: ../../source/ref-changelog.md:192 ../../source/ref-changelog.md:282 -#: ../../source/ref-changelog.md:346 ../../source/ref-changelog.md:404 -#: ../../source/ref-changelog.md:473 ../../source/ref-changelog.md:535 -#: ../../source/ref-changelog.md:554 ../../source/ref-changelog.md:710 -#: ../../source/ref-changelog.md:781 ../../source/ref-changelog.md:818 -#: ../../source/ref-changelog.md:861 -msgid "Incompatible changes" +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`configure_evaluate " +"`\\ \\(server\\_round\\," +" parameters\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:9 -msgid "v1.7.0 (2024-02-05)" +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:11 ../../source/ref-changelog.md:104 -#: ../../source/ref-changelog.md:204 ../../source/ref-changelog.md:288 -#: ../../source/ref-changelog.md:352 ../../source/ref-changelog.md:410 -#: ../../source/ref-changelog.md:479 ../../source/ref-changelog.md:548 -msgid "Thanks to our contributors" +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of +msgid "" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: ../../source/ref-changelog.md:13 ../../source/ref-changelog.md:106 -#: ../../source/ref-changelog.md:206 ../../source/ref-changelog.md:290 -#: ../../source/ref-changelog.md:354 ../../source/ref-changelog.md:412 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of msgid "" -"We would like to give our special thanks to all the contributors who made" -" the new version of Flower possible (in `git shortlog` order):" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: ../../source/ref-changelog.md:15 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of msgid "" -"`Aasheesh Singh`, `Adam Narozniak`, `Aml Hassan Esmil`, `Charles " -"Beauville`, `Daniel J. Beutel`, `Daniel Nata Nugraha`, `Edoardo " -"Gabrielli`, `Gustavo Bertoli`, `HelinLin`, `Heng Pan`, `Javier`, `M S " -"Chaitanya Kumar`, `Mohammad Naseri`, `Nikos Vlachakis`, `Pritam Neog`, " -"`Robert Kuska`, `Robert Steiner`, `Taner Topal`, `Yahia Salaheldin " -"Shaaban`, `Yan Gao`, `Yasar Abbas` " +":py:obj:`num_evaluation_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-changelog.md:19 +#: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1::1 of msgid "" -"**Introduce stateful clients (experimental)** " -"([#2770](https://github.com/adap/flower/pull/2770), " -"[#2686](https://github.com/adap/flower/pull/2686), " -"[#2696](https://github.com/adap/flower/pull/2696), " -"[#2643](https://github.com/adap/flower/pull/2643), " -"[#2769](https://github.com/adap/flower/pull/2769))" +":py:obj:`num_fit_clients " +"`\\ " +"\\(num\\_available\\_clients\\)" msgstr "" -#: ../../source/ref-changelog.md:21 -msgid "" -"Subclasses of `Client` and `NumPyClient` can now store local state that " -"remains on the client. Let's start with the highlight first: this new " -"feature is compatible with both simulated clients (via " -"`start_simulation`) and networked clients (via `start_client`). 
It's also" -" the first preview of new abstractions like `Context` and `RecordSet`. " -"Clients can access state of type `RecordSet` via `state: RecordSet = " -"self.context.state`. Changes to this `RecordSet` are preserved across " -"different rounds of execution to enable stateful computations in a " -"unified way across simulation and deployment." +#: ../../source/ref-api/flwr.server.strategy.Strategy.rst:2 +msgid "Strategy" msgstr "" -#: ../../source/ref-changelog.md:23 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of msgid "" -"**Improve performance** " -"([#2293](https://github.com/adap/flower/pull/2293))" +":py:obj:`aggregate_evaluate " +"`\\ " +"\\(server\\_round\\, results\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:25 -msgid "" -"Flower is faster than ever. All `FedAvg`-derived strategies now use in-" -"place aggregation to reduce memory consumption. The Flower client " -"serialization/deserialization has been rewritten from the ground up, " -"which results in significant speedups, especially when the client-side " -"training time is short." +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of +msgid "Aggregate evaluation results." msgstr "" -#: ../../source/ref-changelog.md:27 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of msgid "" -"**Support Federated Learning with Apple MLX and Flower** " -"([#2693](https://github.com/adap/flower/pull/2693))" +":py:obj:`aggregate_fit `\\ " +"\\(server\\_round\\, results\\, failures\\)" msgstr "" -#: ../../source/ref-changelog.md:29 -msgid "" -"Flower has official support for federated learning using [Apple " -"MLX](https://ml-explore.github.io/mlx) via the new `quickstart-mlx` code " -"example." +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:1 of +msgid "Aggregate training results." msgstr "" -#: ../../source/ref-changelog.md:31 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of msgid "" -"**Introduce new XGBoost cyclic strategy** " -"([#2666](https://github.com/adap/flower/pull/2666), " -"[#2668](https://github.com/adap/flower/pull/2668))" +":py:obj:`configure_evaluate " +"`\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:33 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of msgid "" -"A new strategy called `FedXgbCyclic` supports a client-by-client style of" -" training (often called cyclic). The `xgboost-comprehensive` code example" -" shows how to use it in a full project. In addition to that, `xgboost-" -"comprehensive` now also supports simulation mode. With this, Flower " -"offers best-in-class XGBoost support." +":py:obj:`configure_fit `\\ " +"\\(server\\_round\\, parameters\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:35 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of msgid "" -"**Support Python 3.11** " -"([#2394](https://github.com/adap/flower/pull/2394))" +":py:obj:`evaluate `\\ " +"\\(server\\_round\\, parameters\\)" msgstr "" -#: ../../source/ref-changelog.md:37 -msgid "" -"Framework tests now run on Python 3.8, 3.9, 3.10, and 3.11. This will " -"ensure better support for users using more recent Python versions." +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.strategy.Strategy.evaluate:1 of +msgid "Evaluate the current model parameters." 
msgstr "" -#: ../../source/ref-changelog.md:39 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: of msgid "" -"**Update gRPC and ProtoBuf dependencies** " -"([#2814](https://github.com/adap/flower/pull/2814))" +":py:obj:`initialize_parameters " +"`\\ " +"\\(client\\_manager\\)" msgstr "" -#: ../../source/ref-changelog.md:41 -msgid "" -"The `grpcio` and `protobuf` dependencies were updated to their latest " -"versions for improved security and performance." +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1::1 +#: flwr.server.strategy.strategy.Strategy.initialize_parameters:1 of +msgid "Initialize the (global) model parameters." msgstr "" -#: ../../source/ref-changelog.md:43 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:5 of msgid "" -"**Introduce Docker image for Flower server** " -"([#2700](https://github.com/adap/flower/pull/2700), " -"[#2688](https://github.com/adap/flower/pull/2688), " -"[#2705](https://github.com/adap/flower/pull/2705), " -"[#2695](https://github.com/adap/flower/pull/2695), " -"[#2747](https://github.com/adap/flower/pull/2747), " -"[#2746](https://github.com/adap/flower/pull/2746), " -"[#2680](https://github.com/adap/flower/pull/2680), " -"[#2682](https://github.com/adap/flower/pull/2682), " -"[#2701](https://github.com/adap/flower/pull/2701))" +"Successful updates from the previously selected and configured clients. " +"Each pair of `(ClientProxy, FitRes` constitutes a successful update from " +"one of the previously selected clients. Not that not all previously " +"selected clients are necessarily included in this list: a client might " +"drop out and not submit a result. For each client that did not submit an " +"update, there should be an `Exception` in `failures`." msgstr "" -#: ../../source/ref-changelog.md:45 +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:13 +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:13 of +msgid "Exceptions that occurred while the server was waiting for client updates." +msgstr "" + +#: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:16 of msgid "" -"The Flower server can now be run using an official Docker image. A new " -"how-to guide explains [how to run Flower using " -"Docker](https://flower.ai/docs/framework/how-to-run-flower-using-" -"docker.html). An official Flower client Docker image will follow." +"**aggregation_result** -- The aggregated evaluation result. Aggregation " +"typically uses some variant of a weighted average." msgstr "" -#: ../../source/ref-changelog.md:47 +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:5 of msgid "" -"**Introduce** `flower-via-docker-compose` **example** " -"([#2626](https://github.com/adap/flower/pull/2626))" +"Successful updates from the previously selected and configured clients. " +"Each pair of `(ClientProxy, FitRes)` constitutes a successful update from" +" one of the previously selected clients. Not that not all previously " +"selected clients are necessarily included in this list: a client might " +"drop out and not submit a result. For each client that did not submit an " +"update, there should be an `Exception` in `failures`." 
msgstr "" -#: ../../source/ref-changelog.md:49 +#: flwr.server.strategy.strategy.Strategy.aggregate_fit:17 of msgid "" -"**Introduce** `quickstart-sklearn-tabular` **example** " -"([#2719](https://github.com/adap/flower/pull/2719))" +"**parameters** -- If parameters are returned, then the server will treat " +"these as the new global model parameters (i.e., it will replace the " +"previous parameters with the ones returned from this method). If `None` " +"is returned (e.g., because there were only failures and no viable " +"results) then the server will no update the previous model parameters, " +"the updates received in this round are discarded, and the global model " +"parameters remain the same." msgstr "" -#: ../../source/ref-changelog.md:51 +#: flwr.server.strategy.strategy.Strategy.evaluate:3 of msgid "" -"**Introduce** `custom-metrics` **example** " -"([#1958](https://github.com/adap/flower/pull/1958))" +"This function can be used to perform centralized (i.e., server-side) " +"evaluation of model parameters." msgstr "" -#: ../../source/ref-changelog.md:53 +#: flwr.server.strategy.strategy.Strategy.evaluate:11 of msgid "" -"**Update code examples to use Flower Datasets** " -"([#2450](https://github.com/adap/flower/pull/2450), " -"[#2456](https://github.com/adap/flower/pull/2456), " -"[#2318](https://github.com/adap/flower/pull/2318), " -"[#2712](https://github.com/adap/flower/pull/2712))" +"**evaluation_result** -- The evaluation result, usually a Tuple " +"containing loss and a dictionary containing task-specific metrics (e.g., " +"accuracy)." msgstr "" -#: ../../source/ref-changelog.md:55 +#: flwr.server.strategy.strategy.Strategy.initialize_parameters:6 of msgid "" -"Several code examples were updated to use [Flower " -"Datasets](https://flower.ai/docs/datasets/)." +"**parameters** -- If parameters are returned, then the server will treat " +"these as the initial global model parameters." msgstr "" -#: ../../source/ref-changelog.md:57 -msgid "" -"**General updates to Flower Examples** " -"([#2381](https://github.com/adap/flower/pull/2381), " -"[#2805](https://github.com/adap/flower/pull/2805), " -"[#2782](https://github.com/adap/flower/pull/2782), " -"[#2806](https://github.com/adap/flower/pull/2806), " -"[#2829](https://github.com/adap/flower/pull/2829), " -"[#2825](https://github.com/adap/flower/pull/2825), " -"[#2816](https://github.com/adap/flower/pull/2816), " -"[#2726](https://github.com/adap/flower/pull/2726), " -"[#2659](https://github.com/adap/flower/pull/2659), " -"[#2655](https://github.com/adap/flower/pull/2655))" +#: ../../source/ref-api/flwr.server.workflow.rst:2 +msgid "workflow" msgstr "" -#: ../../source/ref-changelog.md:59 -msgid "Many Flower code examples received substantial updates." +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +msgid "" +":py:obj:`DefaultWorkflow `\\ " +"\\(\\[fit\\_workflow\\, ...\\]\\)" msgstr "" -#: ../../source/ref-changelog.md:61 ../../source/ref-changelog.md:154 -msgid "**Update Flower Baselines**" +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.workflow.default_workflows.DefaultWorkflow:1 of +msgid "Default workflow in Flower." 
msgstr "" -#: ../../source/ref-changelog.md:63 +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 msgid "" -"HFedXGBoost ([#2226](https://github.com/adap/flower/pull/2226), " -"[#2771](https://github.com/adap/flower/pull/2771))" +":py:obj:`SecAggPlusWorkflow `\\ " +"\\(num\\_shares\\, ...\\[\\, ...\\]\\)" msgstr "" -#: ../../source/ref-changelog.md:64 -msgid "FedVSSL ([#2412](https://github.com/adap/flower/pull/2412))" +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:1 +#: of +msgid "The workflow for the SecAgg+ protocol." msgstr "" -#: ../../source/ref-changelog.md:65 -msgid "FedNova ([#2179](https://github.com/adap/flower/pull/2179))" +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +msgid "" +":py:obj:`SecAggWorkflow `\\ " +"\\(reconstruction\\_threshold\\, \\*\\)" msgstr "" -#: ../../source/ref-changelog.md:66 -msgid "HeteroFL ([#2439](https://github.com/adap/flower/pull/2439))" +#: ../../source/ref-api/flwr.server.workflow.rst:24::1 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:1 of +msgid "The workflow for the SecAgg protocol." msgstr "" -#: ../../source/ref-changelog.md:67 -msgid "FedAvgM ([#2246](https://github.com/adap/flower/pull/2246))" +#: ../../source/ref-api/flwr.server.workflow.DefaultWorkflow.rst:2 +msgid "DefaultWorkflow" msgstr "" -#: ../../source/ref-changelog.md:68 -msgid "FedPara ([#2722](https://github.com/adap/flower/pull/2722))" +#: ../../source/ref-api/flwr.server.workflow.SecAggPlusWorkflow.rst:2 +msgid "SecAggPlusWorkflow" msgstr "" -#: ../../source/ref-changelog.md:70 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:3 +#: of msgid "" -"**Improve documentation** " -"([#2674](https://github.com/adap/flower/pull/2674), " -"[#2480](https://github.com/adap/flower/pull/2480), " -"[#2826](https://github.com/adap/flower/pull/2826), " -"[#2727](https://github.com/adap/flower/pull/2727), " -"[#2761](https://github.com/adap/flower/pull/2761), " -"[#2900](https://github.com/adap/flower/pull/2900))" +"The SecAgg+ protocol ensures the secure summation of integer vectors " +"owned by multiple parties, without accessing any individual integer " +"vector. This workflow allows the server to compute the weighted average " +"of model parameters across all clients, ensuring individual contributions" +" remain private. This is achieved by clients sending both, a weighting " +"factor and a weighted version of the locally updated parameters, both of " +"which are masked for privacy. Specifically, each client uploads \"[w, w *" +" params]\" with masks, where weighting factor 'w' is the number of " +"examples ('num_examples') and 'params' represents the model parameters " +"('parameters') from the client's `FitRes`. The server then aggregates " +"these contributions to compute the weighted average of model parameters." 
msgstr "" -#: ../../source/ref-changelog.md:72 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:14 +#: of msgid "" -"**Improved testing and development infrastructure** " -"([#2797](https://github.com/adap/flower/pull/2797), " -"[#2676](https://github.com/adap/flower/pull/2676), " -"[#2644](https://github.com/adap/flower/pull/2644), " -"[#2656](https://github.com/adap/flower/pull/2656), " -"[#2848](https://github.com/adap/flower/pull/2848), " -"[#2675](https://github.com/adap/flower/pull/2675), " -"[#2735](https://github.com/adap/flower/pull/2735), " -"[#2767](https://github.com/adap/flower/pull/2767), " -"[#2732](https://github.com/adap/flower/pull/2732), " -"[#2744](https://github.com/adap/flower/pull/2744), " -"[#2681](https://github.com/adap/flower/pull/2681), " -"[#2699](https://github.com/adap/flower/pull/2699), " -"[#2745](https://github.com/adap/flower/pull/2745), " -"[#2734](https://github.com/adap/flower/pull/2734), " -"[#2731](https://github.com/adap/flower/pull/2731), " -"[#2652](https://github.com/adap/flower/pull/2652), " -"[#2720](https://github.com/adap/flower/pull/2720), " -"[#2721](https://github.com/adap/flower/pull/2721), " -"[#2717](https://github.com/adap/flower/pull/2717), " -"[#2864](https://github.com/adap/flower/pull/2864), " -"[#2694](https://github.com/adap/flower/pull/2694), " -"[#2709](https://github.com/adap/flower/pull/2709), " -"[#2658](https://github.com/adap/flower/pull/2658), " -"[#2796](https://github.com/adap/flower/pull/2796), " -"[#2692](https://github.com/adap/flower/pull/2692), " -"[#2657](https://github.com/adap/flower/pull/2657), " -"[#2813](https://github.com/adap/flower/pull/2813), " -"[#2661](https://github.com/adap/flower/pull/2661), " -"[#2398](https://github.com/adap/flower/pull/2398))" +"The protocol involves four main stages: - 'setup': Send SecAgg+ " +"configuration to clients and collect their public keys. - 'share keys': " +"Broadcast public keys among clients and collect encrypted secret" msgstr "" -#: ../../source/ref-changelog.md:74 -msgid "" -"The Flower testing and development infrastructure has received " -"substantial updates. This makes Flower 1.7 the most tested release ever." +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:17 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:17 +#: of +msgid "key shares." 
msgstr "" -#: ../../source/ref-changelog.md:76 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:18 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:18 +#: of msgid "" -"**Update dependencies** " -"([#2753](https://github.com/adap/flower/pull/2753), " -"[#2651](https://github.com/adap/flower/pull/2651), " -"[#2739](https://github.com/adap/flower/pull/2739), " -"[#2837](https://github.com/adap/flower/pull/2837), " -"[#2788](https://github.com/adap/flower/pull/2788), " -"[#2811](https://github.com/adap/flower/pull/2811), " -"[#2774](https://github.com/adap/flower/pull/2774), " -"[#2790](https://github.com/adap/flower/pull/2790), " -"[#2751](https://github.com/adap/flower/pull/2751), " -"[#2850](https://github.com/adap/flower/pull/2850), " -"[#2812](https://github.com/adap/flower/pull/2812), " -"[#2872](https://github.com/adap/flower/pull/2872), " -"[#2736](https://github.com/adap/flower/pull/2736), " -"[#2756](https://github.com/adap/flower/pull/2756), " -"[#2857](https://github.com/adap/flower/pull/2857), " -"[#2757](https://github.com/adap/flower/pull/2757), " -"[#2810](https://github.com/adap/flower/pull/2810), " -"[#2740](https://github.com/adap/flower/pull/2740), " -"[#2789](https://github.com/adap/flower/pull/2789))" +"'collect masked vectors': Forward encrypted secret key shares to target " +"clients and collect masked model parameters." msgstr "" -#: ../../source/ref-changelog.md:78 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:20 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:20 +#: of msgid "" -"**General improvements** " -"([#2803](https://github.com/adap/flower/pull/2803), " -"[#2847](https://github.com/adap/flower/pull/2847), " -"[#2877](https://github.com/adap/flower/pull/2877), " -"[#2690](https://github.com/adap/flower/pull/2690), " -"[#2889](https://github.com/adap/flower/pull/2889), " -"[#2874](https://github.com/adap/flower/pull/2874), " -"[#2819](https://github.com/adap/flower/pull/2819), " -"[#2689](https://github.com/adap/flower/pull/2689), " -"[#2457](https://github.com/adap/flower/pull/2457), " -"[#2870](https://github.com/adap/flower/pull/2870), " -"[#2669](https://github.com/adap/flower/pull/2669), " -"[#2876](https://github.com/adap/flower/pull/2876), " -"[#2885](https://github.com/adap/flower/pull/2885), " -"[#2858](https://github.com/adap/flower/pull/2858), " -"[#2867](https://github.com/adap/flower/pull/2867), " -"[#2351](https://github.com/adap/flower/pull/2351), " -"[#2886](https://github.com/adap/flower/pull/2886), " -"[#2860](https://github.com/adap/flower/pull/2860), " -"[#2828](https://github.com/adap/flower/pull/2828), " -"[#2869](https://github.com/adap/flower/pull/2869), " -"[#2875](https://github.com/adap/flower/pull/2875), " -"[#2733](https://github.com/adap/flower/pull/2733), " -"[#2488](https://github.com/adap/flower/pull/2488), " -"[#2646](https://github.com/adap/flower/pull/2646), " -"[#2879](https://github.com/adap/flower/pull/2879), " -"[#2821](https://github.com/adap/flower/pull/2821), " -"[#2855](https://github.com/adap/flower/pull/2855), " -"[#2800](https://github.com/adap/flower/pull/2800), " -"[#2807](https://github.com/adap/flower/pull/2807), " -"[#2801](https://github.com/adap/flower/pull/2801), " -"[#2804](https://github.com/adap/flower/pull/2804), " -"[#2851](https://github.com/adap/flower/pull/2851), " -"[#2787](https://github.com/adap/flower/pull/2787), " -"[#2852](https://github.com/adap/flower/pull/2852), " 
-"[#2672](https://github.com/adap/flower/pull/2672), " -"[#2759](https://github.com/adap/flower/pull/2759))" +"'unmask': Collect secret key shares to decrypt and aggregate the model " +"parameters." msgstr "" -#: ../../source/ref-changelog.md:82 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:22 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:22 +#: of msgid "" -"**Deprecate** `start_numpy_client` " -"([#2563](https://github.com/adap/flower/pull/2563), " -"[#2718](https://github.com/adap/flower/pull/2718))" +"Only the aggregated model parameters are exposed and passed to " +"`Strategy.aggregate_fit`, ensuring individual data privacy." msgstr "" -#: ../../source/ref-changelog.md:84 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:25 +#: of msgid "" -"Until now, clients of type `NumPyClient` needed to be started via " -"`start_numpy_client`. In our efforts to consolidate framework APIs, we " -"have introduced changes, and now all client types should start via " -"`start_client`. To continue using `NumPyClient` clients, you simply need " -"to first call the `.to_client()` method and then pass returned `Client` " -"object to `start_client`. The examples and the documentation have been " -"updated accordingly." +"The number of shares into which each client's private key is split under " +"the SecAgg+ protocol. If specified as a float, it represents the " +"proportion of all selected clients, and the number of shares will be set " +"dynamically in the run time. A private key can be reconstructed from " +"these shares, allowing for the secure aggregation of model updates. Each " +"client sends one share to each of its neighbors while retaining one." msgstr "" -#: ../../source/ref-changelog.md:86 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:25 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:32 +#: of msgid "" -"**Deprecate legacy DP wrappers** " -"([#2749](https://github.com/adap/flower/pull/2749))" +"The minimum number of shares required to reconstruct a client's private " +"key, or, if specified as a float, it represents the proportion of the " +"total number of shares needed for reconstruction. This threshold ensures " +"privacy by allowing for the recovery of contributions from dropped " +"clients during aggregation, without compromising individual client data." msgstr "" -#: ../../source/ref-changelog.md:88 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:31 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:38 +#: of msgid "" -"Legacy DP wrapper classes are deprecated, but still functional. This is " -"in preparation for an all-new pluggable version of differential privacy " -"support in Flower." +"The maximum value of the weight that can be assigned to any single " +"client's update during the weighted average calculation on the server " +"side, e.g., in the FedAvg algorithm." msgstr "" -#: ../../source/ref-changelog.md:90 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:35 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:42 +#: of msgid "" -"**Make optional arg** `--callable` **in** `flower-client` **a required " -"positional arg** ([#2673](https://github.com/adap/flower/pull/2673))" +"The range within which model parameters are clipped before quantization. 
" +"This parameter ensures each model parameter is bounded within " +"[-clipping_range, clipping_range], facilitating quantization." msgstr "" -#: ../../source/ref-changelog.md:92 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:39 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:46 +#: of msgid "" -"**Rename** `certificates` **to** `root_certificates` **in** `Driver` " -"([#2890](https://github.com/adap/flower/pull/2890))" +"The size of the range into which floating-point model parameters are " +"quantized, mapping each parameter to an integer in [0, " +"quantization_range-1]. This facilitates cryptographic operations on the " +"model updates." msgstr "" -#: ../../source/ref-changelog.md:94 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:43 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:50 +#: of msgid "" -"**Drop experimental** `Task` **fields** " -"([#2866](https://github.com/adap/flower/pull/2866), " -"[#2865](https://github.com/adap/flower/pull/2865))" +"The range of values from which random mask entries are uniformly sampled " +"([0, modulus_range-1]). `modulus_range` must be less than 4294967296. " +"Please use 2**n values for `modulus_range` to prevent overflow issues." msgstr "" -#: ../../source/ref-changelog.md:96 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:47 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:54 +#: of msgid "" -"Experimental fields `sa`, `legacy_server_message` and " -"`legacy_client_message` were removed from `Task` message. The removed " -"fields are superseded by the new `RecordSet` abstraction." +"The timeout duration in seconds. If specified, the workflow will wait for" +" replies for this duration each time. If `None`, there is no time limit " +"and the workflow will wait until replies for all messages are received." msgstr "" -#: ../../source/ref-changelog.md:98 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:61 +#: of msgid "" -"**Retire MXNet examples** " -"([#2724](https://github.com/adap/flower/pull/2724))" +"Generally, higher `num_shares` means more robust to dropouts while " +"increasing the computational costs; higher `reconstruction_threshold` " +"means better privacy guarantees but less tolerance to dropouts." msgstr "" -#: ../../source/ref-changelog.md:100 -msgid "" -"The development of the MXNet fremework has ended and the project is now " -"[archived on GitHub](https://github.com/apache/mxnet). Existing MXNet " -"examples won't receive updates." +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:58 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:64 +#: of +msgid "Too large `max_weight` may compromise the precision of the quantization." msgstr "" -#: ../../source/ref-changelog.md:102 -msgid "v1.6.0 (2023-11-28)" +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:59 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:65 +#: of +msgid "`modulus_range` must be 2**n and larger than `quantization_range`." msgstr "" -#: ../../source/ref-changelog.md:108 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:66 +#: of msgid "" -"`Aashish Kolluri`, `Adam Narozniak`, `Alessio Mora`, `Barathwaja S`, " -"`Charles Beauville`, `Daniel J. 
Beutel`, `Daniel Nata Nugraha`, `Gabriel " -"Mota`, `Heng Pan`, `Ivan Agarský`, `JS.KIM`, `Javier`, `Marius Schlegel`," -" `Navin Chandra`, `Nic Lane`, `Peterpan828`, `Qinbin Li`, `Shaz-hash`, " -"`Steve Laskaridis`, `Taner Topal`, `William Lindskog`, `Yan Gao`, " -"`cnxdeveloper`, `k3nfalt` " +"When `num_shares` is a float, it is interpreted as the proportion of all " +"selected clients, and hence the number of shares will be determined in " +"the runtime. This allows for dynamic adjustment based on the total number" +" of participating clients." msgstr "" -#: ../../source/ref-changelog.md:112 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:69 +#: of msgid "" -"**Add experimental support for Python 3.12** " -"([#2565](https://github.com/adap/flower/pull/2565))" +"Similarly, when `reconstruction_threshold` is a float, it is interpreted " +"as the proportion of the number of shares needed for the reconstruction " +"of a private key. This feature enables flexibility in setting the " +"security threshold relative to the number of distributed shares." msgstr "" -#: ../../source/ref-changelog.md:114 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow:73 +#: of msgid "" -"**Add new XGBoost examples** " -"([#2612](https://github.com/adap/flower/pull/2612), " -"[#2554](https://github.com/adap/flower/pull/2554), " -"[#2617](https://github.com/adap/flower/pull/2617), " -"[#2618](https://github.com/adap/flower/pull/2618), " -"[#2619](https://github.com/adap/flower/pull/2619), " -"[#2567](https://github.com/adap/flower/pull/2567))" +"`num_shares`, `reconstruction_threshold`, and the quantization parameters" +" (`clipping_range`, `quantization_range`, `modulus_range`) play critical " +"roles in balancing privacy, robustness, and efficiency within the SecAgg+" +" protocol." msgstr "" -#: ../../source/ref-changelog.md:116 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"We have added a new `xgboost-quickstart` example alongside a new " -"`xgboost-comprehensive` example that goes more in-depth." +":py:obj:`collect_masked_vectors_stage " +"`\\" +" \\(driver\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:118 -msgid "" -"**Add Vertical FL example** " -"([#2598](https://github.com/adap/flower/pull/2598))" +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of +msgid "Execute the 'collect masked vectors' stage." msgstr "" -#: ../../source/ref-changelog.md:120 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"We had many questions about Vertical Federated Learning using Flower, so " -"we decided to add an simple example for it on the [Titanic " -"dataset](https://www.kaggle.com/competitions/titanic/data) alongside a " -"tutorial (in the README)." 
+":py:obj:`setup_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" msgstr "" -#: ../../source/ref-changelog.md:122 -msgid "" -"**Support custom** `ClientManager` **in** `start_driver()` " -"([#2292](https://github.com/adap/flower/pull/2292))" +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.setup_stage:1 +#: of +msgid "Execute the 'setup' stage." msgstr "" -#: ../../source/ref-changelog.md:124 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"**Update REST API to support create and delete nodes** " -"([#2283](https://github.com/adap/flower/pull/2283))" +":py:obj:`share_keys_stage " +"`\\ " +"\\(driver\\, context\\, state\\)" msgstr "" -#: ../../source/ref-changelog.md:126 -msgid "" -"**Update the Android SDK** " -"([#2187](https://github.com/adap/flower/pull/2187))" +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.share_keys_stage:1 +#: of +msgid "Execute the 'share keys' stage." msgstr "" -#: ../../source/ref-changelog.md:128 -msgid "Add gRPC request-response capability to the Android SDK." +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of +msgid "" +":py:obj:`unmask_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" msgstr "" -#: ../../source/ref-changelog.md:130 -msgid "" -"**Update the C++ SDK** " -"([#2537](https://github.com/adap/flower/pull/2537), " -"[#2528](https://github.com/adap/flower/pull/2528), " -"[#2523](https://github.com/adap/flower/pull/2523), " -"[#2522](https://github.com/adap/flower/pull/2522))" +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.unmask_stage:1 +#: of +msgid "Execute the 'unmask' stage." msgstr "" -#: ../../source/ref-changelog.md:132 -msgid "Add gRPC request-response capability to the C++ SDK." +#: ../../source/ref-api/flwr.server.workflow.SecAggWorkflow.rst:2 +msgid "SecAggWorkflow" msgstr "" -#: ../../source/ref-changelog.md:134 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:1 of msgid "" -"**Make HTTPS the new default** " -"([#2591](https://github.com/adap/flower/pull/2591), " -"[#2636](https://github.com/adap/flower/pull/2636))" +"Bases: " +":py:class:`~flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow`" msgstr "" -#: ../../source/ref-changelog.md:136 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:3 of msgid "" -"Flower is moving to HTTPS by default. The new `flower-server` requires " -"passing `--certificates`, but users can enable `--insecure` to use HTTP " -"for prototyping. The same applies to `flower-client`, which can either " -"use user-provided credentials or gRPC-bundled certificates to connect to " -"an HTTPS-enabled server or requires opt-out via passing `--insecure` to " -"enable insecure HTTP connections." +"The SecAgg protocol ensures the secure summation of integer vectors owned" +" by multiple parties, without accessing any individual integer vector. 
" +"This workflow allows the server to compute the weighted average of model " +"parameters across all clients, ensuring individual contributions remain " +"private. This is achieved by clients sending both, a weighting factor and" +" a weighted version of the locally updated parameters, both of which are " +"masked for privacy. Specifically, each client uploads \"[w, w * params]\"" +" with masks, where weighting factor 'w' is the number of examples " +"('num_examples') and 'params' represents the model parameters " +"('parameters') from the client's `FitRes`. The server then aggregates " +"these contributions to compute the weighted average of model parameters." msgstr "" -#: ../../source/ref-changelog.md:138 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:14 of msgid "" -"For backward compatibility, `start_client()` and `start_numpy_client()` " -"will still start in insecure mode by default. In a future release, " -"insecure connections will require user opt-in by passing `insecure=True`." +"The protocol involves four main stages: - 'setup': Send SecAgg " +"configuration to clients and collect their public keys. - 'share keys': " +"Broadcast public keys among clients and collect encrypted secret" msgstr "" -#: ../../source/ref-changelog.md:140 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:54 of msgid "" -"**Unify client API** ([#2303](https://github.com/adap/flower/pull/2303), " -"[#2390](https://github.com/adap/flower/pull/2390), " -"[#2493](https://github.com/adap/flower/pull/2493))" +"Each client's private key is split into N shares under the SecAgg " +"protocol, where N is the number of selected clients." msgstr "" -#: ../../source/ref-changelog.md:142 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:56 of msgid "" -"Using the `client_fn`, Flower clients can interchangeably run as " -"standalone processes (i.e. via `start_client`) or in simulation (i.e. via" -" `start_simulation`) without requiring changes to how the client class is" -" defined and instantiated. The `to_client()` function is introduced to " -"convert a `NumPyClient` to a `Client`." +"Generally, higher `reconstruction_threshold` means better privacy " +"guarantees but less tolerance to dropouts." msgstr "" -#: ../../source/ref-changelog.md:144 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:60 of msgid "" -"**Add new** `Bulyan` **strategy** " -"([#1817](https://github.com/adap/flower/pull/1817), " -"[#1891](https://github.com/adap/flower/pull/1891))" +"When `reconstruction_threshold` is a float, it is interpreted as the " +"proportion of the number of all selected clients needed for the " +"reconstruction of a private key. This feature enables flexibility in " +"setting the security threshold relative to the number of selected " +"clients." msgstr "" -#: ../../source/ref-changelog.md:146 +#: flwr.server.workflow.secure_aggregation.secagg_workflow.SecAggWorkflow:64 of msgid "" -"The new `Bulyan` strategy implements Bulyan by [El Mhamdi et al., " -"2018](https://arxiv.org/abs/1802.07927)" +"`reconstruction_threshold`, and the quantization parameters " +"(`clipping_range`, `quantization_range`, `modulus_range`) play critical " +"roles in balancing privacy, robustness, and efficiency within the SecAgg " +"protocol." 
msgstr "" -#: ../../source/ref-changelog.md:148 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"**Add new** `XGB Bagging` **strategy** " -"([#2611](https://github.com/adap/flower/pull/2611))" +":py:obj:`collect_masked_vectors_stage " +"`\\ " +"\\(driver\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:150 ../../source/ref-changelog.md:152 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"**Introduce `WorkloadState`** " -"([#2564](https://github.com/adap/flower/pull/2564), " -"[#2632](https://github.com/adap/flower/pull/2632))" +":py:obj:`setup_stage `\\" +" \\(driver\\, context\\, state\\)" msgstr "" -#: ../../source/ref-changelog.md:156 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"FedProx ([#2210](https://github.com/adap/flower/pull/2210), " -"[#2286](https://github.com/adap/flower/pull/2286), " -"[#2509](https://github.com/adap/flower/pull/2509))" +":py:obj:`share_keys_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" msgstr "" -#: ../../source/ref-changelog.md:158 +#: flwr.server.workflow.secure_aggregation.secaggplus_workflow.SecAggPlusWorkflow.collect_masked_vectors_stage:1::1 +#: of msgid "" -"Baselines Docs ([#2290](https://github.com/adap/flower/pull/2290), " -"[#2400](https://github.com/adap/flower/pull/2400))" +":py:obj:`unmask_stage " +"`\\ \\(driver\\, " +"context\\, state\\)" msgstr "" -#: ../../source/ref-changelog.md:160 -msgid "" -"FedMLB ([#2340](https://github.com/adap/flower/pull/2340), " -"[#2507](https://github.com/adap/flower/pull/2507))" +#: ../../source/ref-api/flwr.simulation.rst:2 +msgid "simulation" msgstr "" -#: ../../source/ref-changelog.md:162 +#: ../../source/ref-api/flwr.simulation.rst:19::1 msgid "" -"TAMUNA ([#2254](https://github.com/adap/flower/pull/2254), " -"[#2508](https://github.com/adap/flower/pull/2508))" +":py:obj:`start_simulation `\\ \\(\\*\\," +" client\\_fn\\[\\, ...\\]\\)" msgstr "" -#: ../../source/ref-changelog.md:164 -msgid "FedMeta [#2438](https://github.com/adap/flower/pull/2438)" +#: ../../source/ref-api/flwr.simulation.rst:19::1 +#: flwr.simulation.app.start_simulation:1 of +msgid "Start a Ray-based Flower simulation server." msgstr "" -#: ../../source/ref-changelog.md:166 -msgid "FjORD [#2431](https://github.com/adap/flower/pull/2431)" +#: ../../source/ref-api/flwr.simulation.rst:19::1 +msgid "" +":py:obj:`run_simulation_from_cli " +"`\\ \\(\\)" msgstr "" -#: ../../source/ref-changelog.md:168 -msgid "MOON [#2421](https://github.com/adap/flower/pull/2421)" +#: ../../source/ref-api/flwr.simulation.rst:19::1 +#: flwr.simulation.run_simulation.run_simulation_from_cli:1 of +msgid "Run Simulation Engine from the CLI." msgstr "" -#: ../../source/ref-changelog.md:170 -msgid "DepthFL [#2295](https://github.com/adap/flower/pull/2295)" +#: ../../source/ref-api/flwr.simulation.rst:19::1 +msgid "" +":py:obj:`run_simulation `\\ " +"\\(server\\_app\\, client\\_app\\, ...\\)" msgstr "" -#: ../../source/ref-changelog.md:172 -msgid "FedPer [#2266](https://github.com/adap/flower/pull/2266)" +#: ../../source/ref-api/flwr.simulation.rst:19::1 +#: flwr.simulation.run_simulation.run_simulation:1 of +msgid "Run a Flower App using the Simulation Engine." 
msgstr "" -#: ../../source/ref-changelog.md:174 -msgid "FedWav2vec [#2551](https://github.com/adap/flower/pull/2551)" +#: ../../source/ref-api/flwr.simulation.run_simulation.rst:2 +msgid "run\\_simulation" msgstr "" -#: ../../source/ref-changelog.md:176 -msgid "niid-Bench [#2428](https://github.com/adap/flower/pull/2428)" +#: flwr.simulation.run_simulation.run_simulation:3 of +msgid "" +"The `ServerApp` to be executed. It will send messages to different " +"`ClientApp` instances running on different (virtual) SuperNodes." msgstr "" -#: ../../source/ref-changelog.md:178 +#: flwr.simulation.run_simulation.run_simulation:6 of msgid "" -"FedBN ([#2608](https://github.com/adap/flower/pull/2608), " -"[#2615](https://github.com/adap/flower/pull/2615))" +"The `ClientApp` to be executed by each of the SuperNodes. It will receive" +" messages sent by the `ServerApp`." msgstr "" -#: ../../source/ref-changelog.md:180 +#: flwr.simulation.run_simulation.run_simulation:9 of msgid "" -"**General updates to Flower Examples** " -"([#2384](https://github.com/adap/flower/pull/2384), " -"[#2425](https://github.com/adap/flower/pull/2425), " -"[#2526](https://github.com/adap/flower/pull/2526), " -"[#2302](https://github.com/adap/flower/pull/2302), " -"[#2545](https://github.com/adap/flower/pull/2545))" +"Number of nodes that run a ClientApp. They can be sampled by a Driver in " +"the ServerApp and receive a Message describing what the ClientApp should " +"perform." msgstr "" -#: ../../source/ref-changelog.md:182 -msgid "" -"**General updates to Flower Baselines** " -"([#2301](https://github.com/adap/flower/pull/2301), " -"[#2305](https://github.com/adap/flower/pull/2305), " -"[#2307](https://github.com/adap/flower/pull/2307), " -"[#2327](https://github.com/adap/flower/pull/2327), " -"[#2435](https://github.com/adap/flower/pull/2435), " -"[#2462](https://github.com/adap/flower/pull/2462), " -"[#2463](https://github.com/adap/flower/pull/2463), " -"[#2461](https://github.com/adap/flower/pull/2461), " -"[#2469](https://github.com/adap/flower/pull/2469), " -"[#2466](https://github.com/adap/flower/pull/2466), " -"[#2471](https://github.com/adap/flower/pull/2471), " -"[#2472](https://github.com/adap/flower/pull/2472), " -"[#2470](https://github.com/adap/flower/pull/2470))" +#: flwr.simulation.run_simulation.run_simulation:13 of +msgid "A simulation backend that runs `ClientApp`s." msgstr "" -#: ../../source/ref-changelog.md:184 +#: flwr.simulation.run_simulation.run_simulation:15 of msgid "" -"**General updates to the simulation engine** " -"([#2331](https://github.com/adap/flower/pull/2331), " -"[#2447](https://github.com/adap/flower/pull/2447), " -"[#2448](https://github.com/adap/flower/pull/2448), " -"[#2294](https://github.com/adap/flower/pull/2294))" +"'A dictionary, e.g {\"\": , \"\": } to " +"configure a backend. Values supported in are those included by " +"`flwr.common.typing.ConfigsRecordValues`." msgstr "" -#: ../../source/ref-changelog.md:186 +#: flwr.simulation.run_simulation.run_simulation:19 of msgid "" -"**General updates to Flower SDKs** " -"([#2288](https://github.com/adap/flower/pull/2288), " -"[#2429](https://github.com/adap/flower/pull/2429), " -"[#2555](https://github.com/adap/flower/pull/2555), " -"[#2543](https://github.com/adap/flower/pull/2543), " -"[#2544](https://github.com/adap/flower/pull/2544), " -"[#2597](https://github.com/adap/flower/pull/2597), " -"[#2623](https://github.com/adap/flower/pull/2623))" +"A boolean to indicate whether to enable GPU growth on the main thread. 
" +"This is desirable if you make use of a TensorFlow model on your " +"`ServerApp` while having your `ClientApp` running on the same GPU. " +"Without enabling this, you might encounter an out-of-memory error because" +" TensorFlow, by default, allocates all GPU memory. Read more about how " +"`tf.config.experimental.set_memory_growth()` works in the TensorFlow " +"documentation: https://www.tensorflow.org/api/stable." msgstr "" -#: ../../source/ref-changelog.md:188 +#: flwr.simulation.run_simulation.run_simulation:26 of msgid "" -"**General improvements** " -"([#2309](https://github.com/adap/flower/pull/2309), " -"[#2310](https://github.com/adap/flower/pull/2310), " -"[#2313](https://github.com/adap/flower/pull/2313), " -"[#2316](https://github.com/adap/flower/pull/2316), " -"[#2317](https://github.com/adap/flower/pull/2317), " -"[#2349](https://github.com/adap/flower/pull/2349), " -"[#2360](https://github.com/adap/flower/pull/2360), " -"[#2402](https://github.com/adap/flower/pull/2402), " -"[#2446](https://github.com/adap/flower/pull/2446), " -"[#2561](https://github.com/adap/flower/pull/2561), " -"[#2273](https://github.com/adap/flower/pull/2273), " -"[#2267](https://github.com/adap/flower/pull/2267), " -"[#2274](https://github.com/adap/flower/pull/2274), " -"[#2275](https://github.com/adap/flower/pull/2275), " -"[#2432](https://github.com/adap/flower/pull/2432), " -"[#2251](https://github.com/adap/flower/pull/2251), " -"[#2321](https://github.com/adap/flower/pull/2321), " -"[#1936](https://github.com/adap/flower/pull/1936), " -"[#2408](https://github.com/adap/flower/pull/2408), " -"[#2413](https://github.com/adap/flower/pull/2413), " -"[#2401](https://github.com/adap/flower/pull/2401), " -"[#2531](https://github.com/adap/flower/pull/2531), " -"[#2534](https://github.com/adap/flower/pull/2534), " -"[#2535](https://github.com/adap/flower/pull/2535), " -"[#2521](https://github.com/adap/flower/pull/2521), " -"[#2553](https://github.com/adap/flower/pull/2553), " -"[#2596](https://github.com/adap/flower/pull/2596))" +"When diabled, only INFO, WARNING and ERROR log messages will be shown. If" +" enabled, DEBUG-level logs will be displayed." msgstr "" -#: ../../source/ref-changelog.md:190 ../../source/ref-changelog.md:280 -#: ../../source/ref-changelog.md:344 ../../source/ref-changelog.md:398 -#: ../../source/ref-changelog.md:465 -msgid "Flower received many improvements under the hood, too many to list here." +#: ../../source/ref-api/flwr.simulation.run_simulation_from_cli.rst:2 +msgid "run\\_simulation\\_from\\_cli" msgstr "" -#: ../../source/ref-changelog.md:194 -msgid "" -"**Remove support for Python 3.7** " -"([#2280](https://github.com/adap/flower/pull/2280), " -"[#2299](https://github.com/adap/flower/pull/2299), " -"[#2304](https://github.com/adap/flower/pull/2304), " -"[#2306](https://github.com/adap/flower/pull/2306), " -"[#2355](https://github.com/adap/flower/pull/2355), " -"[#2356](https://github.com/adap/flower/pull/2356))" +#: ../../source/ref-api/flwr.simulation.start_simulation.rst:2 +msgid "start\\_simulation" msgstr "" -#: ../../source/ref-changelog.md:196 +#: flwr.simulation.app.start_simulation:3 of msgid "" -"Python 3.7 support was deprecated in Flower 1.5, and this release removes" -" support. Flower now requires Python 3.8." +"A function creating client instances. The function must take a single " +"`str` argument called `cid`. It should return a single client instance of" +" type Client. 
Note that the created client instances are ephemeral and " +"will often be destroyed after a single method invocation. Since client " +"instances are not long-lived, they should not attempt to carry state over" +" method invocations. Any state required by the instance (model, dataset, " +"hyperparameters, ...) should be (re-)created in either the call to " +"`client_fn` or the call to any of the client methods (e.g., load " +"evaluation data in the `evaluate` method itself)." msgstr "" -#: ../../source/ref-changelog.md:198 +#: flwr.simulation.app.start_simulation:13 of msgid "" -"**Remove experimental argument** `rest` **from** `start_client` " -"([#2324](https://github.com/adap/flower/pull/2324))" +"The total number of clients in this simulation. This must be set if " +"`clients_ids` is not set and vice-versa." msgstr "" -#: ../../source/ref-changelog.md:200 +#: flwr.simulation.app.start_simulation:16 of msgid "" -"The (still experimental) argument `rest` was removed from `start_client` " -"and `start_numpy_client`. Use `transport=\"rest\"` to opt into the " -"experimental REST API instead." -msgstr "" - -#: ../../source/ref-changelog.md:202 -msgid "v1.5.0 (2023-08-31)" +"List `client_id`s for each client. This is only required if `num_clients`" +" is not set. Setting both `num_clients` and `clients_ids` with " +"`len(clients_ids)` not equal to `num_clients` generates an error." msgstr "" -#: ../../source/ref-changelog.md:208 +#: flwr.simulation.app.start_simulation:20 of msgid "" -"`Adam Narozniak`, `Anass Anhari`, `Charles Beauville`, `Dana-Farber`, " -"`Daniel J. Beutel`, `Daniel Nata Nugraha`, `Edoardo Gabrielli`, `Gustavo " -"Bertoli`, `Heng Pan`, `Javier`, `Mahdi`, `Steven Hé (Sīchàng)`, `Taner " -"Topal`, `achiverram28`, `danielnugraha`, `eunchung`, `ruthgal` " +"CPU and GPU resources for a single client. Supported keys are `num_cpus` " +"and `num_gpus`. To understand the GPU utilization caused by `num_gpus`, " +"as well as using custom resources, please consult the Ray documentation." msgstr "" -#: ../../source/ref-changelog.md:212 +#: flwr.simulation.app.start_simulation:25 of msgid "" -"**Introduce new simulation engine** " -"([#1969](https://github.com/adap/flower/pull/1969), " -"[#2221](https://github.com/adap/flower/pull/2221), " -"[#2248](https://github.com/adap/flower/pull/2248))" +"An implementation of the abstract base class `flwr.server.Server`. If no " +"instance is provided, then `start_server` will create one." msgstr "" -#: ../../source/ref-changelog.md:214 +#: flwr.simulation.app.start_simulation:31 of msgid "" -"The new simulation engine has been rewritten from the ground up, yet it " -"remains fully backwards compatible. It offers much improved stability and" -" memory handling, especially when working with GPUs. Simulations " -"transparently adapt to different settings to scale simulation in CPU-" -"only, CPU+GPU, multi-GPU, or multi-node multi-GPU environments." +"An implementation of the abstract base class `flwr.server.Strategy`. If " +"no strategy is provided, then `start_server` will use " +"`flwr.server.strategy.FedAvg`." 
msgstr "" -#: ../../source/ref-changelog.md:216 +#: flwr.simulation.app.start_simulation:35 of msgid "" -"Comprehensive documentation includes a new [how-to run " -"simulations](https://flower.ai/docs/framework/how-to-run-" -"simulations.html) guide, new [simulation-" -"pytorch](https://flower.ai/docs/examples/simulation-pytorch.html) and " -"[simulation-tensorflow](https://flower.ai/docs/examples/simulation-" -"tensorflow.html) notebooks, and a new [YouTube tutorial " -"series](https://www.youtube.com/watch?v=cRebUIGB5RU&list=PLNG4feLHqCWlnj8a_E1A_n5zr2-8pafTB)." +"An implementation of the abstract base class `flwr.server.ClientManager`." +" If no implementation is provided, then `start_simulation` will use " +"`flwr.server.client_manager.SimpleClientManager`." msgstr "" -#: ../../source/ref-changelog.md:218 +#: flwr.simulation.app.start_simulation:39 of msgid "" -"**Restructure Flower Docs** " -"([#1824](https://github.com/adap/flower/pull/1824), " -"[#1865](https://github.com/adap/flower/pull/1865), " -"[#1884](https://github.com/adap/flower/pull/1884), " -"[#1887](https://github.com/adap/flower/pull/1887), " -"[#1919](https://github.com/adap/flower/pull/1919), " -"[#1922](https://github.com/adap/flower/pull/1922), " -"[#1920](https://github.com/adap/flower/pull/1920), " -"[#1923](https://github.com/adap/flower/pull/1923), " -"[#1924](https://github.com/adap/flower/pull/1924), " -"[#1962](https://github.com/adap/flower/pull/1962), " -"[#2006](https://github.com/adap/flower/pull/2006), " -"[#2133](https://github.com/adap/flower/pull/2133), " -"[#2203](https://github.com/adap/flower/pull/2203), " -"[#2215](https://github.com/adap/flower/pull/2215), " -"[#2122](https://github.com/adap/flower/pull/2122), " -"[#2223](https://github.com/adap/flower/pull/2223), " -"[#2219](https://github.com/adap/flower/pull/2219), " -"[#2232](https://github.com/adap/flower/pull/2232), " -"[#2233](https://github.com/adap/flower/pull/2233), " -"[#2234](https://github.com/adap/flower/pull/2234), " -"[#2235](https://github.com/adap/flower/pull/2235), " -"[#2237](https://github.com/adap/flower/pull/2237), " -"[#2238](https://github.com/adap/flower/pull/2238), " -"[#2242](https://github.com/adap/flower/pull/2242), " -"[#2231](https://github.com/adap/flower/pull/2231), " -"[#2243](https://github.com/adap/flower/pull/2243), " -"[#2227](https://github.com/adap/flower/pull/2227))" +"Optional dictionary containing arguments for the call to `ray.init`. If " +"ray_init_args is None (the default), Ray will be initialized with the " +"following default args: { \"ignore_reinit_error\": True, " +"\"include_dashboard\": False } An empty dictionary can be used " +"(ray_init_args={}) to prevent any arguments from being passed to " +"ray.init." msgstr "" -#: ../../source/ref-changelog.md:220 +#: flwr.simulation.app.start_simulation:39 of msgid "" -"Much effort went into a completely restructured Flower docs experience. " -"The documentation on [flower.ai/docs](flower.ai/docs) is now divided " -"into Flower Framework, Flower Baselines, Flower Android SDK, Flower iOS " -"SDK, and code example projects." +"Optional dictionary containing arguments for the call to `ray.init`. 
If " +"ray_init_args is None (the default), Ray will be initialized with the " +"following default args:" msgstr "" -#: ../../source/ref-changelog.md:222 -msgid "" -"**Introduce Flower Swift SDK** " -"([#1858](https://github.com/adap/flower/pull/1858), " -"[#1897](https://github.com/adap/flower/pull/1897))" +#: flwr.simulation.app.start_simulation:43 of +msgid "{ \"ignore_reinit_error\": True, \"include_dashboard\": False }" msgstr "" -#: ../../source/ref-changelog.md:224 +#: flwr.simulation.app.start_simulation:45 of msgid "" -"This is the first preview release of the Flower Swift SDK. Flower support" -" on iOS is improving, and alongside the Swift SDK and code example, there" -" is now also an iOS quickstart tutorial." +"An empty dictionary can be used (ray_init_args={}) to prevent any " +"arguments from being passed to ray.init." msgstr "" -#: ../../source/ref-changelog.md:226 +#: flwr.simulation.app.start_simulation:48 of msgid "" -"**Introduce Flower Android SDK** " -"([#2131](https://github.com/adap/flower/pull/2131))" +"Set to True to prevent `ray.shutdown()` in case " +"`ray.is_initialized()=True`." msgstr "" -#: ../../source/ref-changelog.md:228 +#: flwr.simulation.app.start_simulation:50 of msgid "" -"This is the first preview release of the Flower Kotlin SDK. Flower " -"support on Android is improving, and alongside the Kotlin SDK and code " -"example, there is now also an Android quickstart tutorial." +"Optionally specify the type of actor to use. The actor object, which " +"persists throughout the simulation, will be the process in charge of " +"executing a ClientApp wrapping input argument `client_fn`." msgstr "" -#: ../../source/ref-changelog.md:230 +#: flwr.simulation.app.start_simulation:54 of msgid "" -"**Introduce new end-to-end testing infrastructure** " -"([#1842](https://github.com/adap/flower/pull/1842), " -"[#2071](https://github.com/adap/flower/pull/2071), " -"[#2072](https://github.com/adap/flower/pull/2072), " -"[#2068](https://github.com/adap/flower/pull/2068), " -"[#2067](https://github.com/adap/flower/pull/2067), " -"[#2069](https://github.com/adap/flower/pull/2069), " -"[#2073](https://github.com/adap/flower/pull/2073), " -"[#2070](https://github.com/adap/flower/pull/2070), " -"[#2074](https://github.com/adap/flower/pull/2074), " -"[#2082](https://github.com/adap/flower/pull/2082), " -"[#2084](https://github.com/adap/flower/pull/2084), " -"[#2093](https://github.com/adap/flower/pull/2093), " -"[#2109](https://github.com/adap/flower/pull/2109), " -"[#2095](https://github.com/adap/flower/pull/2095), " -"[#2140](https://github.com/adap/flower/pull/2140), " -"[#2137](https://github.com/adap/flower/pull/2137), " -"[#2165](https://github.com/adap/flower/pull/2165))" +"If you want to create your own Actor classes, you might need to pass some" +" input argument. You can use this dictionary for such purpose." msgstr "" -#: ../../source/ref-changelog.md:232 +#: flwr.simulation.app.start_simulation:57 of msgid "" -"A new testing infrastructure ensures that new changes stay compatible " -"with existing framework integrations or strategies." +"(default: \"DEFAULT\") Optional string (\"DEFAULT\" or \"SPREAD\") for " +"the VCE to choose in which node the actor is placed. If you are an " +"advanced user needed more control you can use lower-level scheduling " +"strategies to pin actors to specific compute nodes (e.g. via " +"NodeAffinitySchedulingStrategy). Please note this is an advanced feature." 
+" For all details, please refer to the Ray documentation: " +"https://docs.ray.io/en/latest/ray-core/scheduling/index.html" msgstr "" -#: ../../source/ref-changelog.md:234 -msgid "**Deprecate Python 3.7**" +#: flwr.simulation.app.start_simulation:66 of +msgid "**hist** -- Object containing metrics from training." msgstr "" -#: ../../source/ref-changelog.md:236 -msgid "" -"Since Python 3.7 reached its end of life (EOL) on 2023-06-27, support for" -" Python 3.7 is now deprecated and will be removed in an upcoming release." +#: ../../source/ref-changelog.md:1 +msgid "Changelog" msgstr "" -#: ../../source/ref-changelog.md:238 -msgid "" -"**Add new** `FedTrimmedAvg` **strategy** " -"([#1769](https://github.com/adap/flower/pull/1769), " -"[#1853](https://github.com/adap/flower/pull/1853))" +#: ../../source/ref-changelog.md:3 +msgid "Unreleased" msgstr "" -#: ../../source/ref-changelog.md:240 -msgid "" -"The new `FedTrimmedAvg` strategy implements Trimmed Mean by [Dong Yin, " -"2018](https://arxiv.org/abs/1803.01498)." +#: ../../source/ref-changelog.md:5 ../../source/ref-changelog.md:17 +#: ../../source/ref-changelog.md:110 ../../source/ref-changelog.md:210 +#: ../../source/ref-changelog.md:294 ../../source/ref-changelog.md:358 +#: ../../source/ref-changelog.md:416 ../../source/ref-changelog.md:485 +#: ../../source/ref-changelog.md:614 ../../source/ref-changelog.md:656 +#: ../../source/ref-changelog.md:723 ../../source/ref-changelog.md:789 +#: ../../source/ref-changelog.md:834 ../../source/ref-changelog.md:873 +#: ../../source/ref-changelog.md:906 ../../source/ref-changelog.md:956 +msgid "What's new?" msgstr "" -#: ../../source/ref-changelog.md:242 -msgid "" -"**Introduce start_driver** " -"([#1697](https://github.com/adap/flower/pull/1697))" +#: ../../source/ref-changelog.md:7 ../../source/ref-changelog.md:80 +#: ../../source/ref-changelog.md:192 ../../source/ref-changelog.md:282 +#: ../../source/ref-changelog.md:346 ../../source/ref-changelog.md:404 +#: ../../source/ref-changelog.md:473 ../../source/ref-changelog.md:535 +#: ../../source/ref-changelog.md:554 ../../source/ref-changelog.md:710 +#: ../../source/ref-changelog.md:781 ../../source/ref-changelog.md:818 +#: ../../source/ref-changelog.md:861 +msgid "Incompatible changes" msgstr "" -#: ../../source/ref-changelog.md:244 -msgid "" -"In addition to `start_server` and using the raw Driver API, there is a " -"new `start_driver` function that allows for running `start_server` " -"scripts as a Flower driver with only a single-line code change. Check out" -" the `mt-pytorch` code example to see a working example using " -"`start_driver`." 
+#: ../../source/ref-changelog.md:9 +msgid "v1.7.0 (2024-02-05)" msgstr "" -#: ../../source/ref-changelog.md:246 -msgid "" -"**Add parameter aggregation to** `mt-pytorch` **code example** " -"([#1785](https://github.com/adap/flower/pull/1785))" +#: ../../source/ref-changelog.md:11 ../../source/ref-changelog.md:104 +#: ../../source/ref-changelog.md:204 ../../source/ref-changelog.md:288 +#: ../../source/ref-changelog.md:352 ../../source/ref-changelog.md:410 +#: ../../source/ref-changelog.md:479 ../../source/ref-changelog.md:548 +msgid "Thanks to our contributors" msgstr "" -#: ../../source/ref-changelog.md:248 +#: ../../source/ref-changelog.md:13 ../../source/ref-changelog.md:106 +#: ../../source/ref-changelog.md:206 ../../source/ref-changelog.md:290 +#: ../../source/ref-changelog.md:354 ../../source/ref-changelog.md:412 msgid "" -"The `mt-pytorch` example shows how to aggregate parameters when writing a" -" driver script. The included `driver.py` and `server.py` have been " -"aligned to demonstrate both the low-level way and the high-level way of " -"building server-side logic." +"We would like to give our special thanks to all the contributors who made" +" the new version of Flower possible (in `git shortlog` order):" msgstr "" -#: ../../source/ref-changelog.md:250 +#: ../../source/ref-changelog.md:15 msgid "" -"**Migrate experimental REST API to Starlette** " -"([2171](https://github.com/adap/flower/pull/2171))" +"`Aasheesh Singh`, `Adam Narozniak`, `Aml Hassan Esmil`, `Charles " +"Beauville`, `Daniel J. Beutel`, `Daniel Nata Nugraha`, `Edoardo " +"Gabrielli`, `Gustavo Bertoli`, `HelinLin`, `Heng Pan`, `Javier`, `M S " +"Chaitanya Kumar`, `Mohammad Naseri`, `Nikos Vlachakis`, `Pritam Neog`, " +"`Robert Kuska`, `Robert Steiner`, `Taner Topal`, `Yahia Salaheldin " +"Shaaban`, `Yan Gao`, `Yasar Abbas` " msgstr "" -#: ../../source/ref-changelog.md:252 +#: ../../source/ref-changelog.md:19 msgid "" -"The (experimental) REST API used to be implemented in " -"[FastAPI](https://fastapi.tiangolo.com/), but it has now been migrated to" -" use [Starlette](https://www.starlette.io/) directly." +"**Introduce stateful clients (experimental)** " +"([#2770](https://github.com/adap/flower/pull/2770), " +"[#2686](https://github.com/adap/flower/pull/2686), " +"[#2696](https://github.com/adap/flower/pull/2696), " +"[#2643](https://github.com/adap/flower/pull/2643), " +"[#2769](https://github.com/adap/flower/pull/2769))" msgstr "" -#: ../../source/ref-changelog.md:254 +#: ../../source/ref-changelog.md:21 msgid "" -"Please note: The REST request-response API is still experimental and will" -" likely change significantly over time." +"Subclasses of `Client` and `NumPyClient` can now store local state that " +"remains on the client. Let's start with the highlight first: this new " +"feature is compatible with both simulated clients (via " +"`start_simulation`) and networked clients (via `start_client`). It's also" +" the first preview of new abstractions like `Context` and `RecordSet`. " +"Clients can access state of type `RecordSet` via `state: RecordSet = " +"self.context.state`. Changes to this `RecordSet` are preserved across " +"different rounds of execution to enable stateful computations in a " +"unified way across simulation and deployment." 
msgstr "" -#: ../../source/ref-changelog.md:256 +#: ../../source/ref-changelog.md:23 msgid "" -"**Introduce experimental gRPC request-response API** " -"([#1867](https://github.com/adap/flower/pull/1867), " -"[#1901](https://github.com/adap/flower/pull/1901))" +"**Improve performance** " +"([#2293](https://github.com/adap/flower/pull/2293))" msgstr "" -#: ../../source/ref-changelog.md:258 +#: ../../source/ref-changelog.md:25 msgid "" -"In addition to the existing gRPC API (based on bidirectional streaming) " -"and the experimental REST API, there is now a new gRPC API that uses a " -"request-response model to communicate with client nodes." +"Flower is faster than ever. All `FedAvg`-derived strategies now use in-" +"place aggregation to reduce memory consumption. The Flower client " +"serialization/deserialization has been rewritten from the ground up, " +"which results in significant speedups, especially when the client-side " +"training time is short." msgstr "" -#: ../../source/ref-changelog.md:260 +#: ../../source/ref-changelog.md:27 msgid "" -"Please note: The gRPC request-response API is still experimental and will" -" likely change significantly over time." +"**Support Federated Learning with Apple MLX and Flower** " +"([#2693](https://github.com/adap/flower/pull/2693))" msgstr "" -#: ../../source/ref-changelog.md:262 +#: ../../source/ref-changelog.md:29 msgid "" -"**Replace the experimental** `start_client(rest=True)` **with the new** " -"`start_client(transport=\"rest\")` " -"([#1880](https://github.com/adap/flower/pull/1880))" +"Flower has official support for federated learning using [Apple " +"MLX](https://ml-explore.github.io/mlx) via the new `quickstart-mlx` code " +"example." msgstr "" -#: ../../source/ref-changelog.md:264 +#: ../../source/ref-changelog.md:31 msgid "" -"The (experimental) `start_client` argument `rest` was deprecated in " -"favour of a new argument `transport`. `start_client(transport=\"rest\")` " -"will yield the same behaviour as `start_client(rest=True)` did before. " -"All code should migrate to the new argument `transport`. The deprecated " -"argument `rest` will be removed in a future release." +"**Introduce new XGBoost cyclic strategy** " +"([#2666](https://github.com/adap/flower/pull/2666), " +"[#2668](https://github.com/adap/flower/pull/2668))" msgstr "" -#: ../../source/ref-changelog.md:266 +#: ../../source/ref-changelog.md:33 msgid "" -"**Add a new gRPC option** " -"([#2197](https://github.com/adap/flower/pull/2197))" +"A new strategy called `FedXgbCyclic` supports a client-by-client style of" +" training (often called cyclic). The `xgboost-comprehensive` code example" +" shows how to use it in a full project. In addition to that, `xgboost-" +"comprehensive` now also supports simulation mode. With this, Flower " +"offers best-in-class XGBoost support." msgstr "" -#: ../../source/ref-changelog.md:268 +#: ../../source/ref-changelog.md:35 msgid "" -"We now start a gRPC server with the `grpc.keepalive_permit_without_calls`" -" option set to 0 by default. This prevents the clients from sending " -"keepalive pings when there is no outstanding stream." +"**Support Python 3.11** " +"([#2394](https://github.com/adap/flower/pull/2394))" msgstr "" -#: ../../source/ref-changelog.md:270 +#: ../../source/ref-changelog.md:37 msgid "" -"**Improve example notebooks** " -"([#2005](https://github.com/adap/flower/pull/2005))" +"Framework tests now run on Python 3.8, 3.9, 3.10, and 3.11. This will " +"ensure better support for users using more recent Python versions." 
msgstr "" -#: ../../source/ref-changelog.md:272 -msgid "There's a new 30min Federated Learning PyTorch tutorial!" +#: ../../source/ref-changelog.md:39 +msgid "" +"**Update gRPC and ProtoBuf dependencies** " +"([#2814](https://github.com/adap/flower/pull/2814))" msgstr "" -#: ../../source/ref-changelog.md:274 +#: ../../source/ref-changelog.md:41 msgid "" -"**Example updates** ([#1772](https://github.com/adap/flower/pull/1772), " -"[#1873](https://github.com/adap/flower/pull/1873), " -"[#1981](https://github.com/adap/flower/pull/1981), " -"[#1988](https://github.com/adap/flower/pull/1988), " -"[#1984](https://github.com/adap/flower/pull/1984), " -"[#1982](https://github.com/adap/flower/pull/1982), " -"[#2112](https://github.com/adap/flower/pull/2112), " -"[#2144](https://github.com/adap/flower/pull/2144), " -"[#2174](https://github.com/adap/flower/pull/2174), " -"[#2225](https://github.com/adap/flower/pull/2225), " -"[#2183](https://github.com/adap/flower/pull/2183))" +"The `grpcio` and `protobuf` dependencies were updated to their latest " +"versions for improved security and performance." msgstr "" -#: ../../source/ref-changelog.md:276 +#: ../../source/ref-changelog.md:43 msgid "" -"Many examples have received significant updates, including simplified " -"advanced-tensorflow and advanced-pytorch examples, improved macOS " -"compatibility of TensorFlow examples, and code examples for simulation. A" -" major upgrade is that all code examples now have a `requirements.txt` " -"(in addition to `pyproject.toml`)." +"**Introduce Docker image for Flower server** " +"([#2700](https://github.com/adap/flower/pull/2700), " +"[#2688](https://github.com/adap/flower/pull/2688), " +"[#2705](https://github.com/adap/flower/pull/2705), " +"[#2695](https://github.com/adap/flower/pull/2695), " +"[#2747](https://github.com/adap/flower/pull/2747), " +"[#2746](https://github.com/adap/flower/pull/2746), " +"[#2680](https://github.com/adap/flower/pull/2680), " +"[#2682](https://github.com/adap/flower/pull/2682), " +"[#2701](https://github.com/adap/flower/pull/2701))" msgstr "" -#: ../../source/ref-changelog.md:278 +#: ../../source/ref-changelog.md:45 msgid "" -"**General improvements** " -"([#1872](https://github.com/adap/flower/pull/1872), " -"[#1866](https://github.com/adap/flower/pull/1866), " -"[#1884](https://github.com/adap/flower/pull/1884), " -"[#1837](https://github.com/adap/flower/pull/1837), " -"[#1477](https://github.com/adap/flower/pull/1477), " -"[#2171](https://github.com/adap/flower/pull/2171))" +"The Flower server can now be run using an official Docker image. A new " +"how-to guide explains [how to run Flower using " +"Docker](https://flower.ai/docs/framework/how-to-run-flower-using-" +"docker.html). An official Flower client Docker image will follow." 
msgstr "" -#: ../../source/ref-changelog.md:284 ../../source/ref-changelog.md:348 -#: ../../source/ref-changelog.md:406 ../../source/ref-changelog.md:475 -#: ../../source/ref-changelog.md:537 -msgid "None" +#: ../../source/ref-changelog.md:47 +msgid "" +"**Introduce** `flower-via-docker-compose` **example** " +"([#2626](https://github.com/adap/flower/pull/2626))" msgstr "" -#: ../../source/ref-changelog.md:286 -msgid "v1.4.0 (2023-04-21)" +#: ../../source/ref-changelog.md:49 +msgid "" +"**Introduce** `quickstart-sklearn-tabular` **example** " +"([#2719](https://github.com/adap/flower/pull/2719))" msgstr "" -#: ../../source/ref-changelog.md:292 +#: ../../source/ref-changelog.md:51 msgid "" -"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " -"`Chenyang Ma (Danny)`, `Daniel J. Beutel`, `Edoardo`, `Gautam Jajoo`, " -"`Iacob-Alexandru-Andrei`, `JDRanpariya`, `Jean Charle Yaacoub`, `Kunal " -"Sarkhel`, `L. Jiang`, `Lennart Behme`, `Max Kapsecker`, `Michał`, `Nic " -"Lane`, `Nikolaos Episkopos`, `Ragy`, `Saurav Maheshkar`, `Semo Yang`, " -"`Steve Laskaridis`, `Steven Hé (Sīchàng)`, `Taner Topal`" +"**Introduce** `custom-metrics` **example** " +"([#1958](https://github.com/adap/flower/pull/1958))" msgstr "" -#: ../../source/ref-changelog.md:296 +#: ../../source/ref-changelog.md:53 msgid "" -"**Introduce support for XGBoost (**`FedXgbNnAvg` **strategy and " -"example)** ([#1694](https://github.com/adap/flower/pull/1694), " -"[#1709](https://github.com/adap/flower/pull/1709), " -"[#1715](https://github.com/adap/flower/pull/1715), " -"[#1717](https://github.com/adap/flower/pull/1717), " -"[#1763](https://github.com/adap/flower/pull/1763), " -"[#1795](https://github.com/adap/flower/pull/1795))" +"**Update code examples to use Flower Datasets** " +"([#2450](https://github.com/adap/flower/pull/2450), " +"[#2456](https://github.com/adap/flower/pull/2456), " +"[#2318](https://github.com/adap/flower/pull/2318), " +"[#2712](https://github.com/adap/flower/pull/2712))" msgstr "" -#: ../../source/ref-changelog.md:298 +#: ../../source/ref-changelog.md:55 msgid "" -"XGBoost is a tree-based ensemble machine learning algorithm that uses " -"gradient boosting to improve model accuracy. We added a new `FedXgbNnAvg`" -" " -"[strategy](https://github.com/adap/flower/tree/main/src/py/flwr/server/strategy/fedxgb_nn_avg.py)," -" and a [code " -"example](https://github.com/adap/flower/tree/main/examples/xgboost-quickstart)" -" that demonstrates the usage of this new strategy in an XGBoost project." +"Several code examples were updated to use [Flower " +"Datasets](https://flower.ai/docs/datasets/)." 
msgstr "" -#: ../../source/ref-changelog.md:300 +#: ../../source/ref-changelog.md:57 msgid "" -"**Introduce iOS SDK (preview)** " -"([#1621](https://github.com/adap/flower/pull/1621), " -"[#1764](https://github.com/adap/flower/pull/1764))" +"**General updates to Flower Examples** " +"([#2381](https://github.com/adap/flower/pull/2381), " +"[#2805](https://github.com/adap/flower/pull/2805), " +"[#2782](https://github.com/adap/flower/pull/2782), " +"[#2806](https://github.com/adap/flower/pull/2806), " +"[#2829](https://github.com/adap/flower/pull/2829), " +"[#2825](https://github.com/adap/flower/pull/2825), " +"[#2816](https://github.com/adap/flower/pull/2816), " +"[#2726](https://github.com/adap/flower/pull/2726), " +"[#2659](https://github.com/adap/flower/pull/2659), " +"[#2655](https://github.com/adap/flower/pull/2655))" msgstr "" -#: ../../source/ref-changelog.md:302 -msgid "" -"This is a major update for anyone wanting to implement Federated Learning" -" on iOS mobile devices. We now have a swift iOS SDK present under " -"[src/swift/flwr](https://github.com/adap/flower/tree/main/src/swift/flwr)" -" that will facilitate greatly the app creating process. To showcase its " -"use, the [iOS " -"example](https://github.com/adap/flower/tree/main/examples/ios) has also " -"been updated!" +#: ../../source/ref-changelog.md:59 +msgid "Many Flower code examples received substantial updates." msgstr "" -#: ../../source/ref-changelog.md:304 -msgid "" -"**Introduce new \"What is Federated Learning?\" tutorial** " -"([#1657](https://github.com/adap/flower/pull/1657), " -"[#1721](https://github.com/adap/flower/pull/1721))" +#: ../../source/ref-changelog.md:61 ../../source/ref-changelog.md:154 +msgid "**Update Flower Baselines**" msgstr "" -#: ../../source/ref-changelog.md:306 +#: ../../source/ref-changelog.md:63 msgid "" -"A new [entry-level tutorial](https://flower.ai/docs/framework/tutorial-" -"what-is-federated-learning.html) in our documentation explains the basics" -" of Fedetated Learning. It enables anyone who's unfamiliar with Federated" -" Learning to start their journey with Flower. Forward it to anyone who's " -"interested in Federated Learning!" +"HFedXGBoost ([#2226](https://github.com/adap/flower/pull/2226), " +"[#2771](https://github.com/adap/flower/pull/2771))" msgstr "" -#: ../../source/ref-changelog.md:308 -msgid "" -"**Introduce new Flower Baseline: FedProx MNIST** " -"([#1513](https://github.com/adap/flower/pull/1513), " -"[#1680](https://github.com/adap/flower/pull/1680), " -"[#1681](https://github.com/adap/flower/pull/1681), " -"[#1679](https://github.com/adap/flower/pull/1679))" +#: ../../source/ref-changelog.md:64 +msgid "FedVSSL ([#2412](https://github.com/adap/flower/pull/2412))" msgstr "" -#: ../../source/ref-changelog.md:310 -msgid "" -"This new baseline replicates the MNIST+CNN task from the paper [Federated" -" Optimization in Heterogeneous Networks (Li et al., " -"2018)](https://arxiv.org/abs/1812.06127). It uses the `FedProx` strategy," -" which aims at making convergence more robust in heterogeneous settings." 
+#: ../../source/ref-changelog.md:65 +msgid "FedNova ([#2179](https://github.com/adap/flower/pull/2179))" msgstr "" -#: ../../source/ref-changelog.md:312 -msgid "" -"**Introduce new Flower Baseline: FedAvg FEMNIST** " -"([#1655](https://github.com/adap/flower/pull/1655))" +#: ../../source/ref-changelog.md:66 +msgid "HeteroFL ([#2439](https://github.com/adap/flower/pull/2439))" msgstr "" -#: ../../source/ref-changelog.md:314 -msgid "" -"This new baseline replicates an experiment evaluating the performance of " -"the FedAvg algorithm on the FEMNIST dataset from the paper [LEAF: A " -"Benchmark for Federated Settings (Caldas et al., " -"2018)](https://arxiv.org/abs/1812.01097)." +#: ../../source/ref-changelog.md:67 +msgid "FedAvgM ([#2246](https://github.com/adap/flower/pull/2246))" msgstr "" -#: ../../source/ref-changelog.md:316 -msgid "" -"**Introduce (experimental) REST API** " -"([#1594](https://github.com/adap/flower/pull/1594), " -"[#1690](https://github.com/adap/flower/pull/1690), " -"[#1695](https://github.com/adap/flower/pull/1695), " -"[#1712](https://github.com/adap/flower/pull/1712), " -"[#1802](https://github.com/adap/flower/pull/1802), " -"[#1770](https://github.com/adap/flower/pull/1770), " -"[#1733](https://github.com/adap/flower/pull/1733))" +#: ../../source/ref-changelog.md:68 +msgid "FedPara ([#2722](https://github.com/adap/flower/pull/2722))" msgstr "" -#: ../../source/ref-changelog.md:318 +#: ../../source/ref-changelog.md:70 msgid "" -"A new REST API has been introduced as an alternative to the gRPC-based " -"communication stack. In this initial version, the REST API only supports " -"anonymous clients." +"**Improve documentation** " +"([#2674](https://github.com/adap/flower/pull/2674), " +"[#2480](https://github.com/adap/flower/pull/2480), " +"[#2826](https://github.com/adap/flower/pull/2826), " +"[#2727](https://github.com/adap/flower/pull/2727), " +"[#2761](https://github.com/adap/flower/pull/2761), " +"[#2900](https://github.com/adap/flower/pull/2900))" msgstr "" -#: ../../source/ref-changelog.md:320 +#: ../../source/ref-changelog.md:72 msgid "" -"Please note: The REST API is still experimental and will likely change " -"significantly over time." 
+"**Improved testing and development infrastructure** " +"([#2797](https://github.com/adap/flower/pull/2797), " +"[#2676](https://github.com/adap/flower/pull/2676), " +"[#2644](https://github.com/adap/flower/pull/2644), " +"[#2656](https://github.com/adap/flower/pull/2656), " +"[#2848](https://github.com/adap/flower/pull/2848), " +"[#2675](https://github.com/adap/flower/pull/2675), " +"[#2735](https://github.com/adap/flower/pull/2735), " +"[#2767](https://github.com/adap/flower/pull/2767), " +"[#2732](https://github.com/adap/flower/pull/2732), " +"[#2744](https://github.com/adap/flower/pull/2744), " +"[#2681](https://github.com/adap/flower/pull/2681), " +"[#2699](https://github.com/adap/flower/pull/2699), " +"[#2745](https://github.com/adap/flower/pull/2745), " +"[#2734](https://github.com/adap/flower/pull/2734), " +"[#2731](https://github.com/adap/flower/pull/2731), " +"[#2652](https://github.com/adap/flower/pull/2652), " +"[#2720](https://github.com/adap/flower/pull/2720), " +"[#2721](https://github.com/adap/flower/pull/2721), " +"[#2717](https://github.com/adap/flower/pull/2717), " +"[#2864](https://github.com/adap/flower/pull/2864), " +"[#2694](https://github.com/adap/flower/pull/2694), " +"[#2709](https://github.com/adap/flower/pull/2709), " +"[#2658](https://github.com/adap/flower/pull/2658), " +"[#2796](https://github.com/adap/flower/pull/2796), " +"[#2692](https://github.com/adap/flower/pull/2692), " +"[#2657](https://github.com/adap/flower/pull/2657), " +"[#2813](https://github.com/adap/flower/pull/2813), " +"[#2661](https://github.com/adap/flower/pull/2661), " +"[#2398](https://github.com/adap/flower/pull/2398))" msgstr "" -#: ../../source/ref-changelog.md:322 +#: ../../source/ref-changelog.md:74 msgid "" -"**Improve the (experimental) Driver API** " -"([#1663](https://github.com/adap/flower/pull/1663), " -"[#1666](https://github.com/adap/flower/pull/1666), " -"[#1667](https://github.com/adap/flower/pull/1667), " -"[#1664](https://github.com/adap/flower/pull/1664), " -"[#1675](https://github.com/adap/flower/pull/1675), " -"[#1676](https://github.com/adap/flower/pull/1676), " -"[#1693](https://github.com/adap/flower/pull/1693), " -"[#1662](https://github.com/adap/flower/pull/1662), " -"[#1794](https://github.com/adap/flower/pull/1794))" +"The Flower testing and development infrastructure has received " +"substantial updates. This makes Flower 1.7 the most tested release ever." msgstr "" -#: ../../source/ref-changelog.md:324 +#: ../../source/ref-changelog.md:76 msgid "" -"The Driver API is still an experimental feature, but this release " -"introduces some major upgrades. One of the main improvements is the " -"introduction of an SQLite database to store server state on disk (instead" -" of in-memory). Another improvement is that tasks (instructions or " -"results) that have been delivered will now be deleted. This greatly " -"improves the memory efficiency of a long-running Flower server." 
+"**Update dependencies** " +"([#2753](https://github.com/adap/flower/pull/2753), " +"[#2651](https://github.com/adap/flower/pull/2651), " +"[#2739](https://github.com/adap/flower/pull/2739), " +"[#2837](https://github.com/adap/flower/pull/2837), " +"[#2788](https://github.com/adap/flower/pull/2788), " +"[#2811](https://github.com/adap/flower/pull/2811), " +"[#2774](https://github.com/adap/flower/pull/2774), " +"[#2790](https://github.com/adap/flower/pull/2790), " +"[#2751](https://github.com/adap/flower/pull/2751), " +"[#2850](https://github.com/adap/flower/pull/2850), " +"[#2812](https://github.com/adap/flower/pull/2812), " +"[#2872](https://github.com/adap/flower/pull/2872), " +"[#2736](https://github.com/adap/flower/pull/2736), " +"[#2756](https://github.com/adap/flower/pull/2756), " +"[#2857](https://github.com/adap/flower/pull/2857), " +"[#2757](https://github.com/adap/flower/pull/2757), " +"[#2810](https://github.com/adap/flower/pull/2810), " +"[#2740](https://github.com/adap/flower/pull/2740), " +"[#2789](https://github.com/adap/flower/pull/2789))" msgstr "" -#: ../../source/ref-changelog.md:326 +#: ../../source/ref-changelog.md:78 msgid "" -"**Fix spilling issues related to Ray during simulations** " -"([#1698](https://github.com/adap/flower/pull/1698))" +"**General improvements** " +"([#2803](https://github.com/adap/flower/pull/2803), " +"[#2847](https://github.com/adap/flower/pull/2847), " +"[#2877](https://github.com/adap/flower/pull/2877), " +"[#2690](https://github.com/adap/flower/pull/2690), " +"[#2889](https://github.com/adap/flower/pull/2889), " +"[#2874](https://github.com/adap/flower/pull/2874), " +"[#2819](https://github.com/adap/flower/pull/2819), " +"[#2689](https://github.com/adap/flower/pull/2689), " +"[#2457](https://github.com/adap/flower/pull/2457), " +"[#2870](https://github.com/adap/flower/pull/2870), " +"[#2669](https://github.com/adap/flower/pull/2669), " +"[#2876](https://github.com/adap/flower/pull/2876), " +"[#2885](https://github.com/adap/flower/pull/2885), " +"[#2858](https://github.com/adap/flower/pull/2858), " +"[#2867](https://github.com/adap/flower/pull/2867), " +"[#2351](https://github.com/adap/flower/pull/2351), " +"[#2886](https://github.com/adap/flower/pull/2886), " +"[#2860](https://github.com/adap/flower/pull/2860), " +"[#2828](https://github.com/adap/flower/pull/2828), " +"[#2869](https://github.com/adap/flower/pull/2869), " +"[#2875](https://github.com/adap/flower/pull/2875), " +"[#2733](https://github.com/adap/flower/pull/2733), " +"[#2488](https://github.com/adap/flower/pull/2488), " +"[#2646](https://github.com/adap/flower/pull/2646), " +"[#2879](https://github.com/adap/flower/pull/2879), " +"[#2821](https://github.com/adap/flower/pull/2821), " +"[#2855](https://github.com/adap/flower/pull/2855), " +"[#2800](https://github.com/adap/flower/pull/2800), " +"[#2807](https://github.com/adap/flower/pull/2807), " +"[#2801](https://github.com/adap/flower/pull/2801), " +"[#2804](https://github.com/adap/flower/pull/2804), " +"[#2851](https://github.com/adap/flower/pull/2851), " +"[#2787](https://github.com/adap/flower/pull/2787), " +"[#2852](https://github.com/adap/flower/pull/2852), " +"[#2672](https://github.com/adap/flower/pull/2672), " +"[#2759](https://github.com/adap/flower/pull/2759))" msgstr "" -#: ../../source/ref-changelog.md:328 +#: ../../source/ref-changelog.md:82 msgid "" -"While running long simulations, `ray` was sometimes spilling huge amounts" -" of data that would make the training unable to continue. This is now " -"fixed! 
🎉" +"**Deprecate** `start_numpy_client` " +"([#2563](https://github.com/adap/flower/pull/2563), " +"[#2718](https://github.com/adap/flower/pull/2718))" msgstr "" -#: ../../source/ref-changelog.md:330 +#: ../../source/ref-changelog.md:84 msgid "" -"**Add new example using** `TabNet` **and Flower** " -"([#1725](https://github.com/adap/flower/pull/1725))" +"Until now, clients of type `NumPyClient` needed to be started via " +"`start_numpy_client`. In our efforts to consolidate framework APIs, we " +"have introduced changes, and now all client types should start via " +"`start_client`. To continue using `NumPyClient` clients, you simply need " +"to first call the `.to_client()` method and then pass returned `Client` " +"object to `start_client`. The examples and the documentation have been " +"updated accordingly." msgstr "" -#: ../../source/ref-changelog.md:332 +#: ../../source/ref-changelog.md:86 msgid "" -"TabNet is a powerful and flexible framework for training machine learning" -" models on tabular data. We now have a federated example using Flower: " -"[quickstart-tabnet](https://github.com/adap/flower/tree/main/examples/quickstart-tabnet)." +"**Deprecate legacy DP wrappers** " +"([#2749](https://github.com/adap/flower/pull/2749))" msgstr "" -#: ../../source/ref-changelog.md:334 +#: ../../source/ref-changelog.md:88 msgid "" -"**Add new how-to guide for monitoring simulations** " -"([#1649](https://github.com/adap/flower/pull/1649))" +"Legacy DP wrapper classes are deprecated, but still functional. This is " +"in preparation for an all-new pluggable version of differential privacy " +"support in Flower." msgstr "" -#: ../../source/ref-changelog.md:336 +#: ../../source/ref-changelog.md:90 msgid "" -"We now have a documentation guide to help users monitor their performance" -" during simulations." +"**Make optional arg** `--callable` **in** `flower-client` **a required " +"positional arg** ([#2673](https://github.com/adap/flower/pull/2673))" msgstr "" -#: ../../source/ref-changelog.md:338 +#: ../../source/ref-changelog.md:92 msgid "" -"**Add training metrics to** `History` **object during simulations** " -"([#1696](https://github.com/adap/flower/pull/1696))" +"**Rename** `certificates` **to** `root_certificates` **in** `Driver` " +"([#2890](https://github.com/adap/flower/pull/2890))" msgstr "" -#: ../../source/ref-changelog.md:340 +#: ../../source/ref-changelog.md:94 msgid "" -"The `fit_metrics_aggregation_fn` can be used to aggregate training " -"metrics, but previous releases did not save the results in the `History` " -"object. This is now the case!" 
+"**Drop experimental** `Task` **fields** " +"([#2866](https://github.com/adap/flower/pull/2866), " +"[#2865](https://github.com/adap/flower/pull/2865))" msgstr "" -#: ../../source/ref-changelog.md:342 +#: ../../source/ref-changelog.md:96 msgid "" -"**General improvements** " -"([#1659](https://github.com/adap/flower/pull/1659), " -"[#1646](https://github.com/adap/flower/pull/1646), " -"[#1647](https://github.com/adap/flower/pull/1647), " -"[#1471](https://github.com/adap/flower/pull/1471), " -"[#1648](https://github.com/adap/flower/pull/1648), " -"[#1651](https://github.com/adap/flower/pull/1651), " -"[#1652](https://github.com/adap/flower/pull/1652), " -"[#1653](https://github.com/adap/flower/pull/1653), " -"[#1659](https://github.com/adap/flower/pull/1659), " -"[#1665](https://github.com/adap/flower/pull/1665), " -"[#1670](https://github.com/adap/flower/pull/1670), " -"[#1672](https://github.com/adap/flower/pull/1672), " -"[#1677](https://github.com/adap/flower/pull/1677), " -"[#1684](https://github.com/adap/flower/pull/1684), " -"[#1683](https://github.com/adap/flower/pull/1683), " -"[#1686](https://github.com/adap/flower/pull/1686), " -"[#1682](https://github.com/adap/flower/pull/1682), " -"[#1685](https://github.com/adap/flower/pull/1685), " -"[#1692](https://github.com/adap/flower/pull/1692), " -"[#1705](https://github.com/adap/flower/pull/1705), " -"[#1708](https://github.com/adap/flower/pull/1708), " -"[#1711](https://github.com/adap/flower/pull/1711), " -"[#1713](https://github.com/adap/flower/pull/1713), " -"[#1714](https://github.com/adap/flower/pull/1714), " -"[#1718](https://github.com/adap/flower/pull/1718), " -"[#1716](https://github.com/adap/flower/pull/1716), " -"[#1723](https://github.com/adap/flower/pull/1723), " -"[#1735](https://github.com/adap/flower/pull/1735), " -"[#1678](https://github.com/adap/flower/pull/1678), " -"[#1750](https://github.com/adap/flower/pull/1750), " -"[#1753](https://github.com/adap/flower/pull/1753), " -"[#1736](https://github.com/adap/flower/pull/1736), " -"[#1766](https://github.com/adap/flower/pull/1766), " -"[#1760](https://github.com/adap/flower/pull/1760), " -"[#1775](https://github.com/adap/flower/pull/1775), " -"[#1776](https://github.com/adap/flower/pull/1776), " -"[#1777](https://github.com/adap/flower/pull/1777), " -"[#1779](https://github.com/adap/flower/pull/1779), " -"[#1784](https://github.com/adap/flower/pull/1784), " -"[#1773](https://github.com/adap/flower/pull/1773), " -"[#1755](https://github.com/adap/flower/pull/1755), " -"[#1789](https://github.com/adap/flower/pull/1789), " -"[#1788](https://github.com/adap/flower/pull/1788), " -"[#1798](https://github.com/adap/flower/pull/1798), " -"[#1799](https://github.com/adap/flower/pull/1799), " -"[#1739](https://github.com/adap/flower/pull/1739), " -"[#1800](https://github.com/adap/flower/pull/1800), " -"[#1804](https://github.com/adap/flower/pull/1804), " -"[#1805](https://github.com/adap/flower/pull/1805))" +"Experimental fields `sa`, `legacy_server_message` and " +"`legacy_client_message` were removed from `Task` message. The removed " +"fields are superseded by the new `RecordSet` abstraction." msgstr "" -#: ../../source/ref-changelog.md:350 -msgid "v1.3.0 (2023-02-06)" +#: ../../source/ref-changelog.md:98 +msgid "" +"**Retire MXNet examples** " +"([#2724](https://github.com/adap/flower/pull/2724))" msgstr "" -#: ../../source/ref-changelog.md:356 +#: ../../source/ref-changelog.md:100 msgid "" -"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " -"`Daniel J. 
Beutel`, `JDRanpariya`, `Lennart Behme`, `Taner Topal`" +"The development of the MXNet fremework has ended and the project is now " +"[archived on GitHub](https://github.com/apache/mxnet). Existing MXNet " +"examples won't receive updates." msgstr "" -#: ../../source/ref-changelog.md:360 -msgid "" -"**Add support for** `workload_id` **and** `group_id` **in Driver API** " -"([#1595](https://github.com/adap/flower/pull/1595))" +#: ../../source/ref-changelog.md:102 +msgid "v1.6.0 (2023-11-28)" msgstr "" -#: ../../source/ref-changelog.md:362 +#: ../../source/ref-changelog.md:108 msgid "" -"The (experimental) Driver API now supports a `workload_id` that can be " -"used to identify which workload a task belongs to. It also supports a new" -" `group_id` that can be used, for example, to indicate the current " -"training round. Both the `workload_id` and `group_id` enable client nodes" -" to decide whether they want to handle a task or not." +"`Aashish Kolluri`, `Adam Narozniak`, `Alessio Mora`, `Barathwaja S`, " +"`Charles Beauville`, `Daniel J. Beutel`, `Daniel Nata Nugraha`, `Gabriel " +"Mota`, `Heng Pan`, `Ivan Agarský`, `JS.KIM`, `Javier`, `Marius Schlegel`," +" `Navin Chandra`, `Nic Lane`, `Peterpan828`, `Qinbin Li`, `Shaz-hash`, " +"`Steve Laskaridis`, `Taner Topal`, `William Lindskog`, `Yan Gao`, " +"`cnxdeveloper`, `k3nfalt` " msgstr "" -#: ../../source/ref-changelog.md:364 +#: ../../source/ref-changelog.md:112 msgid "" -"**Make Driver API and Fleet API address configurable** " -"([#1637](https://github.com/adap/flower/pull/1637))" +"**Add experimental support for Python 3.12** " +"([#2565](https://github.com/adap/flower/pull/2565))" msgstr "" -#: ../../source/ref-changelog.md:366 +#: ../../source/ref-changelog.md:114 msgid "" -"The (experimental) long-running Flower server (Driver API and Fleet API) " -"can now configure the server address of both Driver API (via `--driver-" -"api-address`) and Fleet API (via `--fleet-api-address`) when starting:" +"**Add new XGBoost examples** " +"([#2612](https://github.com/adap/flower/pull/2612), " +"[#2554](https://github.com/adap/flower/pull/2554), " +"[#2617](https://github.com/adap/flower/pull/2617), " +"[#2618](https://github.com/adap/flower/pull/2618), " +"[#2619](https://github.com/adap/flower/pull/2619), " +"[#2567](https://github.com/adap/flower/pull/2567))" msgstr "" -#: ../../source/ref-changelog.md:368 +#: ../../source/ref-changelog.md:116 msgid "" -"`flower-server --driver-api-address \"0.0.0.0:8081\" --fleet-api-address " -"\"0.0.0.0:8086\"`" +"We have added a new `xgboost-quickstart` example alongside a new " +"`xgboost-comprehensive` example that goes more in-depth." msgstr "" -#: ../../source/ref-changelog.md:370 -msgid "Both IPv4 and IPv6 addresses are supported." +#: ../../source/ref-changelog.md:118 +msgid "" +"**Add Vertical FL example** " +"([#2598](https://github.com/adap/flower/pull/2598))" msgstr "" -#: ../../source/ref-changelog.md:372 +#: ../../source/ref-changelog.md:120 msgid "" -"**Add new example of Federated Learning using fastai and Flower** " -"([#1598](https://github.com/adap/flower/pull/1598))" +"We had many questions about Vertical Federated Learning using Flower, so " +"we decided to add an simple example for it on the [Titanic " +"dataset](https://www.kaggle.com/competitions/titanic/data) alongside a " +"tutorial (in the README)." 
msgstr "" -#: ../../source/ref-changelog.md:374 +#: ../../source/ref-changelog.md:122 msgid "" -"A new code example (`quickstart-fastai`) demonstrates federated learning " -"with [fastai](https://www.fast.ai/) and Flower. You can find it here: " -"[quickstart-fastai](https://github.com/adap/flower/tree/main/examples/quickstart-fastai)." +"**Support custom** `ClientManager` **in** `start_driver()` " +"([#2292](https://github.com/adap/flower/pull/2292))" msgstr "" -#: ../../source/ref-changelog.md:376 +#: ../../source/ref-changelog.md:124 msgid "" -"**Make Android example compatible with** `flwr >= 1.0.0` **and the latest" -" versions of Android** " -"([#1603](https://github.com/adap/flower/pull/1603))" +"**Update REST API to support create and delete nodes** " +"([#2283](https://github.com/adap/flower/pull/2283))" msgstr "" -#: ../../source/ref-changelog.md:378 +#: ../../source/ref-changelog.md:126 msgid "" -"The Android code example has received a substantial update: the project " -"is compatible with Flower 1.0 (and later), the UI received a full " -"refresh, and the project is updated to be compatible with newer Android " -"tooling." +"**Update the Android SDK** " +"([#2187](https://github.com/adap/flower/pull/2187))" msgstr "" -#: ../../source/ref-changelog.md:380 -msgid "" -"**Add new `FedProx` strategy** " -"([#1619](https://github.com/adap/flower/pull/1619))" +#: ../../source/ref-changelog.md:128 +msgid "Add gRPC request-response capability to the Android SDK." msgstr "" -#: ../../source/ref-changelog.md:382 +#: ../../source/ref-changelog.md:130 msgid "" -"This " -"[strategy](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedprox.py)" -" is almost identical to " -"[`FedAvg`](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedavg.py)," -" but helps users replicate what is described in this " -"[paper](https://arxiv.org/abs/1812.06127). It essentially adds a " -"parameter called `proximal_mu` to regularize the local models with " -"respect to the global models." +"**Update the C++ SDK** " +"([#2537](https://github.com/adap/flower/pull/2537), " +"[#2528](https://github.com/adap/flower/pull/2528), " +"[#2523](https://github.com/adap/flower/pull/2523), " +"[#2522](https://github.com/adap/flower/pull/2522))" msgstr "" -#: ../../source/ref-changelog.md:384 -msgid "" -"**Add new metrics to telemetry events** " -"([#1640](https://github.com/adap/flower/pull/1640))" +#: ../../source/ref-changelog.md:132 +msgid "Add gRPC request-response capability to the C++ SDK." msgstr "" -#: ../../source/ref-changelog.md:386 +#: ../../source/ref-changelog.md:134 msgid "" -"An updated event structure allows, for example, the clustering of events " -"within the same workload." +"**Make HTTPS the new default** " +"([#2591](https://github.com/adap/flower/pull/2591), " +"[#2636](https://github.com/adap/flower/pull/2636))" msgstr "" -#: ../../source/ref-changelog.md:388 +#: ../../source/ref-changelog.md:136 msgid "" -"**Add new custom strategy tutorial section** " -"[#1623](https://github.com/adap/flower/pull/1623)" +"Flower is moving to HTTPS by default. The new `flower-server` requires " +"passing `--certificates`, but users can enable `--insecure` to use HTTP " +"for prototyping. The same applies to `flower-client`, which can either " +"use user-provided credentials or gRPC-bundled certificates to connect to " +"an HTTPS-enabled server or requires opt-out via passing `--insecure` to " +"enable insecure HTTP connections." 
msgstr "" -#: ../../source/ref-changelog.md:390 +#: ../../source/ref-changelog.md:138 msgid "" -"The Flower tutorial now has a new section that covers implementing a " -"custom strategy from scratch: [Open in " -"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source" -"/tutorial-build-a-strategy-from-scratch-pytorch.ipynb)" +"For backward compatibility, `start_client()` and `start_numpy_client()` " +"will still start in insecure mode by default. In a future release, " +"insecure connections will require user opt-in by passing `insecure=True`." msgstr "" -#: ../../source/ref-changelog.md:392 +#: ../../source/ref-changelog.md:140 msgid "" -"**Add new custom serialization tutorial section** " -"([#1622](https://github.com/adap/flower/pull/1622))" +"**Unify client API** ([#2303](https://github.com/adap/flower/pull/2303), " +"[#2390](https://github.com/adap/flower/pull/2390), " +"[#2493](https://github.com/adap/flower/pull/2493))" msgstr "" -#: ../../source/ref-changelog.md:394 +#: ../../source/ref-changelog.md:142 msgid "" -"The Flower tutorial now has a new section that covers custom " -"serialization: [Open in " -"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source" -"/tutorial-customize-the-client-pytorch.ipynb)" +"Using the `client_fn`, Flower clients can interchangeably run as " +"standalone processes (i.e. via `start_client`) or in simulation (i.e. via" +" `start_simulation`) without requiring changes to how the client class is" +" defined and instantiated. The `to_client()` function is introduced to " +"convert a `NumPyClient` to a `Client`." msgstr "" -#: ../../source/ref-changelog.md:396 +#: ../../source/ref-changelog.md:144 msgid "" -"**General improvements** " -"([#1638](https://github.com/adap/flower/pull/1638), " -"[#1634](https://github.com/adap/flower/pull/1634), " -"[#1636](https://github.com/adap/flower/pull/1636), " -"[#1635](https://github.com/adap/flower/pull/1635), " -"[#1633](https://github.com/adap/flower/pull/1633), " -"[#1632](https://github.com/adap/flower/pull/1632), " -"[#1631](https://github.com/adap/flower/pull/1631), " -"[#1630](https://github.com/adap/flower/pull/1630), " -"[#1627](https://github.com/adap/flower/pull/1627), " -"[#1593](https://github.com/adap/flower/pull/1593), " -"[#1616](https://github.com/adap/flower/pull/1616), " -"[#1615](https://github.com/adap/flower/pull/1615), " -"[#1607](https://github.com/adap/flower/pull/1607), " -"[#1609](https://github.com/adap/flower/pull/1609), " -"[#1608](https://github.com/adap/flower/pull/1608), " -"[#1603](https://github.com/adap/flower/pull/1603), " -"[#1590](https://github.com/adap/flower/pull/1590), " -"[#1580](https://github.com/adap/flower/pull/1580), " -"[#1599](https://github.com/adap/flower/pull/1599), " -"[#1600](https://github.com/adap/flower/pull/1600), " -"[#1601](https://github.com/adap/flower/pull/1601), " -"[#1597](https://github.com/adap/flower/pull/1597), " -"[#1595](https://github.com/adap/flower/pull/1595), " -"[#1591](https://github.com/adap/flower/pull/1591), " -"[#1588](https://github.com/adap/flower/pull/1588), " -"[#1589](https://github.com/adap/flower/pull/1589), " -"[#1587](https://github.com/adap/flower/pull/1587), " -"[#1573](https://github.com/adap/flower/pull/1573), " -"[#1581](https://github.com/adap/flower/pull/1581), " -"[#1578](https://github.com/adap/flower/pull/1578), " -"[#1574](https://github.com/adap/flower/pull/1574), " -"[#1572](https://github.com/adap/flower/pull/1572), " 
-"[#1586](https://github.com/adap/flower/pull/1586))" +"**Add new** `Bulyan` **strategy** " +"([#1817](https://github.com/adap/flower/pull/1817), " +"[#1891](https://github.com/adap/flower/pull/1891))" msgstr "" -#: ../../source/ref-changelog.md:400 +#: ../../source/ref-changelog.md:146 msgid "" -"**Updated documentation** " -"([#1629](https://github.com/adap/flower/pull/1629), " -"[#1628](https://github.com/adap/flower/pull/1628), " -"[#1620](https://github.com/adap/flower/pull/1620), " -"[#1618](https://github.com/adap/flower/pull/1618), " -"[#1617](https://github.com/adap/flower/pull/1617), " -"[#1613](https://github.com/adap/flower/pull/1613), " -"[#1614](https://github.com/adap/flower/pull/1614))" +"The new `Bulyan` strategy implements Bulyan by [El Mhamdi et al., " +"2018](https://arxiv.org/abs/1802.07927)" msgstr "" -#: ../../source/ref-changelog.md:402 ../../source/ref-changelog.md:469 +#: ../../source/ref-changelog.md:148 msgid "" -"As usual, the documentation has improved quite a bit. It is another step " -"in our effort to make the Flower documentation the best documentation of " -"any project. Stay tuned and as always, feel free to provide feedback!" -msgstr "" - -#: ../../source/ref-changelog.md:408 -msgid "v1.2.0 (2023-01-13)" +"**Add new** `XGB Bagging` **strategy** " +"([#2611](https://github.com/adap/flower/pull/2611))" msgstr "" -#: ../../source/ref-changelog.md:414 +#: ../../source/ref-changelog.md:150 ../../source/ref-changelog.md:152 msgid "" -"`Adam Narozniak`, `Charles Beauville`, `Daniel J. Beutel`, `Edoardo`, `L." -" Jiang`, `Ragy`, `Taner Topal`, `dannymcy`" +"**Introduce `WorkloadState`** " +"([#2564](https://github.com/adap/flower/pull/2564), " +"[#2632](https://github.com/adap/flower/pull/2632))" msgstr "" -#: ../../source/ref-changelog.md:418 +#: ../../source/ref-changelog.md:156 msgid "" -"**Introduce new Flower Baseline: FedAvg MNIST** " -"([#1497](https://github.com/adap/flower/pull/1497), " -"[#1552](https://github.com/adap/flower/pull/1552))" +"FedProx ([#2210](https://github.com/adap/flower/pull/2210), " +"[#2286](https://github.com/adap/flower/pull/2286), " +"[#2509](https://github.com/adap/flower/pull/2509))" msgstr "" -#: ../../source/ref-changelog.md:420 +#: ../../source/ref-changelog.md:158 msgid "" -"Over the coming weeks, we will be releasing a number of new reference " -"implementations useful especially to FL newcomers. They will typically " -"revisit well known papers from the literature, and be suitable for " -"integration in your own application or for experimentation, in order to " -"deepen your knowledge of FL in general. Today's release is the first in " -"this series. [Read more.](https://flower.ai/blog/2023-01-12-fl-starter-" -"pack-fedavg-mnist-cnn/)" +"Baselines Docs ([#2290](https://github.com/adap/flower/pull/2290), " +"[#2400](https://github.com/adap/flower/pull/2400))" msgstr "" -#: ../../source/ref-changelog.md:422 +#: ../../source/ref-changelog.md:160 msgid "" -"**Improve GPU support in simulations** " -"([#1555](https://github.com/adap/flower/pull/1555))" +"FedMLB ([#2340](https://github.com/adap/flower/pull/2340), " +"[#2507](https://github.com/adap/flower/pull/2507))" msgstr "" -#: ../../source/ref-changelog.md:424 +#: ../../source/ref-changelog.md:162 msgid "" -"The Ray-based Virtual Client Engine (`start_simulation`) has been updated" -" to improve GPU support. The update includes some of the hard-earned " -"lessons from scaling simulations in GPU cluster environments. 
New " -"defaults make running GPU-based simulations substantially more robust." +"TAMUNA ([#2254](https://github.com/adap/flower/pull/2254), " +"[#2508](https://github.com/adap/flower/pull/2508))" msgstr "" -#: ../../source/ref-changelog.md:426 -msgid "" -"**Improve GPU support in Jupyter Notebook tutorials** " -"([#1527](https://github.com/adap/flower/pull/1527), " -"[#1558](https://github.com/adap/flower/pull/1558))" +#: ../../source/ref-changelog.md:164 +msgid "FedMeta [#2438](https://github.com/adap/flower/pull/2438)" msgstr "" -#: ../../source/ref-changelog.md:428 -msgid "" -"Some users reported that Jupyter Notebooks have not always been easy to " -"use on GPU instances. We listened and made improvements to all of our " -"Jupyter notebooks! Check out the updated notebooks here:" +#: ../../source/ref-changelog.md:166 +msgid "FjORD [#2431](https://github.com/adap/flower/pull/2431)" msgstr "" -#: ../../source/ref-changelog.md:430 -msgid "" -"[An Introduction to Federated Learning](https://flower.ai/docs/framework" -"/tutorial-get-started-with-flower-pytorch.html)" +#: ../../source/ref-changelog.md:168 +msgid "MOON [#2421](https://github.com/adap/flower/pull/2421)" msgstr "" -#: ../../source/ref-changelog.md:431 -msgid "" -"[Strategies in Federated Learning](https://flower.ai/docs/framework" -"/tutorial-use-a-federated-learning-strategy-pytorch.html)" +#: ../../source/ref-changelog.md:170 +msgid "DepthFL [#2295](https://github.com/adap/flower/pull/2295)" msgstr "" -#: ../../source/ref-changelog.md:432 -msgid "" -"[Building a Strategy](https://flower.ai/docs/framework/tutorial-build-a" -"-strategy-from-scratch-pytorch.html)" +#: ../../source/ref-changelog.md:172 +msgid "FedPer [#2266](https://github.com/adap/flower/pull/2266)" msgstr "" -#: ../../source/ref-changelog.md:433 -msgid "" -"[Client and NumPyClient](https://flower.ai/docs/framework/tutorial-" -"customize-the-client-pytorch.html)" +#: ../../source/ref-changelog.md:174 +msgid "FedWav2vec [#2551](https://github.com/adap/flower/pull/2551)" msgstr "" -#: ../../source/ref-changelog.md:435 -msgid "" -"**Introduce optional telemetry** " -"([#1533](https://github.com/adap/flower/pull/1533), " -"[#1544](https://github.com/adap/flower/pull/1544), " -"[#1584](https://github.com/adap/flower/pull/1584))" +#: ../../source/ref-changelog.md:176 +msgid "niid-Bench [#2428](https://github.com/adap/flower/pull/2428)" msgstr "" -#: ../../source/ref-changelog.md:437 +#: ../../source/ref-changelog.md:178 msgid "" -"After a [request for " -"feedback](https://github.com/adap/flower/issues/1534) from the community," -" the Flower open-source project introduces optional collection of " -"*anonymous* usage metrics to make well-informed decisions to improve " -"Flower. Doing this enables the Flower team to understand how Flower is " -"used and what challenges users might face." +"FedBN ([#2608](https://github.com/adap/flower/pull/2608), " +"[#2615](https://github.com/adap/flower/pull/2615))" msgstr "" -#: ../../source/ref-changelog.md:439 +#: ../../source/ref-changelog.md:180 msgid "" -"**Flower is a friendly framework for collaborative AI and data science.**" -" Staying true to this statement, Flower makes it easy to disable " -"telemetry for users who do not want to share anonymous usage metrics. " -"[Read more.](https://flower.ai/docs/telemetry.html)." 
+"**General updates to Flower Examples** " +"([#2384](https://github.com/adap/flower/pull/2384), " +"[#2425](https://github.com/adap/flower/pull/2425), " +"[#2526](https://github.com/adap/flower/pull/2526), " +"[#2302](https://github.com/adap/flower/pull/2302), " +"[#2545](https://github.com/adap/flower/pull/2545))" msgstr "" -#: ../../source/ref-changelog.md:441 +#: ../../source/ref-changelog.md:182 msgid "" -"**Introduce (experimental) Driver API** " -"([#1520](https://github.com/adap/flower/pull/1520), " -"[#1525](https://github.com/adap/flower/pull/1525), " -"[#1545](https://github.com/adap/flower/pull/1545), " -"[#1546](https://github.com/adap/flower/pull/1546), " -"[#1550](https://github.com/adap/flower/pull/1550), " -"[#1551](https://github.com/adap/flower/pull/1551), " -"[#1567](https://github.com/adap/flower/pull/1567))" +"**General updates to Flower Baselines** " +"([#2301](https://github.com/adap/flower/pull/2301), " +"[#2305](https://github.com/adap/flower/pull/2305), " +"[#2307](https://github.com/adap/flower/pull/2307), " +"[#2327](https://github.com/adap/flower/pull/2327), " +"[#2435](https://github.com/adap/flower/pull/2435), " +"[#2462](https://github.com/adap/flower/pull/2462), " +"[#2463](https://github.com/adap/flower/pull/2463), " +"[#2461](https://github.com/adap/flower/pull/2461), " +"[#2469](https://github.com/adap/flower/pull/2469), " +"[#2466](https://github.com/adap/flower/pull/2466), " +"[#2471](https://github.com/adap/flower/pull/2471), " +"[#2472](https://github.com/adap/flower/pull/2472), " +"[#2470](https://github.com/adap/flower/pull/2470))" msgstr "" -#: ../../source/ref-changelog.md:443 +#: ../../source/ref-changelog.md:184 msgid "" -"Flower now has a new (experimental) Driver API which will enable fully " -"programmable, async, and multi-tenant Federated Learning and Federated " -"Analytics applications. Phew, that's a lot! Going forward, the Driver API" -" will be the abstraction that many upcoming features will be built on - " -"and you can start building those things now, too." +"**General updates to the simulation engine** " +"([#2331](https://github.com/adap/flower/pull/2331), " +"[#2447](https://github.com/adap/flower/pull/2447), " +"[#2448](https://github.com/adap/flower/pull/2448), " +"[#2294](https://github.com/adap/flower/pull/2294))" msgstr "" -#: ../../source/ref-changelog.md:445 +#: ../../source/ref-changelog.md:186 msgid "" -"The Driver API also enables a new execution mode in which the server runs" -" indefinitely. Multiple individual workloads can run concurrently and " -"start and stop their execution independent of the server. This is " -"especially useful for users who want to deploy Flower in production." +"**General updates to Flower SDKs** " +"([#2288](https://github.com/adap/flower/pull/2288), " +"[#2429](https://github.com/adap/flower/pull/2429), " +"[#2555](https://github.com/adap/flower/pull/2555), " +"[#2543](https://github.com/adap/flower/pull/2543), " +"[#2544](https://github.com/adap/flower/pull/2544), " +"[#2597](https://github.com/adap/flower/pull/2597), " +"[#2623](https://github.com/adap/flower/pull/2623))" msgstr "" -#: ../../source/ref-changelog.md:447 +#: ../../source/ref-changelog.md:188 msgid "" -"To learn more, check out the `mt-pytorch` code example. We look forward " -"to you feedback!" 
+"**General improvements** " +"([#2309](https://github.com/adap/flower/pull/2309), " +"[#2310](https://github.com/adap/flower/pull/2310), " +"[#2313](https://github.com/adap/flower/pull/2313), " +"[#2316](https://github.com/adap/flower/pull/2316), " +"[#2317](https://github.com/adap/flower/pull/2317), " +"[#2349](https://github.com/adap/flower/pull/2349), " +"[#2360](https://github.com/adap/flower/pull/2360), " +"[#2402](https://github.com/adap/flower/pull/2402), " +"[#2446](https://github.com/adap/flower/pull/2446), " +"[#2561](https://github.com/adap/flower/pull/2561), " +"[#2273](https://github.com/adap/flower/pull/2273), " +"[#2267](https://github.com/adap/flower/pull/2267), " +"[#2274](https://github.com/adap/flower/pull/2274), " +"[#2275](https://github.com/adap/flower/pull/2275), " +"[#2432](https://github.com/adap/flower/pull/2432), " +"[#2251](https://github.com/adap/flower/pull/2251), " +"[#2321](https://github.com/adap/flower/pull/2321), " +"[#1936](https://github.com/adap/flower/pull/1936), " +"[#2408](https://github.com/adap/flower/pull/2408), " +"[#2413](https://github.com/adap/flower/pull/2413), " +"[#2401](https://github.com/adap/flower/pull/2401), " +"[#2531](https://github.com/adap/flower/pull/2531), " +"[#2534](https://github.com/adap/flower/pull/2534), " +"[#2535](https://github.com/adap/flower/pull/2535), " +"[#2521](https://github.com/adap/flower/pull/2521), " +"[#2553](https://github.com/adap/flower/pull/2553), " +"[#2596](https://github.com/adap/flower/pull/2596))" msgstr "" -#: ../../source/ref-changelog.md:449 -msgid "" -"Please note: *The Driver API is still experimental and will likely change" -" significantly over time.*" +#: ../../source/ref-changelog.md:190 ../../source/ref-changelog.md:280 +#: ../../source/ref-changelog.md:344 ../../source/ref-changelog.md:398 +#: ../../source/ref-changelog.md:465 +msgid "Flower received many improvements under the hood, too many to list here." msgstr "" -#: ../../source/ref-changelog.md:451 +#: ../../source/ref-changelog.md:194 msgid "" -"**Add new Federated Analytics with Pandas example** " -"([#1469](https://github.com/adap/flower/pull/1469), " -"[#1535](https://github.com/adap/flower/pull/1535))" +"**Remove support for Python 3.7** " +"([#2280](https://github.com/adap/flower/pull/2280), " +"[#2299](https://github.com/adap/flower/pull/2299), " +"[#2304](https://github.com/adap/flower/pull/2304), " +"[#2306](https://github.com/adap/flower/pull/2306), " +"[#2355](https://github.com/adap/flower/pull/2355), " +"[#2356](https://github.com/adap/flower/pull/2356))" msgstr "" -#: ../../source/ref-changelog.md:453 +#: ../../source/ref-changelog.md:196 msgid "" -"A new code example (`quickstart-pandas`) demonstrates federated analytics" -" with Pandas and Flower. You can find it here: " -"[quickstart-pandas](https://github.com/adap/flower/tree/main/examples/quickstart-pandas)." +"Python 3.7 support was deprecated in Flower 1.5, and this release removes" +" support. Flower now requires Python 3.8." 
msgstr "" -#: ../../source/ref-changelog.md:455 +#: ../../source/ref-changelog.md:198 msgid "" -"**Add new strategies: Krum and MultiKrum** " -"([#1481](https://github.com/adap/flower/pull/1481))" +"**Remove experimental argument** `rest` **from** `start_client` " +"([#2324](https://github.com/adap/flower/pull/2324))" msgstr "" -#: ../../source/ref-changelog.md:457 +#: ../../source/ref-changelog.md:200 msgid "" -"Edoardo, a computer science student at the Sapienza University of Rome, " -"contributed a new `Krum` strategy that enables users to easily use Krum " -"and MultiKrum in their workloads." +"The (still experimental) argument `rest` was removed from `start_client` " +"and `start_numpy_client`. Use `transport=\"rest\"` to opt into the " +"experimental REST API instead." msgstr "" -#: ../../source/ref-changelog.md:459 -msgid "" -"**Update C++ example to be compatible with Flower v1.2.0** " -"([#1495](https://github.com/adap/flower/pull/1495))" +#: ../../source/ref-changelog.md:202 +msgid "v1.5.0 (2023-08-31)" msgstr "" -#: ../../source/ref-changelog.md:461 +#: ../../source/ref-changelog.md:208 msgid "" -"The C++ code example has received a substantial update to make it " -"compatible with the latest version of Flower." +"`Adam Narozniak`, `Anass Anhari`, `Charles Beauville`, `Dana-Farber`, " +"`Daniel J. Beutel`, `Daniel Nata Nugraha`, `Edoardo Gabrielli`, `Gustavo " +"Bertoli`, `Heng Pan`, `Javier`, `Mahdi`, `Steven Hé (Sīchàng)`, `Taner " +"Topal`, `achiverram28`, `danielnugraha`, `eunchung`, `ruthgal` " msgstr "" -#: ../../source/ref-changelog.md:463 +#: ../../source/ref-changelog.md:212 msgid "" -"**General improvements** " -"([#1491](https://github.com/adap/flower/pull/1491), " -"[#1504](https://github.com/adap/flower/pull/1504), " -"[#1506](https://github.com/adap/flower/pull/1506), " -"[#1514](https://github.com/adap/flower/pull/1514), " -"[#1522](https://github.com/adap/flower/pull/1522), " -"[#1523](https://github.com/adap/flower/pull/1523), " -"[#1526](https://github.com/adap/flower/pull/1526), " -"[#1528](https://github.com/adap/flower/pull/1528), " -"[#1547](https://github.com/adap/flower/pull/1547), " -"[#1549](https://github.com/adap/flower/pull/1549), " -"[#1560](https://github.com/adap/flower/pull/1560), " -"[#1564](https://github.com/adap/flower/pull/1564), " -"[#1566](https://github.com/adap/flower/pull/1566))" +"**Introduce new simulation engine** " +"([#1969](https://github.com/adap/flower/pull/1969), " +"[#2221](https://github.com/adap/flower/pull/2221), " +"[#2248](https://github.com/adap/flower/pull/2248))" msgstr "" -#: ../../source/ref-changelog.md:467 +#: ../../source/ref-changelog.md:214 msgid "" -"**Updated documentation** " -"([#1494](https://github.com/adap/flower/pull/1494), " -"[#1496](https://github.com/adap/flower/pull/1496), " -"[#1500](https://github.com/adap/flower/pull/1500), " -"[#1503](https://github.com/adap/flower/pull/1503), " -"[#1505](https://github.com/adap/flower/pull/1505), " -"[#1524](https://github.com/adap/flower/pull/1524), " -"[#1518](https://github.com/adap/flower/pull/1518), " -"[#1519](https://github.com/adap/flower/pull/1519), " -"[#1515](https://github.com/adap/flower/pull/1515))" +"The new simulation engine has been rewritten from the ground up, yet it " +"remains fully backwards compatible. It offers much improved stability and" +" memory handling, especially when working with GPUs. 
Simulations " +"transparently adapt to different settings to scale simulation in CPU-" +"only, CPU+GPU, multi-GPU, or multi-node multi-GPU environments." msgstr "" -#: ../../source/ref-changelog.md:471 +#: ../../source/ref-changelog.md:216 msgid "" -"One highlight is the new [first time contributor " -"guide](https://flower.ai/docs/first-time-contributors.html): if you've " -"never contributed on GitHub before, this is the perfect place to start!" -msgstr "" - -#: ../../source/ref-changelog.md:477 -msgid "v1.1.0 (2022-10-31)" +"Comprehensive documentation includes a new [how-to run " +"simulations](https://flower.ai/docs/framework/how-to-run-" +"simulations.html) guide, new [simulation-" +"pytorch](https://flower.ai/docs/examples/simulation-pytorch.html) and " +"[simulation-tensorflow](https://flower.ai/docs/examples/simulation-" +"tensorflow.html) notebooks, and a new [YouTube tutorial " +"series](https://www.youtube.com/watch?v=cRebUIGB5RU&list=PLNG4feLHqCWlnj8a_E1A_n5zr2-8pafTB)." msgstr "" -#: ../../source/ref-changelog.md:481 +#: ../../source/ref-changelog.md:218 msgid "" -"We would like to give our **special thanks** to all the contributors who " -"made the new version of Flower possible (in `git shortlog` order):" +"**Restructure Flower Docs** " +"([#1824](https://github.com/adap/flower/pull/1824), " +"[#1865](https://github.com/adap/flower/pull/1865), " +"[#1884](https://github.com/adap/flower/pull/1884), " +"[#1887](https://github.com/adap/flower/pull/1887), " +"[#1919](https://github.com/adap/flower/pull/1919), " +"[#1922](https://github.com/adap/flower/pull/1922), " +"[#1920](https://github.com/adap/flower/pull/1920), " +"[#1923](https://github.com/adap/flower/pull/1923), " +"[#1924](https://github.com/adap/flower/pull/1924), " +"[#1962](https://github.com/adap/flower/pull/1962), " +"[#2006](https://github.com/adap/flower/pull/2006), " +"[#2133](https://github.com/adap/flower/pull/2133), " +"[#2203](https://github.com/adap/flower/pull/2203), " +"[#2215](https://github.com/adap/flower/pull/2215), " +"[#2122](https://github.com/adap/flower/pull/2122), " +"[#2223](https://github.com/adap/flower/pull/2223), " +"[#2219](https://github.com/adap/flower/pull/2219), " +"[#2232](https://github.com/adap/flower/pull/2232), " +"[#2233](https://github.com/adap/flower/pull/2233), " +"[#2234](https://github.com/adap/flower/pull/2234), " +"[#2235](https://github.com/adap/flower/pull/2235), " +"[#2237](https://github.com/adap/flower/pull/2237), " +"[#2238](https://github.com/adap/flower/pull/2238), " +"[#2242](https://github.com/adap/flower/pull/2242), " +"[#2231](https://github.com/adap/flower/pull/2231), " +"[#2243](https://github.com/adap/flower/pull/2243), " +"[#2227](https://github.com/adap/flower/pull/2227))" msgstr "" -#: ../../source/ref-changelog.md:483 +#: ../../source/ref-changelog.md:220 msgid "" -"`Akis Linardos`, `Christopher S`, `Daniel J. Beutel`, `George`, `Jan " -"Schlicht`, `Mohammad Fares`, `Pedro Porto Buarque de Gusmão`, `Philipp " -"Wiesner`, `Rob Luke`, `Taner Topal`, `VasundharaAgarwal`, " -"`danielnugraha`, `edogab33`" +"Much effort went into a completely restructured Flower docs experience. " +"The documentation on [flower.ai/docs](https://flower.ai/docs) is now " +"divided into Flower Framework, Flower Baselines, Flower Android SDK, " +"Flower iOS SDK, and code example projects." 
msgstr "" -#: ../../source/ref-changelog.md:487 +#: ../../source/ref-changelog.md:222 msgid "" -"**Introduce Differential Privacy wrappers (preview)** " -"([#1357](https://github.com/adap/flower/pull/1357), " -"[#1460](https://github.com/adap/flower/pull/1460))" +"**Introduce Flower Swift SDK** " +"([#1858](https://github.com/adap/flower/pull/1858), " +"[#1897](https://github.com/adap/flower/pull/1897))" msgstr "" -#: ../../source/ref-changelog.md:489 +#: ../../source/ref-changelog.md:224 msgid "" -"The first (experimental) preview of pluggable Differential Privacy " -"wrappers enables easy configuration and usage of differential privacy " -"(DP). The pluggable DP wrappers enable framework-agnostic **and** " -"strategy-agnostic usage of both client-side DP and server-side DP. Head " -"over to the Flower docs, a new explainer goes into more detail." +"This is the first preview release of the Flower Swift SDK. Flower support" +" on iOS is improving, and alongside the Swift SDK and code example, there" +" is now also an iOS quickstart tutorial." msgstr "" -#: ../../source/ref-changelog.md:491 +#: ../../source/ref-changelog.md:226 msgid "" -"**New iOS CoreML code example** " -"([#1289](https://github.com/adap/flower/pull/1289))" +"**Introduce Flower Android SDK** " +"([#2131](https://github.com/adap/flower/pull/2131))" msgstr "" -#: ../../source/ref-changelog.md:493 +#: ../../source/ref-changelog.md:228 msgid "" -"Flower goes iOS! A massive new code example shows how Flower clients can " -"be built for iOS. The code example contains both Flower iOS SDK " -"components that can be used for many tasks, and one task example running " -"on CoreML." +"This is the first preview release of the Flower Kotlin SDK. Flower " +"support on Android is improving, and alongside the Kotlin SDK and code " +"example, there is now also an Android quickstart tutorial." msgstr "" -#: ../../source/ref-changelog.md:495 +#: ../../source/ref-changelog.md:230 msgid "" -"**New FedMedian strategy** " -"([#1461](https://github.com/adap/flower/pull/1461))" +"**Introduce new end-to-end testing infrastructure** " +"([#1842](https://github.com/adap/flower/pull/1842), " +"[#2071](https://github.com/adap/flower/pull/2071), " +"[#2072](https://github.com/adap/flower/pull/2072), " +"[#2068](https://github.com/adap/flower/pull/2068), " +"[#2067](https://github.com/adap/flower/pull/2067), " +"[#2069](https://github.com/adap/flower/pull/2069), " +"[#2073](https://github.com/adap/flower/pull/2073), " +"[#2070](https://github.com/adap/flower/pull/2070), " +"[#2074](https://github.com/adap/flower/pull/2074), " +"[#2082](https://github.com/adap/flower/pull/2082), " +"[#2084](https://github.com/adap/flower/pull/2084), " +"[#2093](https://github.com/adap/flower/pull/2093), " +"[#2109](https://github.com/adap/flower/pull/2109), " +"[#2095](https://github.com/adap/flower/pull/2095), " +"[#2140](https://github.com/adap/flower/pull/2140), " +"[#2137](https://github.com/adap/flower/pull/2137), " +"[#2165](https://github.com/adap/flower/pull/2165))" msgstr "" -#: ../../source/ref-changelog.md:497 +#: ../../source/ref-changelog.md:232 msgid "" -"The new `FedMedian` strategy implements Federated Median (FedMedian) by " -"[Yin et al., 2018](https://arxiv.org/pdf/1803.01498v1.pdf)." +"A new testing infrastructure ensures that new changes stay compatible " +"with existing framework integrations or strategies." 
msgstr "" -#: ../../source/ref-changelog.md:499 +#: ../../source/ref-changelog.md:234 +msgid "**Deprecate Python 3.7**" +msgstr "" + +#: ../../source/ref-changelog.md:236 msgid "" -"**Log** `Client` **exceptions in Virtual Client Engine** " -"([#1493](https://github.com/adap/flower/pull/1493))" +"Since Python 3.7 reached its end of life (EOL) on 2023-06-27, support for" +" Python 3.7 is now deprecated and will be removed in an upcoming release." msgstr "" -#: ../../source/ref-changelog.md:501 +#: ../../source/ref-changelog.md:238 msgid "" -"All `Client` exceptions happening in the VCE are now logged by default " -"and not just exposed to the configured `Strategy` (via the `failures` " -"argument)." +"**Add new** `FedTrimmedAvg` **strategy** " +"([#1769](https://github.com/adap/flower/pull/1769), " +"[#1853](https://github.com/adap/flower/pull/1853))" msgstr "" -#: ../../source/ref-changelog.md:503 +#: ../../source/ref-changelog.md:240 msgid "" -"**Improve Virtual Client Engine internals** " -"([#1401](https://github.com/adap/flower/pull/1401), " -"[#1453](https://github.com/adap/flower/pull/1453))" +"The new `FedTrimmedAvg` strategy implements Trimmed Mean by [Dong Yin, " +"2018](https://arxiv.org/abs/1803.01498)." msgstr "" -#: ../../source/ref-changelog.md:505 +#: ../../source/ref-changelog.md:242 msgid "" -"Some internals of the Virtual Client Engine have been revamped. The VCE " -"now uses Ray 2.0 under the hood, the value type of the `client_resources`" -" dictionary changed to `float` to allow fractions of resources to be " -"allocated." +"**Introduce start_driver** " +"([#1697](https://github.com/adap/flower/pull/1697))" msgstr "" -#: ../../source/ref-changelog.md:507 +#: ../../source/ref-changelog.md:244 msgid "" -"**Support optional** `Client`**/**`NumPyClient` **methods in Virtual " -"Client Engine**" +"In addition to `start_server` and using the raw Driver API, there is a " +"new `start_driver` function that allows for running `start_server` " +"scripts as a Flower driver with only a single-line code change. Check out" +" the `mt-pytorch` code example to see a working example using " +"`start_driver`." msgstr "" -#: ../../source/ref-changelog.md:509 +#: ../../source/ref-changelog.md:246 msgid "" -"The Virtual Client Engine now has full support for optional `Client` (and" -" `NumPyClient`) methods." +"**Add parameter aggregation to** `mt-pytorch` **code example** " +"([#1785](https://github.com/adap/flower/pull/1785))" msgstr "" -#: ../../source/ref-changelog.md:511 +#: ../../source/ref-changelog.md:248 msgid "" -"**Provide type information to packages using** `flwr` " -"([#1377](https://github.com/adap/flower/pull/1377))" +"The `mt-pytorch` example shows how to aggregate parameters when writing a" +" driver script. The included `driver.py` and `server.py` have been " +"aligned to demonstrate both the low-level way and the high-level way of " +"building server-side logic." msgstr "" -#: ../../source/ref-changelog.md:513 +#: ../../source/ref-changelog.md:250 msgid "" -"The package `flwr` is now bundled with a `py.typed` file indicating that " -"the package is typed. This enables typing support for projects or " -"packages that use `flwr` by enabling them to improve their code using " -"static type checkers like `mypy`." 
+"**Migrate experimental REST API to Starlette** " +"([2171](https://github.com/adap/flower/pull/2171))" msgstr "" -#: ../../source/ref-changelog.md:515 +#: ../../source/ref-changelog.md:252 msgid "" -"**Updated code example** " -"([#1344](https://github.com/adap/flower/pull/1344), " -"[#1347](https://github.com/adap/flower/pull/1347))" +"The (experimental) REST API used to be implemented in " +"[FastAPI](https://fastapi.tiangolo.com/), but it has now been migrated to" +" use [Starlette](https://www.starlette.io/) directly." msgstr "" -#: ../../source/ref-changelog.md:517 +#: ../../source/ref-changelog.md:254 msgid "" -"The code examples covering scikit-learn and PyTorch Lightning have been " -"updated to work with the latest version of Flower." +"Please note: The REST request-response API is still experimental and will" +" likely change significantly over time." msgstr "" -#: ../../source/ref-changelog.md:519 +#: ../../source/ref-changelog.md:256 msgid "" -"**Updated documentation** " -"([#1355](https://github.com/adap/flower/pull/1355), " -"[#1558](https://github.com/adap/flower/pull/1558), " -"[#1379](https://github.com/adap/flower/pull/1379), " -"[#1380](https://github.com/adap/flower/pull/1380), " -"[#1381](https://github.com/adap/flower/pull/1381), " -"[#1332](https://github.com/adap/flower/pull/1332), " -"[#1391](https://github.com/adap/flower/pull/1391), " -"[#1403](https://github.com/adap/flower/pull/1403), " -"[#1364](https://github.com/adap/flower/pull/1364), " -"[#1409](https://github.com/adap/flower/pull/1409), " -"[#1419](https://github.com/adap/flower/pull/1419), " -"[#1444](https://github.com/adap/flower/pull/1444), " -"[#1448](https://github.com/adap/flower/pull/1448), " -"[#1417](https://github.com/adap/flower/pull/1417), " -"[#1449](https://github.com/adap/flower/pull/1449), " -"[#1465](https://github.com/adap/flower/pull/1465), " -"[#1467](https://github.com/adap/flower/pull/1467))" +"**Introduce experimental gRPC request-response API** " +"([#1867](https://github.com/adap/flower/pull/1867), " +"[#1901](https://github.com/adap/flower/pull/1901))" msgstr "" -#: ../../source/ref-changelog.md:521 +#: ../../source/ref-changelog.md:258 msgid "" -"There have been so many documentation updates that it doesn't even make " -"sense to list them individually." +"In addition to the existing gRPC API (based on bidirectional streaming) " +"and the experimental REST API, there is now a new gRPC API that uses a " +"request-response model to communicate with client nodes." msgstr "" -#: ../../source/ref-changelog.md:523 +#: ../../source/ref-changelog.md:260 msgid "" -"**Restructured documentation** " -"([#1387](https://github.com/adap/flower/pull/1387))" +"Please note: The gRPC request-response API is still experimental and will" +" likely change significantly over time." msgstr "" -#: ../../source/ref-changelog.md:525 +#: ../../source/ref-changelog.md:262 msgid "" -"The documentation has been restructured to make it easier to navigate. " -"This is just the first step in a larger effort to make the Flower " -"documentation the best documentation of any project ever. Stay tuned!" 
+"**Replace the experimental** `start_client(rest=True)` **with the new** " +"`start_client(transport=\"rest\")` " +"([#1880](https://github.com/adap/flower/pull/1880))" msgstr "" -#: ../../source/ref-changelog.md:527 +#: ../../source/ref-changelog.md:264 msgid "" -"**Open in Colab button** " -"([#1389](https://github.com/adap/flower/pull/1389))" +"The (experimental) `start_client` argument `rest` was deprecated in " +"favour of a new argument `transport`. `start_client(transport=\"rest\")` " +"will yield the same behaviour as `start_client(rest=True)` did before. " +"All code should migrate to the new argument `transport`. The deprecated " +"argument `rest` will be removed in a future release." msgstr "" -#: ../../source/ref-changelog.md:529 +#: ../../source/ref-changelog.md:266 msgid "" -"The four parts of the Flower Federated Learning Tutorial now come with a " -"new `Open in Colab` button. No need to install anything on your local " -"machine, you can now use and learn about Flower in your browser, it's " -"only a single click away." +"**Add a new gRPC option** " +"([#2197](https://github.com/adap/flower/pull/2197))" msgstr "" -#: ../../source/ref-changelog.md:531 +#: ../../source/ref-changelog.md:268 msgid "" -"**Improved tutorial** ([#1468](https://github.com/adap/flower/pull/1468)," -" [#1470](https://github.com/adap/flower/pull/1470), " -"[#1472](https://github.com/adap/flower/pull/1472), " -"[#1473](https://github.com/adap/flower/pull/1473), " -"[#1474](https://github.com/adap/flower/pull/1474), " -"[#1475](https://github.com/adap/flower/pull/1475))" +"We now start a gRPC server with the `grpc.keepalive_permit_without_calls`" +" option set to 0 by default. This prevents the clients from sending " +"keepalive pings when there is no outstanding stream." msgstr "" -#: ../../source/ref-changelog.md:533 +#: ../../source/ref-changelog.md:270 msgid "" -"The Flower Federated Learning Tutorial has two brand-new parts covering " -"custom strategies (still WIP) and the distinction between `Client` and " -"`NumPyClient`. The existing parts one and two have also been improved " -"(many small changes and fixes)." +"**Improve example notebooks** " +"([#2005](https://github.com/adap/flower/pull/2005))" msgstr "" -#: ../../source/ref-changelog.md:539 -msgid "v1.0.0 (2022-07-28)" +#: ../../source/ref-changelog.md:272 +msgid "There's a new 30min Federated Learning PyTorch tutorial!" msgstr "" -#: ../../source/ref-changelog.md:541 -msgid "Highlights" +#: ../../source/ref-changelog.md:274 +msgid "" +"**Example updates** ([#1772](https://github.com/adap/flower/pull/1772), " +"[#1873](https://github.com/adap/flower/pull/1873), " +"[#1981](https://github.com/adap/flower/pull/1981), " +"[#1988](https://github.com/adap/flower/pull/1988), " +"[#1984](https://github.com/adap/flower/pull/1984), " +"[#1982](https://github.com/adap/flower/pull/1982), " +"[#2112](https://github.com/adap/flower/pull/2112), " +"[#2144](https://github.com/adap/flower/pull/2144), " +"[#2174](https://github.com/adap/flower/pull/2174), " +"[#2225](https://github.com/adap/flower/pull/2225), " +"[#2183](https://github.com/adap/flower/pull/2183))" msgstr "" -#: ../../source/ref-changelog.md:543 -msgid "Stable **Virtual Client Engine** (accessible via `start_simulation`)" +#: ../../source/ref-changelog.md:276 +msgid "" +"Many examples have received significant updates, including simplified " +"advanced-tensorflow and advanced-pytorch examples, improved macOS " +"compatibility of TensorFlow examples, and code examples for simulation. 
A" +" major upgrade is that all code examples now have a `requirements.txt` " +"(in addition to `pyproject.toml`)." msgstr "" -#: ../../source/ref-changelog.md:544 -msgid "All `Client`/`NumPyClient` methods are now optional" +#: ../../source/ref-changelog.md:278 +msgid "" +"**General improvements** " +"([#1872](https://github.com/adap/flower/pull/1872), " +"[#1866](https://github.com/adap/flower/pull/1866), " +"[#1884](https://github.com/adap/flower/pull/1884), " +"[#1837](https://github.com/adap/flower/pull/1837), " +"[#1477](https://github.com/adap/flower/pull/1477), " +"[#2171](https://github.com/adap/flower/pull/2171))" msgstr "" -#: ../../source/ref-changelog.md:545 -msgid "Configurable `get_parameters`" +#: ../../source/ref-changelog.md:284 ../../source/ref-changelog.md:348 +#: ../../source/ref-changelog.md:406 ../../source/ref-changelog.md:475 +#: ../../source/ref-changelog.md:537 +msgid "None" msgstr "" -#: ../../source/ref-changelog.md:546 -msgid "" -"Tons of small API cleanups resulting in a more coherent developer " -"experience" +#: ../../source/ref-changelog.md:286 +msgid "v1.4.0 (2023-04-21)" msgstr "" -#: ../../source/ref-changelog.md:550 +#: ../../source/ref-changelog.md:292 msgid "" -"We would like to give our **special thanks** to all the contributors who " -"made Flower 1.0 possible (in reverse [GitHub " -"Contributors](https://github.com/adap/flower/graphs/contributors) order):" +"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " +"`Chenyang Ma (Danny)`, `Daniel J. Beutel`, `Edoardo`, `Gautam Jajoo`, " +"`Iacob-Alexandru-Andrei`, `JDRanpariya`, `Jean Charle Yaacoub`, `Kunal " +"Sarkhel`, `L. Jiang`, `Lennart Behme`, `Max Kapsecker`, `Michał`, `Nic " +"Lane`, `Nikolaos Episkopos`, `Ragy`, `Saurav Maheshkar`, `Semo Yang`, " +"`Steve Laskaridis`, `Steven Hé (Sīchàng)`, `Taner Topal`" msgstr "" -#: ../../source/ref-changelog.md:552 +#: ../../source/ref-changelog.md:296 msgid "" -"[@rtaiello](https://github.com/rtaiello), " -"[@g-pichler](https://github.com/g-pichler), [@rob-" -"luke](https://github.com/rob-luke), [@andreea-zaharia](https://github.com" -"/andreea-zaharia), [@kinshukdua](https://github.com/kinshukdua), " -"[@nfnt](https://github.com/nfnt), " -"[@tatiana-s](https://github.com/tatiana-s), " -"[@TParcollet](https://github.com/TParcollet), " -"[@vballoli](https://github.com/vballoli), " -"[@negedng](https://github.com/negedng), " -"[@RISHIKESHAVAN](https://github.com/RISHIKESHAVAN), " -"[@hei411](https://github.com/hei411), " -"[@SebastianSpeitel](https://github.com/SebastianSpeitel), " -"[@AmitChaulwar](https://github.com/AmitChaulwar), " -"[@Rubiel1](https://github.com/Rubiel1), [@FANTOME-PAN](https://github.com" -"/FANTOME-PAN), [@Rono-BC](https://github.com/Rono-BC), " -"[@lbhm](https://github.com/lbhm), " -"[@sishtiaq](https://github.com/sishtiaq), " -"[@remde](https://github.com/remde), [@Jueun-Park](https://github.com" -"/Jueun-Park), [@architjen](https://github.com/architjen), " -"[@PratikGarai](https://github.com/PratikGarai), " -"[@mrinaald](https://github.com/mrinaald), " -"[@zliel](https://github.com/zliel), " -"[@MeiruiJiang](https://github.com/MeiruiJiang), " -"[@sancarlim](https://github.com/sancarlim), " -"[@gubertoli](https://github.com/gubertoli), " -"[@Vingt100](https://github.com/Vingt100), " -"[@MakGulati](https://github.com/MakGulati), " -"[@cozek](https://github.com/cozek), " -"[@jafermarq](https://github.com/jafermarq), " -"[@sisco0](https://github.com/sisco0), " -"[@akhilmathurs](https://github.com/akhilmathurs), " 
-"[@CanTuerk](https://github.com/CanTuerk), " -"[@mariaboerner1987](https://github.com/mariaboerner1987), " -"[@pedropgusmao](https://github.com/pedropgusmao), " -"[@tanertopal](https://github.com/tanertopal), " -"[@danieljanes](https://github.com/danieljanes)." -msgstr "" - -#: ../../source/ref-changelog.md:556 -msgid "" -"**All arguments must be passed as keyword arguments** " -"([#1338](https://github.com/adap/flower/pull/1338))" +"**Introduce support for XGBoost (**`FedXgbNnAvg` **strategy and " +"example)** ([#1694](https://github.com/adap/flower/pull/1694), " +"[#1709](https://github.com/adap/flower/pull/1709), " +"[#1715](https://github.com/adap/flower/pull/1715), " +"[#1717](https://github.com/adap/flower/pull/1717), " +"[#1763](https://github.com/adap/flower/pull/1763), " +"[#1795](https://github.com/adap/flower/pull/1795))" msgstr "" -#: ../../source/ref-changelog.md:558 +#: ../../source/ref-changelog.md:298 msgid "" -"Pass all arguments as keyword arguments, positional arguments are not " -"longer supported. Code that uses positional arguments (e.g., " -"`start_client(\"127.0.0.1:8080\", FlowerClient())`) must add the keyword " -"for each positional argument (e.g., " -"`start_client(server_address=\"127.0.0.1:8080\", " -"client=FlowerClient())`)." +"XGBoost is a tree-based ensemble machine learning algorithm that uses " +"gradient boosting to improve model accuracy. We added a new `FedXgbNnAvg`" +" " +"[strategy](https://github.com/adap/flower/tree/main/src/py/flwr/server/strategy/fedxgb_nn_avg.py)," +" and a [code example](https://github.com/adap/flower/tree/main/examples" +"/xgboost-quickstart) that demonstrates the usage of this new strategy in " +"an XGBoost project." msgstr "" -#: ../../source/ref-changelog.md:560 +#: ../../source/ref-changelog.md:300 msgid "" -"**Introduce configuration object** `ServerConfig` **in** `start_server` " -"**and** `start_simulation` " -"([#1317](https://github.com/adap/flower/pull/1317))" +"**Introduce iOS SDK (preview)** " +"([#1621](https://github.com/adap/flower/pull/1621), " +"[#1764](https://github.com/adap/flower/pull/1764))" msgstr "" -#: ../../source/ref-changelog.md:562 +#: ../../source/ref-changelog.md:302 msgid "" -"Instead of a config dictionary `{\"num_rounds\": 3, \"round_timeout\": " -"600.0}`, `start_server` and `start_simulation` now expect a configuration" -" object of type `flwr.server.ServerConfig`. `ServerConfig` takes the same" -" arguments that as the previous config dict, but it makes writing type-" -"safe code easier and the default parameters values more transparent." +"This is a major update for anyone wanting to implement Federated Learning" +" on iOS mobile devices. We now have a swift iOS SDK present under " +"[src/swift/flwr](https://github.com/adap/flower/tree/main/src/swift/flwr)" +" that will facilitate greatly the app creating process. To showcase its " +"use, the [iOS " +"example](https://github.com/adap/flower/tree/main/examples/ios) has also " +"been updated!" 
 msgstr ""

-#: ../../source/ref-changelog.md:564
+#: ../../source/ref-changelog.md:304
 msgid ""
-"**Rename built-in strategy parameters for clarity** "
-"([#1334](https://github.com/adap/flower/pull/1334))"
+"**Introduce new \"What is Federated Learning?\" tutorial** "
+"([#1657](https://github.com/adap/flower/pull/1657), "
+"[#1721](https://github.com/adap/flower/pull/1721))"
 msgstr ""

-#: ../../source/ref-changelog.md:566
+#: ../../source/ref-changelog.md:306
 msgid ""
-"The following built-in strategy parameters were renamed to improve "
-"readability and consistency with other API's:"
-msgstr ""
-
-#: ../../source/ref-changelog.md:568
-msgid "`fraction_eval` --> `fraction_evaluate`"
-msgstr ""
-
-#: ../../source/ref-changelog.md:569
-msgid "`min_eval_clients` --> `min_evaluate_clients`"
-msgstr ""
-
-#: ../../source/ref-changelog.md:570
-msgid "`eval_fn` --> `evaluate_fn`"
+"A new [entry-level tutorial](https://flower.ai/docs/framework/tutorial-"
+"what-is-federated-learning.html) in our documentation explains the basics"
+" of Federated Learning. It enables anyone who's unfamiliar with Federated"
+" Learning to start their journey with Flower. Forward it to anyone who's "
+"interested in Federated Learning!"
 msgstr ""

-#: ../../source/ref-changelog.md:572
+#: ../../source/ref-changelog.md:308
 msgid ""
-"**Update default arguments of built-in strategies** "
-"([#1278](https://github.com/adap/flower/pull/1278))"
+"**Introduce new Flower Baseline: FedProx MNIST** "
+"([#1513](https://github.com/adap/flower/pull/1513), "
+"[#1680](https://github.com/adap/flower/pull/1680), "
+"[#1681](https://github.com/adap/flower/pull/1681), "
+"[#1679](https://github.com/adap/flower/pull/1679))"
 msgstr ""

-#: ../../source/ref-changelog.md:574
+#: ../../source/ref-changelog.md:310
 msgid ""
-"All built-in strategies now use `fraction_fit=1.0` and "
-"`fraction_evaluate=1.0`, which means they select *all* currently "
-"available clients for training and evaluation. Projects that relied on "
-"the previous default values can get the previous behaviour by "
-"initializing the strategy in the following way:"
-msgstr ""
-
-#: ../../source/ref-changelog.md:576
-msgid "`strategy = FedAvg(fraction_fit=0.1, fraction_evaluate=0.1)`"
+"This new baseline replicates the MNIST+CNN task from the paper [Federated"
+" Optimization in Heterogeneous Networks (Li et al., "
+"2018)](https://arxiv.org/abs/1812.06127). It uses the `FedProx` strategy,"
+" which aims at making convergence more robust in heterogeneous settings."
 msgstr ""

-#: ../../source/ref-changelog.md:578
 msgid ""
-"**Add** `server_round` **to** `Strategy.evaluate` "
-"([#1334](https://github.com/adap/flower/pull/1334))"
+"**Introduce new Flower Baseline: FedAvg FEMNIST** "
+"([#1655](https://github.com/adap/flower/pull/1655))"
 msgstr ""

-#: ../../source/ref-changelog.md:580
+#: ../../source/ref-changelog.md:314
 msgid ""
-"The `Strategy` method `evaluate` now receives the current round of "
-"federated learning/evaluation as the first parameter."
+"This new baseline replicates an experiment evaluating the performance of "
+"the FedAvg algorithm on the FEMNIST dataset from the paper [LEAF: A "
+"Benchmark for Federated Settings (Caldas et al., "
+"2018)](https://arxiv.org/abs/1812.01097)."
msgstr "" -#: ../../source/ref-changelog.md:582 +#: ../../source/ref-changelog.md:316 msgid "" -"**Add** `server_round` **and** `config` **parameters to** `evaluate_fn` " -"([#1334](https://github.com/adap/flower/pull/1334))" +"**Introduce (experimental) REST API** " +"([#1594](https://github.com/adap/flower/pull/1594), " +"[#1690](https://github.com/adap/flower/pull/1690), " +"[#1695](https://github.com/adap/flower/pull/1695), " +"[#1712](https://github.com/adap/flower/pull/1712), " +"[#1802](https://github.com/adap/flower/pull/1802), " +"[#1770](https://github.com/adap/flower/pull/1770), " +"[#1733](https://github.com/adap/flower/pull/1733))" msgstr "" -#: ../../source/ref-changelog.md:584 +#: ../../source/ref-changelog.md:318 msgid "" -"The `evaluate_fn` passed to built-in strategies like `FedAvg` now takes " -"three parameters: (1) The current round of federated learning/evaluation " -"(`server_round`), (2) the model parameters to evaluate (`parameters`), " -"and (3) a config dictionary (`config`)." +"A new REST API has been introduced as an alternative to the gRPC-based " +"communication stack. In this initial version, the REST API only supports " +"anonymous clients." msgstr "" -#: ../../source/ref-changelog.md:586 +#: ../../source/ref-changelog.md:320 msgid "" -"**Rename** `rnd` **to** `server_round` " -"([#1321](https://github.com/adap/flower/pull/1321))" +"Please note: The REST API is still experimental and will likely change " +"significantly over time." msgstr "" -#: ../../source/ref-changelog.md:588 +#: ../../source/ref-changelog.md:322 msgid "" -"Several Flower methods and functions (`evaluate_fn`, `configure_fit`, " -"`aggregate_fit`, `configure_evaluate`, `aggregate_evaluate`) receive the " -"current round of federated learning/evaluation as their first parameter. " -"To improve reaability and avoid confusion with *random*, this parameter " -"has been renamed from `rnd` to `server_round`." +"**Improve the (experimental) Driver API** " +"([#1663](https://github.com/adap/flower/pull/1663), " +"[#1666](https://github.com/adap/flower/pull/1666), " +"[#1667](https://github.com/adap/flower/pull/1667), " +"[#1664](https://github.com/adap/flower/pull/1664), " +"[#1675](https://github.com/adap/flower/pull/1675), " +"[#1676](https://github.com/adap/flower/pull/1676), " +"[#1693](https://github.com/adap/flower/pull/1693), " +"[#1662](https://github.com/adap/flower/pull/1662), " +"[#1794](https://github.com/adap/flower/pull/1794))" msgstr "" -#: ../../source/ref-changelog.md:590 +#: ../../source/ref-changelog.md:324 msgid "" -"**Move** `flwr.dataset` **to** `flwr_baselines` " -"([#1273](https://github.com/adap/flower/pull/1273))" -msgstr "" - -#: ../../source/ref-changelog.md:592 -msgid "The experimental package `flwr.dataset` was migrated to Flower Baselines." +"The Driver API is still an experimental feature, but this release " +"introduces some major upgrades. One of the main improvements is the " +"introduction of an SQLite database to store server state on disk (instead" +" of in-memory). Another improvement is that tasks (instructions or " +"results) that have been delivered will now be deleted. This greatly " +"improves the memory efficiency of a long-running Flower server." 
msgstr "" -#: ../../source/ref-changelog.md:594 +#: ../../source/ref-changelog.md:326 msgid "" -"**Remove experimental strategies** " -"([#1280](https://github.com/adap/flower/pull/1280))" +"**Fix spilling issues related to Ray during simulations** " +"([#1698](https://github.com/adap/flower/pull/1698))" msgstr "" -#: ../../source/ref-changelog.md:596 +#: ../../source/ref-changelog.md:328 msgid "" -"Remove unmaintained experimental strategies (`FastAndSlow`, `FedFSv0`, " -"`FedFSv1`)." +"While running long simulations, `ray` was sometimes spilling huge amounts" +" of data that would make the training unable to continue. This is now " +"fixed! 🎉" msgstr "" -#: ../../source/ref-changelog.md:598 +#: ../../source/ref-changelog.md:330 msgid "" -"**Rename** `Weights` **to** `NDArrays` " -"([#1258](https://github.com/adap/flower/pull/1258), " -"[#1259](https://github.com/adap/flower/pull/1259))" +"**Add new example using** `TabNet` **and Flower** " +"([#1725](https://github.com/adap/flower/pull/1725))" msgstr "" -#: ../../source/ref-changelog.md:600 +#: ../../source/ref-changelog.md:332 msgid "" -"`flwr.common.Weights` was renamed to `flwr.common.NDArrays` to better " -"capture what this type is all about." +"TabNet is a powerful and flexible framework for training machine learning" +" models on tabular data. We now have a federated example using Flower: " +"[quickstart-tabnet](https://github.com/adap/flower/tree/main/examples" +"/quickstart-tabnet)." msgstr "" -#: ../../source/ref-changelog.md:602 +#: ../../source/ref-changelog.md:334 msgid "" -"**Remove antiquated** `force_final_distributed_eval` **from** " -"`start_server` ([#1258](https://github.com/adap/flower/pull/1258), " -"[#1259](https://github.com/adap/flower/pull/1259))" +"**Add new how-to guide for monitoring simulations** " +"([#1649](https://github.com/adap/flower/pull/1649))" msgstr "" -#: ../../source/ref-changelog.md:604 +#: ../../source/ref-changelog.md:336 msgid "" -"The `start_server` parameter `force_final_distributed_eval` has long been" -" a historic artefact, in this release it is finally gone for good." +"We now have a documentation guide to help users monitor their performance" +" during simulations." msgstr "" -#: ../../source/ref-changelog.md:606 +#: ../../source/ref-changelog.md:338 msgid "" -"**Make** `get_parameters` **configurable** " -"([#1242](https://github.com/adap/flower/pull/1242))" +"**Add training metrics to** `History` **object during simulations** " +"([#1696](https://github.com/adap/flower/pull/1696))" msgstr "" -#: ../../source/ref-changelog.md:608 +#: ../../source/ref-changelog.md:340 msgid "" -"The `get_parameters` method now accepts a configuration dictionary, just " -"like `get_properties`, `fit`, and `evaluate`." +"The `fit_metrics_aggregation_fn` can be used to aggregate training " +"metrics, but previous releases did not save the results in the `History` " +"object. This is now the case!" msgstr "" -#: ../../source/ref-changelog.md:610 +#: ../../source/ref-changelog.md:342 msgid "" -"**Replace** `num_rounds` **in** `start_simulation` **with new** `config` " -"**parameter** ([#1281](https://github.com/adap/flower/pull/1281))" -msgstr "" - -#: ../../source/ref-changelog.md:612 -msgid "" -"The `start_simulation` function now accepts a configuration dictionary " -"`config` instead of the `num_rounds` integer. This improves the " -"consistency between `start_simulation` and `start_server` and makes " -"transitioning between the two easier." 
-msgstr "" - -#: ../../source/ref-changelog.md:616 -msgid "" -"**Support Python 3.10** " -"([#1320](https://github.com/adap/flower/pull/1320))" +"**General improvements** " +"([#1659](https://github.com/adap/flower/pull/1659), " +"[#1646](https://github.com/adap/flower/pull/1646), " +"[#1647](https://github.com/adap/flower/pull/1647), " +"[#1471](https://github.com/adap/flower/pull/1471), " +"[#1648](https://github.com/adap/flower/pull/1648), " +"[#1651](https://github.com/adap/flower/pull/1651), " +"[#1652](https://github.com/adap/flower/pull/1652), " +"[#1653](https://github.com/adap/flower/pull/1653), " +"[#1659](https://github.com/adap/flower/pull/1659), " +"[#1665](https://github.com/adap/flower/pull/1665), " +"[#1670](https://github.com/adap/flower/pull/1670), " +"[#1672](https://github.com/adap/flower/pull/1672), " +"[#1677](https://github.com/adap/flower/pull/1677), " +"[#1684](https://github.com/adap/flower/pull/1684), " +"[#1683](https://github.com/adap/flower/pull/1683), " +"[#1686](https://github.com/adap/flower/pull/1686), " +"[#1682](https://github.com/adap/flower/pull/1682), " +"[#1685](https://github.com/adap/flower/pull/1685), " +"[#1692](https://github.com/adap/flower/pull/1692), " +"[#1705](https://github.com/adap/flower/pull/1705), " +"[#1708](https://github.com/adap/flower/pull/1708), " +"[#1711](https://github.com/adap/flower/pull/1711), " +"[#1713](https://github.com/adap/flower/pull/1713), " +"[#1714](https://github.com/adap/flower/pull/1714), " +"[#1718](https://github.com/adap/flower/pull/1718), " +"[#1716](https://github.com/adap/flower/pull/1716), " +"[#1723](https://github.com/adap/flower/pull/1723), " +"[#1735](https://github.com/adap/flower/pull/1735), " +"[#1678](https://github.com/adap/flower/pull/1678), " +"[#1750](https://github.com/adap/flower/pull/1750), " +"[#1753](https://github.com/adap/flower/pull/1753), " +"[#1736](https://github.com/adap/flower/pull/1736), " +"[#1766](https://github.com/adap/flower/pull/1766), " +"[#1760](https://github.com/adap/flower/pull/1760), " +"[#1775](https://github.com/adap/flower/pull/1775), " +"[#1776](https://github.com/adap/flower/pull/1776), " +"[#1777](https://github.com/adap/flower/pull/1777), " +"[#1779](https://github.com/adap/flower/pull/1779), " +"[#1784](https://github.com/adap/flower/pull/1784), " +"[#1773](https://github.com/adap/flower/pull/1773), " +"[#1755](https://github.com/adap/flower/pull/1755), " +"[#1789](https://github.com/adap/flower/pull/1789), " +"[#1788](https://github.com/adap/flower/pull/1788), " +"[#1798](https://github.com/adap/flower/pull/1798), " +"[#1799](https://github.com/adap/flower/pull/1799), " +"[#1739](https://github.com/adap/flower/pull/1739), " +"[#1800](https://github.com/adap/flower/pull/1800), " +"[#1804](https://github.com/adap/flower/pull/1804), " +"[#1805](https://github.com/adap/flower/pull/1805))" msgstr "" -#: ../../source/ref-changelog.md:618 -msgid "" -"The previous Flower release introduced experimental support for Python " -"3.10, this release declares Python 3.10 support as stable." +#: ../../source/ref-changelog.md:350 +msgid "v1.3.0 (2023-02-06)" msgstr "" -#: ../../source/ref-changelog.md:620 +#: ../../source/ref-changelog.md:356 msgid "" -"**Make all** `Client` **and** `NumPyClient` **methods optional** " -"([#1260](https://github.com/adap/flower/pull/1260), " -"[#1277](https://github.com/adap/flower/pull/1277))" +"`Adam Narozniak`, `Alexander Viala Bellander`, `Charles Beauville`, " +"`Daniel J. 
Beutel`, `JDRanpariya`, `Lennart Behme`, `Taner Topal`" msgstr "" -#: ../../source/ref-changelog.md:622 +#: ../../source/ref-changelog.md:360 msgid "" -"The `Client`/`NumPyClient` methods `get_properties`, `get_parameters`, " -"`fit`, and `evaluate` are all optional. This enables writing clients that" -" implement, for example, only `fit`, but no other method. No need to " -"implement `evaluate` when using centralized evaluation!" +"**Add support for** `workload_id` **and** `group_id` **in Driver API** " +"([#1595](https://github.com/adap/flower/pull/1595))" msgstr "" -#: ../../source/ref-changelog.md:624 +#: ../../source/ref-changelog.md:362 msgid "" -"**Enable passing a** `Server` **instance to** `start_simulation` " -"([#1281](https://github.com/adap/flower/pull/1281))" +"The (experimental) Driver API now supports a `workload_id` that can be " +"used to identify which workload a task belongs to. It also supports a new" +" `group_id` that can be used, for example, to indicate the current " +"training round. Both the `workload_id` and `group_id` enable client nodes" +" to decide whether they want to handle a task or not." msgstr "" -#: ../../source/ref-changelog.md:626 +#: ../../source/ref-changelog.md:364 msgid "" -"Similar to `start_server`, `start_simulation` now accepts a full `Server`" -" instance. This enables users to heavily customize the execution of " -"eperiments and opens the door to running, for example, async FL using the" -" Virtual Client Engine." +"**Make Driver API and Fleet API address configurable** " +"([#1637](https://github.com/adap/flower/pull/1637))" msgstr "" -#: ../../source/ref-changelog.md:628 +#: ../../source/ref-changelog.md:366 msgid "" -"**Update code examples** " -"([#1291](https://github.com/adap/flower/pull/1291), " -"[#1286](https://github.com/adap/flower/pull/1286), " -"[#1282](https://github.com/adap/flower/pull/1282))" +"The (experimental) long-running Flower server (Driver API and Fleet API) " +"can now configure the server address of both Driver API (via `--driver-" +"api-address`) and Fleet API (via `--fleet-api-address`) when starting:" msgstr "" -#: ../../source/ref-changelog.md:630 +#: ../../source/ref-changelog.md:368 msgid "" -"Many code examples received small or even large maintenance updates, " -"among them are" -msgstr "" - -#: ../../source/ref-changelog.md:632 -msgid "`scikit-learn`" -msgstr "" - -#: ../../source/ref-changelog.md:633 -msgid "`simulation_pytorch`" -msgstr "" - -#: ../../source/ref-changelog.md:634 -msgid "`quickstart_pytorch`" -msgstr "" - -#: ../../source/ref-changelog.md:635 -msgid "`quickstart_simulation`" -msgstr "" - -#: ../../source/ref-changelog.md:636 -msgid "`quickstart_tensorflow`" +"`flower-server --driver-api-address \"0.0.0.0:8081\" --fleet-api-address " +"\"0.0.0.0:8086\"`" msgstr "" -#: ../../source/ref-changelog.md:637 -msgid "`advanced_tensorflow`" +#: ../../source/ref-changelog.md:370 +msgid "Both IPv4 and IPv6 addresses are supported." 
msgstr "" -#: ../../source/ref-changelog.md:639 +#: ../../source/ref-changelog.md:372 msgid "" -"**Remove the obsolete simulation example** " -"([#1328](https://github.com/adap/flower/pull/1328))" +"**Add new example of Federated Learning using fastai and Flower** " +"([#1598](https://github.com/adap/flower/pull/1598))" msgstr "" -#: ../../source/ref-changelog.md:641 +#: ../../source/ref-changelog.md:374 msgid "" -"Removes the obsolete `simulation` example and renames " -"`quickstart_simulation` to `simulation_tensorflow` so it fits withs the " -"naming of `simulation_pytorch`" +"A new code example (`quickstart-fastai`) demonstrates federated learning " +"with [fastai](https://www.fast.ai/) and Flower. You can find it here: " +"[quickstart-fastai](https://github.com/adap/flower/tree/main/examples" +"/quickstart-fastai)." msgstr "" -#: ../../source/ref-changelog.md:643 +#: ../../source/ref-changelog.md:376 msgid "" -"**Update documentation** " -"([#1223](https://github.com/adap/flower/pull/1223), " -"[#1209](https://github.com/adap/flower/pull/1209), " -"[#1251](https://github.com/adap/flower/pull/1251), " -"[#1257](https://github.com/adap/flower/pull/1257), " -"[#1267](https://github.com/adap/flower/pull/1267), " -"[#1268](https://github.com/adap/flower/pull/1268), " -"[#1300](https://github.com/adap/flower/pull/1300), " -"[#1304](https://github.com/adap/flower/pull/1304), " -"[#1305](https://github.com/adap/flower/pull/1305), " -"[#1307](https://github.com/adap/flower/pull/1307))" +"**Make Android example compatible with** `flwr >= 1.0.0` **and the latest" +" versions of Android** " +"([#1603](https://github.com/adap/flower/pull/1603))" msgstr "" -#: ../../source/ref-changelog.md:645 +#: ../../source/ref-changelog.md:378 msgid "" -"One substantial documentation update fixes multiple smaller rendering " -"issues, makes titles more succinct to improve navigation, removes a " -"deprecated library, updates documentation dependencies, includes the " -"`flwr.common` module in the API reference, includes support for markdown-" -"based documentation, migrates the changelog from `.rst` to `.md`, and " -"fixes a number of smaller details!" -msgstr "" - -#: ../../source/ref-changelog.md:647 ../../source/ref-changelog.md:702 -#: ../../source/ref-changelog.md:771 ../../source/ref-changelog.md:810 -msgid "**Minor updates**" +"The Android code example has received a substantial update: the project " +"is compatible with Flower 1.0 (and later), the UI received a full " +"refresh, and the project is updated to be compatible with newer Android " +"tooling." msgstr "" -#: ../../source/ref-changelog.md:649 +#: ../../source/ref-changelog.md:380 msgid "" -"Add round number to fit and evaluate log messages " -"([#1266](https://github.com/adap/flower/pull/1266))" +"**Add new `FedProx` strategy** " +"([#1619](https://github.com/adap/flower/pull/1619))" msgstr "" -#: ../../source/ref-changelog.md:650 +#: ../../source/ref-changelog.md:382 msgid "" -"Add secure gRPC connection to the `advanced_tensorflow` code example " -"([#847](https://github.com/adap/flower/pull/847))" +"This " +"[strategy](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedprox.py)" +" is almost identical to " +"[`FedAvg`](https://github.com/adap/flower/blob/main/src/py/flwr/server/strategy/fedavg.py)," +" but helps users replicate what is described in this " +"[paper](https://arxiv.org/abs/1812.06127). 
It essentially adds a " +"parameter called `proximal_mu` to regularize the local models with " +"respect to the global models." msgstr "" -#: ../../source/ref-changelog.md:651 +#: ../../source/ref-changelog.md:384 msgid "" -"Update developer tooling " -"([#1231](https://github.com/adap/flower/pull/1231), " -"[#1276](https://github.com/adap/flower/pull/1276), " -"[#1301](https://github.com/adap/flower/pull/1301), " -"[#1310](https://github.com/adap/flower/pull/1310))" +"**Add new metrics to telemetry events** " +"([#1640](https://github.com/adap/flower/pull/1640))" msgstr "" -#: ../../source/ref-changelog.md:652 +#: ../../source/ref-changelog.md:386 msgid "" -"Rename ProtoBuf messages to improve consistency " -"([#1214](https://github.com/adap/flower/pull/1214), " -"[#1258](https://github.com/adap/flower/pull/1258), " -"[#1259](https://github.com/adap/flower/pull/1259))" +"An updated event structure allows, for example, the clustering of events " +"within the same workload." msgstr "" -#: ../../source/ref-changelog.md:654 -msgid "v0.19.0 (2022-05-18)" +#: ../../source/ref-changelog.md:388 +msgid "" +"**Add new custom strategy tutorial section** " +"[#1623](https://github.com/adap/flower/pull/1623)" msgstr "" -#: ../../source/ref-changelog.md:658 +#: ../../source/ref-changelog.md:390 msgid "" -"**Flower Baselines (preview): FedOpt, FedBN, FedAvgM** " -"([#919](https://github.com/adap/flower/pull/919), " -"[#1127](https://github.com/adap/flower/pull/1127), " -"[#914](https://github.com/adap/flower/pull/914))" +"The Flower tutorial now has a new section that covers implementing a " +"custom strategy from scratch: [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source" +"/tutorial-build-a-strategy-from-scratch-pytorch.ipynb)" msgstr "" -#: ../../source/ref-changelog.md:660 +#: ../../source/ref-changelog.md:392 msgid "" -"The first preview release of Flower Baselines has arrived! We're " -"kickstarting Flower Baselines with implementations of FedOpt (FedYogi, " -"FedAdam, FedAdagrad), FedBN, and FedAvgM. Check the documentation on how " -"to use [Flower Baselines](https://flower.ai/docs/using-baselines.html). " -"With this first preview release we're also inviting the community to " -"[contribute their own baselines](https://flower.ai/docs/contributing-" -"baselines.html)." +"**Add new custom serialization tutorial section** " +"([#1622](https://github.com/adap/flower/pull/1622))" msgstr "" -#: ../../source/ref-changelog.md:662 +#: ../../source/ref-changelog.md:394 msgid "" -"**C++ client SDK (preview) and code example** " -"([#1111](https://github.com/adap/flower/pull/1111))" +"The Flower tutorial now has a new section that covers custom " +"serialization: [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/doc/source" +"/tutorial-customize-the-client-pytorch.ipynb)" msgstr "" -#: ../../source/ref-changelog.md:664 +#: ../../source/ref-changelog.md:396 msgid "" -"Preview support for Flower clients written in C++. The C++ preview " -"includes a Flower client SDK and a quickstart code example that " -"demonstrates a simple C++ client using the SDK." 
+"**General improvements** " +"([#1638](https://github.com/adap/flower/pull/1638), " +"[#1634](https://github.com/adap/flower/pull/1634), " +"[#1636](https://github.com/adap/flower/pull/1636), " +"[#1635](https://github.com/adap/flower/pull/1635), " +"[#1633](https://github.com/adap/flower/pull/1633), " +"[#1632](https://github.com/adap/flower/pull/1632), " +"[#1631](https://github.com/adap/flower/pull/1631), " +"[#1630](https://github.com/adap/flower/pull/1630), " +"[#1627](https://github.com/adap/flower/pull/1627), " +"[#1593](https://github.com/adap/flower/pull/1593), " +"[#1616](https://github.com/adap/flower/pull/1616), " +"[#1615](https://github.com/adap/flower/pull/1615), " +"[#1607](https://github.com/adap/flower/pull/1607), " +"[#1609](https://github.com/adap/flower/pull/1609), " +"[#1608](https://github.com/adap/flower/pull/1608), " +"[#1603](https://github.com/adap/flower/pull/1603), " +"[#1590](https://github.com/adap/flower/pull/1590), " +"[#1580](https://github.com/adap/flower/pull/1580), " +"[#1599](https://github.com/adap/flower/pull/1599), " +"[#1600](https://github.com/adap/flower/pull/1600), " +"[#1601](https://github.com/adap/flower/pull/1601), " +"[#1597](https://github.com/adap/flower/pull/1597), " +"[#1595](https://github.com/adap/flower/pull/1595), " +"[#1591](https://github.com/adap/flower/pull/1591), " +"[#1588](https://github.com/adap/flower/pull/1588), " +"[#1589](https://github.com/adap/flower/pull/1589), " +"[#1587](https://github.com/adap/flower/pull/1587), " +"[#1573](https://github.com/adap/flower/pull/1573), " +"[#1581](https://github.com/adap/flower/pull/1581), " +"[#1578](https://github.com/adap/flower/pull/1578), " +"[#1574](https://github.com/adap/flower/pull/1574), " +"[#1572](https://github.com/adap/flower/pull/1572), " +"[#1586](https://github.com/adap/flower/pull/1586))" msgstr "" -#: ../../source/ref-changelog.md:666 +#: ../../source/ref-changelog.md:400 msgid "" -"**Add experimental support for Python 3.10 and Python 3.11** " -"([#1135](https://github.com/adap/flower/pull/1135))" +"**Updated documentation** " +"([#1629](https://github.com/adap/flower/pull/1629), " +"[#1628](https://github.com/adap/flower/pull/1628), " +"[#1620](https://github.com/adap/flower/pull/1620), " +"[#1618](https://github.com/adap/flower/pull/1618), " +"[#1617](https://github.com/adap/flower/pull/1617), " +"[#1613](https://github.com/adap/flower/pull/1613), " +"[#1614](https://github.com/adap/flower/pull/1614))" msgstr "" -#: ../../source/ref-changelog.md:668 +#: ../../source/ref-changelog.md:402 ../../source/ref-changelog.md:469 msgid "" -"Python 3.10 is the latest stable release of Python and Python 3.11 is due" -" to be released in October. This Flower release adds experimental support" -" for both Python versions." +"As usual, the documentation has improved quite a bit. It is another step " +"in our effort to make the Flower documentation the best documentation of " +"any project. Stay tuned and as always, feel free to provide feedback!" msgstr "" -#: ../../source/ref-changelog.md:670 -msgid "" -"**Aggregate custom metrics through user-provided functions** " -"([#1144](https://github.com/adap/flower/pull/1144))" +#: ../../source/ref-changelog.md:408 +msgid "v1.2.0 (2023-01-13)" msgstr "" -#: ../../source/ref-changelog.md:672 +#: ../../source/ref-changelog.md:414 msgid "" -"Custom metrics (e.g., `accuracy`) can now be aggregated without having to" -" customize the strategy. 
Built-in strategies support two new arguments, " -"`fit_metrics_aggregation_fn` and `evaluate_metrics_aggregation_fn`, that " -"allow passing custom metric aggregation functions." +"`Adam Narozniak`, `Charles Beauville`, `Daniel J. Beutel`, `Edoardo`, `L." +" Jiang`, `Ragy`, `Taner Topal`, `dannymcy`" msgstr "" -#: ../../source/ref-changelog.md:674 +#: ../../source/ref-changelog.md:418 msgid "" -"**User-configurable round timeout** " -"([#1162](https://github.com/adap/flower/pull/1162))" +"**Introduce new Flower Baseline: FedAvg MNIST** " +"([#1497](https://github.com/adap/flower/pull/1497), " +"[#1552](https://github.com/adap/flower/pull/1552))" msgstr "" -#: ../../source/ref-changelog.md:676 +#: ../../source/ref-changelog.md:420 msgid "" -"A new configuration value allows the round timeout to be set for " -"`start_server` and `start_simulation`. If the `config` dictionary " -"contains a `round_timeout` key (with a `float` value in seconds), the " -"server will wait *at least* `round_timeout` seconds before it closes the " -"connection." +"Over the coming weeks, we will be releasing a number of new reference " +"implementations useful especially to FL newcomers. They will typically " +"revisit well known papers from the literature, and be suitable for " +"integration in your own application or for experimentation, in order to " +"deepen your knowledge of FL in general. Today's release is the first in " +"this series. [Read more.](https://flower.ai/blog/2023-01-12-fl-starter-" +"pack-fedavg-mnist-cnn/)" msgstr "" -#: ../../source/ref-changelog.md:678 +#: ../../source/ref-changelog.md:422 msgid "" -"**Enable both federated evaluation and centralized evaluation to be used " -"at the same time in all built-in strategies** " -"([#1091](https://github.com/adap/flower/pull/1091))" +"**Improve GPU support in simulations** " +"([#1555](https://github.com/adap/flower/pull/1555))" msgstr "" -#: ../../source/ref-changelog.md:680 +#: ../../source/ref-changelog.md:424 msgid "" -"Built-in strategies can now perform both federated evaluation (i.e., " -"client-side) and centralized evaluation (i.e., server-side) in the same " -"round. Federated evaluation can be disabled by setting `fraction_eval` to" -" `0.0`." +"The Ray-based Virtual Client Engine (`start_simulation`) has been updated" +" to improve GPU support. The update includes some of the hard-earned " +"lessons from scaling simulations in GPU cluster environments. New " +"defaults make running GPU-based simulations substantially more robust." msgstr "" -#: ../../source/ref-changelog.md:682 +#: ../../source/ref-changelog.md:426 msgid "" -"**Two new Jupyter Notebook tutorials** " -"([#1141](https://github.com/adap/flower/pull/1141))" +"**Improve GPU support in Jupyter Notebook tutorials** " +"([#1527](https://github.com/adap/flower/pull/1527), " +"[#1558](https://github.com/adap/flower/pull/1558))" msgstr "" -#: ../../source/ref-changelog.md:684 +#: ../../source/ref-changelog.md:428 msgid "" -"Two Jupyter Notebook tutorials (compatible with Google Colab) explain " -"basic and intermediate Flower features:" +"Some users reported that Jupyter Notebooks have not always been easy to " +"use on GPU instances. We listened and made improvements to all of our " +"Jupyter notebooks! 
Check out the updated notebooks here:" msgstr "" -#: ../../source/ref-changelog.md:686 +#: ../../source/ref-changelog.md:430 msgid "" -"*An Introduction to Federated Learning*: [Open in " -"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-1" -"-Intro-to-FL-PyTorch.ipynb)" +"[An Introduction to Federated Learning](https://flower.ai/docs/framework" +"/tutorial-get-started-with-flower-pytorch.html)" msgstr "" -#: ../../source/ref-changelog.md:688 +#: ../../source/ref-changelog.md:431 msgid "" -"*Using Strategies in Federated Learning*: [Open in " -"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-2" -"-Strategies-in-FL-PyTorch.ipynb)" +"[Strategies in Federated Learning](https://flower.ai/docs/framework" +"/tutorial-use-a-federated-learning-strategy-pytorch.html)" msgstr "" -#: ../../source/ref-changelog.md:690 +#: ../../source/ref-changelog.md:432 msgid "" -"**New FedAvgM strategy (Federated Averaging with Server Momentum)** " -"([#1076](https://github.com/adap/flower/pull/1076))" +"[Building a Strategy](https://flower.ai/docs/framework/tutorial-build-a" +"-strategy-from-scratch-pytorch.html)" msgstr "" -#: ../../source/ref-changelog.md:692 +#: ../../source/ref-changelog.md:433 msgid "" -"The new `FedAvgM` strategy implements Federated Averaging with Server " -"Momentum \\[Hsu et al., 2019\\]." +"[Client and NumPyClient](https://flower.ai/docs/framework/tutorial-" +"customize-the-client-pytorch.html)" msgstr "" -#: ../../source/ref-changelog.md:694 +#: ../../source/ref-changelog.md:435 msgid "" -"**New advanced PyTorch code example** " -"([#1007](https://github.com/adap/flower/pull/1007))" +"**Introduce optional telemetry** " +"([#1533](https://github.com/adap/flower/pull/1533), " +"[#1544](https://github.com/adap/flower/pull/1544), " +"[#1584](https://github.com/adap/flower/pull/1584))" msgstr "" -#: ../../source/ref-changelog.md:696 +#: ../../source/ref-changelog.md:437 msgid "" -"A new code example (`advanced_pytorch`) demonstrates advanced Flower " -"concepts with PyTorch." +"After a [request for " +"feedback](https://github.com/adap/flower/issues/1534) from the community," +" the Flower open-source project introduces optional collection of " +"*anonymous* usage metrics to make well-informed decisions to improve " +"Flower. Doing this enables the Flower team to understand how Flower is " +"used and what challenges users might face." msgstr "" -#: ../../source/ref-changelog.md:698 +#: ../../source/ref-changelog.md:439 msgid "" -"**New JAX code example** " -"([#906](https://github.com/adap/flower/pull/906), " -"[#1143](https://github.com/adap/flower/pull/1143))" +"**Flower is a friendly framework for collaborative AI and data science.**" +" Staying true to this statement, Flower makes it easy to disable " +"telemetry for users who do not want to share anonymous usage metrics. " +"[Read more.](https://flower.ai/docs/telemetry.html)." msgstr "" -#: ../../source/ref-changelog.md:700 +#: ../../source/ref-changelog.md:441 msgid "" -"A new code example (`jax_from_centralized_to_federated`) shows federated " -"learning with JAX and Flower." 
+"**Introduce (experimental) Driver API** " +"([#1520](https://github.com/adap/flower/pull/1520), " +"[#1525](https://github.com/adap/flower/pull/1525), " +"[#1545](https://github.com/adap/flower/pull/1545), " +"[#1546](https://github.com/adap/flower/pull/1546), " +"[#1550](https://github.com/adap/flower/pull/1550), " +"[#1551](https://github.com/adap/flower/pull/1551), " +"[#1567](https://github.com/adap/flower/pull/1567))" msgstr "" -#: ../../source/ref-changelog.md:704 +#: ../../source/ref-changelog.md:443 msgid "" -"New option to keep Ray running if Ray was already initialized in " -"`start_simulation` ([#1177](https://github.com/adap/flower/pull/1177))" +"Flower now has a new (experimental) Driver API which will enable fully " +"programmable, async, and multi-tenant Federated Learning and Federated " +"Analytics applications. Phew, that's a lot! Going forward, the Driver API" +" will be the abstraction that many upcoming features will be built on - " +"and you can start building those things now, too." msgstr "" -#: ../../source/ref-changelog.md:705 +#: ../../source/ref-changelog.md:445 msgid "" -"Add support for custom `ClientManager` as a `start_simulation` parameter " -"([#1171](https://github.com/adap/flower/pull/1171))" +"The Driver API also enables a new execution mode in which the server runs" +" indefinitely. Multiple individual workloads can run concurrently and " +"start and stop their execution independent of the server. This is " +"especially useful for users who want to deploy Flower in production." msgstr "" -#: ../../source/ref-changelog.md:706 +#: ../../source/ref-changelog.md:447 msgid "" -"New documentation for [implementing " -"strategies](https://flower.ai/docs/framework/how-to-implement-" -"strategies.html) ([#1097](https://github.com/adap/flower/pull/1097), " -"[#1175](https://github.com/adap/flower/pull/1175))" +"To learn more, check out the `mt-pytorch` code example. We look forward " +"to you feedback!" msgstr "" -#: ../../source/ref-changelog.md:707 +#: ../../source/ref-changelog.md:449 msgid "" -"New mobile-friendly documentation theme " -"([#1174](https://github.com/adap/flower/pull/1174))" +"Please note: *The Driver API is still experimental and will likely change" +" significantly over time.*" msgstr "" -#: ../../source/ref-changelog.md:708 +#: ../../source/ref-changelog.md:451 msgid "" -"Limit version range for (optional) `ray` dependency to include only " -"compatible releases (`>=1.9.2,<1.12.0`) " -"([#1205](https://github.com/adap/flower/pull/1205))" +"**Add new Federated Analytics with Pandas example** " +"([#1469](https://github.com/adap/flower/pull/1469), " +"[#1535](https://github.com/adap/flower/pull/1535))" msgstr "" -#: ../../source/ref-changelog.md:712 +#: ../../source/ref-changelog.md:453 msgid "" -"**Remove deprecated support for Python 3.6** " -"([#871](https://github.com/adap/flower/pull/871))" +"A new code example (`quickstart-pandas`) demonstrates federated analytics" +" with Pandas and Flower. You can find it here: [quickstart-" +"pandas](https://github.com/adap/flower/tree/main/examples/quickstart-" +"pandas)." 
msgstr "" -#: ../../source/ref-changelog.md:713 +#: ../../source/ref-changelog.md:455 msgid "" -"**Remove deprecated KerasClient** " -"([#857](https://github.com/adap/flower/pull/857))" +"**Add new strategies: Krum and MultiKrum** " +"([#1481](https://github.com/adap/flower/pull/1481))" msgstr "" -#: ../../source/ref-changelog.md:714 +#: ../../source/ref-changelog.md:457 msgid "" -"**Remove deprecated no-op extra installs** " -"([#973](https://github.com/adap/flower/pull/973))" +"Edoardo, a computer science student at the Sapienza University of Rome, " +"contributed a new `Krum` strategy that enables users to easily use Krum " +"and MultiKrum in their workloads." msgstr "" -#: ../../source/ref-changelog.md:715 +#: ../../source/ref-changelog.md:459 msgid "" -"**Remove deprecated proto fields from** `FitRes` **and** `EvaluateRes` " -"([#869](https://github.com/adap/flower/pull/869))" +"**Update C++ example to be compatible with Flower v1.2.0** " +"([#1495](https://github.com/adap/flower/pull/1495))" msgstr "" -#: ../../source/ref-changelog.md:716 +#: ../../source/ref-changelog.md:461 msgid "" -"**Remove deprecated QffedAvg strategy (replaced by QFedAvg)** " -"([#1107](https://github.com/adap/flower/pull/1107))" -msgstr "" +"The C++ code example has received a substantial update to make it " +"compatible with the latest version of Flower." +msgstr "" -#: ../../source/ref-changelog.md:717 +#: ../../source/ref-changelog.md:463 msgid "" -"**Remove deprecated DefaultStrategy strategy** " -"([#1142](https://github.com/adap/flower/pull/1142))" +"**General improvements** " +"([#1491](https://github.com/adap/flower/pull/1491), " +"[#1504](https://github.com/adap/flower/pull/1504), " +"[#1506](https://github.com/adap/flower/pull/1506), " +"[#1514](https://github.com/adap/flower/pull/1514), " +"[#1522](https://github.com/adap/flower/pull/1522), " +"[#1523](https://github.com/adap/flower/pull/1523), " +"[#1526](https://github.com/adap/flower/pull/1526), " +"[#1528](https://github.com/adap/flower/pull/1528), " +"[#1547](https://github.com/adap/flower/pull/1547), " +"[#1549](https://github.com/adap/flower/pull/1549), " +"[#1560](https://github.com/adap/flower/pull/1560), " +"[#1564](https://github.com/adap/flower/pull/1564), " +"[#1566](https://github.com/adap/flower/pull/1566))" msgstr "" -#: ../../source/ref-changelog.md:718 +#: ../../source/ref-changelog.md:467 msgid "" -"**Remove deprecated support for eval_fn accuracy return value** " -"([#1142](https://github.com/adap/flower/pull/1142))" +"**Updated documentation** " +"([#1494](https://github.com/adap/flower/pull/1494), " +"[#1496](https://github.com/adap/flower/pull/1496), " +"[#1500](https://github.com/adap/flower/pull/1500), " +"[#1503](https://github.com/adap/flower/pull/1503), " +"[#1505](https://github.com/adap/flower/pull/1505), " +"[#1524](https://github.com/adap/flower/pull/1524), " +"[#1518](https://github.com/adap/flower/pull/1518), " +"[#1519](https://github.com/adap/flower/pull/1519), " +"[#1515](https://github.com/adap/flower/pull/1515))" msgstr "" -#: ../../source/ref-changelog.md:719 +#: ../../source/ref-changelog.md:471 msgid "" -"**Remove deprecated support for passing initial parameters as NumPy " -"ndarrays** ([#1142](https://github.com/adap/flower/pull/1142))" +"One highlight is the new [first time contributor " +"guide](https://flower.ai/docs/first-time-contributors.html): if you've " +"never contributed on GitHub before, this is the perfect place to start!" 
msgstr "" -#: ../../source/ref-changelog.md:721 -msgid "v0.18.0 (2022-02-28)" +#: ../../source/ref-changelog.md:477 +msgid "v1.1.0 (2022-10-31)" msgstr "" -#: ../../source/ref-changelog.md:725 +#: ../../source/ref-changelog.md:481 msgid "" -"**Improved Virtual Client Engine compatibility with Jupyter Notebook / " -"Google Colab** ([#866](https://github.com/adap/flower/pull/866), " -"[#872](https://github.com/adap/flower/pull/872), " -"[#833](https://github.com/adap/flower/pull/833), " -"[#1036](https://github.com/adap/flower/pull/1036))" +"We would like to give our **special thanks** to all the contributors who " +"made the new version of Flower possible (in `git shortlog` order):" msgstr "" -#: ../../source/ref-changelog.md:727 +#: ../../source/ref-changelog.md:483 msgid "" -"Simulations (using the Virtual Client Engine through `start_simulation`) " -"now work more smoothly on Jupyter Notebooks (incl. Google Colab) after " -"installing Flower with the `simulation` extra (`pip install " -"flwr[simulation]`)." +"`Akis Linardos`, `Christopher S`, `Daniel J. Beutel`, `George`, `Jan " +"Schlicht`, `Mohammad Fares`, `Pedro Porto Buarque de Gusmão`, `Philipp " +"Wiesner`, `Rob Luke`, `Taner Topal`, `VasundharaAgarwal`, " +"`danielnugraha`, `edogab33`" msgstr "" -#: ../../source/ref-changelog.md:729 +#: ../../source/ref-changelog.md:487 msgid "" -"**New Jupyter Notebook code example** " -"([#833](https://github.com/adap/flower/pull/833))" +"**Introduce Differential Privacy wrappers (preview)** " +"([#1357](https://github.com/adap/flower/pull/1357), " +"[#1460](https://github.com/adap/flower/pull/1460))" msgstr "" -#: ../../source/ref-changelog.md:731 +#: ../../source/ref-changelog.md:489 msgid "" -"A new code example (`quickstart_simulation`) demonstrates Flower " -"simulations using the Virtual Client Engine through Jupyter Notebook " -"(incl. Google Colab)." +"The first (experimental) preview of pluggable Differential Privacy " +"wrappers enables easy configuration and usage of differential privacy " +"(DP). The pluggable DP wrappers enable framework-agnostic **and** " +"strategy-agnostic usage of both client-side DP and server-side DP. Head " +"over to the Flower docs, a new explainer goes into more detail." msgstr "" -#: ../../source/ref-changelog.md:733 +#: ../../source/ref-changelog.md:491 msgid "" -"**Client properties (feature preview)** " -"([#795](https://github.com/adap/flower/pull/795))" +"**New iOS CoreML code example** " +"([#1289](https://github.com/adap/flower/pull/1289))" msgstr "" -#: ../../source/ref-changelog.md:735 +#: ../../source/ref-changelog.md:493 msgid "" -"Clients can implement a new method `get_properties` to enable server-side" -" strategies to query client properties." +"Flower goes iOS! A massive new code example shows how Flower clients can " +"be built for iOS. The code example contains both Flower iOS SDK " +"components that can be used for many tasks, and one task example running " +"on CoreML." msgstr "" -#: ../../source/ref-changelog.md:737 +#: ../../source/ref-changelog.md:495 msgid "" -"**Experimental Android support with TFLite** " -"([#865](https://github.com/adap/flower/pull/865))" +"**New FedMedian strategy** " +"([#1461](https://github.com/adap/flower/pull/1461))" msgstr "" -#: ../../source/ref-changelog.md:739 +#: ../../source/ref-changelog.md:497 msgid "" -"Android support has finally arrived in `main`! Flower is both client-" -"agnostic and framework-agnostic by design. 
One can integrate arbitrary " -"client platforms and with this release, using Flower on Android has " -"become a lot easier." +"The new `FedMedian` strategy implements Federated Median (FedMedian) by " +"[Yin et al., 2018](https://arxiv.org/pdf/1803.01498v1.pdf)." msgstr "" -#: ../../source/ref-changelog.md:741 +#: ../../source/ref-changelog.md:499 msgid "" -"The example uses TFLite on the client side, along with a new " -"`FedAvgAndroid` strategy. The Android client and `FedAvgAndroid` are " -"still experimental, but they are a first step towards a fully-fledged " -"Android SDK and a unified `FedAvg` implementation that integrated the new" -" functionality from `FedAvgAndroid`." +"**Log** `Client` **exceptions in Virtual Client Engine** " +"([#1493](https://github.com/adap/flower/pull/1493))" msgstr "" -#: ../../source/ref-changelog.md:743 +#: ../../source/ref-changelog.md:501 msgid "" -"**Make gRPC keepalive time user-configurable and decrease default " -"keepalive time** ([#1069](https://github.com/adap/flower/pull/1069))" +"All `Client` exceptions happening in the VCE are now logged by default " +"and not just exposed to the configured `Strategy` (via the `failures` " +"argument)." msgstr "" -#: ../../source/ref-changelog.md:745 +#: ../../source/ref-changelog.md:503 msgid "" -"The default gRPC keepalive time has been reduced to increase the " -"compatibility of Flower with more cloud environments (for example, " -"Microsoft Azure). Users can configure the keepalive time to customize the" -" gRPC stack based on specific requirements." +"**Improve Virtual Client Engine internals** " +"([#1401](https://github.com/adap/flower/pull/1401), " +"[#1453](https://github.com/adap/flower/pull/1453))" msgstr "" -#: ../../source/ref-changelog.md:747 +#: ../../source/ref-changelog.md:505 msgid "" -"**New differential privacy example using Opacus and PyTorch** " -"([#805](https://github.com/adap/flower/pull/805))" +"Some internals of the Virtual Client Engine have been revamped. The VCE " +"now uses Ray 2.0 under the hood, the value type of the `client_resources`" +" dictionary changed to `float` to allow fractions of resources to be " +"allocated." msgstr "" -#: ../../source/ref-changelog.md:749 +#: ../../source/ref-changelog.md:507 msgid "" -"A new code example (`opacus`) demonstrates differentially-private " -"federated learning with Opacus, PyTorch, and Flower." +"**Support optional** `Client`**/**`NumPyClient` **methods in Virtual " +"Client Engine**" msgstr "" -#: ../../source/ref-changelog.md:751 +#: ../../source/ref-changelog.md:509 msgid "" -"**New Hugging Face Transformers code example** " -"([#863](https://github.com/adap/flower/pull/863))" +"The Virtual Client Engine now has full support for optional `Client` (and" +" `NumPyClient`) methods." msgstr "" -#: ../../source/ref-changelog.md:753 +#: ../../source/ref-changelog.md:511 msgid "" -"A new code example (`quickstart_huggingface`) demonstrates usage of " -"Hugging Face Transformers with Flower." 
+"**Provide type information to packages using** `flwr` " +"([#1377](https://github.com/adap/flower/pull/1377))" msgstr "" -#: ../../source/ref-changelog.md:755 +#: ../../source/ref-changelog.md:513 msgid "" -"**New MLCube code example** " -"([#779](https://github.com/adap/flower/pull/779), " -"[#1034](https://github.com/adap/flower/pull/1034), " -"[#1065](https://github.com/adap/flower/pull/1065), " -"[#1090](https://github.com/adap/flower/pull/1090))" +"The package `flwr` is now bundled with a `py.typed` file indicating that " +"the package is typed. This enables typing support for projects or " +"packages that use `flwr` by enabling them to improve their code using " +"static type checkers like `mypy`." msgstr "" -#: ../../source/ref-changelog.md:757 +#: ../../source/ref-changelog.md:515 msgid "" -"A new code example (`quickstart_mlcube`) demonstrates usage of MLCube " -"with Flower." +"**Updated code example** " +"([#1344](https://github.com/adap/flower/pull/1344), " +"[#1347](https://github.com/adap/flower/pull/1347))" msgstr "" -#: ../../source/ref-changelog.md:759 +#: ../../source/ref-changelog.md:517 msgid "" -"**SSL-enabled server and client** " -"([#842](https://github.com/adap/flower/pull/842), " -"[#844](https://github.com/adap/flower/pull/844), " -"[#845](https://github.com/adap/flower/pull/845), " -"[#847](https://github.com/adap/flower/pull/847), " -"[#993](https://github.com/adap/flower/pull/993), " -"[#994](https://github.com/adap/flower/pull/994))" +"The code examples covering scikit-learn and PyTorch Lightning have been " +"updated to work with the latest version of Flower." msgstr "" -#: ../../source/ref-changelog.md:761 +#: ../../source/ref-changelog.md:519 msgid "" -"SSL enables secure encrypted connections between clients and servers. " -"This release open-sources the Flower secure gRPC implementation to make " -"encrypted communication channels accessible to all Flower users." +"**Updated documentation** " +"([#1355](https://github.com/adap/flower/pull/1355), " +"[#1558](https://github.com/adap/flower/pull/1558), " +"[#1379](https://github.com/adap/flower/pull/1379), " +"[#1380](https://github.com/adap/flower/pull/1380), " +"[#1381](https://github.com/adap/flower/pull/1381), " +"[#1332](https://github.com/adap/flower/pull/1332), " +"[#1391](https://github.com/adap/flower/pull/1391), " +"[#1403](https://github.com/adap/flower/pull/1403), " +"[#1364](https://github.com/adap/flower/pull/1364), " +"[#1409](https://github.com/adap/flower/pull/1409), " +"[#1419](https://github.com/adap/flower/pull/1419), " +"[#1444](https://github.com/adap/flower/pull/1444), " +"[#1448](https://github.com/adap/flower/pull/1448), " +"[#1417](https://github.com/adap/flower/pull/1417), " +"[#1449](https://github.com/adap/flower/pull/1449), " +"[#1465](https://github.com/adap/flower/pull/1465), " +"[#1467](https://github.com/adap/flower/pull/1467))" msgstr "" -#: ../../source/ref-changelog.md:763 +#: ../../source/ref-changelog.md:521 msgid "" -"**Updated** `FedAdam` **and** `FedYogi` **strategies** " -"([#885](https://github.com/adap/flower/pull/885), " -"[#895](https://github.com/adap/flower/pull/895))" +"There have been so many documentation updates that it doesn't even make " +"sense to list them individually." msgstr "" -#: ../../source/ref-changelog.md:765 +#: ../../source/ref-changelog.md:523 msgid "" -"`FedAdam` and `FedAdam` match the latest version of the Adaptive " -"Federated Optimization paper." 
+"**Restructured documentation** " +"([#1387](https://github.com/adap/flower/pull/1387))" msgstr "" -#: ../../source/ref-changelog.md:767 +#: ../../source/ref-changelog.md:525 msgid "" -"**Initialize** `start_simulation` **with a list of client IDs** " -"([#860](https://github.com/adap/flower/pull/860))" +"The documentation has been restructured to make it easier to navigate. " +"This is just the first step in a larger effort to make the Flower " +"documentation the best documentation of any project ever. Stay tuned!" msgstr "" -#: ../../source/ref-changelog.md:769 +#: ../../source/ref-changelog.md:527 msgid "" -"`start_simulation` can now be called with a list of client IDs " -"(`clients_ids`, type: `List[str]`). Those IDs will be passed to the " -"`client_fn` whenever a client needs to be initialized, which can make it " -"easier to load data partitions that are not accessible through `int` " -"identifiers." +"**Open in Colab button** " +"([#1389](https://github.com/adap/flower/pull/1389))" msgstr "" -#: ../../source/ref-changelog.md:773 +#: ../../source/ref-changelog.md:529 msgid "" -"Update `num_examples` calculation in PyTorch code examples in " -"([#909](https://github.com/adap/flower/pull/909))" +"The four parts of the Flower Federated Learning Tutorial now come with a " +"new `Open in Colab` button. No need to install anything on your local " +"machine, you can now use and learn about Flower in your browser, it's " +"only a single click away." msgstr "" -#: ../../source/ref-changelog.md:774 +#: ../../source/ref-changelog.md:531 msgid "" -"Expose Flower version through `flwr.__version__` " -"([#952](https://github.com/adap/flower/pull/952))" -msgstr "" - -#: ../../source/ref-changelog.md:775 -msgid "" -"`start_server` in `app.py` now returns a `History` object containing " -"metrics from training ([#974](https://github.com/adap/flower/pull/974))" +"**Improved tutorial** ([#1468](https://github.com/adap/flower/pull/1468)," +" [#1470](https://github.com/adap/flower/pull/1470), " +"[#1472](https://github.com/adap/flower/pull/1472), " +"[#1473](https://github.com/adap/flower/pull/1473), " +"[#1474](https://github.com/adap/flower/pull/1474), " +"[#1475](https://github.com/adap/flower/pull/1475))" msgstr "" -#: ../../source/ref-changelog.md:776 +#: ../../source/ref-changelog.md:533 msgid "" -"Make `max_workers` (used by `ThreadPoolExecutor`) configurable " -"([#978](https://github.com/adap/flower/pull/978))" +"The Flower Federated Learning Tutorial has two brand-new parts covering " +"custom strategies (still WIP) and the distinction between `Client` and " +"`NumPyClient`. The existing parts one and two have also been improved " +"(many small changes and fixes)." msgstr "" -#: ../../source/ref-changelog.md:777 -msgid "" -"Increase sleep time after server start to three seconds in all code " -"examples ([#1086](https://github.com/adap/flower/pull/1086))" +#: ../../source/ref-changelog.md:539 +msgid "v1.0.0 (2022-07-28)" msgstr "" -#: ../../source/ref-changelog.md:778 -msgid "" -"Added a new FAQ section to the documentation " -"([#948](https://github.com/adap/flower/pull/948))" +#: ../../source/ref-changelog.md:541 +msgid "Highlights" msgstr "" -#: ../../source/ref-changelog.md:779 -msgid "" -"And many more under-the-hood changes, library updates, documentation " -"changes, and tooling improvements!" 
+#: ../../source/ref-changelog.md:543 +msgid "Stable **Virtual Client Engine** (accessible via `start_simulation`)" msgstr "" -#: ../../source/ref-changelog.md:783 -msgid "" -"**Removed** `flwr_example` **and** `flwr_experimental` **from release " -"build** ([#869](https://github.com/adap/flower/pull/869))" +#: ../../source/ref-changelog.md:544 +msgid "All `Client`/`NumPyClient` methods are now optional" msgstr "" -#: ../../source/ref-changelog.md:785 -msgid "" -"The packages `flwr_example` and `flwr_experimental` have been deprecated " -"since Flower 0.12.0 and they are not longer included in Flower release " -"builds. The associated extras (`baseline`, `examples-pytorch`, `examples-" -"tensorflow`, `http-logger`, `ops`) are now no-op and will be removed in " -"an upcoming release." +#: ../../source/ref-changelog.md:545 +msgid "Configurable `get_parameters`" msgstr "" -#: ../../source/ref-changelog.md:787 -msgid "v0.17.0 (2021-09-24)" +#: ../../source/ref-changelog.md:546 +msgid "" +"Tons of small API cleanups resulting in a more coherent developer " +"experience" msgstr "" -#: ../../source/ref-changelog.md:791 +#: ../../source/ref-changelog.md:550 msgid "" -"**Experimental virtual client engine** " -"([#781](https://github.com/adap/flower/pull/781) " -"[#790](https://github.com/adap/flower/pull/790) " -"[#791](https://github.com/adap/flower/pull/791))" +"We would like to give our **special thanks** to all the contributors who " +"made Flower 1.0 possible (in reverse [GitHub " +"Contributors](https://github.com/adap/flower/graphs/contributors) order):" msgstr "" -#: ../../source/ref-changelog.md:793 +#: ../../source/ref-changelog.md:552 msgid "" -"One of Flower's goals is to enable research at scale. This release " -"enables a first (experimental) peek at a major new feature, codenamed the" -" virtual client engine. Virtual clients enable simulations that scale to " -"a (very) large number of clients on a single machine or compute cluster. " -"The easiest way to test the new functionality is to look at the two new " -"code examples called `quickstart_simulation` and `simulation_pytorch`." 
+"[@rtaiello](https://github.com/rtaiello), " +"[@g-pichler](https://github.com/g-pichler), [@rob-" +"luke](https://github.com/rob-luke), [@andreea-zaharia](https://github.com" +"/andreea-zaharia), [@kinshukdua](https://github.com/kinshukdua), " +"[@nfnt](https://github.com/nfnt), " +"[@tatiana-s](https://github.com/tatiana-s), " +"[@TParcollet](https://github.com/TParcollet), " +"[@vballoli](https://github.com/vballoli), " +"[@negedng](https://github.com/negedng), " +"[@RISHIKESHAVAN](https://github.com/RISHIKESHAVAN), " +"[@hei411](https://github.com/hei411), " +"[@SebastianSpeitel](https://github.com/SebastianSpeitel), " +"[@AmitChaulwar](https://github.com/AmitChaulwar), " +"[@Rubiel1](https://github.com/Rubiel1), [@FANTOME-PAN](https://github.com" +"/FANTOME-PAN), [@Rono-BC](https://github.com/Rono-BC), " +"[@lbhm](https://github.com/lbhm), " +"[@sishtiaq](https://github.com/sishtiaq), " +"[@remde](https://github.com/remde), [@Jueun-Park](https://github.com" +"/Jueun-Park), [@architjen](https://github.com/architjen), " +"[@PratikGarai](https://github.com/PratikGarai), " +"[@mrinaald](https://github.com/mrinaald), " +"[@zliel](https://github.com/zliel), " +"[@MeiruiJiang](https://github.com/MeiruiJiang), " +"[@sancarlim](https://github.com/sancarlim), " +"[@gubertoli](https://github.com/gubertoli), " +"[@Vingt100](https://github.com/Vingt100), " +"[@MakGulati](https://github.com/MakGulati), " +"[@cozek](https://github.com/cozek), " +"[@jafermarq](https://github.com/jafermarq), " +"[@sisco0](https://github.com/sisco0), " +"[@akhilmathurs](https://github.com/akhilmathurs), " +"[@CanTuerk](https://github.com/CanTuerk), " +"[@mariaboerner1987](https://github.com/mariaboerner1987), " +"[@pedropgusmao](https://github.com/pedropgusmao), " +"[@tanertopal](https://github.com/tanertopal), " +"[@danieljanes](https://github.com/danieljanes)." msgstr "" -#: ../../source/ref-changelog.md:795 +#: ../../source/ref-changelog.md:556 msgid "" -"The feature is still experimental, so there's no stability guarantee for " -"the API. It's also not quite ready for prime time and comes with a few " -"known caveats. However, those who are curious are encouraged to try it " -"out and share their thoughts." +"**All arguments must be passed as keyword arguments** " +"([#1338](https://github.com/adap/flower/pull/1338))" msgstr "" -#: ../../source/ref-changelog.md:797 +#: ../../source/ref-changelog.md:558 msgid "" -"**New built-in strategies** " -"([#828](https://github.com/adap/flower/pull/828) " -"[#822](https://github.com/adap/flower/pull/822))" +"Pass all arguments as keyword arguments, positional arguments are not " +"longer supported. Code that uses positional arguments (e.g., " +"`start_client(\"127.0.0.1:8080\", FlowerClient())`) must add the keyword " +"for each positional argument (e.g., " +"`start_client(server_address=\"127.0.0.1:8080\", " +"client=FlowerClient())`)." msgstr "" -#: ../../source/ref-changelog.md:799 +#: ../../source/ref-changelog.md:560 msgid "" -"FedYogi - Federated learning strategy using Yogi on server-side. " -"Implementation based on https://arxiv.org/abs/2003.00295" +"**Introduce configuration object** `ServerConfig` **in** `start_server` " +"**and** `start_simulation` " +"([#1317](https://github.com/adap/flower/pull/1317))" msgstr "" -#: ../../source/ref-changelog.md:800 +#: ../../source/ref-changelog.md:562 msgid "" -"FedAdam - Federated learning strategy using Adam on server-side. 
" -"Implementation based on https://arxiv.org/abs/2003.00295" +"Instead of a config dictionary `{\"num_rounds\": 3, \"round_timeout\": " +"600.0}`, `start_server` and `start_simulation` now expect a configuration" +" object of type `flwr.server.ServerConfig`. `ServerConfig` takes the same" +" arguments that as the previous config dict, but it makes writing type-" +"safe code easier and the default parameters values more transparent." msgstr "" -#: ../../source/ref-changelog.md:802 +#: ../../source/ref-changelog.md:564 msgid "" -"**New PyTorch Lightning code example** " -"([#617](https://github.com/adap/flower/pull/617))" +"**Rename built-in strategy parameters for clarity** " +"([#1334](https://github.com/adap/flower/pull/1334))" msgstr "" -#: ../../source/ref-changelog.md:804 +#: ../../source/ref-changelog.md:566 msgid "" -"**New Variational Auto-Encoder code example** " -"([#752](https://github.com/adap/flower/pull/752))" +"The following built-in strategy parameters were renamed to improve " +"readability and consistency with other API's:" msgstr "" -#: ../../source/ref-changelog.md:806 -msgid "" -"**New scikit-learn code example** " -"([#748](https://github.com/adap/flower/pull/748))" +#: ../../source/ref-changelog.md:568 +msgid "`fraction_eval` --> `fraction_evaluate`" msgstr "" -#: ../../source/ref-changelog.md:808 -msgid "" -"**New experimental TensorBoard strategy** " -"([#789](https://github.com/adap/flower/pull/789))" +#: ../../source/ref-changelog.md:569 +msgid "`min_eval_clients` --> `min_evaluate_clients`" msgstr "" -#: ../../source/ref-changelog.md:812 -msgid "" -"Improved advanced TensorFlow code example " -"([#769](https://github.com/adap/flower/pull/769))" +#: ../../source/ref-changelog.md:570 +msgid "`eval_fn` --> `evaluate_fn`" msgstr "" -#: ../../source/ref-changelog.md:813 +#: ../../source/ref-changelog.md:572 msgid "" -"Warning when `min_available_clients` is misconfigured " -"([#830](https://github.com/adap/flower/pull/830))" +"**Update default arguments of built-in strategies** " +"([#1278](https://github.com/adap/flower/pull/1278))" msgstr "" -#: ../../source/ref-changelog.md:814 +#: ../../source/ref-changelog.md:574 msgid "" -"Improved gRPC server docs " -"([#841](https://github.com/adap/flower/pull/841))" +"All built-in strategies now use `fraction_fit=1.0` and " +"`fraction_evaluate=1.0`, which means they select *all* currently " +"available clients for training and evaluation. Projects that relied on " +"the previous default values can get the previous behaviour by " +"initializing the strategy in the following way:" msgstr "" -#: ../../source/ref-changelog.md:815 -msgid "" -"Improved error message in `NumPyClient` " -"([#851](https://github.com/adap/flower/pull/851))" +#: ../../source/ref-changelog.md:576 +msgid "`strategy = FedAvg(fraction_fit=0.1, fraction_evaluate=0.1)`" msgstr "" -#: ../../source/ref-changelog.md:816 +#: ../../source/ref-changelog.md:578 msgid "" -"Improved PyTorch quickstart code example " -"([#852](https://github.com/adap/flower/pull/852))" +"**Add** `server_round` **to** `Strategy.evaluate` " +"([#1334](https://github.com/adap/flower/pull/1334))" msgstr "" -#: ../../source/ref-changelog.md:820 +#: ../../source/ref-changelog.md:580 msgid "" -"**Disabled final distributed evaluation** " -"([#800](https://github.com/adap/flower/pull/800))" +"The `Strategy` method `evaluate` now receives the current round of " +"federated learning/evaluation as the first parameter." 
msgstr "" -#: ../../source/ref-changelog.md:822 +#: ../../source/ref-changelog.md:582 msgid "" -"Prior behaviour was to perform a final round of distributed evaluation on" -" all connected clients, which is often not required (e.g., when using " -"server-side evaluation). The prior behaviour can be enabled by passing " -"`force_final_distributed_eval=True` to `start_server`." +"**Add** `server_round` **and** `config` **parameters to** `evaluate_fn` " +"([#1334](https://github.com/adap/flower/pull/1334))" msgstr "" -#: ../../source/ref-changelog.md:824 +#: ../../source/ref-changelog.md:584 msgid "" -"**Renamed q-FedAvg strategy** " -"([#802](https://github.com/adap/flower/pull/802))" +"The `evaluate_fn` passed to built-in strategies like `FedAvg` now takes " +"three parameters: (1) The current round of federated learning/evaluation " +"(`server_round`), (2) the model parameters to evaluate (`parameters`), " +"and (3) a config dictionary (`config`)." msgstr "" -#: ../../source/ref-changelog.md:826 +#: ../../source/ref-changelog.md:586 msgid "" -"The strategy named `QffedAvg` was renamed to `QFedAvg` to better reflect " -"the notation given in the original paper (q-FFL is the optimization " -"objective, q-FedAvg is the proposed solver). Note the original (now " -"deprecated) `QffedAvg` class is still available for compatibility reasons" -" (it will be removed in a future release)." +"**Rename** `rnd` **to** `server_round` " +"([#1321](https://github.com/adap/flower/pull/1321))" msgstr "" -#: ../../source/ref-changelog.md:828 +#: ../../source/ref-changelog.md:588 msgid "" -"**Deprecated and renamed code example** `simulation_pytorch` **to** " -"`simulation_pytorch_legacy` " -"([#791](https://github.com/adap/flower/pull/791))" +"Several Flower methods and functions (`evaluate_fn`, `configure_fit`, " +"`aggregate_fit`, `configure_evaluate`, `aggregate_evaluate`) receive the " +"current round of federated learning/evaluation as their first parameter. " +"To improve reaability and avoid confusion with *random*, this parameter " +"has been renamed from `rnd` to `server_round`." msgstr "" -#: ../../source/ref-changelog.md:830 +#: ../../source/ref-changelog.md:590 msgid "" -"This example has been replaced by a new example. The new example is based" -" on the experimental virtual client engine, which will become the new " -"default way of doing most types of large-scale simulations in Flower. The" -" existing example was kept for reference purposes, but it might be " -"removed in the future." +"**Move** `flwr.dataset` **to** `flwr_baselines` " +"([#1273](https://github.com/adap/flower/pull/1273))" msgstr "" -#: ../../source/ref-changelog.md:832 -msgid "v0.16.0 (2021-05-11)" +#: ../../source/ref-changelog.md:592 +msgid "The experimental package `flwr.dataset` was migrated to Flower Baselines." msgstr "" -#: ../../source/ref-changelog.md:836 +#: ../../source/ref-changelog.md:594 msgid "" -"**New built-in strategies** " -"([#549](https://github.com/adap/flower/pull/549))" -msgstr "" - -#: ../../source/ref-changelog.md:838 -msgid "(abstract) FedOpt" +"**Remove experimental strategies** " +"([#1280](https://github.com/adap/flower/pull/1280))" msgstr "" -#: ../../source/ref-changelog.md:841 +#: ../../source/ref-changelog.md:596 msgid "" -"**Custom metrics for server and strategies** " -"([#717](https://github.com/adap/flower/pull/717))" +"Remove unmaintained experimental strategies (`FastAndSlow`, `FedFSv0`, " +"`FedFSv1`)." 
msgstr "" -#: ../../source/ref-changelog.md:843 +#: ../../source/ref-changelog.md:598 msgid "" -"The Flower server is now fully task-agnostic, all remaining instances of " -"task-specific metrics (such as `accuracy`) have been replaced by custom " -"metrics dictionaries. Flower 0.15 introduced the capability to pass a " -"dictionary containing custom metrics from client to server. As of this " -"release, custom metrics replace task-specific metrics on the server." +"**Rename** `Weights` **to** `NDArrays` " +"([#1258](https://github.com/adap/flower/pull/1258), " +"[#1259](https://github.com/adap/flower/pull/1259))" msgstr "" -#: ../../source/ref-changelog.md:845 +#: ../../source/ref-changelog.md:600 msgid "" -"Custom metric dictionaries are now used in two user-facing APIs: they are" -" returned from Strategy methods `aggregate_fit`/`aggregate_evaluate` and " -"they enable evaluation functions passed to built-in strategies (via " -"`eval_fn`) to return more than two evaluation metrics. Strategies can " -"even return *aggregated* metrics dictionaries for the server to keep " -"track of." +"`flwr.common.Weights` was renamed to `flwr.common.NDArrays` to better " +"capture what this type is all about." msgstr "" -#: ../../source/ref-changelog.md:847 +#: ../../source/ref-changelog.md:602 msgid "" -"Strategy implementations should migrate their `aggregate_fit` and " -"`aggregate_evaluate` methods to the new return type (e.g., by simply " -"returning an empty `{}`), server-side evaluation functions should migrate" -" from `return loss, accuracy` to `return loss, {\"accuracy\": accuracy}`." +"**Remove antiquated** `force_final_distributed_eval` **from** " +"`start_server` ([#1258](https://github.com/adap/flower/pull/1258), " +"[#1259](https://github.com/adap/flower/pull/1259))" msgstr "" -#: ../../source/ref-changelog.md:849 +#: ../../source/ref-changelog.md:604 msgid "" -"Flower 0.15-style return types are deprecated (but still supported), " -"compatibility will be removed in a future release." +"The `start_server` parameter `force_final_distributed_eval` has long been" +" a historic artefact, in this release it is finally gone for good." msgstr "" -#: ../../source/ref-changelog.md:851 +#: ../../source/ref-changelog.md:606 msgid "" -"**Migration warnings for deprecated functionality** " -"([#690](https://github.com/adap/flower/pull/690))" +"**Make** `get_parameters` **configurable** " +"([#1242](https://github.com/adap/flower/pull/1242))" msgstr "" -#: ../../source/ref-changelog.md:853 +#: ../../source/ref-changelog.md:608 msgid "" -"Earlier versions of Flower were often migrated to new APIs, while " -"maintaining compatibility with legacy APIs. This release introduces " -"detailed warning messages if usage of deprecated APIs is detected. The " -"new warning messages often provide details on how to migrate to more " -"recent APIs, thus easing the transition from one release to another." +"The `get_parameters` method now accepts a configuration dictionary, just " +"like `get_properties`, `fit`, and `evaluate`." 
msgstr "" -#: ../../source/ref-changelog.md:855 +#: ../../source/ref-changelog.md:610 msgid "" -"Improved docs and docstrings " -"([#691](https://github.com/adap/flower/pull/691) " -"[#692](https://github.com/adap/flower/pull/692) " -"[#713](https://github.com/adap/flower/pull/713))" +"**Replace** `num_rounds` **in** `start_simulation` **with new** `config` " +"**parameter** ([#1281](https://github.com/adap/flower/pull/1281))" msgstr "" -#: ../../source/ref-changelog.md:857 -msgid "MXNet example and documentation" +#: ../../source/ref-changelog.md:612 +msgid "" +"The `start_simulation` function now accepts a configuration dictionary " +"`config` instead of the `num_rounds` integer. This improves the " +"consistency between `start_simulation` and `start_server` and makes " +"transitioning between the two easier." msgstr "" -#: ../../source/ref-changelog.md:859 +#: ../../source/ref-changelog.md:616 msgid "" -"FedBN implementation in example PyTorch: From Centralized To Federated " -"([#696](https://github.com/adap/flower/pull/696) " -"[#702](https://github.com/adap/flower/pull/702) " -"[#705](https://github.com/adap/flower/pull/705))" +"**Support Python 3.10** " +"([#1320](https://github.com/adap/flower/pull/1320))" msgstr "" -#: ../../source/ref-changelog.md:863 +#: ../../source/ref-changelog.md:618 msgid "" -"**Serialization-agnostic server** " -"([#721](https://github.com/adap/flower/pull/721))" +"The previous Flower release introduced experimental support for Python " +"3.10, this release declares Python 3.10 support as stable." msgstr "" -#: ../../source/ref-changelog.md:865 +#: ../../source/ref-changelog.md:620 msgid "" -"The Flower server is now fully serialization-agnostic. Prior usage of " -"class `Weights` (which represents parameters as deserialized NumPy " -"ndarrays) was replaced by class `Parameters` (e.g., in `Strategy`). " -"`Parameters` objects are fully serialization-agnostic and represents " -"parameters as byte arrays, the `tensor_type` attributes indicates how " -"these byte arrays should be interpreted (e.g., for " -"serialization/deserialization)." +"**Make all** `Client` **and** `NumPyClient` **methods optional** " +"([#1260](https://github.com/adap/flower/pull/1260), " +"[#1277](https://github.com/adap/flower/pull/1277))" msgstr "" -#: ../../source/ref-changelog.md:867 +#: ../../source/ref-changelog.md:622 msgid "" -"Built-in strategies implement this approach by handling serialization and" -" deserialization to/from `Weights` internally. Custom/3rd-party Strategy " -"implementations should update to the slightly changed Strategy method " -"definitions. Strategy authors can consult PR " -"[#721](https://github.com/adap/flower/pull/721) to see how strategies can" -" easily migrate to the new format." +"The `Client`/`NumPyClient` methods `get_properties`, `get_parameters`, " +"`fit`, and `evaluate` are all optional. This enables writing clients that" +" implement, for example, only `fit`, but no other method. No need to " +"implement `evaluate` when using centralized evaluation!" 
 msgstr ""

-#: ../../source/ref-changelog.md:869
+#: ../../source/ref-changelog.md:624
 msgid ""
-"Deprecated `flwr.server.Server.evaluate`, use "
-"`flwr.server.Server.evaluate_round` instead "
-"([#717](https://github.com/adap/flower/pull/717))"
+"**Enable passing a** `Server` **instance to** `start_simulation` "
+"([#1281](https://github.com/adap/flower/pull/1281))"
 msgstr ""

-#: ../../source/ref-changelog.md:871
-msgid "v0.15.0 (2021-03-12)"
+#: ../../source/ref-changelog.md:626
+msgid ""
+"Similar to `start_server`, `start_simulation` now accepts a full `Server`"
+" instance. This enables users to heavily customize the execution of "
+"experiments and opens the door to running, for example, async FL using the"
+" Virtual Client Engine."
 msgstr ""

-#: ../../source/ref-changelog.md:875
+#: ../../source/ref-changelog.md:628
 msgid ""
-"**Server-side parameter initialization** "
-"([#658](https://github.com/adap/flower/pull/658))"
+"**Update code examples** "
+"([#1291](https://github.com/adap/flower/pull/1291), "
+"[#1286](https://github.com/adap/flower/pull/1286), "
+"[#1282](https://github.com/adap/flower/pull/1282))"
 msgstr ""

-#: ../../source/ref-changelog.md:877
+#: ../../source/ref-changelog.md:630
 msgid ""
-"Model parameters can now be initialized on the server-side. Server-side "
-"parameter initialization works via a new `Strategy` method called "
-"`initialize_parameters`."
+"Many code examples received small or even large maintenance updates, "
+"among them are"
 msgstr ""

-#: ../../source/ref-changelog.md:879
-msgid ""
-"Built-in strategies support a new constructor argument called "
-"`initial_parameters` to set the initial parameters. Built-in strategies "
-"will provide these initial parameters to the server on startup and then "
-"delete them to free the memory afterwards."
+#: ../../source/ref-changelog.md:632
+msgid "`scikit-learn`"
 msgstr ""

-#: ../../source/ref-changelog.md:898
-msgid ""
-"If no initial parameters are provided to the strategy, the server will "
-"continue to use the current behaviour (namely, it will ask one of the "
-"connected clients for its parameters and use these as the initial global "
-"parameters)."
+#: ../../source/ref-changelog.md:633
+msgid "`simulation_pytorch`"
 msgstr ""

-#: ../../source/ref-changelog.md:900
-msgid "Deprecations"
+#: ../../source/ref-changelog.md:634
+msgid "`quickstart_pytorch`"
 msgstr ""

-#: ../../source/ref-changelog.md:902
-msgid ""
-"Deprecate `flwr.server.strategy.DefaultStrategy` (migrate to "
-"`flwr.server.strategy.FedAvg`, which is equivalent)"
+#: ../../source/ref-changelog.md:635
+msgid "`quickstart_simulation`"
 msgstr ""

-#: ../../source/ref-changelog.md:904
-msgid "v0.14.0 (2021-02-18)"
+#: ../../source/ref-changelog.md:636
+msgid "`quickstart_tensorflow`"
 msgstr ""

-#: ../../source/ref-changelog.md:908
-msgid ""
-"**Generalized** `Client.fit` **and** `Client.evaluate` **return values** "
-"([#610](https://github.com/adap/flower/pull/610) "
-"[#572](https://github.com/adap/flower/pull/572) "
-"[#633](https://github.com/adap/flower/pull/633))"
+#: ../../source/ref-changelog.md:637
+msgid "`advanced_tensorflow`"
 msgstr ""

-#: ../../source/ref-changelog.md:910
+#: ../../source/ref-changelog.md:639
 msgid ""
-"Clients can now return an additional dictionary mapping `str` keys to "
-"values of the following types: `bool`, `bytes`, `float`, `int`, `str`. "
-"This means one can return almost arbitrary values from `fit`/`evaluate` "
-"and make use of them on the server side!"
+"**Remove the obsolete simulation example** " +"([#1328](https://github.com/adap/flower/pull/1328))" msgstr "" -#: ../../source/ref-changelog.md:912 +#: ../../source/ref-changelog.md:641 msgid "" -"This improvement also allowed for more consistent return types between " -"`fit` and `evaluate`: `evaluate` should now return a tuple `(float, int, " -"dict)` representing the loss, number of examples, and a dictionary " -"holding arbitrary problem-specific values like accuracy." +"Removes the obsolete `simulation` example and renames " +"`quickstart_simulation` to `simulation_tensorflow` so it fits withs the " +"naming of `simulation_pytorch`" msgstr "" -#: ../../source/ref-changelog.md:914 +#: ../../source/ref-changelog.md:643 msgid "" -"In case you wondered: this feature is compatible with existing projects, " -"the additional dictionary return value is optional. New code should " -"however migrate to the new return types to be compatible with upcoming " -"Flower releases (`fit`: `List[np.ndarray], int, Dict[str, Scalar]`, " -"`evaluate`: `float, int, Dict[str, Scalar]`). See the example below for " -"details." +"**Update documentation** " +"([#1223](https://github.com/adap/flower/pull/1223), " +"[#1209](https://github.com/adap/flower/pull/1209), " +"[#1251](https://github.com/adap/flower/pull/1251), " +"[#1257](https://github.com/adap/flower/pull/1257), " +"[#1267](https://github.com/adap/flower/pull/1267), " +"[#1268](https://github.com/adap/flower/pull/1268), " +"[#1300](https://github.com/adap/flower/pull/1300), " +"[#1304](https://github.com/adap/flower/pull/1304), " +"[#1305](https://github.com/adap/flower/pull/1305), " +"[#1307](https://github.com/adap/flower/pull/1307))" msgstr "" -#: ../../source/ref-changelog.md:916 +#: ../../source/ref-changelog.md:645 msgid "" -"*Code example:* note the additional dictionary return values in both " -"`FlwrClient.fit` and `FlwrClient.evaluate`:" +"One substantial documentation update fixes multiple smaller rendering " +"issues, makes titles more succinct to improve navigation, removes a " +"deprecated library, updates documentation dependencies, includes the " +"`flwr.common` module in the API reference, includes support for markdown-" +"based documentation, migrates the changelog from `.rst` to `.md`, and " +"fixes a number of smaller details!" msgstr "" -#: ../../source/ref-changelog.md:931 -msgid "" -"**Generalized** `config` **argument in** `Client.fit` **and** " -"`Client.evaluate` ([#595](https://github.com/adap/flower/pull/595))" +#: ../../source/ref-changelog.md:647 ../../source/ref-changelog.md:702 +#: ../../source/ref-changelog.md:771 ../../source/ref-changelog.md:810 +msgid "**Minor updates**" msgstr "" -#: ../../source/ref-changelog.md:933 +#: ../../source/ref-changelog.md:649 msgid "" -"The `config` argument used to be of type `Dict[str, str]`, which means " -"that dictionary values were expected to be strings. The new release " -"generalizes this to enable values of the following types: `bool`, " -"`bytes`, `float`, `int`, `str`." +"Add round number to fit and evaluate log messages " +"([#1266](https://github.com/adap/flower/pull/1266))" msgstr "" -#: ../../source/ref-changelog.md:935 +#: ../../source/ref-changelog.md:650 msgid "" -"This means one can now pass almost arbitrary values to `fit`/`evaluate` " -"using the `config` dictionary. Yay, no more `str(epochs)` on the server-" -"side and `int(config[\"epochs\"])` on the client side!" 
+"Add secure gRPC connection to the `advanced_tensorflow` code example " +"([#847](https://github.com/adap/flower/pull/847))" msgstr "" -#: ../../source/ref-changelog.md:937 +#: ../../source/ref-changelog.md:651 msgid "" -"*Code example:* note that the `config` dictionary now contains non-`str` " -"values in both `Client.fit` and `Client.evaluate`:" +"Update developer tooling " +"([#1231](https://github.com/adap/flower/pull/1231), " +"[#1276](https://github.com/adap/flower/pull/1276), " +"[#1301](https://github.com/adap/flower/pull/1301), " +"[#1310](https://github.com/adap/flower/pull/1310))" msgstr "" -#: ../../source/ref-changelog.md:954 -msgid "v0.13.0 (2021-01-08)" +#: ../../source/ref-changelog.md:652 +msgid "" +"Rename ProtoBuf messages to improve consistency " +"([#1214](https://github.com/adap/flower/pull/1214), " +"[#1258](https://github.com/adap/flower/pull/1258), " +"[#1259](https://github.com/adap/flower/pull/1259))" msgstr "" -#: ../../source/ref-changelog.md:958 -msgid "" -"New example: PyTorch From Centralized To Federated " -"([#549](https://github.com/adap/flower/pull/549))" +#: ../../source/ref-changelog.md:654 +msgid "v0.19.0 (2022-05-18)" msgstr "" -#: ../../source/ref-changelog.md:959 -msgid "Improved documentation" +#: ../../source/ref-changelog.md:658 +msgid "" +"**Flower Baselines (preview): FedOpt, FedBN, FedAvgM** " +"([#919](https://github.com/adap/flower/pull/919), " +"[#1127](https://github.com/adap/flower/pull/1127), " +"[#914](https://github.com/adap/flower/pull/914))" msgstr "" -#: ../../source/ref-changelog.md:960 -msgid "New documentation theme ([#551](https://github.com/adap/flower/pull/551))" +#: ../../source/ref-changelog.md:660 +msgid "" +"The first preview release of Flower Baselines has arrived! We're " +"kickstarting Flower Baselines with implementations of FedOpt (FedYogi, " +"FedAdam, FedAdagrad), FedBN, and FedAvgM. Check the documentation on how " +"to use [Flower Baselines](https://flower.ai/docs/using-baselines.html). " +"With this first preview release we're also inviting the community to " +"[contribute their own baselines](https://flower.ai/docs/baselines/how-to-" +"contribute-baselines.html)." msgstr "" -#: ../../source/ref-changelog.md:961 -msgid "New API reference ([#554](https://github.com/adap/flower/pull/554))" +#: ../../source/ref-changelog.md:662 +msgid "" +"**C++ client SDK (preview) and code example** " +"([#1111](https://github.com/adap/flower/pull/1111))" msgstr "" -#: ../../source/ref-changelog.md:962 +#: ../../source/ref-changelog.md:664 msgid "" -"Updated examples documentation " -"([#549](https://github.com/adap/flower/pull/549))" +"Preview support for Flower clients written in C++. The C++ preview " +"includes a Flower client SDK and a quickstart code example that " +"demonstrates a simple C++ client using the SDK." msgstr "" -#: ../../source/ref-changelog.md:963 +#: ../../source/ref-changelog.md:666 msgid "" -"Removed obsolete documentation " -"([#548](https://github.com/adap/flower/pull/548))" +"**Add experimental support for Python 3.10 and Python 3.11** " +"([#1135](https://github.com/adap/flower/pull/1135))" msgstr "" -#: ../../source/ref-changelog.md:965 -msgid "Bugfix:" +#: ../../source/ref-changelog.md:668 +msgid "" +"Python 3.10 is the latest stable release of Python and Python 3.11 is due" +" to be released in October. This Flower release adds experimental support" +" for both Python versions." 
msgstr "" -#: ../../source/ref-changelog.md:967 +#: ../../source/ref-changelog.md:670 msgid "" -"`Server.fit` does not disconnect clients when finished, disconnecting the" -" clients is now handled in `flwr.server.start_server` " -"([#553](https://github.com/adap/flower/pull/553) " -"[#540](https://github.com/adap/flower/issues/540))." +"**Aggregate custom metrics through user-provided functions** " +"([#1144](https://github.com/adap/flower/pull/1144))" msgstr "" -#: ../../source/ref-changelog.md:969 -msgid "v0.12.0 (2020-12-07)" +#: ../../source/ref-changelog.md:672 +msgid "" +"Custom metrics (e.g., `accuracy`) can now be aggregated without having to" +" customize the strategy. Built-in strategies support two new arguments, " +"`fit_metrics_aggregation_fn` and `evaluate_metrics_aggregation_fn`, that " +"allow passing custom metric aggregation functions." msgstr "" -#: ../../source/ref-changelog.md:971 ../../source/ref-changelog.md:987 -msgid "Important changes:" +#: ../../source/ref-changelog.md:674 +msgid "" +"**User-configurable round timeout** " +"([#1162](https://github.com/adap/flower/pull/1162))" msgstr "" -#: ../../source/ref-changelog.md:973 +#: ../../source/ref-changelog.md:676 msgid "" -"Added an example for embedded devices " -"([#507](https://github.com/adap/flower/pull/507))" +"A new configuration value allows the round timeout to be set for " +"`start_server` and `start_simulation`. If the `config` dictionary " +"contains a `round_timeout` key (with a `float` value in seconds), the " +"server will wait *at least* `round_timeout` seconds before it closes the " +"connection." msgstr "" -#: ../../source/ref-changelog.md:974 +#: ../../source/ref-changelog.md:678 msgid "" -"Added a new NumPyClient (in addition to the existing KerasClient) " -"([#504](https://github.com/adap/flower/pull/504) " -"[#508](https://github.com/adap/flower/pull/508))" +"**Enable both federated evaluation and centralized evaluation to be used " +"at the same time in all built-in strategies** " +"([#1091](https://github.com/adap/flower/pull/1091))" msgstr "" -#: ../../source/ref-changelog.md:975 +#: ../../source/ref-changelog.md:680 msgid "" -"Deprecated `flwr_example` package and started to migrate examples into " -"the top-level `examples` directory " -"([#494](https://github.com/adap/flower/pull/494) " -"[#512](https://github.com/adap/flower/pull/512))" +"Built-in strategies can now perform both federated evaluation (i.e., " +"client-side) and centralized evaluation (i.e., server-side) in the same " +"round. Federated evaluation can be disabled by setting `fraction_eval` to" +" `0.0`." msgstr "" -#: ../../source/ref-changelog.md:977 -msgid "v0.11.0 (2020-11-30)" +#: ../../source/ref-changelog.md:682 +msgid "" +"**Two new Jupyter Notebook tutorials** " +"([#1141](https://github.com/adap/flower/pull/1141))" msgstr "" -#: ../../source/ref-changelog.md:979 -msgid "Incompatible changes:" +#: ../../source/ref-changelog.md:684 +msgid "" +"Two Jupyter Notebook tutorials (compatible with Google Colab) explain " +"basic and intermediate Flower features:" msgstr "" -#: ../../source/ref-changelog.md:981 +#: ../../source/ref-changelog.md:686 msgid "" -"Renamed strategy methods " -"([#486](https://github.com/adap/flower/pull/486)) to unify the naming of " -"Flower's public APIs. Other public methods/functions (e.g., every method " -"in `Client`, but also `Strategy.evaluate`) do not use the `on_` prefix, " -"which is why we're removing it from the four methods in Strategy. 
To " -"migrate rename the following `Strategy` methods accordingly:" +"*An Introduction to Federated Learning*: [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-1" +"-Intro-to-FL-PyTorch.ipynb)" msgstr "" -#: ../../source/ref-changelog.md:982 -msgid "`on_configure_evaluate` => `configure_evaluate`" +#: ../../source/ref-changelog.md:688 +msgid "" +"*Using Strategies in Federated Learning*: [Open in " +"Colab](https://colab.research.google.com/github/adap/flower/blob/main/tutorials/Flower-2" +"-Strategies-in-FL-PyTorch.ipynb)" msgstr "" -#: ../../source/ref-changelog.md:983 -msgid "`on_aggregate_evaluate` => `aggregate_evaluate`" +#: ../../source/ref-changelog.md:690 +msgid "" +"**New FedAvgM strategy (Federated Averaging with Server Momentum)** " +"([#1076](https://github.com/adap/flower/pull/1076))" msgstr "" -#: ../../source/ref-changelog.md:984 -msgid "`on_configure_fit` => `configure_fit`" +#: ../../source/ref-changelog.md:692 +msgid "" +"The new `FedAvgM` strategy implements Federated Averaging with Server " +"Momentum \\[Hsu et al., 2019\\]." msgstr "" -#: ../../source/ref-changelog.md:985 -msgid "`on_aggregate_fit` => `aggregate_fit`" +#: ../../source/ref-changelog.md:694 +msgid "" +"**New advanced PyTorch code example** " +"([#1007](https://github.com/adap/flower/pull/1007))" msgstr "" -#: ../../source/ref-changelog.md:989 +#: ../../source/ref-changelog.md:696 msgid "" -"Deprecated `DefaultStrategy` " -"([#479](https://github.com/adap/flower/pull/479)). To migrate use " -"`FedAvg` instead." +"A new code example (`advanced_pytorch`) demonstrates advanced Flower " +"concepts with PyTorch." msgstr "" -#: ../../source/ref-changelog.md:990 +#: ../../source/ref-changelog.md:698 msgid "" -"Simplified examples and baselines " -"([#484](https://github.com/adap/flower/pull/484))." +"**New JAX code example** " +"([#906](https://github.com/adap/flower/pull/906), " +"[#1143](https://github.com/adap/flower/pull/1143))" msgstr "" -#: ../../source/ref-changelog.md:991 +#: ../../source/ref-changelog.md:700 msgid "" -"Removed presently unused `on_conclude_round` from strategy interface " -"([#483](https://github.com/adap/flower/pull/483))." +"A new code example (`jax_from_centralized_to_federated`) shows federated " +"learning with JAX and Flower." msgstr "" -#: ../../source/ref-changelog.md:992 +#: ../../source/ref-changelog.md:704 msgid "" -"Set minimal Python version to 3.6.1 instead of 3.6.9 " -"([#471](https://github.com/adap/flower/pull/471))." +"New option to keep Ray running if Ray was already initialized in " +"`start_simulation` ([#1177](https://github.com/adap/flower/pull/1177))" msgstr "" -#: ../../source/ref-changelog.md:993 +#: ../../source/ref-changelog.md:705 msgid "" -"Improved `Strategy` docstrings " -"([#470](https://github.com/adap/flower/pull/470))." +"Add support for custom `ClientManager` as a `start_simulation` parameter " +"([#1171](https://github.com/adap/flower/pull/1171))" msgstr "" -#: ../../source/ref-example-projects.rst:2 -msgid "Example projects" +#: ../../source/ref-changelog.md:706 +msgid "" +"New documentation for [implementing " +"strategies](https://flower.ai/docs/framework/how-to-implement-" +"strategies.html) ([#1097](https://github.com/adap/flower/pull/1097), " +"[#1175](https://github.com/adap/flower/pull/1175))" msgstr "" -#: ../../source/ref-example-projects.rst:4 +#: ../../source/ref-changelog.md:707 msgid "" -"Flower comes with a number of usage examples. 
The examples demonstrate " -"how Flower can be used to federate different kinds of existing machine " -"learning pipelines, usually leveraging popular machine learning " -"frameworks such as `PyTorch `_ or `TensorFlow " -"`_." +"New mobile-friendly documentation theme " +"([#1174](https://github.com/adap/flower/pull/1174))" msgstr "" -#: ../../source/ref-example-projects.rst:11 +#: ../../source/ref-changelog.md:708 msgid "" -"Flower usage examples used to be bundled with Flower in a package called " -"``flwr_example``. We are migrating those examples to standalone projects " -"to make them easier to use. All new examples are based in the directory " -"`examples `_." +"Limit version range for (optional) `ray` dependency to include only " +"compatible releases (`>=1.9.2,<1.12.0`) " +"([#1205](https://github.com/adap/flower/pull/1205))" msgstr "" -#: ../../source/ref-example-projects.rst:16 -msgid "The following examples are available as standalone projects." +#: ../../source/ref-changelog.md:712 +msgid "" +"**Remove deprecated support for Python 3.6** " +"([#871](https://github.com/adap/flower/pull/871))" msgstr "" -#: ../../source/ref-example-projects.rst:20 -msgid "Quickstart TensorFlow/Keras" +#: ../../source/ref-changelog.md:713 +msgid "" +"**Remove deprecated KerasClient** " +"([#857](https://github.com/adap/flower/pull/857))" msgstr "" -#: ../../source/ref-example-projects.rst:22 +#: ../../source/ref-changelog.md:714 msgid "" -"The TensorFlow/Keras quickstart example shows CIFAR-10 image " -"classification with MobileNetV2:" +"**Remove deprecated no-op extra installs** " +"([#973](https://github.com/adap/flower/pull/973))" msgstr "" -#: ../../source/ref-example-projects.rst:25 +#: ../../source/ref-changelog.md:715 msgid "" -"`Quickstart TensorFlow (Code) " -"`_" +"**Remove deprecated proto fields from** `FitRes` **and** `EvaluateRes` " +"([#869](https://github.com/adap/flower/pull/869))" msgstr "" -#: ../../source/ref-example-projects.rst:26 +#: ../../source/ref-changelog.md:716 msgid "" -"`Quickstart TensorFlow (Tutorial) `_" +"**Remove deprecated QffedAvg strategy (replaced by QFedAvg)** " +"([#1107](https://github.com/adap/flower/pull/1107))" msgstr "" -#: ../../source/ref-example-projects.rst:27 +#: ../../source/ref-changelog.md:717 msgid "" -"`Quickstart TensorFlow (Blog Post) `_" +"**Remove deprecated DefaultStrategy strategy** " +"([#1142](https://github.com/adap/flower/pull/1142))" msgstr "" -#: ../../source/ref-example-projects.rst:31 -#: ../../source/tutorial-quickstart-pytorch.rst:5 -msgid "Quickstart PyTorch" -msgstr "" - -#: ../../source/ref-example-projects.rst:33 +#: ../../source/ref-changelog.md:718 msgid "" -"The PyTorch quickstart example shows CIFAR-10 image classification with a" -" simple Convolutional Neural Network:" +"**Remove deprecated support for eval_fn accuracy return value** " +"([#1142](https://github.com/adap/flower/pull/1142))" msgstr "" -#: ../../source/ref-example-projects.rst:36 +#: ../../source/ref-changelog.md:719 msgid "" -"`Quickstart PyTorch (Code) " -"`_" +"**Remove deprecated support for passing initial parameters as NumPy " +"ndarrays** ([#1142](https://github.com/adap/flower/pull/1142))" msgstr "" -#: ../../source/ref-example-projects.rst:37 -msgid "" -"`Quickstart PyTorch (Tutorial) `_" +#: ../../source/ref-changelog.md:721 +msgid "v0.18.0 (2022-02-28)" msgstr "" -#: ../../source/ref-example-projects.rst:41 -msgid "PyTorch: From Centralized To Federated" +#: ../../source/ref-changelog.md:725 +msgid "" +"**Improved Virtual Client Engine 
compatibility with Jupyter Notebook / " +"Google Colab** ([#866](https://github.com/adap/flower/pull/866), " +"[#872](https://github.com/adap/flower/pull/872), " +"[#833](https://github.com/adap/flower/pull/833), " +"[#1036](https://github.com/adap/flower/pull/1036))" msgstr "" -#: ../../source/ref-example-projects.rst:43 +#: ../../source/ref-changelog.md:727 msgid "" -"This example shows how a regular PyTorch project can be federated using " -"Flower:" +"Simulations (using the Virtual Client Engine through `start_simulation`) " +"now work more smoothly on Jupyter Notebooks (incl. Google Colab) after " +"installing Flower with the `simulation` extra (`pip install " +"flwr[simulation]`)." msgstr "" -#: ../../source/ref-example-projects.rst:45 +#: ../../source/ref-changelog.md:729 msgid "" -"`PyTorch: From Centralized To Federated (Code) " -"`_" +"**New Jupyter Notebook code example** " +"([#833](https://github.com/adap/flower/pull/833))" msgstr "" -#: ../../source/ref-example-projects.rst:46 +#: ../../source/ref-changelog.md:731 msgid "" -"`PyTorch: From Centralized To Federated (Tutorial) " -"`_" +"A new code example (`quickstart_simulation`) demonstrates Flower " +"simulations using the Virtual Client Engine through Jupyter Notebook " +"(incl. Google Colab)." msgstr "" -#: ../../source/ref-example-projects.rst:50 -msgid "Federated Learning on Raspberry Pi and Nvidia Jetson" +#: ../../source/ref-changelog.md:733 +msgid "" +"**Client properties (feature preview)** " +"([#795](https://github.com/adap/flower/pull/795))" msgstr "" -#: ../../source/ref-example-projects.rst:52 +#: ../../source/ref-changelog.md:735 msgid "" -"This example shows how Flower can be used to build a federated learning " -"system that run across Raspberry Pi and Nvidia Jetson:" +"Clients can implement a new method `get_properties` to enable server-side" +" strategies to query client properties." msgstr "" -#: ../../source/ref-example-projects.rst:54 +#: ../../source/ref-changelog.md:737 msgid "" -"`Federated Learning on Raspberry Pi and Nvidia Jetson (Code) " -"`_" +"**Experimental Android support with TFLite** " +"([#865](https://github.com/adap/flower/pull/865))" msgstr "" -#: ../../source/ref-example-projects.rst:55 +#: ../../source/ref-changelog.md:739 msgid "" -"`Federated Learning on Raspberry Pi and Nvidia Jetson (Blog Post) " -"`_" +"Android support has finally arrived in `main`! Flower is both client-" +"agnostic and framework-agnostic by design. One can integrate arbitrary " +"client platforms and with this release, using Flower on Android has " +"become a lot easier." msgstr "" -#: ../../source/ref-example-projects.rst:60 -msgid "Legacy Examples (`flwr_example`)" +#: ../../source/ref-changelog.md:741 +msgid "" +"The example uses TFLite on the client side, along with a new " +"`FedAvgAndroid` strategy. The Android client and `FedAvgAndroid` are " +"still experimental, but they are a first step towards a fully-fledged " +"Android SDK and a unified `FedAvg` implementation that integrated the new" +" functionality from `FedAvgAndroid`." msgstr "" -#: ../../source/ref-example-projects.rst:63 +#: ../../source/ref-changelog.md:743 msgid "" -"The useage examples in `flwr_example` are deprecated and will be removed " -"in the future. New examples are provided as standalone projects in " -"`examples `_." 
+"**Make gRPC keepalive time user-configurable and decrease default " +"keepalive time** ([#1069](https://github.com/adap/flower/pull/1069))" msgstr "" -#: ../../source/ref-example-projects.rst:69 -msgid "Extra Dependencies" +#: ../../source/ref-changelog.md:745 +msgid "" +"The default gRPC keepalive time has been reduced to increase the " +"compatibility of Flower with more cloud environments (for example, " +"Microsoft Azure). Users can configure the keepalive time to customize the" +" gRPC stack based on specific requirements." msgstr "" -#: ../../source/ref-example-projects.rst:71 +#: ../../source/ref-changelog.md:747 msgid "" -"The core Flower framework keeps a minimal set of dependencies. The " -"examples demonstrate Flower in the context of different machine learning " -"frameworks, so additional dependencies need to be installed before an " -"example can be run." +"**New differential privacy example using Opacus and PyTorch** " +"([#805](https://github.com/adap/flower/pull/805))" msgstr "" -#: ../../source/ref-example-projects.rst:75 -msgid "For PyTorch examples::" +#: ../../source/ref-changelog.md:749 +msgid "" +"A new code example (`opacus`) demonstrates differentially-private " +"federated learning with Opacus, PyTorch, and Flower." msgstr "" -#: ../../source/ref-example-projects.rst:79 -msgid "For TensorFlow examples::" +#: ../../source/ref-changelog.md:751 +msgid "" +"**New Hugging Face Transformers code example** " +"([#863](https://github.com/adap/flower/pull/863))" msgstr "" -#: ../../source/ref-example-projects.rst:83 -msgid "For both PyTorch and TensorFlow examples::" +#: ../../source/ref-changelog.md:753 +msgid "" +"A new code example (`quickstart_huggingface`) demonstrates usage of " +"Hugging Face Transformers with Flower." msgstr "" -#: ../../source/ref-example-projects.rst:87 +#: ../../source/ref-changelog.md:755 msgid "" -"Please consult :code:`pyproject.toml` for a full list of possible extras " -"(section :code:`[tool.poetry.extras]`)." +"**New MLCube code example** " +"([#779](https://github.com/adap/flower/pull/779), " +"[#1034](https://github.com/adap/flower/pull/1034), " +"[#1065](https://github.com/adap/flower/pull/1065), " +"[#1090](https://github.com/adap/flower/pull/1090))" msgstr "" -#: ../../source/ref-example-projects.rst:92 -msgid "PyTorch Examples" +#: ../../source/ref-changelog.md:757 +msgid "" +"A new code example (`quickstart_mlcube`) demonstrates usage of MLCube " +"with Flower." msgstr "" -#: ../../source/ref-example-projects.rst:94 +#: ../../source/ref-changelog.md:759 msgid "" -"Our PyTorch examples are based on PyTorch 1.7. They should work with " -"other releases as well. So far, we provide the following examples." +"**SSL-enabled server and client** " +"([#842](https://github.com/adap/flower/pull/842), " +"[#844](https://github.com/adap/flower/pull/844), " +"[#845](https://github.com/adap/flower/pull/845), " +"[#847](https://github.com/adap/flower/pull/847), " +"[#993](https://github.com/adap/flower/pull/993), " +"[#994](https://github.com/adap/flower/pull/994))" msgstr "" -#: ../../source/ref-example-projects.rst:98 -msgid "CIFAR-10 Image Classification" +#: ../../source/ref-changelog.md:761 +msgid "" +"SSL enables secure encrypted connections between clients and servers. " +"This release open-sources the Flower secure gRPC implementation to make " +"encrypted communication channels accessible to all Flower users." 
 msgstr ""

-#: ../../source/ref-example-projects.rst:100
+#: ../../source/ref-changelog.md:763
 msgid ""
-"`CIFAR-10 and CIFAR-100 <https://www.cs.toronto.edu/~kriz/cifar.html>`_ "
-"are popular RGB image datasets. The Flower CIFAR-10 example uses PyTorch "
-"to train a simple CNN classifier in a federated learning setup with two "
-"clients."
+"**Updated** `FedAdam` **and** `FedYogi` **strategies** "
+"([#885](https://github.com/adap/flower/pull/885), "
+"[#895](https://github.com/adap/flower/pull/895))"
 msgstr ""

-#: ../../source/ref-example-projects.rst:104
-#: ../../source/ref-example-projects.rst:121
-#: ../../source/ref-example-projects.rst:146
-msgid "First, start a Flower server:"
+#: ../../source/ref-changelog.md:765
+msgid ""
+"`FedAdam` and `FedYogi` match the latest version of the Adaptive "
+"Federated Optimization paper."
 msgstr ""

-#: ../../source/ref-example-projects.rst:106
-msgid "$ ./src/py/flwr_example/pytorch_cifar/run-server.sh"
+#: ../../source/ref-changelog.md:767
+msgid ""
+"**Initialize** `start_simulation` **with a list of client IDs** "
+"([#860](https://github.com/adap/flower/pull/860))"
 msgstr ""

-#: ../../source/ref-example-projects.rst:108
-#: ../../source/ref-example-projects.rst:125
-#: ../../source/ref-example-projects.rst:150
-msgid "Then, start the two clients in a new terminal window:"
+#: ../../source/ref-changelog.md:769
+msgid ""
+"`start_simulation` can now be called with a list of client IDs "
+"(`clients_ids`, type: `List[str]`). Those IDs will be passed to the "
+"`client_fn` whenever a client needs to be initialized, which can make it "
+"easier to load data partitions that are not accessible through `int` "
+"identifiers."
 msgstr ""

-#: ../../source/ref-example-projects.rst:110
-msgid "$ ./src/py/flwr_example/pytorch_cifar/run-clients.sh"
+#: ../../source/ref-changelog.md:773
+msgid ""
+"Update `num_examples` calculation in PyTorch code examples "
+"([#909](https://github.com/adap/flower/pull/909))"
 msgstr ""

-#: ../../source/ref-example-projects.rst:112
-msgid "For more details, see :code:`src/py/flwr_example/pytorch_cifar`."
+#: ../../source/ref-changelog.md:774
+msgid ""
+"Expose Flower version through `flwr.__version__` "
+"([#952](https://github.com/adap/flower/pull/952))"
 msgstr ""

-#: ../../source/ref-example-projects.rst:115
-msgid "ImageNet-2012 Image Classification"
+#: ../../source/ref-changelog.md:775
+msgid ""
+"`start_server` in `app.py` now returns a `History` object containing "
+"metrics from training ([#974](https://github.com/adap/flower/pull/974))"
 msgstr ""

-#: ../../source/ref-example-projects.rst:117
+#: ../../source/ref-changelog.md:776
 msgid ""
-"`ImageNet-2012 <https://image-net.org/>`_ is one of the major computer"
-" vision datasets. The Flower ImageNet example uses PyTorch to train a "
-"ResNet-18 classifier in a federated learning setup with ten clients."
+"Make `max_workers` (used by `ThreadPoolExecutor`) configurable " +"([#978](https://github.com/adap/flower/pull/978))" msgstr "" -#: ../../source/ref-example-projects.rst:123 -msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-server.sh" +#: ../../source/ref-changelog.md:777 +msgid "" +"Increase sleep time after server start to three seconds in all code " +"examples ([#1086](https://github.com/adap/flower/pull/1086))" msgstr "" -#: ../../source/ref-example-projects.rst:127 -msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-clients.sh" +#: ../../source/ref-changelog.md:778 +msgid "" +"Added a new FAQ section to the documentation " +"([#948](https://github.com/adap/flower/pull/948))" msgstr "" -#: ../../source/ref-example-projects.rst:129 -msgid "For more details, see :code:`src/py/flwr_example/pytorch_imagenet`." +#: ../../source/ref-changelog.md:779 +msgid "" +"And many more under-the-hood changes, library updates, documentation " +"changes, and tooling improvements!" msgstr "" -#: ../../source/ref-example-projects.rst:133 -msgid "TensorFlow Examples" +#: ../../source/ref-changelog.md:783 +msgid "" +"**Removed** `flwr_example` **and** `flwr_experimental` **from release " +"build** ([#869](https://github.com/adap/flower/pull/869))" msgstr "" -#: ../../source/ref-example-projects.rst:135 +#: ../../source/ref-changelog.md:785 msgid "" -"Our TensorFlow examples are based on TensorFlow 2.0 or newer. So far, we " -"provide the following examples." +"The packages `flwr_example` and `flwr_experimental` have been deprecated " +"since Flower 0.12.0 and they are not longer included in Flower release " +"builds. The associated extras (`baseline`, `examples-pytorch`, `examples-" +"tensorflow`, `http-logger`, `ops`) are now no-op and will be removed in " +"an upcoming release." msgstr "" -#: ../../source/ref-example-projects.rst:139 -msgid "Fashion-MNIST Image Classification" +#: ../../source/ref-changelog.md:787 +msgid "v0.17.0 (2021-09-24)" msgstr "" -#: ../../source/ref-example-projects.rst:141 +#: ../../source/ref-changelog.md:791 msgid "" -"`Fashion-MNIST `_ is " -"often used as the \"Hello, world!\" of machine learning. We follow this " -"tradition and provide an example which samples random local datasets from" -" Fashion-MNIST and trains a simple image classification model over those " -"partitions." -msgstr "" - -#: ../../source/ref-example-projects.rst:148 -msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-server.sh" -msgstr "" - -#: ../../source/ref-example-projects.rst:152 -msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-clients.sh" +"**Experimental virtual client engine** " +"([#781](https://github.com/adap/flower/pull/781) " +"[#790](https://github.com/adap/flower/pull/790) " +"[#791](https://github.com/adap/flower/pull/791))" msgstr "" -#: ../../source/ref-example-projects.rst:154 +#: ../../source/ref-changelog.md:793 msgid "" -"For more details, see " -":code:`src/py/flwr_example/tensorflow_fashion_mnist`." +"One of Flower's goals is to enable research at scale. This release " +"enables a first (experimental) peek at a major new feature, codenamed the" +" virtual client engine. Virtual clients enable simulations that scale to " +"a (very) large number of clients on a single machine or compute cluster. " +"The easiest way to test the new functionality is to look at the two new " +"code examples called `quickstart_simulation` and `simulation_pytorch`." 
msgstr "" -#: ../../source/ref-faq.rst:4 +#: ../../source/ref-changelog.md:795 msgid "" -"This page collects answers to commonly asked questions about Federated " -"Learning with Flower." +"The feature is still experimental, so there's no stability guarantee for " +"the API. It's also not quite ready for prime time and comes with a few " +"known caveats. However, those who are curious are encouraged to try it " +"out and share their thoughts." msgstr "" -#: ../../source/ref-faq.rst -msgid ":fa:`eye,mr-1` Can Flower run on Juptyter Notebooks / Google Colab?" +#: ../../source/ref-changelog.md:797 +msgid "" +"**New built-in strategies** " +"([#828](https://github.com/adap/flower/pull/828) " +"[#822](https://github.com/adap/flower/pull/822))" msgstr "" -#: ../../source/ref-faq.rst:8 +#: ../../source/ref-changelog.md:799 msgid "" -"Yes, it can! Flower even comes with a few under-the-hood optimizations to" -" make it work even better on Colab. Here's a quickstart example:" +"FedYogi - Federated learning strategy using Yogi on server-side. " +"Implementation based on https://arxiv.org/abs/2003.00295" msgstr "" -#: ../../source/ref-faq.rst:10 +#: ../../source/ref-changelog.md:800 msgid "" -"`Flower simulation PyTorch " -"`_" +"FedAdam - Federated learning strategy using Adam on server-side. " +"Implementation based on https://arxiv.org/abs/2003.00295" msgstr "" -#: ../../source/ref-faq.rst:11 +#: ../../source/ref-changelog.md:802 msgid "" -"`Flower simulation TensorFlow/Keras " -"`_" +"**New PyTorch Lightning code example** " +"([#617](https://github.com/adap/flower/pull/617))" msgstr "" -#: ../../source/ref-faq.rst -msgid ":fa:`eye,mr-1` How can I run Federated Learning on a Raspberry Pi?" +#: ../../source/ref-changelog.md:804 +msgid "" +"**New Variational Auto-Encoder code example** " +"([#752](https://github.com/adap/flower/pull/752))" msgstr "" -#: ../../source/ref-faq.rst:15 +#: ../../source/ref-changelog.md:806 msgid "" -"Find the `blog post about federated learning on embedded device here " -"`_" -" and the corresponding `GitHub code example " -"`_." +"**New scikit-learn code example** " +"([#748](https://github.com/adap/flower/pull/748))" msgstr "" -#: ../../source/ref-faq.rst -msgid ":fa:`eye,mr-1` Does Flower support federated learning on Android devices?" +#: ../../source/ref-changelog.md:808 +msgid "" +"**New experimental TensorBoard strategy** " +"([#789](https://github.com/adap/flower/pull/789))" msgstr "" -#: ../../source/ref-faq.rst:19 +#: ../../source/ref-changelog.md:812 msgid "" -"Yes, it does. Please take a look at our `blog post " -"`_ or check out the code examples:" +"Improved advanced TensorFlow code example " +"([#769](https://github.com/adap/flower/pull/769))" msgstr "" -#: ../../source/ref-faq.rst:21 +#: ../../source/ref-changelog.md:813 msgid "" -"`Android Kotlin example `_" +"Warning when `min_available_clients` is misconfigured " +"([#830](https://github.com/adap/flower/pull/830))" msgstr "" -#: ../../source/ref-faq.rst:22 -msgid "`Android Java example `_" +#: ../../source/ref-changelog.md:814 +msgid "" +"Improved gRPC server docs " +"([#841](https://github.com/adap/flower/pull/841))" msgstr "" -#: ../../source/ref-faq.rst -msgid ":fa:`eye,mr-1` Can I combine federated learning with blockchain?" +#: ../../source/ref-changelog.md:815 +msgid "" +"Improved error message in `NumPyClient` " +"([#851](https://github.com/adap/flower/pull/851))" msgstr "" -#: ../../source/ref-faq.rst:26 +#: ../../source/ref-changelog.md:816 msgid "" -"Yes, of course. 
A list of available examples using Flower within a " -"blockchain environment is available here:" +"Improved PyTorch quickstart code example " +"([#852](https://github.com/adap/flower/pull/852))" msgstr "" -#: ../../source/ref-faq.rst:28 +#: ../../source/ref-changelog.md:820 msgid "" -"`Flower meets Nevermined GitHub Repository `_." +"**Disabled final distributed evaluation** " +"([#800](https://github.com/adap/flower/pull/800))" msgstr "" -#: ../../source/ref-faq.rst:29 +#: ../../source/ref-changelog.md:822 msgid "" -"`Flower meets Nevermined YouTube video " -"`_." +"Prior behaviour was to perform a final round of distributed evaluation on" +" all connected clients, which is often not required (e.g., when using " +"server-side evaluation). The prior behaviour can be enabled by passing " +"`force_final_distributed_eval=True` to `start_server`." msgstr "" -#: ../../source/ref-faq.rst:30 +#: ../../source/ref-changelog.md:824 msgid "" -"`Flower meets KOSMoS `_." +"**Renamed q-FedAvg strategy** " +"([#802](https://github.com/adap/flower/pull/802))" msgstr "" -#: ../../source/ref-faq.rst:31 +#: ../../source/ref-changelog.md:826 msgid "" -"`Flower meets Talan blog post `_ ." +"The strategy named `QffedAvg` was renamed to `QFedAvg` to better reflect " +"the notation given in the original paper (q-FFL is the optimization " +"objective, q-FedAvg is the proposed solver). Note the original (now " +"deprecated) `QffedAvg` class is still available for compatibility reasons" +" (it will be removed in a future release)." msgstr "" -#: ../../source/ref-faq.rst:32 +#: ../../source/ref-changelog.md:828 msgid "" -"`Flower meets Talan GitHub Repository " -"`_ ." +"**Deprecated and renamed code example** `simulation_pytorch` **to** " +"`simulation_pytorch_legacy` " +"([#791](https://github.com/adap/flower/pull/791))" msgstr "" -#: ../../source/ref-telemetry.md:1 -msgid "Telemetry" +#: ../../source/ref-changelog.md:830 +msgid "" +"This example has been replaced by a new example. The new example is based" +" on the experimental virtual client engine, which will become the new " +"default way of doing most types of large-scale simulations in Flower. The" +" existing example was kept for reference purposes, but it might be " +"removed in the future." msgstr "" -#: ../../source/ref-telemetry.md:3 -msgid "" -"The Flower open-source project collects **anonymous** usage metrics to " -"make well-informed decisions to improve Flower. Doing this enables the " -"Flower team to understand how Flower is used and what challenges users " -"might face." +#: ../../source/ref-changelog.md:832 +msgid "v0.16.0 (2021-05-11)" msgstr "" -#: ../../source/ref-telemetry.md:5 +#: ../../source/ref-changelog.md:836 msgid "" -"**Flower is a friendly framework for collaborative AI and data science.**" -" Staying true to this statement, Flower makes it easy to disable " -"telemetry for users that do not want to share anonymous usage metrics." 
+"**New built-in strategies** " +"([#549](https://github.com/adap/flower/pull/549))" msgstr "" -#: ../../source/ref-telemetry.md:7 -msgid "Principles" +#: ../../source/ref-changelog.md:838 +msgid "(abstract) FedOpt" msgstr "" -#: ../../source/ref-telemetry.md:9 -msgid "We follow strong principles guarding anonymous usage metrics collection:" +#: ../../source/ref-changelog.md:841 +msgid "" +"**Custom metrics for server and strategies** " +"([#717](https://github.com/adap/flower/pull/717))" msgstr "" -#: ../../source/ref-telemetry.md:11 +#: ../../source/ref-changelog.md:843 msgid "" -"**Optional:** You will always be able to disable telemetry; read on to " -"learn “[How to opt-out](#how-to-opt-out)”." +"The Flower server is now fully task-agnostic, all remaining instances of " +"task-specific metrics (such as `accuracy`) have been replaced by custom " +"metrics dictionaries. Flower 0.15 introduced the capability to pass a " +"dictionary containing custom metrics from client to server. As of this " +"release, custom metrics replace task-specific metrics on the server." msgstr "" -#: ../../source/ref-telemetry.md:12 +#: ../../source/ref-changelog.md:845 msgid "" -"**Anonymous:** The reported usage metrics are anonymous and do not " -"contain any personally identifiable information (PII). See “[Collected " -"metrics](#collected-metrics)” to understand what metrics are being " -"reported." +"Custom metric dictionaries are now used in two user-facing APIs: they are" +" returned from Strategy methods `aggregate_fit`/`aggregate_evaluate` and " +"they enable evaluation functions passed to built-in strategies (via " +"`eval_fn`) to return more than two evaluation metrics. Strategies can " +"even return *aggregated* metrics dictionaries for the server to keep " +"track of." msgstr "" -#: ../../source/ref-telemetry.md:13 +#: ../../source/ref-changelog.md:847 msgid "" -"**Transparent:** You can easily inspect what anonymous metrics are being " -"reported; see the section “[How to inspect what is being reported](#how-" -"to-inspect-what-is-being-reported)”" +"Strategy implementations should migrate their `aggregate_fit` and " +"`aggregate_evaluate` methods to the new return type (e.g., by simply " +"returning an empty `{}`), server-side evaluation functions should migrate" +" from `return loss, accuracy` to `return loss, {\"accuracy\": accuracy}`." msgstr "" -#: ../../source/ref-telemetry.md:14 +#: ../../source/ref-changelog.md:849 msgid "" -"**Open for feedback:** You can always reach out to us if you have " -"feedback; see the section “[How to contact us](#how-to-contact-us)” for " -"details." +"Flower 0.15-style return types are deprecated (but still supported), " +"compatibility will be removed in a future release." msgstr "" -#: ../../source/ref-telemetry.md:16 -msgid "How to opt-out" +#: ../../source/ref-changelog.md:851 +msgid "" +"**Migration warnings for deprecated functionality** " +"([#690](https://github.com/adap/flower/pull/690))" msgstr "" -#: ../../source/ref-telemetry.md:18 +#: ../../source/ref-changelog.md:853 msgid "" -"When Flower starts, it will check for an environment variable called " -"`FLWR_TELEMETRY_ENABLED`. Telemetry can easily be disabled by setting " -"`FLWR_TELEMETRY_ENABLED=0`. Assuming you are starting a Flower server or " -"client, simply do so by prepending your command as in:" +"Earlier versions of Flower were often migrated to new APIs, while " +"maintaining compatibility with legacy APIs. 
This release introduces " +"detailed warning messages if usage of deprecated APIs is detected. The " +"new warning messages often provide details on how to migrate to more " +"recent APIs, thus easing the transition from one release to another." msgstr "" -#: ../../source/ref-telemetry.md:24 +#: ../../source/ref-changelog.md:855 msgid "" -"Alternatively, you can export `FLWR_TELEMETRY_ENABLED=0` in, for example," -" `.bashrc` (or whatever configuration file applies to your environment) " -"to disable Flower telemetry permanently." +"Improved docs and docstrings " +"([#691](https://github.com/adap/flower/pull/691) " +"[#692](https://github.com/adap/flower/pull/692) " +"[#713](https://github.com/adap/flower/pull/713))" msgstr "" -#: ../../source/ref-telemetry.md:26 -msgid "Collected metrics" +#: ../../source/ref-changelog.md:857 +msgid "MXNet example and documentation" msgstr "" -#: ../../source/ref-telemetry.md:28 -msgid "Flower telemetry collects the following metrics:" +#: ../../source/ref-changelog.md:859 +msgid "" +"FedBN implementation in example PyTorch: From Centralized To Federated " +"([#696](https://github.com/adap/flower/pull/696) " +"[#702](https://github.com/adap/flower/pull/702) " +"[#705](https://github.com/adap/flower/pull/705))" msgstr "" -#: ../../source/ref-telemetry.md:30 +#: ../../source/ref-changelog.md:863 msgid "" -"**Flower version.** Understand which versions of Flower are currently " -"being used. This helps us to decide whether we should invest effort into " -"releasing a patch version for an older version of Flower or instead use " -"the bandwidth to build new features." +"**Serialization-agnostic server** " +"([#721](https://github.com/adap/flower/pull/721))" msgstr "" -#: ../../source/ref-telemetry.md:32 +#: ../../source/ref-changelog.md:865 msgid "" -"**Operating system.** Enables us to answer questions such as: *Should we " -"create more guides for Linux, macOS, or Windows?*" +"The Flower server is now fully serialization-agnostic. Prior usage of " +"class `Weights` (which represents parameters as deserialized NumPy " +"ndarrays) was replaced by class `Parameters` (e.g., in `Strategy`). " +"`Parameters` objects are fully serialization-agnostic and represents " +"parameters as byte arrays, the `tensor_type` attributes indicates how " +"these byte arrays should be interpreted (e.g., for " +"serialization/deserialization)." msgstr "" -#: ../../source/ref-telemetry.md:34 +#: ../../source/ref-changelog.md:867 msgid "" -"**Python version.** Knowing the Python version helps us, for example, to " -"decide whether we should invest effort into supporting old versions of " -"Python or stop supporting them and start taking advantage of new Python " -"features." +"Built-in strategies implement this approach by handling serialization and" +" deserialization to/from `Weights` internally. Custom/3rd-party Strategy " +"implementations should update to the slightly changed Strategy method " +"definitions. Strategy authors can consult PR " +"[#721](https://github.com/adap/flower/pull/721) to see how strategies can" +" easily migrate to the new format." msgstr "" -#: ../../source/ref-telemetry.md:36 +#: ../../source/ref-changelog.md:869 msgid "" -"**Hardware properties.** Understanding the hardware environment that " -"Flower is being used in helps to decide whether we should, for example, " -"put more effort into supporting low-resource environments." 
+"Deprecated `flwr.server.Server.evaluate`, use " +"`flwr.server.Server.evaluate_round` instead " +"([#717](https://github.com/adap/flower/pull/717))" msgstr "" -#: ../../source/ref-telemetry.md:38 -msgid "" -"**Execution mode.** Knowing what execution mode Flower starts in enables " -"us to understand how heavily certain features are being used and better " -"prioritize based on that." +#: ../../source/ref-changelog.md:871 +msgid "v0.15.0 (2021-03-12)" msgstr "" -#: ../../source/ref-telemetry.md:40 +#: ../../source/ref-changelog.md:875 msgid "" -"**Cluster.** Flower telemetry assigns a random in-memory cluster ID each " -"time a Flower workload starts. This allows us to understand which device " -"types not only start Flower workloads but also successfully complete " -"them." +"**Server-side parameter initialization** " +"([#658](https://github.com/adap/flower/pull/658))" msgstr "" -#: ../../source/ref-telemetry.md:42 +#: ../../source/ref-changelog.md:877 msgid "" -"**Source.** Flower telemetry tries to store a random source ID in " -"`~/.flwr/source` the first time a telemetry event is generated. The " -"source ID is important to identify whether an issue is recurring or " -"whether an issue is triggered by multiple clusters running concurrently " -"(which often happens in simulation). For example, if a device runs " -"multiple workloads at the same time, and this results in an issue, then, " -"in order to reproduce the issue, multiple workloads must be started at " -"the same time." +"Model parameters can now be initialized on the server-side. Server-side " +"parameter initialization works via a new `Strategy` method called " +"`initialize_parameters`." msgstr "" -#: ../../source/ref-telemetry.md:44 +#: ../../source/ref-changelog.md:879 msgid "" -"You may delete the source ID at any time. If you wish for all events " -"logged under a specific source ID to be deleted, you can send a deletion " -"request mentioning the source ID to `telemetry@flower.ai`. All events " -"related to that source ID will then be permanently deleted." +"Built-in strategies support a new constructor argument called " +"`initial_parameters` to set the initial parameters. Built-in strategies " +"will provide these initial parameters to the server on startup and then " +"delete them to free the memory afterwards." msgstr "" -#: ../../source/ref-telemetry.md:46 +#: ../../source/ref-changelog.md:898 msgid "" -"We will not collect any personally identifiable information. If you think" -" any of the metrics collected could be misused in any way, please [get in" -" touch with us](#how-to-contact-us). We will update this page to reflect " -"any changes to the metrics collected and publish changes in the " -"changelog." +"If no initial parameters are provided to the strategy, the server will " +"continue to use the current behaviour (namely, it will ask one of the " +"connected clients for its parameters and use these as the initial global " +"parameters)." msgstr "" -#: ../../source/ref-telemetry.md:48 +#: ../../source/ref-changelog.md:900 +msgid "Deprecations" +msgstr "" + +#: ../../source/ref-changelog.md:902 msgid "" -"If you think other metrics would be helpful for us to better guide our " -"decisions, please let us know! We will carefully review them; if we are " -"confident that they do not compromise user privacy, we may add them." 
+"Deprecate `flwr.server.strategy.DefaultStrategy` (migrate to " +"`flwr.server.strategy.FedAvg`, which is equivalent)" msgstr "" -#: ../../source/ref-telemetry.md:50 -msgid "How to inspect what is being reported" +#: ../../source/ref-changelog.md:904 +msgid "v0.14.0 (2021-02-18)" msgstr "" -#: ../../source/ref-telemetry.md:52 +#: ../../source/ref-changelog.md:908 msgid "" -"We wanted to make it very easy for you to inspect what anonymous usage " -"metrics are reported. You can view all the reported telemetry information" -" by setting the environment variable `FLWR_TELEMETRY_LOGGING=1`. Logging " -"is disabled by default. You may use logging independently from " -"`FLWR_TELEMETRY_ENABLED` so that you can inspect the telemetry feature " -"without sending any metrics." +"**Generalized** `Client.fit` **and** `Client.evaluate` **return values** " +"([#610](https://github.com/adap/flower/pull/610) " +"[#572](https://github.com/adap/flower/pull/572) " +"[#633](https://github.com/adap/flower/pull/633))" msgstr "" -#: ../../source/ref-telemetry.md:58 +#: ../../source/ref-changelog.md:910 msgid "" -"The inspect Flower telemetry without sending any anonymous usage metrics," -" use both environment variables:" +"Clients can now return an additional dictionary mapping `str` keys to " +"values of the following types: `bool`, `bytes`, `float`, `int`, `str`. " +"This means one can return almost arbitrary values from `fit`/`evaluate` " +"and make use of them on the server side!" msgstr "" -#: ../../source/ref-telemetry.md:64 -msgid "How to contact us" +#: ../../source/ref-changelog.md:912 +msgid "" +"This improvement also allowed for more consistent return types between " +"`fit` and `evaluate`: `evaluate` should now return a tuple `(float, int, " +"dict)` representing the loss, number of examples, and a dictionary " +"holding arbitrary problem-specific values like accuracy." msgstr "" -#: ../../source/ref-telemetry.md:66 +#: ../../source/ref-changelog.md:914 msgid "" -"We want to hear from you. If you have any feedback or ideas on how to " -"improve the way we handle anonymous usage metrics, reach out to us via " -"[Slack](https://flower.ai/join-slack/) (channel `#telemetry`) or email " -"(`telemetry@flower.ai`)." +"In case you wondered: this feature is compatible with existing projects, " +"the additional dictionary return value is optional. New code should " +"however migrate to the new return types to be compatible with upcoming " +"Flower releases (`fit`: `List[np.ndarray], int, Dict[str, Scalar]`, " +"`evaluate`: `float, int, Dict[str, Scalar]`). See the example below for " +"details." msgstr "" -#: ../../source/tutorial-quickstart-android.rst:-1 +#: ../../source/ref-changelog.md:916 msgid "" -"Read this Federated Learning quickstart tutorial for creating an Android " -"app using Flower." +"*Code example:* note the additional dictionary return values in both " +"`FlwrClient.fit` and `FlwrClient.evaluate`:" msgstr "" -#: ../../source/tutorial-quickstart-android.rst:5 -msgid "Quickstart Android" +#: ../../source/ref-changelog.md:931 +msgid "" +"**Generalized** `config` **argument in** `Client.fit` **and** " +"`Client.evaluate` ([#595](https://github.com/adap/flower/pull/595))" msgstr "" -#: ../../source/tutorial-quickstart-android.rst:10 +#: ../../source/ref-changelog.md:933 msgid "" -"Let's build a federated learning system using TFLite and Flower on " -"Android!" +"The `config` argument used to be of type `Dict[str, str]`, which means " +"that dictionary values were expected to be strings. 
The new release " +"generalizes this to enable values of the following types: `bool`, " +"`bytes`, `float`, `int`, `str`." msgstr "" -#: ../../source/tutorial-quickstart-android.rst:12 +#: ../../source/ref-changelog.md:935 msgid "" -"Please refer to the `full code example " -"`_ to learn " -"more." +"This means one can now pass almost arbitrary values to `fit`/`evaluate` " +"using the `config` dictionary. Yay, no more `str(epochs)` on the server-" +"side and `int(config[\"epochs\"])` on the client side!" msgstr "" -#: ../../source/tutorial-quickstart-fastai.rst:-1 +#: ../../source/ref-changelog.md:937 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with FastAI to train a vision model on CIFAR-10." +"*Code example:* note that the `config` dictionary now contains non-`str` " +"values in both `Client.fit` and `Client.evaluate`:" msgstr "" -#: ../../source/tutorial-quickstart-fastai.rst:5 -msgid "Quickstart fastai" +#: ../../source/ref-changelog.md:954 +msgid "v0.13.0 (2021-01-08)" msgstr "" -#: ../../source/tutorial-quickstart-fastai.rst:10 -msgid "Let's build a federated learning system using fastai and Flower!" +#: ../../source/ref-changelog.md:958 +msgid "" +"New example: PyTorch From Centralized To Federated " +"([#549](https://github.com/adap/flower/pull/549))" msgstr "" -#: ../../source/tutorial-quickstart-fastai.rst:12 -msgid "" -"Please refer to the `full code example " -"`_ " -"to learn more." +#: ../../source/ref-changelog.md:959 +msgid "Improved documentation" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:-1 -msgid "" -"Check out this Federating Learning quickstart tutorial for using Flower " -"with HuggingFace Transformers in order to fine-tune an LLM." +#: ../../source/ref-changelog.md:960 +msgid "New documentation theme ([#551](https://github.com/adap/flower/pull/551))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:5 -msgid "Quickstart 🤗 Transformers" +#: ../../source/ref-changelog.md:961 +msgid "New API reference ([#554](https://github.com/adap/flower/pull/554))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:10 +#: ../../source/ref-changelog.md:962 msgid "" -"Let's build a federated learning system using Hugging Face Transformers " -"and Flower!" +"Updated examples documentation " +"([#549](https://github.com/adap/flower/pull/549))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:12 +#: ../../source/ref-changelog.md:963 msgid "" -"We will leverage Hugging Face to federate the training of language models" -" over multiple clients using Flower. More specifically, we will fine-tune" -" a pre-trained Transformer model (distilBERT) for sequence classification" -" over a dataset of IMDB ratings. The end goal is to detect if a movie " -"rating is positive or negative." +"Removed obsolete documentation " +"([#548](https://github.com/adap/flower/pull/548))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:18 -msgid "Dependencies" +#: ../../source/ref-changelog.md:965 +msgid "Bugfix:" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:20 +#: ../../source/ref-changelog.md:967 msgid "" -"To follow along this tutorial you will need to install the following " -"packages: :code:`datasets`, :code:`evaluate`, :code:`flwr`, " -":code:`torch`, and :code:`transformers`. 
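Taken together, the v0.14.0 notes above mean a client may read non-``str`` values from ``config`` and return an extra metrics dictionary from ``fit``/``evaluate``. The sketch below illustrates both with the ``flwr`` 1.x ``NumPyClient`` interface; ``DemoClient``, ``fit_config``, the metric keys and the toy NumPy "model" are assumptions made for this example, not code from the release.

.. code-block:: python

    import numpy as np
    import flwr as fl


    def fit_config(server_round: int):
        # Non-str values (bool, bytes, float, int, str) may be sent to clients.
        return {"epochs": 3, "learning_rate": 0.1}


    class DemoClient(fl.client.NumPyClient):
        """Toy client whose "model" is a single NumPy array."""

        def __init__(self) -> None:
            self.weights = np.zeros(10, dtype=np.float32)

        def get_parameters(self, config):
            return [self.weights]

        def fit(self, parameters, config):
            self.weights = parameters[0]
            epochs = config["epochs"]        # already an int, no int(...) needed
            lr = config["learning_rate"]     # already a float
            self.weights = self.weights + lr * epochs  # stand-in for real training
            # Third return value: an arbitrary str -> scalar metrics dictionary
            return [self.weights], 100, {"train_loss": 0.42}

        def evaluate(self, parameters, config):
            self.weights = parameters[0]
            loss = float(np.mean(self.weights ** 2))
            # (loss, number of examples, metrics dictionary)
            return loss, 50, {"accuracy": 0.9}


    strategy = fl.server.strategy.FedAvg(on_fit_config_fn=fit_config)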
This can be done using " -":code:`pip`:" +"`Server.fit` does not disconnect clients when finished, disconnecting the" +" clients is now handled in `flwr.server.start_server` " +"([#553](https://github.com/adap/flower/pull/553) " +"[#540](https://github.com/adap/flower/issues/540))." msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:30 -msgid "Standard Hugging Face workflow" +#: ../../source/ref-changelog.md:969 +msgid "v0.12.0 (2020-12-07)" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:33 -msgid "Handling the data" +#: ../../source/ref-changelog.md:971 ../../source/ref-changelog.md:987 +msgid "Important changes:" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:35 +#: ../../source/ref-changelog.md:973 msgid "" -"To fetch the IMDB dataset, we will use Hugging Face's :code:`datasets` " -"library. We then need to tokenize the data and create :code:`PyTorch` " -"dataloaders, this is all done in the :code:`load_data` function:" -msgstr "" - -#: ../../source/tutorial-quickstart-huggingface.rst:81 -msgid "Training and testing the model" +"Added an example for embedded devices " +"([#507](https://github.com/adap/flower/pull/507))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:83 +#: ../../source/ref-changelog.md:974 msgid "" -"Once we have a way of creating our trainloader and testloader, we can " -"take care of the training and testing. This is very similar to any " -":code:`PyTorch` training or testing loop:" +"Added a new NumPyClient (in addition to the existing KerasClient) " +"([#504](https://github.com/adap/flower/pull/504) " +"[#508](https://github.com/adap/flower/pull/508))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:121 -msgid "Creating the model itself" -msgstr "" - -#: ../../source/tutorial-quickstart-huggingface.rst:123 +#: ../../source/ref-changelog.md:975 msgid "" -"To create the model itself, we will just load the pre-trained distillBERT" -" model using Hugging Face’s :code:`AutoModelForSequenceClassification` :" +"Deprecated `flwr_example` package and started to migrate examples into " +"the top-level `examples` directory " +"([#494](https://github.com/adap/flower/pull/494) " +"[#512](https://github.com/adap/flower/pull/512))" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:136 -msgid "Federating the example" +#: ../../source/ref-changelog.md:977 +msgid "v0.11.0 (2020-11-30)" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:139 -msgid "Creating the IMDBClient" +#: ../../source/ref-changelog.md:979 +msgid "Incompatible changes:" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:141 +#: ../../source/ref-changelog.md:981 msgid "" -"To federate our example to multiple clients, we first need to write our " -"Flower client class (inheriting from :code:`flwr.client.NumPyClient`). " -"This is very easy, as our model is a standard :code:`PyTorch` model:" +"Renamed strategy methods " +"([#486](https://github.com/adap/flower/pull/486)) to unify the naming of " +"Flower's public APIs. Other public methods/functions (e.g., every method " +"in `Client`, but also `Strategy.evaluate`) do not use the `on_` prefix, " +"which is why we're removing it from the four methods in Strategy. To " +"migrate rename the following `Strategy` methods accordingly:" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:169 -msgid "" -"The :code:`get_parameters` function lets the server get the client's " -"parameters. 
Inversely, the :code:`set_parameters` function allows the " -"server to send its parameters to the client. Finally, the :code:`fit` " -"function trains the model locally for the client, and the " -":code:`evaluate` function tests the model locally and returns the " -"relevant metrics." +#: ../../source/ref-changelog.md:982 +msgid "`on_configure_evaluate` => `configure_evaluate`" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:175 -msgid "Starting the server" +#: ../../source/ref-changelog.md:983 +msgid "`on_aggregate_evaluate` => `aggregate_evaluate`" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:177 -msgid "" -"Now that we have a way to instantiate clients, we need to create our " -"server in order to aggregate the results. Using Flower, this can be done " -"very easily by first choosing a strategy (here, we are using " -":code:`FedAvg`, which will define the global weights as the average of " -"all the clients' weights at each round) and then using the " -":code:`flwr.server.start_server` function:" +#: ../../source/ref-changelog.md:984 +msgid "`on_configure_fit` => `configure_fit`" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:205 -msgid "" -"The :code:`weighted_average` function is there to provide a way to " -"aggregate the metrics distributed amongst the clients (basically this " -"allows us to display a nice average accuracy and loss for every round)." +#: ../../source/ref-changelog.md:985 +msgid "`on_aggregate_fit` => `aggregate_fit`" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:209 -msgid "Putting everything together" +#: ../../source/ref-changelog.md:989 +msgid "" +"Deprecated `DefaultStrategy` " +"([#479](https://github.com/adap/flower/pull/479)). To migrate use " +"`FedAvg` instead." msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:211 -msgid "We can now start client instances using:" +#: ../../source/ref-changelog.md:990 +msgid "" +"Simplified examples and baselines " +"([#484](https://github.com/adap/flower/pull/484))." msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:221 +#: ../../source/ref-changelog.md:991 msgid "" -"And they will be able to connect to the server and start the federated " -"training." +"Removed presently unused `on_conclude_round` from strategy interface " +"([#483](https://github.com/adap/flower/pull/483))." msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:223 +#: ../../source/ref-changelog.md:992 msgid "" -"If you want to check out everything put together, you should check out " -"the full code example: [https://github.com/adap/flower/tree/main/examples" -"/quickstart-" -"huggingface](https://github.com/adap/flower/tree/main/examples" -"/quickstart-huggingface)." +"Set minimal Python version to 3.6.1 instead of 3.6.9 " +"([#471](https://github.com/adap/flower/pull/471))." msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:227 +#: ../../source/ref-changelog.md:993 msgid "" -"Of course, this is a very basic example, and a lot can be added or " -"modified, it was just to showcase how simply we could federate a Hugging " -"Face workflow using Flower." +"Improved `Strategy` docstrings " +"([#470](https://github.com/adap/flower/pull/470))." 
+msgstr "" + +#: ../../source/ref-example-projects.rst:2 +msgid "Example projects" msgstr "" -#: ../../source/tutorial-quickstart-huggingface.rst:230 +#: ../../source/ref-example-projects.rst:4 msgid "" -"Note that in this example we used :code:`PyTorch`, but we could have very" -" well used :code:`TensorFlow`." +"Flower comes with a number of usage examples. The examples demonstrate " +"how Flower can be used to federate different kinds of existing machine " +"learning pipelines, usually leveraging popular machine learning " +"frameworks such as `PyTorch `_ or `TensorFlow " +"`_." msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:-1 +#: ../../source/ref-example-projects.rst:11 msgid "" -"Read this Federated Learning quickstart tutorial for creating an iOS app " -"using Flower to train a neural network on MNIST." +"Flower usage examples used to be bundled with Flower in a package called " +"``flwr_example``. We are migrating those examples to standalone projects " +"to make them easier to use. All new examples are based in the directory " +"`examples `_." msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:5 -msgid "Quickstart iOS" +#: ../../source/ref-example-projects.rst:16 +msgid "The following examples are available as standalone projects." msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:10 -msgid "" -"In this tutorial we will learn how to train a Neural Network on MNIST " -"using Flower and CoreML on iOS devices." +#: ../../source/ref-example-projects.rst:20 +msgid "Quickstart TensorFlow/Keras" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:12 +#: ../../source/ref-example-projects.rst:22 msgid "" -"First of all, for running the Flower Python server, it is recommended to " -"create a virtual environment and run everything within a `virtualenv " -"`_. For the Flower " -"client implementation in iOS, it is recommended to use Xcode as our IDE." +"The TensorFlow/Keras quickstart example shows CIFAR-10 image " +"classification with MobileNetV2:" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:15 +#: ../../source/ref-example-projects.rst:25 msgid "" -"Our example consists of one Python *server* and two iPhone *clients* that" -" all have the same model." +"`Quickstart TensorFlow (Code) " +"`_" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:17 -msgid "" -"*Clients* are responsible for generating individual weight updates for " -"the model based on their local datasets. These updates are then sent to " -"the *server* which will aggregate them to produce a better model. " -"Finally, the *server* sends this improved version of the model back to " -"each *client*. A complete cycle of weight updates is called a *round*." +#: ../../source/ref-example-projects.rst:26 +msgid ":doc:`Quickstart TensorFlow (Tutorial) `" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:21 +#: ../../source/ref-example-projects.rst:27 msgid "" -"Now that we have a rough idea of what is going on, let's get started to " -"setup our Flower server environment. We first need to install Flower. 
You" -" can do this by using pip:" +"`Quickstart TensorFlow (Blog Post) `_" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:27 -msgid "Or Poetry:" +#: ../../source/ref-example-projects.rst:31 +#: ../../source/tutorial-quickstart-pytorch.rst:5 +msgid "Quickstart PyTorch" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:36 +#: ../../source/ref-example-projects.rst:33 msgid "" -"Now that we have all our dependencies installed, let's run a simple " -"distributed training using CoreML as our local training pipeline and " -"MNIST as our dataset. For simplicity reasons we will use the complete " -"Flower client with CoreML, that has been implemented and stored inside " -"the Swift SDK. The client implementation can be seen below:" +"The PyTorch quickstart example shows CIFAR-10 image classification with a" +" simple Convolutional Neural Network:" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:72 +#: ../../source/ref-example-projects.rst:36 msgid "" -"Let's create a new application project in Xcode and add :code:`flwr` as a" -" dependency in your project. For our application, we will store the logic" -" of our app in :code:`FLiOSModel.swift` and the UI elements in " -":code:`ContentView.swift`. We will focus more on :code:`FLiOSModel.swift`" -" in this quickstart. Please refer to the `full code example " -"`_ to learn more " -"about the app." +"`Quickstart PyTorch (Code) " +"`_" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:75 -msgid "Import Flower and CoreML related packages in :code:`FLiOSModel.swift`:" +#: ../../source/ref-example-projects.rst:37 +msgid ":doc:`Quickstart PyTorch (Tutorial) `" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:83 -msgid "" -"Then add the mlmodel to the project simply by drag-and-drop, the mlmodel " -"will be bundled inside the application during deployment to your iOS " -"device. We need to pass the url to access mlmodel and run CoreML machine " -"learning processes, it can be retrieved by calling the function " -":code:`Bundle.main.url`. For the MNIST dataset, we need to preprocess it " -"into :code:`MLBatchProvider` object. The preprocessing is done inside " -":code:`DataLoader.swift`." +#: ../../source/ref-example-projects.rst:41 +msgid "PyTorch: From Centralized To Federated" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:99 +#: ../../source/ref-example-projects.rst:43 msgid "" -"Since CoreML does not allow the model parameters to be seen before " -"training, and accessing the model parameters during or after the training" -" can only be done by specifying the layer name, we need to know this " -"informations beforehand, through looking at the model specification, " -"which are written as proto files. The implementation can be seen in " -":code:`MLModelInspect`." +"This example shows how a regular PyTorch project can be federated using " +"Flower:" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:102 +#: ../../source/ref-example-projects.rst:45 msgid "" -"After we have all of the necessary informations, let's create our Flower " -"client." +"`PyTorch: From Centralized To Federated (Code) " +"`_" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:117 +#: ../../source/ref-example-projects.rst:46 msgid "" -"Then start the Flower gRPC client and start communicating to the server " -"by passing our Flower client to the function :code:`startFlwrGRPC`." +":doc:`PyTorch: From Centralized To Federated (Tutorial) `" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:124 -msgid "" -"That's it for the client. 
We only have to implement :code:`Client` or " -"call the provided :code:`MLFlwrClient` and call :code:`startFlwrGRPC()`. " -"The attribute :code:`hostname` and :code:`port` tells the client which " -"server to connect to. This can be done by entering the hostname and port " -"in the application before clicking the start button to start the " -"federated learning process." +#: ../../source/ref-example-projects.rst:50 +msgid "Federated Learning on Raspberry Pi and Nvidia Jetson" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:131 -#: ../../source/tutorial-quickstart-mxnet.rst:228 -#: ../../source/tutorial-quickstart-pytorch.rst:205 -#: ../../source/tutorial-quickstart-tensorflow.rst:100 +#: ../../source/ref-example-projects.rst:52 msgid "" -"For simple workloads we can start a Flower server and leave all the " -"configuration possibilities at their default values. In a file named " -":code:`server.py`, import Flower and start the server:" +"This example shows how Flower can be used to build a federated learning " +"system that run across Raspberry Pi and Nvidia Jetson:" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:142 -#: ../../source/tutorial-quickstart-mxnet.rst:239 -#: ../../source/tutorial-quickstart-pytorch.rst:216 -#: ../../source/tutorial-quickstart-scikitlearn.rst:215 -#: ../../source/tutorial-quickstart-tensorflow.rst:112 -msgid "Train the model, federated!" +#: ../../source/ref-example-projects.rst:54 +msgid "" +"`Federated Learning on Raspberry Pi and Nvidia Jetson (Code) " +"`_" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:144 -#: ../../source/tutorial-quickstart-pytorch.rst:218 -#: ../../source/tutorial-quickstart-tensorflow.rst:114 -#: ../../source/tutorial-quickstart-xgboost.rst:525 +#: ../../source/ref-example-projects.rst:55 msgid "" -"With both client and server ready, we can now run everything and see " -"federated learning in action. FL systems usually have a server and " -"multiple clients. We therefore have to start the server first:" +"`Federated Learning on Raspberry Pi and Nvidia Jetson (Blog Post) " +"`_" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:152 -msgid "" -"Once the server is running we can start the clients in different " -"terminals. Build and run the client through your Xcode, one through Xcode" -" Simulator and the other by deploying it to your iPhone. To see more " -"about how to deploy your app to iPhone or Simulator visit `here " -"`_." +#: ../../source/ref-example-projects.rst:60 +msgid "Legacy Examples (`flwr_example`)" msgstr "" -#: ../../source/tutorial-quickstart-ios.rst:156 +#: ../../source/ref-example-projects.rst:63 msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system in your ios device. The full `source code " -"`_ for this " -"example can be found in :code:`examples/ios`." +"The usage examples in `flwr_example` are deprecated and will be removed " +"in the future. New examples are provided as standalone projects in " +"`examples `_." msgstr "" -#: ../../source/tutorial-quickstart-jax.rst:-1 +#: ../../source/ref-example-projects.rst:69 +msgid "Extra Dependencies" +msgstr "" + +#: ../../source/ref-example-projects.rst:71 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with Jax to train a linear regression model on a scikit-learn dataset." +"The core Flower framework keeps a minimal set of dependencies. 
The " +"examples demonstrate Flower in the context of different machine learning " +"frameworks, so additional dependencies need to be installed before an " +"example can be run." msgstr "" -#: ../../source/tutorial-quickstart-jax.rst:5 -msgid "Quickstart JAX" +#: ../../source/ref-example-projects.rst:75 +msgid "For PyTorch examples::" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:-1 -msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with MXNet to train a Sequential model on MNIST." +#: ../../source/ref-example-projects.rst:79 +msgid "For TensorFlow examples::" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:5 -msgid "Quickstart MXNet" +#: ../../source/ref-example-projects.rst:83 +msgid "For both PyTorch and TensorFlow examples::" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:7 +#: ../../source/ref-example-projects.rst:87 msgid "" -"MXNet is no longer maintained and has been moved into `Attic " -"`_. As a result, we would " -"encourage you to use other ML frameworks alongise Flower, for example, " -"PyTorch. This tutorial might be removed in future versions of Flower." +"Please consult :code:`pyproject.toml` for a full list of possible extras " +"(section :code:`[tool.poetry.extras]`)." msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:12 -msgid "" -"In this tutorial, we will learn how to train a :code:`Sequential` model " -"on MNIST using Flower and MXNet." +#: ../../source/ref-example-projects.rst:92 +msgid "PyTorch Examples" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:14 -#: ../../source/tutorial-quickstart-scikitlearn.rst:12 +#: ../../source/ref-example-projects.rst:94 msgid "" -"It is recommended to create a virtual environment and run everything " -"within this `virtualenv `_." +"Our PyTorch examples are based on PyTorch 1.7. They should work with " +"other releases as well. So far, we provide the following examples." msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:18 -#: ../../source/tutorial-quickstart-scikitlearn.rst:16 -msgid "" -"*Clients* are responsible for generating individual model parameter " -"updates for the model based on their local datasets. These updates are " -"then sent to the *server* which will aggregate them to produce an updated" -" global model. Finally, the *server* sends this improved version of the " -"model back to each *client*. A complete cycle of parameters updates is " -"called a *round*." +#: ../../source/ref-example-projects.rst:98 +msgid "CIFAR-10 Image Classification" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:22 -#: ../../source/tutorial-quickstart-scikitlearn.rst:20 +#: ../../source/ref-example-projects.rst:100 msgid "" -"Now that we have a rough idea of what is going on, let's get started. We " -"first need to install Flower. You can do this by running:" +"`CIFAR-10 and CIFAR-100 `_ " +"are popular RGB image datasets. The Flower CIFAR-10 example uses PyTorch " +"to train a simple CNN classifier in a federated learning setup with two " +"clients." 
msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:28 -msgid "Since we want to use MXNet, let's go ahead and install it:" +#: ../../source/ref-example-projects.rst:104 +#: ../../source/ref-example-projects.rst:121 +#: ../../source/ref-example-projects.rst:146 +msgid "First, start a Flower server:" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:38 -msgid "" -"Now that we have all our dependencies installed, let's run a simple " -"distributed training with two clients and one server. Our training " -"procedure and network architecture are based on MXNet´s `Hand-written " -"Digit Recognition tutorial " -"`_." +#: ../../source/ref-example-projects.rst:106 +msgid "$ ./src/py/flwr_example/pytorch_cifar/run-server.sh" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:40 -msgid "" -"In a file called :code:`client.py`, import Flower and MXNet related " -"packages:" +#: ../../source/ref-example-projects.rst:108 +#: ../../source/ref-example-projects.rst:125 +#: ../../source/ref-example-projects.rst:150 +msgid "Then, start the two clients in a new terminal window:" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:55 -msgid "In addition, define the device allocation in MXNet with:" +#: ../../source/ref-example-projects.rst:110 +msgid "$ ./src/py/flwr_example/pytorch_cifar/run-clients.sh" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:61 -msgid "" -"We use MXNet to load MNIST, a popular image classification dataset of " -"handwritten digits for machine learning. The MXNet utility " -":code:`mx.test_utils.get_mnist()` downloads the training and test data." +#: ../../source/ref-example-projects.rst:112 +msgid "For more details, see :code:`src/py/flwr_example/pytorch_cifar`." msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:75 -msgid "" -"Define the training and loss with MXNet. We train the model by looping " -"over the dataset, measure the corresponding loss, and optimize it." +#: ../../source/ref-example-projects.rst:115 +msgid "ImageNet-2012 Image Classification" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:113 +#: ../../source/ref-example-projects.rst:117 msgid "" -"Next, we define the validation of our machine learning model. We loop " -"over the test set and measure both loss and accuracy on the test set." +"`ImageNet-2012 `_ is one of the major " +"computer vision datasets. The Flower ImageNet example uses PyTorch to " +"train a ResNet-18 classifier in a federated learning setup with ten " +"clients." msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:137 -msgid "" -"After defining the training and testing of a MXNet machine learning " -"model, we use these functions to implement a Flower client." +#: ../../source/ref-example-projects.rst:123 +msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-server.sh" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:139 -msgid "Our Flower clients will use a simple :code:`Sequential` model:" +#: ../../source/ref-example-projects.rst:127 +msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-clients.sh" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:158 -msgid "" -"After loading the dataset with :code:`load_data()` we perform one forward" -" propagation to initialize the model and model parameters with " -":code:`model(init)`. Next, we implement a Flower client." +#: ../../source/ref-example-projects.rst:129 +msgid "For more details, see :code:`src/py/flwr_example/pytorch_imagenet`." 
msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:160 -#: ../../source/tutorial-quickstart-pytorch.rst:144 -#: ../../source/tutorial-quickstart-tensorflow.rst:54 -msgid "" -"The Flower server interacts with clients through an interface called " -":code:`Client`. When the server selects a particular client for training," -" it sends training instructions over the network. The client receives " -"those instructions and calls one of the :code:`Client` methods to run " -"your code (i.e., to train the neural network we defined earlier)." +#: ../../source/ref-example-projects.rst:133 +msgid "TensorFlow Examples" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:166 +#: ../../source/ref-example-projects.rst:135 msgid "" -"Flower provides a convenience class called :code:`NumPyClient` which " -"makes it easier to implement the :code:`Client` interface when your " -"workload uses MXNet. Implementing :code:`NumPyClient` usually means " -"defining the following methods (:code:`set_parameters` is optional " -"though):" -msgstr "" - -#: ../../source/tutorial-quickstart-mxnet.rst:172 -#: ../../source/tutorial-quickstart-pytorch.rst:156 -#: ../../source/tutorial-quickstart-scikitlearn.rst:109 -msgid "return the model weight as a list of NumPy ndarrays" +"Our TensorFlow examples are based on TensorFlow 2.0 or newer. So far, we " +"provide the following examples." msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:173 -#: ../../source/tutorial-quickstart-pytorch.rst:157 -#: ../../source/tutorial-quickstart-scikitlearn.rst:111 -msgid ":code:`set_parameters` (optional)" +#: ../../source/ref-example-projects.rst:139 +msgid "Fashion-MNIST Image Classification" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:174 -#: ../../source/tutorial-quickstart-pytorch.rst:158 -#: ../../source/tutorial-quickstart-scikitlearn.rst:111 +#: ../../source/ref-example-projects.rst:141 msgid "" -"update the local model weights with the parameters received from the " -"server" +"`Fashion-MNIST `_ is " +"often used as the \"Hello, world!\" of machine learning. We follow this " +"tradition and provide an example which samples random local datasets from" +" Fashion-MNIST and trains a simple image classification model over those " +"partitions." msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:176 -#: ../../source/tutorial-quickstart-pytorch.rst:160 -#: ../../source/tutorial-quickstart-scikitlearn.rst:114 -msgid "set the local model weights" +#: ../../source/ref-example-projects.rst:148 +msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-server.sh" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:177 -#: ../../source/tutorial-quickstart-pytorch.rst:161 -#: ../../source/tutorial-quickstart-scikitlearn.rst:115 -msgid "train the local model" +#: ../../source/ref-example-projects.rst:152 +msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-clients.sh" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:178 -#: ../../source/tutorial-quickstart-pytorch.rst:162 -#: ../../source/tutorial-quickstart-scikitlearn.rst:116 -msgid "receive the updated local model weights" +#: ../../source/ref-example-projects.rst:154 +msgid "" +"For more details, see " +":code:`src/py/flwr_example/tensorflow_fashion_mnist`." 
msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:180 -#: ../../source/tutorial-quickstart-pytorch.rst:164 -#: ../../source/tutorial-quickstart-scikitlearn.rst:118 -msgid "test the local model" +#: ../../source/ref-faq.rst:4 +msgid "" +"This page collects answers to commonly asked questions about Federated " +"Learning with Flower." msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:182 -msgid "They can be implemented in the following way:" +#: ../../source/ref-faq.rst +msgid ":fa:`eye,mr-1` Can Flower run on Jupyter Notebooks / Google Colab?" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:212 +#: ../../source/ref-faq.rst:8 msgid "" -"We can now create an instance of our class :code:`MNISTClient` and add " -"one line to actually run this client:" +"Yes, it can! Flower even comes with a few under-the-hood optimizations to" +" make it work even better on Colab. Here's a quickstart example:" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:219 +#: ../../source/ref-faq.rst:10 msgid "" -"That's it for the client. We only have to implement :code:`Client` or " -":code:`NumPyClient` and call :code:`fl.client.start_client()` or " -":code:`fl.client.start_numpy_client()`. The string " -":code:`\"0.0.0.0:8080\"` tells the client which server to connect to. In " -"our case we can run the server and the client on the same machine, " -"therefore we use :code:`\"0.0.0.0:8080\"`. If we run a truly federated " -"workload with the server and clients running on different machines, all " -"that needs to change is the :code:`server_address` we pass to the client." +"`Flower simulation PyTorch " +"`_" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:241 +#: ../../source/ref-faq.rst:11 msgid "" -"With both client and server ready, we can now run everything and see " -"federated learning in action. Federated learning systems usually have a " -"server and multiple clients. We therefore have to start the server first:" +"`Flower simulation TensorFlow/Keras " +"`_" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:249 -#: ../../source/tutorial-quickstart-pytorch.rst:226 -#: ../../source/tutorial-quickstart-scikitlearn.rst:224 -#: ../../source/tutorial-quickstart-tensorflow.rst:122 -#: ../../source/tutorial-quickstart-xgboost.rst:533 +#: ../../source/ref-faq.rst +msgid ":fa:`eye,mr-1` How can I run Federated Learning on a Raspberry Pi?" +msgstr "" + +#: ../../source/ref-faq.rst:15 msgid "" -"Once the server is running we can start the clients in different " -"terminals. Open a new terminal and start the first client:" +"Find the `blog post about federated learning on embedded device here " +"`_" +" and the corresponding `GitHub code example " +"`_." msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:256 -#: ../../source/tutorial-quickstart-pytorch.rst:233 -#: ../../source/tutorial-quickstart-scikitlearn.rst:231 -#: ../../source/tutorial-quickstart-tensorflow.rst:129 -#: ../../source/tutorial-quickstart-xgboost.rst:540 -msgid "Open another terminal and start the second client:" +#: ../../source/ref-faq.rst +msgid ":fa:`eye,mr-1` Does Flower support federated learning on Android devices?" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:262 -#: ../../source/tutorial-quickstart-pytorch.rst:239 -#: ../../source/tutorial-quickstart-scikitlearn.rst:237 -#: ../../source/tutorial-quickstart-xgboost.rst:546 +#: ../../source/ref-faq.rst:19 msgid "" -"Each client will have its own dataset. 
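The Colab/Jupyter FAQ entry above points to the simulation examples. In rough terms they rely on ``flwr.simulation.start_simulation``, which runs several virtual clients in a single process. The sketch below assumes the ``flwr[simulation]`` extra is installed and uses a trivial placeholder client instead of the code from the linked notebooks.

.. code-block:: python

    import numpy as np
    import flwr as fl


    class TinyClient(fl.client.NumPyClient):
        """Minimal stand-in client so the simulation has something to run."""

        def get_parameters(self, config):
            return [np.zeros(3, dtype=np.float32)]

        def fit(self, parameters, config):
            return parameters, 1, {}

        def evaluate(self, parameters, config):
            return 0.0, 1, {}


    def client_fn(cid: str):
        return TinyClient().to_client()


    # Requires the simulation extra: pip install "flwr[simulation]"
    fl.simulation.start_simulation(
        client_fn=client_fn,
        num_clients=2,
        config=fl.server.ServerConfig(num_rounds=3),
    )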
You should now see how the " -"training does in the very first terminal (the one that started the " -"server):" +"Yes, it does. Please take a look at our `blog post " +"`_ or check out the code examples:" msgstr "" -#: ../../source/tutorial-quickstart-mxnet.rst:294 +#: ../../source/ref-faq.rst:21 msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system. The full `source code " -"`_ for this example can be found in :code:`examples" -"/quickstart-mxnet`." +"`Android Kotlin example `_" msgstr "" -#: ../../source/tutorial-quickstart-pandas.rst:-1 -msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with Pandas to perform Federated Analytics." +#: ../../source/ref-faq.rst:22 +msgid "`Android Java example `_" msgstr "" -#: ../../source/tutorial-quickstart-pandas.rst:5 -msgid "Quickstart Pandas" +#: ../../source/ref-faq.rst +msgid ":fa:`eye,mr-1` Can I combine federated learning with blockchain?" msgstr "" -#: ../../source/tutorial-quickstart-pandas.rst:10 -msgid "Let's build a federated analytics system using Pandas and Flower!" +#: ../../source/ref-faq.rst:26 +msgid "" +"Yes, of course. A list of available examples using Flower within a " +"blockchain environment is available here:" msgstr "" -#: ../../source/tutorial-quickstart-pandas.rst:12 +#: ../../source/ref-faq.rst:28 msgid "" -"Please refer to the `full code example " -"`_ " -"to learn more." +"`Flower meets Nevermined GitHub Repository `_." msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:-1 +#: ../../source/ref-faq.rst:29 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with PyTorch to train a CNN model on MNIST." +"`Flower meets Nevermined YouTube video " +"`_." msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:13 +#: ../../source/ref-faq.rst:30 msgid "" -"In this tutorial we will learn how to train a Convolutional Neural " -"Network on CIFAR10 using Flower and PyTorch." +"`Flower meets KOSMoS `_." msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:15 -#: ../../source/tutorial-quickstart-xgboost.rst:39 +#: ../../source/ref-faq.rst:31 msgid "" -"First of all, it is recommended to create a virtual environment and run " -"everything within a `virtualenv `_." +"`Flower meets Talan blog post `_ ." msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:29 +#: ../../source/ref-faq.rst:32 msgid "" -"Since we want to use PyTorch to solve a computer vision task, let's go " -"ahead and install PyTorch and the **torchvision** library:" +"`Flower meets Talan GitHub Repository " +"`_ ." msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:39 +#: ../../source/ref-telemetry.md:1 +msgid "Telemetry" +msgstr "" + +#: ../../source/ref-telemetry.md:3 msgid "" -"Now that we have all our dependencies installed, let's run a simple " -"distributed training with two clients and one server. Our training " -"procedure and network architecture are based on PyTorch's `Deep Learning " -"with PyTorch " -"`_." +"The Flower open-source project collects **anonymous** usage metrics to " +"make well-informed decisions to improve Flower. Doing this enables the " +"Flower team to understand how Flower is used and what challenges users " +"might face." 
msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:41 +#: ../../source/ref-telemetry.md:5 msgid "" -"In a file called :code:`client.py`, import Flower and PyTorch related " -"packages:" +"**Flower is a friendly framework for collaborative AI and data science.**" +" Staying true to this statement, Flower makes it easy to disable " +"telemetry for users that do not want to share anonymous usage metrics." msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:56 -msgid "In addition, we define the device allocation in PyTorch with:" +#: ../../source/ref-telemetry.md:7 +msgid "Principles" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:62 -msgid "" -"We use PyTorch to load CIFAR10, a popular colored image classification " -"dataset for machine learning. The PyTorch :code:`DataLoader()` downloads " -"the training and test data that are then normalized." +#: ../../source/ref-telemetry.md:9 +msgid "We follow strong principles guarding anonymous usage metrics collection:" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:78 +#: ../../source/ref-telemetry.md:11 msgid "" -"Define the loss and optimizer with PyTorch. The training of the dataset " -"is done by looping over the dataset, measure the corresponding loss and " -"optimize it." +"**Optional:** You will always be able to disable telemetry; read on to " +"learn “[How to opt-out](#how-to-opt-out)”." msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:94 +#: ../../source/ref-telemetry.md:12 msgid "" -"Define then the validation of the machine learning network. We loop over" -" the test set and measure the loss and accuracy of the test set." +"**Anonymous:** The reported usage metrics are anonymous and do not " +"contain any personally identifiable information (PII). See “[Collected " +"metrics](#collected-metrics)” to understand what metrics are being " +"reported." msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:113 +#: ../../source/ref-telemetry.md:13 msgid "" -"After defining the training and testing of a PyTorch machine learning " -"model, we use the functions for the Flower clients." +"**Transparent:** You can easily inspect what anonymous metrics are being " +"reported; see the section “[How to inspect what is being reported](#how-" +"to-inspect-what-is-being-reported)”" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:115 +#: ../../source/ref-telemetry.md:14 msgid "" -"The Flower clients will use a simple CNN adapted from 'PyTorch: A 60 " -"Minute Blitz':" +"**Open for feedback:** You can always reach out to us if you have " +"feedback; see the section “[How to contact us](#how-to-contact-us)” for " +"details." msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:142 +#: ../../source/ref-telemetry.md:16 +msgid "How to opt-out" +msgstr "" + +#: ../../source/ref-telemetry.md:18 msgid "" -"After loading the data set with :code:`load_data()` we define the Flower " -"interface." +"When Flower starts, it will check for an environment variable called " +"`FLWR_TELEMETRY_ENABLED`. Telemetry can easily be disabled by setting " +"`FLWR_TELEMETRY_ENABLED=0`. Assuming you are starting a Flower server or " +"client, simply do so by prepending your command as in:" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:150 +#: ../../source/ref-telemetry.md:24 msgid "" -"Flower provides a convenience class called :code:`NumPyClient` which " -"makes it easier to implement the :code:`Client` interface when your " -"workload uses PyTorch. 
Implementing :code:`NumPyClient` usually means " -"defining the following methods (:code:`set_parameters` is optional " -"though):" +"Alternatively, you can export `FLWR_TELEMETRY_ENABLED=0` in, for example," +" `.bashrc` (or whatever configuration file applies to your environment) " +"to disable Flower telemetry permanently." msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:166 -msgid "which can be implemented in the following way:" +#: ../../source/ref-telemetry.md:26 +msgid "Collected metrics" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:189 -#: ../../source/tutorial-quickstart-tensorflow.rst:82 -msgid "" -"We can now create an instance of our class :code:`CifarClient` and add " -"one line to actually run this client:" +#: ../../source/ref-telemetry.md:28 +msgid "Flower telemetry collects the following metrics:" msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:196 -#: ../../source/tutorial-quickstart-tensorflow.rst:90 +#: ../../source/ref-telemetry.md:30 msgid "" -"That's it for the client. We only have to implement :code:`Client` or " -":code:`NumPyClient` and call :code:`fl.client.start_client()`. If you " -"implement a client of type :code:`NumPyClient` you'll need to first call " -"its :code:`to_client()` method. The string :code:`\"[::]:8080\"` tells " -"the client which server to connect to. In our case we can run the server " -"and the client on the same machine, therefore we use " -":code:`\"[::]:8080\"`. If we run a truly federated workload with the " -"server and clients running on different machines, all that needs to " -"change is the :code:`server_address` we point the client at." +"**Flower version.** Understand which versions of Flower are currently " +"being used. This helps us to decide whether we should invest effort into " +"releasing a patch version for an older version of Flower or instead use " +"the bandwidth to build new features." msgstr "" -#: ../../source/tutorial-quickstart-pytorch.rst:271 +#: ../../source/ref-telemetry.md:32 msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system. The full `source code " -"`_ for this example can be found in :code:`examples" -"/quickstart-pytorch`." +"**Operating system.** Enables us to answer questions such as: *Should we " +"create more guides for Linux, macOS, or Windows?*" msgstr "" -#: ../../source/tutorial-quickstart-pytorch-lightning.rst:-1 +#: ../../source/ref-telemetry.md:34 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with PyTorch Lightning to train an Auto Encoder model on MNIST." +"**Python version.** Knowing the Python version helps us, for example, to " +"decide whether we should invest effort into supporting old versions of " +"Python or stop supporting them and start taking advantage of new Python " +"features." msgstr "" -#: ../../source/tutorial-quickstart-pytorch-lightning.rst:5 -msgid "Quickstart PyTorch Lightning" +#: ../../source/ref-telemetry.md:36 +msgid "" +"**Hardware properties.** Understanding the hardware environment that " +"Flower is being used in helps to decide whether we should, for example, " +"put more effort into supporting low-resource environments." msgstr "" -#: ../../source/tutorial-quickstart-pytorch-lightning.rst:10 +#: ../../source/ref-telemetry.md:38 msgid "" -"Let's build a horizontal federated learning system using PyTorch " -"Lightning and Flower!" 
+"**Execution mode.** Knowing what execution mode Flower starts in enables " +"us to understand how heavily certain features are being used and better " +"prioritize based on that." msgstr "" -#: ../../source/tutorial-quickstart-pytorch-lightning.rst:12 +#: ../../source/ref-telemetry.md:40 msgid "" -"Please refer to the `full code example " -"`_ to learn more." +"**Cluster.** Flower telemetry assigns a random in-memory cluster ID each " +"time a Flower workload starts. This allows us to understand which device " +"types not only start Flower workloads but also successfully complete " +"them." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:-1 +#: ../../source/ref-telemetry.md:42 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with scikit-learn to train a linear regression model." +"**Source.** Flower telemetry tries to store a random source ID in " +"`~/.flwr/source` the first time a telemetry event is generated. The " +"source ID is important to identify whether an issue is recurring or " +"whether an issue is triggered by multiple clusters running concurrently " +"(which often happens in simulation). For example, if a device runs " +"multiple workloads at the same time, and this results in an issue, then, " +"in order to reproduce the issue, multiple workloads must be started at " +"the same time." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:5 -msgid "Quickstart scikit-learn" +#: ../../source/ref-telemetry.md:44 +msgid "" +"You may delete the source ID at any time. If you wish for all events " +"logged under a specific source ID to be deleted, you can send a deletion " +"request mentioning the source ID to `telemetry@flower.ai`. All events " +"related to that source ID will then be permanently deleted." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:10 +#: ../../source/ref-telemetry.md:46 msgid "" -"In this tutorial, we will learn how to train a :code:`Logistic " -"Regression` model on MNIST using Flower and scikit-learn." +"We will not collect any personally identifiable information. If you think" +" any of the metrics collected could be misused in any way, please [get in" +" touch with us](#how-to-contact-us). We will update this page to reflect " +"any changes to the metrics collected and publish changes in the " +"changelog." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:26 -msgid "Since we want to use scikt-learn, let's go ahead and install it:" +#: ../../source/ref-telemetry.md:48 +msgid "" +"If you think other metrics would be helpful for us to better guide our " +"decisions, please let us know! We will carefully review them; if we are " +"confident that they do not compromise user privacy, we may add them." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:32 -msgid "Or simply install all dependencies using Poetry:" +#: ../../source/ref-telemetry.md:50 +msgid "How to inspect what is being reported" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:42 +#: ../../source/ref-telemetry.md:52 msgid "" -"Now that we have all our dependencies installed, let's run a simple " -"distributed training with two clients and one server. However, before " -"setting up the client and server, we will define all functionalities that" -" we need for our federated learning setup within :code:`utils.py`. 
The " -":code:`utils.py` contains different functions defining all the machine " -"learning basics:" +"We wanted to make it very easy for you to inspect what anonymous usage " +"metrics are reported. You can view all the reported telemetry information" +" by setting the environment variable `FLWR_TELEMETRY_LOGGING=1`. Logging " +"is disabled by default. You may use logging independently from " +"`FLWR_TELEMETRY_ENABLED` so that you can inspect the telemetry feature " +"without sending any metrics." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:45 -msgid ":code:`get_model_parameters()`" +#: ../../source/ref-telemetry.md:58 +msgid "" +"The inspect Flower telemetry without sending any anonymous usage metrics," +" use both environment variables:" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:46 -msgid "Returns the parameters of a :code:`sklearn` LogisticRegression model" +#: ../../source/ref-telemetry.md:64 +msgid "How to contact us" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:47 -msgid ":code:`set_model_params()`" +#: ../../source/ref-telemetry.md:66 +msgid "" +"We want to hear from you. If you have any feedback or ideas on how to " +"improve the way we handle anonymous usage metrics, reach out to us via " +"[Slack](https://flower.ai/join-slack/) (channel `#telemetry`) or email " +"(`telemetry@flower.ai`)." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:48 -msgid "Sets the parameters of a :code:`sklean` LogisticRegression model" +#: ../../source/tutorial-quickstart-android.rst:-1 +msgid "" +"Read this Federated Learning quickstart tutorial for creating an Android " +"app using Flower." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:49 -msgid ":code:`set_initial_params()`" +#: ../../source/tutorial-quickstart-android.rst:5 +msgid "Quickstart Android" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:50 -msgid "Initializes the model parameters that the Flower server will ask for" +#: ../../source/tutorial-quickstart-android.rst:10 +msgid "" +"Let's build a federated learning system using TFLite and Flower on " +"Android!" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:51 -msgid ":code:`load_mnist()`" +#: ../../source/tutorial-quickstart-android.rst:12 +msgid "" +"Please refer to the `full code example " +"`_ to learn " +"more." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:52 -msgid "Loads the MNIST dataset using OpenML" +#: ../../source/tutorial-quickstart-fastai.rst:-1 +msgid "" +"Check out this Federated Learning quickstart tutorial for using Flower " +"with FastAI to train a vision model on CIFAR-10." msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:53 -msgid ":code:`shuffle()`" +#: ../../source/tutorial-quickstart-fastai.rst:5 +msgid "Quickstart fastai" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:54 -msgid "Shuffles data and its label" +#: ../../source/tutorial-quickstart-fastai.rst:10 +msgid "Let's build a federated learning system using fastai and Flower!" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:56 -msgid ":code:`partition()`" +#: ../../source/tutorial-quickstart-fastai.rst:12 +msgid "" +"Please refer to the `full code example " +"`_ " +"to learn more." 
msgstr ""

#: ../../source/tutorial-quickstart-huggingface.rst:-1
msgid ""
"Check out this Federated Learning quickstart tutorial for using Flower "
"with HuggingFace Transformers in order to fine-tune an LLM."
msgstr ""

#: ../../source/tutorial-quickstart-huggingface.rst:5
msgid "Quickstart 🤗 Transformers"
msgstr ""

#: ../../source/tutorial-quickstart-huggingface.rst:10
msgid ""
"Let's build a federated learning system using Hugging Face Transformers "
"and Flower!"
msgstr ""

#: ../../source/tutorial-quickstart-huggingface.rst:12
msgid ""
"We will leverage Hugging Face to federate the training of language models"
" over multiple clients using Flower. More specifically, we will fine-tune"
" a pre-trained Transformer model (distilBERT) for sequence classification"
" over a dataset of IMDB ratings. The end goal is to detect if a movie "
"rating is positive or negative."
msgstr ""

#: ../../source/tutorial-quickstart-huggingface.rst:18
msgid "Dependencies"
msgstr ""

#: ../../source/tutorial-quickstart-huggingface.rst:20
msgid ""
"To follow along this tutorial you will need to install the following "
"packages: :code:`datasets`, :code:`evaluate`, :code:`flwr`, "
":code:`torch`, and :code:`transformers`. 
This can be done using " +":code:`pip`:" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:112 -msgid "is directly imported with :code:`utils.set_model_params()`" +#: ../../source/tutorial-quickstart-huggingface.rst:30 +msgid "Standard Hugging Face workflow" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:120 -msgid "The methods can be implemented in the following way:" +#: ../../source/tutorial-quickstart-huggingface.rst:33 +msgid "Handling the data" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:143 +#: ../../source/tutorial-quickstart-huggingface.rst:35 msgid "" -"We can now create an instance of our class :code:`MnistClient` and add " -"one line to actually run this client:" +"To fetch the IMDB dataset, we will use Hugging Face's :code:`datasets` " +"library. We then need to tokenize the data and create :code:`PyTorch` " +"dataloaders, this is all done in the :code:`load_data` function:" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:150 -msgid "" -"That's it for the client. We only have to implement :code:`Client` or " -":code:`NumPyClient` and call :code:`fl.client.start_client()`. If you " -"implement a client of type :code:`NumPyClient` you'll need to first call " -"its :code:`to_client()` method. The string :code:`\"0.0.0.0:8080\"` tells" -" the client which server to connect to. In our case we can run the server" -" and the client on the same machine, therefore we use " -":code:`\"0.0.0.0:8080\"`. If we run a truly federated workload with the " -"server and clients running on different machines, all that needs to " -"change is the :code:`server_address` we pass to the client." +#: ../../source/tutorial-quickstart-huggingface.rst:81 +msgid "Training and testing the model" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:159 +#: ../../source/tutorial-quickstart-huggingface.rst:83 msgid "" -"The following Flower server is a little bit more advanced and returns an " -"evaluation function for the server-side evaluation. First, we import " -"again all required libraries such as Flower and scikit-learn." +"Once we have a way of creating our trainloader and testloader, we can " +"take care of the training and testing. This is very similar to any " +":code:`PyTorch` training or testing loop:" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:162 -msgid ":code:`server.py`, import Flower and start the server:" +#: ../../source/tutorial-quickstart-huggingface.rst:121 +msgid "Creating the model itself" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:173 +#: ../../source/tutorial-quickstart-huggingface.rst:123 msgid "" -"The number of federated learning rounds is set in :code:`fit_round()` and" -" the evaluation is defined in :code:`get_evaluate_fn()`. The evaluation " -"function is called after each federated learning round and gives you " -"information about loss and accuracy." +"To create the model itself, we will just load the pre-trained distillBERT" +" model using Hugging Face’s :code:`AutoModelForSequenceClassification` :" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:198 -msgid "" -"The :code:`main` contains the server-side parameter initialization " -":code:`utils.set_initial_params()` as well as the aggregation strategy " -":code:`fl.server.strategy:FedAvg()`. The strategy is the default one, " -"federated averaging (or FedAvg), with two clients and evaluation after " -"each federated learning round. 
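A rough sketch of the client class described in the Hugging Face walkthrough above, assuming the ``distilbert-base-uncased`` checkpoint; the tutorial's actual ``train``/``test`` loops and dataloaders are left as placeholders, and the hard-coded example counts stand in for the real dataset sizes.

.. code-block:: python

    from collections import OrderedDict

    import torch
    import flwr as fl
    from transformers import AutoModelForSequenceClassification

    CHECKPOINT = "distilbert-base-uncased"  # assumption: the tutorial's distilBERT checkpoint
    net = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)


    def train(net, epochs=1):
        """Placeholder for the tutorial's local training loop."""


    def test(net):
        """Placeholder for the tutorial's evaluation loop."""
        return 0.0, 0.5  # loss, accuracy


    class IMDBClient(fl.client.NumPyClient):
        def get_parameters(self, config):
            return [val.cpu().numpy() for _, val in net.state_dict().items()]

        def set_parameters(self, parameters):
            params_dict = zip(net.state_dict().keys(), parameters)
            state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})
            net.load_state_dict(state_dict, strict=True)

        def fit(self, parameters, config):
            self.set_parameters(parameters)
            train(net, epochs=1)
            # 1 is a placeholder; the real example reports len(trainloader.dataset)
            return self.get_parameters(config={}), 1, {}

        def evaluate(self, parameters, config):
            self.set_parameters(parameters)
            loss, accuracy = test(net)
            return float(loss), 1, {"accuracy": float(accuracy)}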
The server can be started with the command" -" :code:`fl.server.start_server(server_address=\"0.0.0.0:8080\", " -"strategy=strategy, config=fl.server.ServerConfig(num_rounds=3))`." +#: ../../source/tutorial-quickstart-huggingface.rst:136 +msgid "Federating the example" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:217 -msgid "" -"With both client and server ready, we can now run everything and see " -"federated learning in action. Federated learning systems usually have a " -"server and multiple clients. We, therefore, have to start the server " -"first:" +#: ../../source/tutorial-quickstart-huggingface.rst:139 +msgid "Creating the IMDBClient" msgstr "" -#: ../../source/tutorial-quickstart-scikitlearn.rst:271 +#: ../../source/tutorial-quickstart-huggingface.rst:141 msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system. The full `source code " -"`_ for this example can be found in :code:`examples/sklearn-logreg-" -"mnist`." +"To federate our example to multiple clients, we first need to write our " +"Flower client class (inheriting from :code:`flwr.client.NumPyClient`). " +"This is very easy, as our model is a standard :code:`PyTorch` model:" msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:-1 +#: ../../source/tutorial-quickstart-huggingface.rst:169 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with TensorFlow to train a MobilNetV2 model on CIFAR-10." +"The :code:`get_parameters` function lets the server get the client's " +"parameters. Inversely, the :code:`set_parameters` function allows the " +"server to send its parameters to the client. Finally, the :code:`fit` " +"function trains the model locally for the client, and the " +":code:`evaluate` function tests the model locally and returns the " +"relevant metrics." msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:5 -msgid "Quickstart TensorFlow" +#: ../../source/tutorial-quickstart-huggingface.rst:175 +msgid "Starting the server" msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:13 -msgid "Let's build a federated learning system in less than 20 lines of code!" +#: ../../source/tutorial-quickstart-huggingface.rst:177 +msgid "" +"Now that we have a way to instantiate clients, we need to create our " +"server in order to aggregate the results. Using Flower, this can be done " +"very easily by first choosing a strategy (here, we are using " +":code:`FedAvg`, which will define the global weights as the average of " +"all the clients' weights at each round) and then using the " +":code:`flwr.server.start_server` function:" msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:15 -msgid "Before Flower can be imported we have to install it:" +#: ../../source/tutorial-quickstart-huggingface.rst:205 +msgid "" +"The :code:`weighted_average` function is there to provide a way to " +"aggregate the metrics distributed amongst the clients (basically this " +"allows us to display a nice average accuracy and loss for every round)." 
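One plausible wiring of the ``weighted_average`` metrics aggregation described above, passed to ``FedAvg`` before starting the server. Whether the full example uses exactly this keyword argument (``evaluate_metrics_aggregation_fn``) should be checked against the linked code, so treat this as a sketch.

.. code-block:: python

    import flwr as fl


    def weighted_average(metrics):
        # metrics: list of (num_examples, {"accuracy": ...}) tuples, one per client
        accuracies = [num_examples * m["accuracy"] for num_examples, m in metrics]
        examples = [num_examples for num_examples, _ in metrics]
        return {"accuracy": sum(accuracies) / sum(examples)}


    strategy = fl.server.strategy.FedAvg(
        evaluate_metrics_aggregation_fn=weighted_average,
    )

    fl.server.start_server(
        server_address="0.0.0.0:8080",
        config=fl.server.ServerConfig(num_rounds=3),
        strategy=strategy,
    )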
msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:21 -msgid "" -"Since we want to use the Keras API of TensorFlow (TF), we have to install" -" TF as well:" +#: ../../source/tutorial-quickstart-huggingface.rst:209 +msgid "Putting everything together" msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:31 -msgid "Next, in a file called :code:`client.py`, import Flower and TensorFlow:" +#: ../../source/tutorial-quickstart-huggingface.rst:211 +msgid "We can now start client instances using:" msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:38 +#: ../../source/tutorial-quickstart-huggingface.rst:221 msgid "" -"We use the Keras utilities of TF to load CIFAR10, a popular colored image" -" classification dataset for machine learning. The call to " -":code:`tf.keras.datasets.cifar10.load_data()` downloads CIFAR10, caches " -"it locally, and then returns the entire training and test set as NumPy " -"ndarrays." +"And they will be able to connect to the server and start the federated " +"training." msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:47 +#: ../../source/tutorial-quickstart-huggingface.rst:223 msgid "" -"Next, we need a model. For the purpose of this tutorial, we use " -"MobilNetV2 with 10 output classes:" +"If you want to check out everything put together, you should check out " +"the `full code example `_ ." msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:60 +#: ../../source/tutorial-quickstart-huggingface.rst:226 msgid "" -"Flower provides a convenience class called :code:`NumPyClient` which " -"makes it easier to implement the :code:`Client` interface when your " -"workload uses Keras. The :code:`NumPyClient` interface defines three " -"methods which can be implemented in the following way:" +"Of course, this is a very basic example, and a lot can be added or " +"modified, it was just to showcase how simply we could federate a Hugging " +"Face workflow using Flower." msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:135 -msgid "Each client will have its own dataset." +#: ../../source/tutorial-quickstart-huggingface.rst:229 +msgid "" +"Note that in this example we used :code:`PyTorch`, but we could have very" +" well used :code:`TensorFlow`." msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:137 +#: ../../source/tutorial-quickstart-ios.rst:-1 msgid "" -"You should now see how the training does in the very first terminal (the " -"one that started the server):" +"Read this Federated Learning quickstart tutorial for creating an iOS app " +"using Flower to train a neural network on MNIST." msgstr "" -#: ../../source/tutorial-quickstart-tensorflow.rst:169 -msgid "" -"Congratulations! You've successfully built and run your first federated " -"learning system. The full `source code " -"`_ for this can be found in :code:`examples" -"/quickstart-tensorflow/client.py`." +#: ../../source/tutorial-quickstart-ios.rst:5 +msgid "Quickstart iOS" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:-1 +#: ../../source/tutorial-quickstart-ios.rst:10 msgid "" -"Check out this Federated Learning quickstart tutorial for using Flower " -"with XGBoost to train classification models on trees." +"In this tutorial we will learn how to train a Neural Network on MNIST " +"using Flower and CoreML on iOS devices." 
msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:5 -msgid "Quickstart XGBoost" +#: ../../source/tutorial-quickstart-ios.rst:12 +msgid "" +"First of all, for running the Flower Python server, it is recommended to " +"create a virtual environment and run everything within a :doc:`virtualenv" +" `. For the Flower client " +"implementation in iOS, it is recommended to use Xcode as our IDE." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:14 -msgid "Federated XGBoost" +#: ../../source/tutorial-quickstart-ios.rst:15 +msgid "" +"Our example consists of one Python *server* and two iPhone *clients* that" +" all have the same model." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:16 +#: ../../source/tutorial-quickstart-ios.rst:17 msgid "" -"EXtreme Gradient Boosting (**XGBoost**) is a robust and efficient " -"implementation of gradient-boosted decision tree (**GBDT**), that " -"maximises the computational boundaries for boosted tree methods. It's " -"primarily designed to enhance both the performance and computational " -"speed of machine learning models. In XGBoost, trees are constructed " -"concurrently, unlike the sequential approach taken by GBDT." +"*Clients* are responsible for generating individual weight updates for " +"the model based on their local datasets. These updates are then sent to " +"the *server* which will aggregate them to produce a better model. " +"Finally, the *server* sends this improved version of the model back to " +"each *client*. A complete cycle of weight updates is called a *round*." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:20 +#: ../../source/tutorial-quickstart-ios.rst:21 msgid "" -"Often, for tabular data on medium-sized datasets with fewer than 10k " -"training examples, XGBoost surpasses the results of deep learning " -"techniques." +"Now that we have a rough idea of what is going on, let's get started to " +"setup our Flower server environment. We first need to install Flower. You" +" can do this by using pip:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:23 -msgid "Why federated XGBoost?" +#: ../../source/tutorial-quickstart-ios.rst:27 +msgid "Or Poetry:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:25 +#: ../../source/tutorial-quickstart-ios.rst:36 msgid "" -"Indeed, as the demand for data privacy and decentralized learning grows, " -"there's an increasing requirement to implement federated XGBoost systems " -"for specialised applications, like survival analysis and financial fraud " -"detection." +"Now that we have all our dependencies installed, let's run a simple " +"distributed training using CoreML as our local training pipeline and " +"MNIST as our dataset. For simplicity reasons we will use the complete " +"Flower client with CoreML, that has been implemented and stored inside " +"the Swift SDK. The client implementation can be seen below:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:27 +#: ../../source/tutorial-quickstart-ios.rst:72 msgid "" -"Federated learning ensures that raw data remains on the local device, " -"making it an attractive approach for sensitive domains where data " -"security and privacy are paramount. Given the robustness and efficiency " -"of XGBoost, combining it with federated learning offers a promising " -"solution for these specific challenges." +"Let's create a new application project in Xcode and add :code:`flwr` as a" +" dependency in your project. 
For our application, we will store the logic" +" of our app in :code:`FLiOSModel.swift` and the UI elements in " +":code:`ContentView.swift`. We will focus more on :code:`FLiOSModel.swift`" +" in this quickstart. Please refer to the `full code example " +"`_ to learn more " +"about the app." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:30 +#: ../../source/tutorial-quickstart-ios.rst:75 +msgid "Import Flower and CoreML related packages in :code:`FLiOSModel.swift`:" +msgstr "" + +#: ../../source/tutorial-quickstart-ios.rst:83 msgid "" -"In this tutorial we will learn how to train a federated XGBoost model on " -"HIGGS dataset using Flower and :code:`xgboost` package. We use a simple " -"example (`full code xgboost-quickstart " -"`_)" -" with two *clients* and one *server* to demonstrate how federated XGBoost" -" works, and then we dive into a more complex example (`full code xgboost-" -"comprehensive `_) to run various experiments." +"Then add the mlmodel to the project simply by drag-and-drop, the mlmodel " +"will be bundled inside the application during deployment to your iOS " +"device. We need to pass the url to access mlmodel and run CoreML machine " +"learning processes, it can be retrieved by calling the function " +":code:`Bundle.main.url`. For the MNIST dataset, we need to preprocess it " +"into :code:`MLBatchProvider` object. The preprocessing is done inside " +":code:`DataLoader.swift`." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:37 -msgid "Environment Setup" +#: ../../source/tutorial-quickstart-ios.rst:99 +msgid "" +"Since CoreML does not allow the model parameters to be seen before " +"training, and accessing the model parameters during or after the training" +" can only be done by specifying the layer name, we need to know this " +"information beforehand, through looking at the model specification, which" +" are written as proto files. The implementation can be seen in " +":code:`MLModelInspect`." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:41 +#: ../../source/tutorial-quickstart-ios.rst:102 msgid "" -"We first need to install Flower and Flower Datasets. You can do this by " -"running :" +"After we have all of the necessary information, let's create our Flower " +"client." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:47 +#: ../../source/tutorial-quickstart-ios.rst:117 msgid "" -"Since we want to use :code:`xgboost` package to build up XGBoost trees, " -"let's go ahead and install :code:`xgboost`:" +"Then start the Flower gRPC client and start communicating to the server " +"by passing our Flower client to the function :code:`startFlwrGRPC`." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:57 +#: ../../source/tutorial-quickstart-ios.rst:124 msgid "" -"*Clients* are responsible for generating individual weight-updates for " -"the model based on their local datasets. Now that we have all our " -"dependencies installed, let's run a simple distributed training with two " -"clients and one server." +"That's it for the client. We only have to implement :code:`Client` or " +"call the provided :code:`MLFlwrClient` and call :code:`startFlwrGRPC()`. " +"The attribute :code:`hostname` and :code:`port` tells the client which " +"server to connect to. This can be done by entering the hostname and port " +"in the application before clicking the start button to start the " +"federated learning process." 
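On the Python side, the server that the app reaches through that hostname and port can stay very small. A minimal sketch of :code:`server.py`, assuming the default strategy and three rounds:

.. code-block:: python

    # server.py -- minimal Flower server the iOS clients can connect to
    import flwr as fl

    fl.server.start_server(
        server_address="0.0.0.0:8080",  # hostname and port entered in the app
        config=fl.server.ServerConfig(num_rounds=3),
    )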
msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:60 +#: ../../source/tutorial-quickstart-ios.rst:131 +#: ../../source/tutorial-quickstart-mxnet.rst:228 +#: ../../source/tutorial-quickstart-pytorch.rst:205 +#: ../../source/tutorial-quickstart-tensorflow.rst:100 msgid "" -"In a file called :code:`client.py`, import xgboost, Flower, Flower " -"Datasets and other related functions:" +"For simple workloads we can start a Flower server and leave all the " +"configuration possibilities at their default values. In a file named " +":code:`server.py`, import Flower and start the server:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:87 -msgid "Dataset partition and hyper-parameter selection" +#: ../../source/tutorial-quickstart-ios.rst:142 +#: ../../source/tutorial-quickstart-mxnet.rst:239 +#: ../../source/tutorial-quickstart-pytorch.rst:216 +#: ../../source/tutorial-quickstart-scikitlearn.rst:215 +#: ../../source/tutorial-quickstart-tensorflow.rst:112 +msgid "Train the model, federated!" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:89 +#: ../../source/tutorial-quickstart-ios.rst:144 +#: ../../source/tutorial-quickstart-pytorch.rst:218 +#: ../../source/tutorial-quickstart-tensorflow.rst:114 +#: ../../source/tutorial-quickstart-xgboost.rst:525 msgid "" -"Prior to local training, we require loading the HIGGS dataset from Flower" -" Datasets and conduct data partitioning for FL:" +"With both client and server ready, we can now run everything and see " +"federated learning in action. FL systems usually have a server and " +"multiple clients. We therefore have to start the server first:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:102 +#: ../../source/tutorial-quickstart-ios.rst:152 msgid "" -"In this example, we split the dataset into two partitions with uniform " -"distribution (:code:`IidPartitioner(num_partitions=2)`). Then, we load " -"the partition for the given client based on :code:`node_id`:" +"Once the server is running we can start the clients in different " +"terminals. Build and run the client through your Xcode, one through Xcode" +" Simulator and the other by deploying it to your iPhone. To see more " +"about how to deploy your app to iPhone or Simulator visit `here " +"`_." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:121 +#: ../../source/tutorial-quickstart-ios.rst:156 msgid "" -"After that, we do train/test splitting on the given partition (client's " -"local data), and transform data format for :code:`xgboost` package." +"Congratulations! You've successfully built and run your first federated " +"learning system in your ios device. The full `source code " +"`_ for this " +"example can be found in :code:`examples/ios`." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:134 +#: ../../source/tutorial-quickstart-jax.rst:-1 msgid "" -"The functions of :code:`train_test_split` and " -":code:`transform_dataset_to_dmatrix` are defined as below:" +"Check out this Federated Learning quickstart tutorial for using Flower " +"with Jax to train a linear regression model on a scikit-learn dataset." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:158 -msgid "Finally, we define the hyper-parameters used for XGBoost training." +#: ../../source/tutorial-quickstart-jax.rst:5 +msgid "Quickstart JAX" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:174 +#: ../../source/tutorial-quickstart-mxnet.rst:-1 msgid "" -"The :code:`num_local_round` represents the number of iterations for local" -" tree boost. 
We use CPU for the training in default. One can shift it to " -"GPU by setting :code:`tree_method` to :code:`gpu_hist`. We use AUC as " -"evaluation metric." +"Check out this Federated Learning quickstart tutorial for using Flower " +"with MXNet to train a Sequential model on MNIST." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:181 -msgid "Flower client definition for XGBoost" +#: ../../source/tutorial-quickstart-mxnet.rst:5 +msgid "Quickstart MXNet" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:183 +#: ../../source/tutorial-quickstart-mxnet.rst:7 msgid "" -"After loading the dataset we define the Flower client. We follow the " -"general rule to define :code:`XgbClient` class inherited from " -":code:`fl.client.Client`." +"MXNet is no longer maintained and has been moved into `Attic " +"`_. As a result, we would " +"encourage you to use other ML frameworks alongside Flower, for example, " +"PyTorch. This tutorial might be removed in future versions of Flower." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:193 +#: ../../source/tutorial-quickstart-mxnet.rst:12 msgid "" -"The :code:`self.bst` is used to keep the Booster objects that remain " -"consistent across rounds, allowing them to store predictions from trees " -"integrated in earlier rounds and maintain other essential data structures" -" for training." +"In this tutorial, we will learn how to train a :code:`Sequential` model " +"on MNIST using Flower and MXNet." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:196 +#: ../../source/tutorial-quickstart-mxnet.rst:14 +#: ../../source/tutorial-quickstart-scikitlearn.rst:12 msgid "" -"Then, we override :code:`get_parameters`, :code:`fit` and " -":code:`evaluate` methods insides :code:`XgbClient` class as follows." +"It is recommended to create a virtual environment and run everything " +"within this :doc:`virtualenv `." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:210 +#: ../../source/tutorial-quickstart-mxnet.rst:18 +#: ../../source/tutorial-quickstart-scikitlearn.rst:16 msgid "" -"Unlike neural network training, XGBoost trees are not started from a " -"specified random weights. In this case, we do not use " -":code:`get_parameters` and :code:`set_parameters` to initialise model " -"parameters for XGBoost. As a result, let's return an empty tensor in " -":code:`get_parameters` when it is called by the server at the first " -"round." +"*Clients* are responsible for generating individual model parameter " +"updates for the model based on their local datasets. These updates are " +"then sent to the *server* which will aggregate them to produce an updated" +" global model. Finally, the *server* sends this improved version of the " +"model back to each *client*. A complete cycle of parameters updates is " +"called a *round*." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:251 +#: ../../source/tutorial-quickstart-mxnet.rst:22 +#: ../../source/tutorial-quickstart-scikitlearn.rst:20 msgid "" -"In :code:`fit`, at the first round, we call :code:`xgb.train()` to build " -"up the first set of trees. the returned Booster object and config are " -"stored in :code:`self.bst` and :code:`self.config`, respectively. From " -"the second round, we load the global model sent from server to " -":code:`self.bst`, and then update model weights on local training data " -"with function :code:`local_boost` as follows:" +"Now that we have a rough idea of what is going on, let's get started. We " +"first need to install Flower. 
You can do this by running:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:269 -msgid "" -"Given :code:`num_local_round`, we update trees by calling " -":code:`self.bst.update` method. After training, the last " -":code:`N=num_local_round` trees will be extracted to send to the server." +#: ../../source/tutorial-quickstart-mxnet.rst:28 +msgid "Since we want to use MXNet, let's go ahead and install it:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:291 +#: ../../source/tutorial-quickstart-mxnet.rst:38 msgid "" -"In :code:`evaluate`, we call :code:`self.bst.eval_set` function to " -"conduct evaluation on valid set. The AUC value will be returned." +"Now that we have all our dependencies installed, let's run a simple " +"distributed training with two clients and one server. Our training " +"procedure and network architecture are based on MXNet´s `Hand-written " +"Digit Recognition tutorial " +"`_." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:294 +#: ../../source/tutorial-quickstart-mxnet.rst:40 msgid "" -"Now, we can create an instance of our class :code:`XgbClient` and add one" -" line to actually run this client:" +"In a file called :code:`client.py`, import Flower and MXNet related " +"packages:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:300 -msgid "" -"That's it for the client. We only have to implement :code:`Client`and " -"call :code:`fl.client.start_client()`. The string :code:`\"[::]:8080\"` " -"tells the client which server to connect to. In our case we can run the " -"server and the client on the same machine, therefore we use " -":code:`\"[::]:8080\"`. If we run a truly federated workload with the " -"server and clients running on different machines, all that needs to " -"change is the :code:`server_address` we point the client at." +#: ../../source/tutorial-quickstart-mxnet.rst:55 +msgid "In addition, define the device allocation in MXNet with:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:311 +#: ../../source/tutorial-quickstart-mxnet.rst:61 msgid "" -"These updates are then sent to the *server* which will aggregate them to " -"produce a better model. Finally, the *server* sends this improved version" -" of the model back to each *client* to finish a complete FL round." +"We use MXNet to load MNIST, a popular image classification dataset of " +"handwritten digits for machine learning. The MXNet utility " +":code:`mx.test_utils.get_mnist()` downloads the training and test data." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:314 +#: ../../source/tutorial-quickstart-mxnet.rst:75 msgid "" -"In a file named :code:`server.py`, import Flower and FedXgbBagging from " -":code:`flwr.server.strategy`." -msgstr "" - -#: ../../source/tutorial-quickstart-xgboost.rst:316 -msgid "We first define a strategy for XGBoost bagging aggregation." +"Define the training and loss with MXNet. We train the model by looping " +"over the dataset, measure the corresponding loss, and optimize it." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:339 +#: ../../source/tutorial-quickstart-mxnet.rst:113 msgid "" -"We use two clients for this example. An " -":code:`evaluate_metrics_aggregation` function is defined to collect and " -"wighted average the AUC values from clients." -msgstr "" - -#: ../../source/tutorial-quickstart-xgboost.rst:342 -msgid "Then, we start the server:" +"Next, we define the validation of our machine learning model. We loop " +"over the test set and measure both loss and accuracy on the test set." 
msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:354 -msgid "Tree-based bagging aggregation" +#: ../../source/tutorial-quickstart-mxnet.rst:137 +msgid "" +"After defining the training and testing of a MXNet machine learning " +"model, we use these functions to implement a Flower client." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:356 -msgid "" -"You must be curious about how bagging aggregation works. Let's look into " -"the details." +#: ../../source/tutorial-quickstart-mxnet.rst:139 +msgid "Our Flower clients will use a simple :code:`Sequential` model:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:358 +#: ../../source/tutorial-quickstart-mxnet.rst:158 msgid "" -"In file :code:`flwr.server.strategy.fedxgb_bagging.py`, we define " -":code:`FedXgbBagging` inherited from :code:`flwr.server.strategy.FedAvg`." -" Then, we override the :code:`aggregate_fit`, :code:`aggregate_evaluate` " -"and :code:`evaluate` methods as follows:" +"After loading the dataset with :code:`load_data()` we perform one forward" +" propagation to initialize the model and model parameters with " +":code:`model(init)`. Next, we implement a Flower client." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:454 +#: ../../source/tutorial-quickstart-mxnet.rst:160 +#: ../../source/tutorial-quickstart-pytorch.rst:144 +#: ../../source/tutorial-quickstart-tensorflow.rst:54 msgid "" -"In :code:`aggregate_fit`, we sequentially aggregate the clients' XGBoost " -"trees by calling :code:`aggregate()` function:" +"The Flower server interacts with clients through an interface called " +":code:`Client`. When the server selects a particular client for training," +" it sends training instructions over the network. The client receives " +"those instructions and calls one of the :code:`Client` methods to run " +"your code (i.e., to train the neural network we defined earlier)." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:513 +#: ../../source/tutorial-quickstart-mxnet.rst:166 msgid "" -"In this function, we first fetch the number of trees and the number of " -"parallel trees for the current and previous model by calling " -":code:`_get_tree_nums`. Then, the fetched information will be aggregated." -" After that, the trees (containing model weights) are aggregated to " -"generate a new tree model." +"Flower provides a convenience class called :code:`NumPyClient` which " +"makes it easier to implement the :code:`Client` interface when your " +"workload uses MXNet. Implementing :code:`NumPyClient` usually means " +"defining the following methods (:code:`set_parameters` is optional " +"though):" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:518 -msgid "" -"After traversal of all clients' models, a new global model is generated, " -"followed by the serialisation, and sending back to each client." +#: ../../source/tutorial-quickstart-mxnet.rst:172 +#: ../../source/tutorial-quickstart-pytorch.rst:156 +#: ../../source/tutorial-quickstart-scikitlearn.rst:109 +msgid "return the model weight as a list of NumPy ndarrays" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:523 -msgid "Launch Federated XGBoost!" 
+#: ../../source/tutorial-quickstart-mxnet.rst:173 +#: ../../source/tutorial-quickstart-pytorch.rst:157 +#: ../../source/tutorial-quickstart-scikitlearn.rst:111 +msgid ":code:`set_parameters` (optional)" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:585 +#: ../../source/tutorial-quickstart-mxnet.rst:174 +#: ../../source/tutorial-quickstart-pytorch.rst:158 +#: ../../source/tutorial-quickstart-scikitlearn.rst:111 msgid "" -"Congratulations! You've successfully built and run your first federated " -"XGBoost system. The AUC values can be checked in " -":code:`metrics_distributed`. One can see that the average AUC increases " -"over FL rounds." +"update the local model weights with the parameters received from the " +"server" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:590 -msgid "" -"The full `source code `_ for this example can be found in :code:`examples" -"/xgboost-quickstart`." +#: ../../source/tutorial-quickstart-mxnet.rst:176 +#: ../../source/tutorial-quickstart-pytorch.rst:160 +#: ../../source/tutorial-quickstart-scikitlearn.rst:114 +msgid "set the local model weights" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:594 -msgid "Comprehensive Federated XGBoost" +#: ../../source/tutorial-quickstart-mxnet.rst:177 +#: ../../source/tutorial-quickstart-pytorch.rst:161 +#: ../../source/tutorial-quickstart-scikitlearn.rst:115 +msgid "train the local model" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:596 -msgid "" -"Now that you have known how federated XGBoost work with Flower, it's time" -" to run some more comprehensive experiments by customising the " -"experimental settings. In the xgboost-comprehensive example (`full code " -"`_), we provide more options to define various experimental" -" setups, including aggregation strategies, data partitioning and " -"centralised/distributed evaluation. We also support `Flower simulation " -"`_ making " -"it easy to simulate large client cohorts in a resource-aware manner. " -"Let's take a look!" +#: ../../source/tutorial-quickstart-mxnet.rst:178 +#: ../../source/tutorial-quickstart-pytorch.rst:162 +#: ../../source/tutorial-quickstart-scikitlearn.rst:116 +msgid "receive the updated local model weights" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:603 -msgid "Cyclic training" +#: ../../source/tutorial-quickstart-mxnet.rst:180 +#: ../../source/tutorial-quickstart-pytorch.rst:164 +#: ../../source/tutorial-quickstart-scikitlearn.rst:118 +msgid "test the local model" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:605 -msgid "" -"In addition to bagging aggregation, we offer a cyclic training scheme, " -"which performs FL in a client-by-client fashion. Instead of aggregating " -"multiple clients, there is only one single client participating in the " -"training per round in the cyclic training scenario. The trained local " -"XGBoost trees will be passed to the next client as an initialised model " -"for next round's boosting." 
+#: ../../source/tutorial-quickstart-mxnet.rst:182 +msgid "They can be implemented in the following way:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:609 +#: ../../source/tutorial-quickstart-mxnet.rst:212 msgid "" -"To do this, we first customise a :code:`ClientManager` in " -":code:`server_utils.py`:" +"We can now create an instance of our class :code:`MNISTClient` and add " +"one line to actually run this client:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:649 +#: ../../source/tutorial-quickstart-mxnet.rst:219 msgid "" -"The customised :code:`ClientManager` samples all available clients in " -"each FL round based on the order of connection to the server. Then, we " -"define a new strategy :code:`FedXgbCyclic` in " -":code:`flwr.server.strategy.fedxgb_cyclic.py`, in order to sequentially " -"select only one client in given round and pass the received model to next" -" client." +"That's it for the client. We only have to implement :code:`Client` or " +":code:`NumPyClient` and call :code:`fl.client.start_client()` or " +":code:`fl.client.start_numpy_client()`. The string " +":code:`\"0.0.0.0:8080\"` tells the client which server to connect to. In " +"our case we can run the server and the client on the same machine, " +"therefore we use :code:`\"0.0.0.0:8080\"`. If we run a truly federated " +"workload with the server and clients running on different machines, all " +"that needs to change is the :code:`server_address` we pass to the client." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:690 +#: ../../source/tutorial-quickstart-mxnet.rst:241 msgid "" -"Unlike the original :code:`FedAvg`, we don't perform aggregation here. " -"Instead, we just make a copy of the received client model as global model" -" by overriding :code:`aggregate_fit`." +"With both client and server ready, we can now run everything and see " +"federated learning in action. Federated learning systems usually have a " +"server and multiple clients. We therefore have to start the server first:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:693 +#: ../../source/tutorial-quickstart-mxnet.rst:249 +#: ../../source/tutorial-quickstart-pytorch.rst:226 +#: ../../source/tutorial-quickstart-scikitlearn.rst:224 +#: ../../source/tutorial-quickstart-tensorflow.rst:122 +#: ../../source/tutorial-quickstart-xgboost.rst:533 msgid "" -"Also, the customised :code:`configure_fit` and :code:`configure_evaluate`" -" methods ensure the clients to be sequentially selected given FL round:" +"Once the server is running we can start the clients in different " +"terminals. 
Open a new terminal and start the first client:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:757 -msgid "Customised data partitioning" +#: ../../source/tutorial-quickstart-mxnet.rst:256 +#: ../../source/tutorial-quickstart-pytorch.rst:233 +#: ../../source/tutorial-quickstart-scikitlearn.rst:231 +#: ../../source/tutorial-quickstart-tensorflow.rst:129 +#: ../../source/tutorial-quickstart-xgboost.rst:540 +msgid "Open another terminal and start the second client:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:759 +#: ../../source/tutorial-quickstart-mxnet.rst:262 +#: ../../source/tutorial-quickstart-pytorch.rst:239 +#: ../../source/tutorial-quickstart-scikitlearn.rst:237 +#: ../../source/tutorial-quickstart-xgboost.rst:546 msgid "" -"In :code:`dataset.py`, we have a function :code:`instantiate_partitioner`" -" to instantiate the data partitioner based on the given " -":code:`num_partitions` and :code:`partitioner_type`. Currently, we " -"provide four supported partitioner type to simulate the uniformity/non-" -"uniformity in data quantity (uniform, linear, square, exponential)." +"Each client will have its own dataset. You should now see how the " +"training does in the very first terminal (the one that started the " +"server):" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:790 -msgid "Customised centralised/distributed evaluation" +#: ../../source/tutorial-quickstart-mxnet.rst:294 +msgid "" +"Congratulations! You've successfully built and run your first federated " +"learning system. The full `source code " +"`_ for this example can be found in :code:`examples" +"/quickstart-mxnet`." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:792 +#: ../../source/tutorial-quickstart-pandas.rst:-1 msgid "" -"To facilitate centralised evaluation, we define a function in " -":code:`server_utils.py`:" +"Check out this Federated Learning quickstart tutorial for using Flower " +"with Pandas to perform Federated Analytics." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:824 +#: ../../source/tutorial-quickstart-pandas.rst:5 +msgid "Quickstart Pandas" +msgstr "" + +#: ../../source/tutorial-quickstart-pandas.rst:10 +msgid "Let's build a federated analytics system using Pandas and Flower!" +msgstr "" + +#: ../../source/tutorial-quickstart-pandas.rst:12 msgid "" -"This function returns a evaluation function which instantiates a " -":code:`Booster` object and loads the global model weights to it. The " -"evaluation is conducted by calling :code:`eval_set()` method, and the " -"tested AUC value is reported." +"Please refer to the `full code example " +"`_ " +"to learn more." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:827 +#: ../../source/tutorial-quickstart-pytorch.rst:-1 msgid "" -"As for distributed evaluation on the clients, it's same as the quick-" -"start example by overriding the :code:`evaluate()` method insides the " -":code:`XgbClient` class in :code:`client_utils.py`." +"Check out this Federated Learning quickstart tutorial for using Flower " +"with PyTorch to train a CNN model on MNIST." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:831 -msgid "Flower simulation" +#: ../../source/tutorial-quickstart-pytorch.rst:13 +msgid "" +"In this tutorial we will learn how to train a Convolutional Neural " +"Network on CIFAR10 using Flower and PyTorch." 
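As a rough preview of the data handling used in this quickstart, loading CIFAR-10 with :code:`torchvision` might look like the sketch below; the batch size and normalization values are illustrative assumptions.

.. code-block:: python

    import torch
    from torchvision import datasets, transforms


    def load_data():
        """Load the CIFAR-10 training and test sets into DataLoaders."""
        transform = transforms.Compose(
            [
                transforms.ToTensor(),
                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
            ]
        )
        trainset = datasets.CIFAR10("./data", train=True, download=True, transform=transform)
        testset = datasets.CIFAR10("./data", train=False, download=True, transform=transform)
        trainloader = torch.utils.data.DataLoader(trainset, batch_size=32, shuffle=True)
        testloader = torch.utils.data.DataLoader(testset, batch_size=32)
        return trainloader, testloader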
msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:832 +#: ../../source/tutorial-quickstart-pytorch.rst:15 +#: ../../source/tutorial-quickstart-xgboost.rst:39 msgid "" -"We also provide an example code (:code:`sim.py`) to use the simulation " -"capabilities of Flower to simulate federated XGBoost training on either a" -" single machine or a cluster of machines." +"First of all, it is recommended to create a virtual environment and run " +"everything within a :doc:`virtualenv `." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:866 +#: ../../source/tutorial-quickstart-pytorch.rst:29 msgid "" -"After importing all required packages, we define a :code:`main()` " -"function to perform the simulation process:" +"Since we want to use PyTorch to solve a computer vision task, let's go " +"ahead and install PyTorch and the **torchvision** library:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:921 +#: ../../source/tutorial-quickstart-pytorch.rst:39 msgid "" -"We first load the dataset and perform data partitioning, and the pre-" -"processed data is stored in a :code:`list`. After the simulation begins, " -"the clients won't need to pre-process their partitions again." +"Now that we have all our dependencies installed, let's run a simple " +"distributed training with two clients and one server. Our training " +"procedure and network architecture are based on PyTorch's `Deep Learning " +"with PyTorch " +"`_." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:924 -msgid "Then, we define the strategies and other hyper-parameters:" +#: ../../source/tutorial-quickstart-pytorch.rst:41 +msgid "" +"In a file called :code:`client.py`, import Flower and PyTorch related " +"packages:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:975 +#: ../../source/tutorial-quickstart-pytorch.rst:56 +msgid "In addition, we define the device allocation in PyTorch with:" +msgstr "" + +#: ../../source/tutorial-quickstart-pytorch.rst:62 msgid "" -"After that, we start the simulation by calling " -":code:`fl.simulation.start_simulation`:" +"We use PyTorch to load CIFAR10, a popular colored image classification " +"dataset for machine learning. The PyTorch :code:`DataLoader()` downloads " +"the training and test data that are then normalized." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:995 +#: ../../source/tutorial-quickstart-pytorch.rst:78 msgid "" -"One of key parameters for :code:`start_simulation` is :code:`client_fn` " -"which returns a function to construct a client. We define it as follows:" +"Define the loss and optimizer with PyTorch. The training of the dataset " +"is done by looping over the dataset, measure the corresponding loss and " +"optimize it." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1038 -msgid "Arguments parser" +#: ../../source/tutorial-quickstart-pytorch.rst:94 +msgid "" +"Define then the validation of the machine learning network. We loop over" +" the test set and measure the loss and accuracy of the test set." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1040 +#: ../../source/tutorial-quickstart-pytorch.rst:113 msgid "" -"In :code:`utils.py`, we define the arguments parsers for clients, server " -"and simulation, allowing users to specify different experimental " -"settings. Let's first see the sever side:" +"After defining the training and testing of a PyTorch machine learning " +"model, we use the functions for the Flower clients." 
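The training and testing helpers referred to above might be sketched as follows; the optimizer, learning rate, and momentum are illustrative assumptions, and :code:`DEVICE` corresponds to the device allocation mentioned earlier.

.. code-block:: python

    import torch
    import torch.nn as nn

    DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")


    def train(net, trainloader, epochs):
        """Train the network on the training set (cross-entropy loss, SGD)."""
        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
        net.train()
        for _ in range(epochs):
            for images, labels in trainloader:
                images, labels = images.to(DEVICE), labels.to(DEVICE)
                optimizer.zero_grad()
                loss = criterion(net(images), labels)
                loss.backward()
                optimizer.step()


    def test(net, testloader):
        """Evaluate the network on the test set and return (loss, accuracy)."""
        criterion = nn.CrossEntropyLoss()
        correct, total, loss = 0, 0, 0.0
        net.eval()
        with torch.no_grad():
            for images, labels in testloader:
                images, labels = images.to(DEVICE), labels.to(DEVICE)
                outputs = net(images)
                loss += criterion(outputs, labels).item()
                total += labels.size(0)
                correct += (outputs.argmax(dim=1) == labels).sum().item()
        return loss / len(testloader), correct / total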
msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1086 +#: ../../source/tutorial-quickstart-pytorch.rst:115 msgid "" -"This allows user to specify training strategies / the number of total " -"clients / FL rounds / participating clients / clients for evaluation, and" -" evaluation fashion. Note that with :code:`--centralised-eval`, the sever" -" will do centralised evaluation and all functionalities for client " -"evaluation will be disabled." +"The Flower clients will use a simple CNN adapted from 'PyTorch: A 60 " +"Minute Blitz':" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1090 -msgid "Then, the argument parser on client side:" +#: ../../source/tutorial-quickstart-pytorch.rst:142 +msgid "" +"After loading the data set with :code:`load_data()` we define the Flower " +"interface." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1144 +#: ../../source/tutorial-quickstart-pytorch.rst:150 msgid "" -"This defines various options for client data partitioning. Besides, " -"clients also have an option to conduct evaluation on centralised test set" -" by setting :code:`--centralised-eval`, as well as an option to perform " -"scaled learning rate based on the number of clients by setting :code" -":`--scaled-lr`." +"Flower provides a convenience class called :code:`NumPyClient` which " +"makes it easier to implement the :code:`Client` interface when your " +"workload uses PyTorch. Implementing :code:`NumPyClient` usually means " +"defining the following methods (:code:`set_parameters` is optional " +"though):" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1148 -msgid "We also have an argument parser for simulation:" +#: ../../source/tutorial-quickstart-pytorch.rst:166 +msgid "which can be implemented in the following way:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1226 -msgid "This integrates all arguments for both client and server sides." +#: ../../source/tutorial-quickstart-pytorch.rst:189 +#: ../../source/tutorial-quickstart-tensorflow.rst:82 +msgid "" +"We can now create an instance of our class :code:`CifarClient` and add " +"one line to actually run this client:" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1229 -msgid "Example commands" +#: ../../source/tutorial-quickstart-pytorch.rst:196 +#: ../../source/tutorial-quickstart-tensorflow.rst:90 +msgid "" +"That's it for the client. We only have to implement :code:`Client` or " +":code:`NumPyClient` and call :code:`fl.client.start_client()`. If you " +"implement a client of type :code:`NumPyClient` you'll need to first call " +"its :code:`to_client()` method. The string :code:`\"[::]:8080\"` tells " +"the client which server to connect to. In our case we can run the server " +"and the client on the same machine, therefore we use " +":code:`\"[::]:8080\"`. If we run a truly federated workload with the " +"server and clients running on different machines, all that needs to " +"change is the :code:`server_address` we point the client at." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1231 +#: ../../source/tutorial-quickstart-pytorch.rst:271 msgid "" -"To run a centralised evaluated experiment with bagging strategy on 5 " -"clients with exponential distribution for 50 rounds, we first start the " -"server as below:" +"Congratulations! You've successfully built and run your first federated " +"learning system. The full `source code " +"`_ for this example can be found in :code:`examples" +"/quickstart-pytorch`." 
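For reference, a client along the lines described in this quickstart could be sketched roughly as below, assuming :code:`net`, :code:`trainloader`, :code:`testloader`, :code:`train`, and :code:`test` are defined as in the preceding steps; the full example linked above may differ in details.

.. code-block:: python

    from collections import OrderedDict

    import flwr as fl
    import torch

    # ``net``, ``trainloader``, ``testloader``, ``train`` and ``test`` are
    # assumed to be defined earlier in the quickstart.


    class CifarClient(fl.client.NumPyClient):
        def get_parameters(self, config):
            # Return the model weights as a list of NumPy ndarrays
            return [val.cpu().numpy() for _, val in net.state_dict().items()]

        def set_parameters(self, parameters):
            # Load the weights received from the server into the local model
            params_dict = zip(net.state_dict().keys(), parameters)
            state_dict = OrderedDict({k: torch.tensor(v) for k, v in params_dict})
            net.load_state_dict(state_dict, strict=True)

        def fit(self, parameters, config):
            self.set_parameters(parameters)
            train(net, trainloader, epochs=1)
            return self.get_parameters(config={}), len(trainloader.dataset), {}

        def evaluate(self, parameters, config):
            self.set_parameters(parameters)
            loss, accuracy = test(net, testloader)
            return float(loss), len(testloader.dataset), {"accuracy": float(accuracy)}


    fl.client.start_client(
        server_address="[::]:8080",
        client=CifarClient().to_client(),
    )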
msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1238 -msgid "Then, on each client terminal, we start the clients:" +#: ../../source/tutorial-quickstart-pytorch-lightning.rst:-1 +msgid "" +"Check out this Federated Learning quickstart tutorial for using Flower " +"with PyTorch Lightning to train an Auto Encoder model on MNIST." msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1244 -msgid "To run the same experiment with Flower simulation:" +#: ../../source/tutorial-quickstart-pytorch-lightning.rst:5 +msgid "Quickstart PyTorch Lightning" msgstr "" -#: ../../source/tutorial-quickstart-xgboost.rst:1250 +#: ../../source/tutorial-quickstart-pytorch-lightning.rst:10 msgid "" -"The full `code `_ for this comprehensive example can be found in" -" :code:`examples/xgboost-comprehensive`." +"Let's build a horizontal federated learning system using PyTorch " +"Lightning and Flower!" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:9 -msgid "Build a strategy from scratch" +#: ../../source/tutorial-quickstart-pytorch-lightning.rst:12 +msgid "" +"Please refer to the `full code example " +"`_ to learn more." msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:11 +#: ../../source/tutorial-quickstart-scikitlearn.rst:-1 msgid "" -"Welcome to the third part of the Flower federated learning tutorial. In " -"previous parts of this tutorial, we introduced federated learning with " -"PyTorch and Flower (`part 1 `__) and we learned how strategies " -"can be used to customize the execution on both the server and the clients" -" (`part 2 `__)." +"Check out this Federated Learning quickstart tutorial for using Flower " +"with scikit-learn to train a linear regression model." msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:13 -msgid "" -"In this notebook, we'll continue to customize the federated learning " -"system we built previously by creating a custom version of FedAvg (again," -" using `Flower `__ and `PyTorch " -"`__)." +#: ../../source/tutorial-quickstart-scikitlearn.rst:5 +msgid "Quickstart scikit-learn" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:15 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:16 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:15 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:15 +#: ../../source/tutorial-quickstart-scikitlearn.rst:10 msgid "" -"`Star Flower on GitHub `__ ⭐️ and join " -"the Flower community on Slack to connect, ask questions, and get help: " -"`Join Slack `__ 🌼 We'd love to hear from " -"you in the ``#introductions`` channel! And if anything is unclear, head " -"over to the ``#questions`` channel." +"In this tutorial, we will learn how to train a :code:`Logistic " +"Regression` model on MNIST using Flower and scikit-learn." msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:17 -msgid "Let's build a new ``Strategy`` from scratch!" 
+#: ../../source/tutorial-quickstart-scikitlearn.rst:26 +msgid "Since we want to use scikit-learn, let's go ahead and install it:" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:29 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:29 -msgid "Preparation" +#: ../../source/tutorial-quickstart-scikitlearn.rst:32 +msgid "Or simply install all dependencies using Poetry:" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:31 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:32 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:31 +#: ../../source/tutorial-quickstart-scikitlearn.rst:42 msgid "" -"Before we begin with the actual code, let's make sure that we have " -"everything we need." +"Now that we have all our dependencies installed, let's run a simple " +"distributed training with two clients and one server. However, before " +"setting up the client and server, we will define all functionalities that" +" we need for our federated learning setup within :code:`utils.py`. The " +":code:`utils.py` contains different functions defining all the machine " +"learning basics:" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:43 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:44 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:43 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:43 -msgid "Installing dependencies" +#: ../../source/tutorial-quickstart-scikitlearn.rst:45 +msgid ":code:`get_model_parameters()`" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:45 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:46 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:45 -msgid "First, we install the necessary packages:" +#: ../../source/tutorial-quickstart-scikitlearn.rst:46 +msgid "Returns the parameters of a :code:`sklearn` LogisticRegression model" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:65 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:66 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:65 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:65 -msgid "" -"Now that we have all dependencies installed, we can import everything we " -"need for this tutorial:" +#: ../../source/tutorial-quickstart-scikitlearn.rst:47 +msgid ":code:`set_model_params()`" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:101 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:102 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:101 -msgid "" -"It is possible to switch to a runtime that has GPU acceleration enabled " -"(on Google Colab: ``Runtime > Change runtime type > Hardware acclerator: " -"GPU > Save``). Note, however, that Google Colab is not always able to " -"offer GPU acceleration. If you see an error related to GPU availability " -"in one of the following sections, consider switching back to CPU-based " -"execution by setting ``DEVICE = torch.device(\"cpu\")``. If the runtime " -"has GPU acceleration enabled, you should see the output ``Training on " -"cuda``, otherwise it'll say ``Training on cpu``." 
+#: ../../source/tutorial-quickstart-scikitlearn.rst:48 +msgid "Sets the parameters of a :code:`sklean` LogisticRegression model" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:114 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:115 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:114 -msgid "Data loading" +#: ../../source/tutorial-quickstart-scikitlearn.rst:49 +msgid ":code:`set_initial_params()`" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:116 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:116 -msgid "" -"Let's now load the CIFAR-10 training and test set, partition them into " -"ten smaller datasets (each split into training and validation set), and " -"wrap everything in their own ``DataLoader``. We introduce a new parameter" -" ``num_clients`` which allows us to call ``load_datasets`` with different" -" numbers of clients." +#: ../../source/tutorial-quickstart-scikitlearn.rst:50 +msgid "Initializes the model parameters that the Flower server will ask for" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:167 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:168 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:167 -msgid "Model training/evaluation" +#: ../../source/tutorial-quickstart-scikitlearn.rst:51 +msgid ":code:`load_mnist()`" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:169 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:170 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:169 -msgid "" -"Let's continue with the usual model definition (including " -"``set_parameters`` and ``get_parameters``), training and test functions:" +#: ../../source/tutorial-quickstart-scikitlearn.rst:52 +msgid "Loads the MNIST dataset using OpenML" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:258 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:258 -msgid "Flower client" +#: ../../source/tutorial-quickstart-scikitlearn.rst:53 +msgid ":code:`shuffle()`" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:260 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:260 -msgid "" -"To implement the Flower client, we (again) create a subclass of " -"``flwr.client.NumPyClient`` and implement the three methods " -"``get_parameters``, ``fit``, and ``evaluate``. 
Here, we also pass the " -"``cid`` to the client and use it log additional details:" +#: ../../source/tutorial-quickstart-scikitlearn.rst:54 +msgid "Shuffles data and its label" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:308 -msgid "Let's test what we have so far before we continue:" +#: ../../source/tutorial-quickstart-scikitlearn.rst:56 +msgid ":code:`partition()`" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:339 -msgid "Build a Strategy from scratch" +#: ../../source/tutorial-quickstart-scikitlearn.rst:56 +msgid "Splits datasets into a number of partitions" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:341 +#: ../../source/tutorial-quickstart-scikitlearn.rst:58 msgid "" -"Let’s overwrite the ``configure_fit`` method such that it passes a higher" -" learning rate (potentially also other hyperparameters) to the optimizer " -"of a fraction of the clients. We will keep the sampling of the clients as" -" it is in ``FedAvg`` and then change the configuration dictionary (one of" -" the ``FitIns`` attributes)." +"Please check out :code:`utils.py` `here " +"`_ for more details. The pre-defined functions are used in" +" the :code:`client.py` and imported. The :code:`client.py` also requires " +"to import several packages such as Flower and scikit-learn:" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:507 +#: ../../source/tutorial-quickstart-scikitlearn.rst:73 msgid "" -"The only thing left is to use the newly created custom Strategy " -"``FedCustom`` when starting the experiment:" -msgstr "" - -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:534 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:932 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:697 -msgid "Recap" +"We load the MNIST dataset from `OpenML " +"`_, a popular " +"image classification dataset of handwritten digits for machine learning. " +"The utility :code:`utils.load_mnist()` downloads the training and test " +"data. The training set is split afterwards into 10 partitions with " +":code:`utils.partition()`." msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:536 +#: ../../source/tutorial-quickstart-scikitlearn.rst:85 msgid "" -"In this notebook, we’ve seen how to implement a custom strategy. A custom" -" strategy enables granular control over client node configuration, result" -" aggregation, and more. To define a custom strategy, you only have to " -"overwrite the abstract methods of the (abstract) base class ``Strategy``." -" To make custom strategies even more powerful, you can pass custom " -"functions to the constructor of your new class (``__init__``) and then " -"call these functions whenever needed." +"Next, the logistic regression model is defined and initialized with " +":code:`utils.set_initial_params()`." 
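A possible sketch of that model definition and initialization is shown below; only :code:`utils.set_initial_params()` is taken from the text above, while the hyperparameters are illustrative assumptions.

.. code-block:: python

    from sklearn.linear_model import LogisticRegression

    import utils  # the helper module described above (assumed importable)

    model = LogisticRegression(
        penalty="l2",
        max_iter=1,       # one local pass per federated round (assumption)
        warm_start=True,  # keep learned weights between fit() calls
    )
    utils.set_initial_params(model)

Using :code:`warm_start=True` means each call to :code:`fit()` continues from the aggregated parameters set by the server instead of re-initializing the model.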
msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:550 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:948 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:729 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:715 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:369 +#: ../../source/tutorial-quickstart-scikitlearn.rst:97 msgid "" -"Before you continue, make sure to join the Flower community on Slack: " -"`Join Slack `__" +"The Flower server interacts with clients through an interface called " +":code:`Client`. When the server selects a particular client for training," +" it sends training instructions over the network. The client receives " +"those instructions and calls one of the :code:`Client` methods to run " +"your code (i.e., to fit the logistic regression we defined earlier)." msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:552 -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:950 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:731 -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:717 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:371 +#: ../../source/tutorial-quickstart-scikitlearn.rst:103 msgid "" -"There's a dedicated ``#questions`` channel if you need help, but we'd " -"also love to hear who you are in ``#introductions``!" +"Flower provides a convenience class called :code:`NumPyClient` which " +"makes it easier to implement the :code:`Client` interface when your " +"workload uses scikit-learn. Implementing :code:`NumPyClient` usually " +"means defining the following methods (:code:`set_parameters` is optional " +"though):" msgstr "" -#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:554 -msgid "" -"The `Flower Federated Learning Tutorial - Part 4 " -"`__ introduces ``Client``, the flexible API underlying " -"``NumPyClient``." +#: ../../source/tutorial-quickstart-scikitlearn.rst:112 +msgid "is directly imported with :code:`utils.set_model_params()`" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:9 -msgid "Customize the client" +#: ../../source/tutorial-quickstart-scikitlearn.rst:120 +msgid "The methods can be implemented in the following way:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:11 +#: ../../source/tutorial-quickstart-scikitlearn.rst:143 msgid "" -"Welcome to the fourth part of the Flower federated learning tutorial. In " -"the previous parts of this tutorial, we introduced federated learning " -"with PyTorch and Flower (`part 1 `__), we learned how " -"strategies can be used to customize the execution on both the server and " -"the clients (`part 2 `__), and we built our own " -"custom strategy from scratch (`part 3 `__)." +"We can now create an instance of our class :code:`MnistClient` and add " +"one line to actually run this client:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:14 +#: ../../source/tutorial-quickstart-scikitlearn.rst:150 msgid "" -"In this notebook, we revisit ``NumPyClient`` and introduce a new " -"baseclass for building clients, simply named ``Client``. In previous " -"parts of this tutorial, we've based our client on ``NumPyClient``, a " -"convenience class which makes it easy to work with machine learning " -"libraries that have good NumPy interoperability. 
With ``Client``, we gain" -" a lot of flexibility that we didn't have before, but we'll also have to " -"do a few things the we didn't have to do before." +"That's it for the client. We only have to implement :code:`Client` or " +":code:`NumPyClient` and call :code:`fl.client.start_client()`. If you " +"implement a client of type :code:`NumPyClient` you'll need to first call " +"its :code:`to_client()` method. The string :code:`\"0.0.0.0:8080\"` tells" +" the client which server to connect to. In our case we can run the server" +" and the client on the same machine, therefore we use " +":code:`\"0.0.0.0:8080\"`. If we run a truly federated workload with the " +"server and clients running on different machines, all that needs to " +"change is the :code:`server_address` we pass to the client." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:18 +#: ../../source/tutorial-quickstart-scikitlearn.rst:159 msgid "" -"Let's go deeper and see what it takes to move from ``NumPyClient`` to " -"``Client``!" +"The following Flower server is a little bit more advanced and returns an " +"evaluation function for the server-side evaluation. First, we import " +"again all required libraries such as Flower and scikit-learn." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:30 -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:29 -msgid "Step 0: Preparation" +#: ../../source/tutorial-quickstart-scikitlearn.rst:162 +msgid ":code:`server.py`, import Flower and start the server:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:117 +#: ../../source/tutorial-quickstart-scikitlearn.rst:173 msgid "" -"Let's now load the CIFAR-10 training and test set, partition them into " -"ten smaller datasets (each split into training and validation set), and " -"wrap everything in their own ``DataLoader``." -msgstr "" - -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:259 -msgid "Step 1: Revisiting NumPyClient" +"The number of federated learning rounds is set in :code:`fit_round()` and" +" the evaluation is defined in :code:`get_evaluate_fn()`. The evaluation " +"function is called after each federated learning round and gives you " +"information about loss and accuracy." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:261 +#: ../../source/tutorial-quickstart-scikitlearn.rst:198 msgid "" -"So far, we've implemented our client by subclassing " -"``flwr.client.NumPyClient``. The three methods we implemented are " -"``get_parameters``, ``fit``, and ``evaluate``. Finally, we wrap the " -"creation of instances of this class in a function called ``client_fn``:" +"The :code:`main` contains the server-side parameter initialization " +":code:`utils.set_initial_params()` as well as the aggregation strategy " +":code:`fl.server.strategy:FedAvg()`. The strategy is the default one, " +"federated averaging (or FedAvg), with two clients and evaluation after " +"each federated learning round. The server can be started with the command" +" :code:`fl.server.start_server(server_address=\"0.0.0.0:8080\", " +"strategy=strategy, config=fl.server.ServerConfig(num_rounds=3))`." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:309 +#: ../../source/tutorial-quickstart-scikitlearn.rst:217 msgid "" -"We've seen this before, there's nothing new so far. 
The only *tiny* " -"difference compared to the previous notebook is naming, we've changed " -"``FlowerClient`` to ``FlowerNumPyClient`` and ``client_fn`` to " -"``numpyclient_fn``. Let's run it to see the output we get:" +"With both client and server ready, we can now run everything and see " +"federated learning in action. Federated learning systems usually have a " +"server and multiple clients. We, therefore, have to start the server " +"first:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:339 +#: ../../source/tutorial-quickstart-scikitlearn.rst:271 msgid "" -"This works as expected, two clients are training for three rounds of " -"federated learning." +"Congratulations! You've successfully built and run your first federated " +"learning system. The full `source code " +"`_ for this example can be found in :code:`examples/sklearn-logreg-" +"mnist`." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:341 +#: ../../source/tutorial-quickstart-tensorflow.rst:-1 msgid "" -"Let's dive a little bit deeper and discuss how Flower executes this " -"simulation. Whenever a client is selected to do some work, " -"``start_simulation`` calls the function ``numpyclient_fn`` to create an " -"instance of our ``FlowerNumPyClient`` (along with loading the model and " -"the data)." +"Check out this Federated Learning quickstart tutorial for using Flower " +"with TensorFlow to train a MobilNetV2 model on CIFAR-10." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:343 -msgid "" -"But here's the perhaps surprising part: Flower doesn't actually use the " -"``FlowerNumPyClient`` object directly. Instead, it wraps the object to " -"makes it look like a subclass of ``flwr.client.Client``, not " -"``flwr.client.NumPyClient``. In fact, the Flower core framework doesn't " -"know how to handle ``NumPyClient``'s, it only knows how to handle " -"``Client``'s. ``NumPyClient`` is just a convenience abstraction built on " -"top of ``Client``." +#: ../../source/tutorial-quickstart-tensorflow.rst:5 +msgid "Quickstart TensorFlow" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:345 -msgid "" -"Instead of building on top of ``NumPyClient``, we can directly build on " -"top of ``Client``." +#: ../../source/tutorial-quickstart-tensorflow.rst:13 +msgid "Let's build a federated learning system in less than 20 lines of code!" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:357 -msgid "Step 2: Moving from ``NumPyClient`` to ``Client``" +#: ../../source/tutorial-quickstart-tensorflow.rst:15 +msgid "Before Flower can be imported we have to install it:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:359 +#: ../../source/tutorial-quickstart-tensorflow.rst:21 msgid "" -"Let's try to do the same thing using ``Client`` instead of " -"``NumPyClient``." -msgstr "" - -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:465 -msgid "" -"Before we discuss the code in more detail, let's try to run it! Gotta " -"make sure our new ``Client``-based client works, right?" +"Since we want to use the Keras API of TensorFlow (TF), we have to install" +" TF as well:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:490 -msgid "" -"That's it, we're now using ``Client``. It probably looks similar to what " -"we've done with ``NumPyClient``. So what's the difference?" 
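The scikit-learn quickstart entries above outline an :code:`MnistClient` built on :code:`NumPyClient` and started with :code:`fl.client.start_client()`. Below is a minimal sketch of such a client; the synthetic data and the inlined parameter helpers are assumptions made here for brevity (the quickstart keeps the helpers in a :code:`utils` module, e.g. :code:`utils.set_model_params()`), not the example's actual code::

    import flwr as fl
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import log_loss

    # Synthetic stand-in data; the actual quickstart loads MNIST instead.
    rng = np.random.default_rng(0)
    X_train, y_train = rng.normal(size=(200, 784)), rng.integers(0, 10, 200)
    X_test, y_test = rng.normal(size=(50, 784)), rng.integers(0, 10, 50)

    model = LogisticRegression(max_iter=50, warm_start=True)

    # The quickstart keeps these helpers in a utils module; inlined here for brevity.
    def set_initial_params(model):
        model.classes_ = np.arange(10)
        model.coef_ = np.zeros((10, 784))
        model.intercept_ = np.zeros(10)

    def get_model_parameters(model):
        return [model.coef_, model.intercept_]

    def set_model_params(model, parameters):
        model.coef_, model.intercept_ = parameters

    set_initial_params(model)

    class MnistClient(fl.client.NumPyClient):
        def get_parameters(self, config):
            return get_model_parameters(model)

        def fit(self, parameters, config):
            set_model_params(model, parameters)
            model.fit(X_train, y_train)  # local training on this client's data
            return get_model_parameters(model), len(X_train), {}

        def evaluate(self, parameters, config):
            set_model_params(model, parameters)
            loss = log_loss(y_test, model.predict_proba(X_test), labels=model.classes_)
            accuracy = model.score(X_test, y_test)
            return float(loss), len(X_test), {"accuracy": float(accuracy)}

    fl.client.start_client(
        server_address="0.0.0.0:8080",
        client=MnistClient().to_client(),
    )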
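The server side described in those entries uses :code:`fit_round()`, :code:`get_evaluate_fn()` and the :code:`FedAvg` strategy. A hedged sketch follows, assuming a held-out test set, the same zero-initialised logistic-regression parameters as on the client, and an illustrative :code:`server_round` config key::

    import flwr as fl
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import log_loss

    def fit_round(server_round):
        # Share the current round number with the clients via the config dict
        return {"server_round": server_round}

    def get_evaluate_fn(model, X_test, y_test):
        # Return a function that evaluates the aggregated model on a held-out set
        def evaluate(server_round, parameters, config):
            model.coef_, model.intercept_ = parameters
            loss = log_loss(y_test, model.predict_proba(X_test), labels=model.classes_)
            accuracy = model.score(X_test, y_test)
            return float(loss), {"accuracy": float(accuracy)}
        return evaluate

    # Server-side parameter initialisation (utils.set_initial_params() in the example)
    model = LogisticRegression()
    model.classes_ = np.arange(10)
    model.coef_ = np.zeros((10, 784))
    model.intercept_ = np.zeros(10)

    # Synthetic held-out data for illustration only
    rng = np.random.default_rng(1)
    X_test, y_test = rng.normal(size=(50, 784)), rng.integers(0, 10, 50)

    strategy = fl.server.strategy.FedAvg(
        min_available_clients=2,
        evaluate_fn=get_evaluate_fn(model, X_test, y_test),
        on_fit_config_fn=fit_round,
    )

    fl.server.start_server(
        server_address="0.0.0.0:8080",
        strategy=strategy,
        config=fl.server.ServerConfig(num_rounds=3),
    )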
+#: ../../source/tutorial-quickstart-tensorflow.rst:31 +msgid "Next, in a file called :code:`client.py`, import Flower and TensorFlow:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:492 +#: ../../source/tutorial-quickstart-tensorflow.rst:38 msgid "" -"First of all, it's more code. But why? The difference comes from the fact" -" that ``Client`` expects us to take care of parameter serialization and " -"deserialization. For Flower to be able to send parameters over the " -"network, it eventually needs to turn these parameters into ``bytes``. " -"Turning parameters (e.g., NumPy ``ndarray``'s) into raw bytes is called " -"serialization. Turning raw bytes into something more useful (like NumPy " -"``ndarray``'s) is called deserialization. Flower needs to do both: it " -"needs to serialize parameters on the server-side and send them to the " -"client, the client needs to deserialize them to use them for local " -"training, and then serialize the updated parameters again to send them " -"back to the server, which (finally!) deserializes them again in order to " -"aggregate them with the updates received from other clients." +"We use the Keras utilities of TF to load CIFAR10, a popular colored image" +" classification dataset for machine learning. The call to " +":code:`tf.keras.datasets.cifar10.load_data()` downloads CIFAR10, caches " +"it locally, and then returns the entire training and test set as NumPy " +"ndarrays." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:495 +#: ../../source/tutorial-quickstart-tensorflow.rst:47 msgid "" -"The only *real* difference between Client and NumPyClient is that " -"NumPyClient takes care of serialization and deserialization for you. It " -"can do so because it expects you to return parameters as NumPy ndarray's," -" and it knows how to handle these. This makes working with machine " -"learning libraries that have good NumPy support (most of them) a breeze." +"Next, we need a model. For the purpose of this tutorial, we use " +"MobilNetV2 with 10 output classes:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:497 +#: ../../source/tutorial-quickstart-tensorflow.rst:60 msgid "" -"In terms of API, there's one major difference: all methods in Client take" -" exactly one argument (e.g., ``FitIns`` in ``Client.fit``) and return " -"exactly one value (e.g., ``FitRes`` in ``Client.fit``). The methods in " -"``NumPyClient`` on the other hand have multiple arguments (e.g., " -"``parameters`` and ``config`` in ``NumPyClient.fit``) and multiple return" -" values (e.g., ``parameters``, ``num_example``, and ``metrics`` in " -"``NumPyClient.fit``) if there are multiple things to handle. These " -"``*Ins`` and ``*Res`` objects in ``Client`` wrap all the individual " -"values you're used to from ``NumPyClient``." +"Flower provides a convenience class called :code:`NumPyClient` which " +"makes it easier to implement the :code:`Client` interface when your " +"workload uses Keras. The :code:`NumPyClient` interface defines three " +"methods which can be implemented in the following way:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:510 -msgid "Step 3: Custom serialization" +#: ../../source/tutorial-quickstart-tensorflow.rst:135 +msgid "Each client will have its own dataset." 
msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:512 +#: ../../source/tutorial-quickstart-tensorflow.rst:137 msgid "" -"Here we will explore how to implement custom serialization with a simple " -"example." +"You should now see how the training does in the very first terminal (the " +"one that started the server):" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:514 +#: ../../source/tutorial-quickstart-tensorflow.rst:169 msgid "" -"But first what is serialization? Serialization is just the process of " -"converting an object into raw bytes, and equally as important, " -"deserialization is the process of converting raw bytes back into an " -"object. This is very useful for network communication. Indeed, without " -"serialization, you could not just a Python object through the internet." +"Congratulations! You've successfully built and run your first federated " +"learning system. The full `source code " +"`_ for this can be found in :code:`examples" +"/quickstart-tensorflow/client.py`." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:516 +#: ../../source/tutorial-quickstart-xgboost.rst:-1 msgid "" -"Federated Learning relies heavily on internet communication for training " -"by sending Python objects back and forth between the clients and the " -"server. This means that serialization is an essential part of Federated " -"Learning." +"Check out this Federated Learning quickstart tutorial for using Flower " +"with XGBoost to train classification models on trees." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:518 -msgid "" -"In the following section, we will write a basic example where instead of " -"sending a serialized version of our ``ndarray``\\ s containing our " -"parameters, we will first convert the ``ndarray`` into sparse matrices, " -"before sending them. This technique can be used to save bandwidth, as in " -"certain cases where the weights of a model are sparse (containing many 0 " -"entries), converting them to a sparse matrix can greatly improve their " -"bytesize." +#: ../../source/tutorial-quickstart-xgboost.rst:5 +msgid "Quickstart XGBoost" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:521 -msgid "Our custom serialization/deserialization functions" +#: ../../source/tutorial-quickstart-xgboost.rst:14 +msgid "Federated XGBoost" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:523 +#: ../../source/tutorial-quickstart-xgboost.rst:16 msgid "" -"This is where the real serialization/deserialization will happen, " -"especially in ``ndarray_to_sparse_bytes`` for serialization and " -"``sparse_bytes_to_ndarray`` for deserialization." +"EXtreme Gradient Boosting (**XGBoost**) is a robust and efficient " +"implementation of gradient-boosted decision tree (**GBDT**), that " +"maximises the computational boundaries for boosted tree methods. It's " +"primarily designed to enhance both the performance and computational " +"speed of machine learning models. In XGBoost, trees are constructed " +"concurrently, unlike the sequential approach taken by GBDT." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:525 +#: ../../source/tutorial-quickstart-xgboost.rst:20 msgid "" -"Note that we imported the ``scipy.sparse`` library in order to convert " -"our arrays." 
-msgstr "" - -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:613 -msgid "Client-side" +"Often, for tabular data on medium-sized datasets with fewer than 10k " +"training examples, XGBoost surpasses the results of deep learning " +"techniques." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:615 -msgid "" -"To be able to serialize our ``ndarray``\\ s into sparse " -"parameters, we will just have to call our custom functions in our " -"``flwr.client.Client``." +#: ../../source/tutorial-quickstart-xgboost.rst:23 +msgid "Why federated XGBoost?" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:617 +#: ../../source/tutorial-quickstart-xgboost.rst:25 msgid "" -"Indeed, in ``get_parameters`` we need to serialize the parameters we got " -"from our network using our custom ``ndarrays_to_sparse_parameters`` " -"defined above." +"Indeed, as the demand for data privacy and decentralized learning grows, " +"there's an increasing requirement to implement federated XGBoost systems " +"for specialised applications, like survival analysis and financial fraud " +"detection." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:619 +#: ../../source/tutorial-quickstart-xgboost.rst:27 msgid "" -"In ``fit``, we first need to deserialize the parameters coming from the " -"server using our custom ``sparse_parameters_to_ndarrays`` and then we " -"need to serialize our local results with " -"``ndarrays_to_sparse_parameters``." +"Federated learning ensures that raw data remains on the local device, " +"making it an attractive approach for sensitive domains where data " +"security and privacy are paramount. Given the robustness and efficiency " +"of XGBoost, combining it with federated learning offers a promising " +"solution for these specific challenges." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:621 +#: ../../source/tutorial-quickstart-xgboost.rst:30 msgid "" -"In ``evaluate``, we will only need to deserialize the global parameters " -"with our custom function." +"In this tutorial we will learn how to train a federated XGBoost model on " +"HIGGS dataset using Flower and :code:`xgboost` package. We use a simple " +"example (`full code xgboost-quickstart " +"`_)" +" with two *clients* and one *server* to demonstrate how federated XGBoost" +" works, and then we dive into a more complex example (`full code xgboost-" +"comprehensive `_) to run various experiments." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:725 -msgid "Server-side" +#: ../../source/tutorial-quickstart-xgboost.rst:37 +msgid "Environment Setup" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:727 +#: ../../source/tutorial-quickstart-xgboost.rst:41 msgid "" -"For this example, we will just use ``FedAvg`` as a strategy. To change " -"the serialization and deserialization here, we only need to reimplement " -"the ``evaluate`` and ``aggregate_fit`` functions of ``FedAvg``. The other" -" functions of the strategy will be inherited from the super class " -"``FedAvg``." +"We first need to install Flower and Flower Datasets. 
You can do this by " +"running :" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:729 -msgid "As you can see only one line as change in ``evaluate``:" +#: ../../source/tutorial-quickstart-xgboost.rst:47 +msgid "" +"Since we want to use :code:`xgboost` package to build up XGBoost trees, " +"let's go ahead and install :code:`xgboost`:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:735 +#: ../../source/tutorial-quickstart-xgboost.rst:57 msgid "" -"And for ``aggregate_fit``, we will first deserialize every result we " -"received:" +"*Clients* are responsible for generating individual weight-updates for " +"the model based on their local datasets. Now that we have all our " +"dependencies installed, let's run a simple distributed training with two " +"clients and one server." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:744 -msgid "And then serialize the aggregated result:" +#: ../../source/tutorial-quickstart-xgboost.rst:60 +msgid "" +"In a file called :code:`client.py`, import xgboost, Flower, Flower " +"Datasets and other related functions:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:903 -msgid "We can now run our custom serialization example!" +#: ../../source/tutorial-quickstart-xgboost.rst:87 +msgid "Dataset partition and hyper-parameter selection" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:934 +#: ../../source/tutorial-quickstart-xgboost.rst:89 msgid "" -"In this part of the tutorial, we've seen how we can build clients by " -"subclassing either ``NumPyClient`` or ``Client``. ``NumPyClient`` is a " -"convenience abstraction that makes it easier to work with machine " -"learning libraries that have good NumPy interoperability. ``Client`` is a" -" more flexible abstraction that allows us to do things that are not " -"possible in ``NumPyClient``. In order to do so, it requires us to handle " -"parameter serialization and deserialization ourselves." +"Prior to local training, we require loading the HIGGS dataset from Flower" +" Datasets and conduct data partitioning for FL:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:952 +#: ../../source/tutorial-quickstart-xgboost.rst:102 msgid "" -"This is the final part of the Flower tutorial (for now!), " -"congratulations! You're now well equipped to understand the rest of the " -"documentation. There are many topics we didn't cover in the tutorial, we " -"recommend the following resources:" +"In this example, we split the dataset into two partitions with uniform " +"distribution (:code:`IidPartitioner(num_partitions=2)`). Then, we load " +"the partition for the given client based on :code:`node_id`:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:954 -msgid "`Read Flower Docs `__" +#: ../../source/tutorial-quickstart-xgboost.rst:121 +msgid "" +"After that, we do train/test splitting on the given partition (client's " +"local data), and transform data format for :code:`xgboost` package." 
msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:955 +#: ../../source/tutorial-quickstart-xgboost.rst:134 msgid "" -"`Check out Flower Code Examples " -"`__" +"The functions of :code:`train_test_split` and " +":code:`transform_dataset_to_dmatrix` are defined as below:" msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:956 -msgid "" -"`Use Flower Baselines for your research " -"`__" +#: ../../source/tutorial-quickstart-xgboost.rst:158 +msgid "Finally, we define the hyper-parameters used for XGBoost training." msgstr "" -#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:957 +#: ../../source/tutorial-quickstart-xgboost.rst:174 msgid "" -"`Watch Flower Summit 2023 videos `__" +"The :code:`num_local_round` represents the number of iterations for local" +" tree boost. We use CPU for the training in default. One can shift it to " +"GPU by setting :code:`tree_method` to :code:`gpu_hist`. We use AUC as " +"evaluation metric." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:9 -msgid "Get started with Flower" +#: ../../source/tutorial-quickstart-xgboost.rst:181 +msgid "Flower client definition for XGBoost" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:11 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:11 -msgid "Welcome to the Flower federated learning tutorial!" +#: ../../source/tutorial-quickstart-xgboost.rst:183 +msgid "" +"After loading the dataset we define the Flower client. We follow the " +"general rule to define :code:`XgbClient` class inherited from " +":code:`fl.client.Client`." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:13 +#: ../../source/tutorial-quickstart-xgboost.rst:193 msgid "" -"In this notebook, we'll build a federated learning system using Flower, " -"`Flower Datasets `__ and PyTorch. In " -"part 1, we use PyTorch for the model training pipeline and data loading. " -"In part 2, we continue to federate the PyTorch-based pipeline using " -"Flower." +"The :code:`self.bst` is used to keep the Booster objects that remain " +"consistent across rounds, allowing them to store predictions from trees " +"integrated in earlier rounds and maintain other essential data structures" +" for training." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:17 -msgid "Let's get stated!" +#: ../../source/tutorial-quickstart-xgboost.rst:196 +msgid "" +"Then, we override :code:`get_parameters`, :code:`fit` and " +":code:`evaluate` methods insides :code:`XgbClient` class as follows." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:31 +#: ../../source/tutorial-quickstart-xgboost.rst:210 msgid "" -"Before we begin with any actual code, let's make sure that we have " -"everything we need." +"Unlike neural network training, XGBoost trees are not started from a " +"specified random weights. In this case, we do not use " +":code:`get_parameters` and :code:`set_parameters` to initialise model " +"parameters for XGBoost. As a result, let's return an empty tensor in " +":code:`get_parameters` when it is called by the server at the first " +"round." 
msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:45 +#: ../../source/tutorial-quickstart-xgboost.rst:251 msgid "" -"Next, we install the necessary packages for PyTorch (``torch`` and " -"``torchvision``), Flower Datasets (``flwr-datasets``) and Flower " -"(``flwr``):" +"In :code:`fit`, at the first round, we call :code:`xgb.train()` to build " +"up the first set of trees. the returned Booster object and config are " +"stored in :code:`self.bst` and :code:`self.config`, respectively. From " +"the second round, we load the global model sent from server to " +":code:`self.bst`, and then update model weights on local training data " +"with function :code:`local_boost` as follows:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:105 +#: ../../source/tutorial-quickstart-xgboost.rst:269 msgid "" -"It is possible to switch to a runtime that has GPU acceleration enabled " -"(on Google Colab: ``Runtime > Change runtime type > Hardware accelerator:" -" GPU > Save``). Note, however, that Google Colab is not always able to " -"offer GPU acceleration. If you see an error related to GPU availability " -"in one of the following sections, consider switching back to CPU-based " -"execution by setting ``DEVICE = torch.device(\"cpu\")``. If the runtime " -"has GPU acceleration enabled, you should see the output ``Training on " -"cuda``, otherwise it'll say ``Training on cpu``." +"Given :code:`num_local_round`, we update trees by calling " +":code:`self.bst.update` method. After training, the last " +":code:`N=num_local_round` trees will be extracted to send to the server." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:118 -msgid "Loading the data" +#: ../../source/tutorial-quickstart-xgboost.rst:291 +msgid "" +"In :code:`evaluate`, we call :code:`self.bst.eval_set` function to " +"conduct evaluation on valid set. The AUC value will be returned." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:120 +#: ../../source/tutorial-quickstart-xgboost.rst:294 msgid "" -"Federated learning can be applied to many different types of tasks across" -" different domains. In this tutorial, we introduce federated learning by " -"training a simple convolutional neural network (CNN) on the popular " -"CIFAR-10 dataset. CIFAR-10 can be used to train image classifiers that " -"distinguish between images from ten different classes: 'airplane', " -"'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', and " -"'truck'." +"Now, we can create an instance of our class :code:`XgbClient` and add one" +" line to actually run this client:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:131 +#: ../../source/tutorial-quickstart-xgboost.rst:300 msgid "" -"We simulate having multiple datasets from multiple organizations (also " -"called the \"cross-silo\" setting in federated learning) by splitting the" -" original CIFAR-10 dataset into multiple partitions. Each partition will " -"represent the data from a single organization. We're doing this purely " -"for experimentation purposes, in the real world there's no need for data " -"splitting because each organization already has their own data (so the " -"data is naturally partitioned)." +"That's it for the client. We only have to implement :code:`Client`and " +"call :code:`fl.client.start_client()`. The string :code:`\"[::]:8080\"` " +"tells the client which server to connect to. 
In our case we can run the " +"server and the client on the same machine, therefore we use " +":code:`\"[::]:8080\"`. If we run a truly federated workload with the " +"server and clients running on different machines, all that needs to " +"change is the :code:`server_address` we point the client at." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:133 +#: ../../source/tutorial-quickstart-xgboost.rst:311 msgid "" -"Each organization will act as a client in the federated learning system. " -"So having ten organizations participate in a federation means having ten " -"clients connected to the federated learning server." +"These updates are then sent to the *server* which will aggregate them to " +"produce a better model. Finally, the *server* sends this improved version" +" of the model back to each *client* to finish a complete FL round." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:144 +#: ../../source/tutorial-quickstart-xgboost.rst:314 msgid "" -"Let's now create the Federated Dataset abstraction that from ``flwr-" -"datasets`` that partitions the CIFAR-10. We will create small training " -"and test set for each edge device and wrap each of them into a PyTorch " -"``DataLoader``:" +"In a file named :code:`server.py`, import Flower and FedXgbBagging from " +":code:`flwr.server.strategy`." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:198 -msgid "" -"We now have a list of ten training sets and ten validation sets " -"(``trainloaders`` and ``valloaders``) representing the data of ten " -"different organizations. Each ``trainloader``/``valloader`` pair contains" -" 4500 training examples and 500 validation examples. There's also a " -"single ``testloader`` (we did not split the test set). Again, this is " -"only necessary for building research or educational systems, actual " -"federated learning systems have their data naturally distributed across " -"multiple partitions." +#: ../../source/tutorial-quickstart-xgboost.rst:316 +msgid "We first define a strategy for XGBoost bagging aggregation." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:201 +#: ../../source/tutorial-quickstart-xgboost.rst:339 msgid "" -"Let's take a look at the first batch of images and labels in the first " -"training set (i.e., ``trainloaders[0]``) before we move on:" +"We use two clients for this example. An " +":code:`evaluate_metrics_aggregation` function is defined to collect and " +"wighted average the AUC values from clients." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:240 -msgid "" -"The output above shows a random batch of images from the first " -"``trainloader`` in our list of ten ``trainloaders``. It also prints the " -"labels associated with each image (i.e., one of the ten possible labels " -"we've seen above). If you run the cell again, you should see another " -"batch of images." +#: ../../source/tutorial-quickstart-xgboost.rst:342 +msgid "Then, we start the server:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:252 -msgid "Step 1: Centralized Training with PyTorch" +#: ../../source/tutorial-quickstart-xgboost.rst:354 +msgid "Tree-based bagging aggregation" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:263 +#: ../../source/tutorial-quickstart-xgboost.rst:356 msgid "" -"Next, we're going to use PyTorch to define a simple convolutional neural " -"network. 
This introduction assumes basic familiarity with PyTorch, so it " -"doesn't cover the PyTorch-related aspects in full detail. If you want to " -"dive deeper into PyTorch, we recommend `DEEP LEARNING WITH PYTORCH: A 60 " -"MINUTE BLITZ " -"`__." +"You must be curious about how bagging aggregation works. Let's look into " +"the details." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:275 -msgid "Defining the model" +#: ../../source/tutorial-quickstart-xgboost.rst:358 +msgid "" +"In file :code:`flwr.server.strategy.fedxgb_bagging.py`, we define " +":code:`FedXgbBagging` inherited from :code:`flwr.server.strategy.FedAvg`." +" Then, we override the :code:`aggregate_fit`, :code:`aggregate_evaluate` " +"and :code:`evaluate` methods as follows:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:277 +#: ../../source/tutorial-quickstart-xgboost.rst:454 msgid "" -"We use the simple CNN described in the `PyTorch tutorial " -"`__:" +"In :code:`aggregate_fit`, we sequentially aggregate the clients' XGBoost " +"trees by calling :code:`aggregate()` function:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:314 -msgid "Let's continue with the usual training and test functions:" +#: ../../source/tutorial-quickstart-xgboost.rst:513 +msgid "" +"In this function, we first fetch the number of trees and the number of " +"parallel trees for the current and previous model by calling " +":code:`_get_tree_nums`. Then, the fetched information will be aggregated." +" After that, the trees (containing model weights) are aggregated to " +"generate a new tree model." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:374 -msgid "Training the model" +#: ../../source/tutorial-quickstart-xgboost.rst:518 +msgid "" +"After traversal of all clients' models, a new global model is generated, " +"followed by the serialisation, and sending back to each client." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:376 +#: ../../source/tutorial-quickstart-xgboost.rst:523 +msgid "Launch Federated XGBoost!" +msgstr "" + +#: ../../source/tutorial-quickstart-xgboost.rst:585 msgid "" -"We now have all the basic building blocks we need: a dataset, a model, a " -"training function, and a test function. Let's put them together to train " -"the model on the dataset of one of our organizations " -"(``trainloaders[0]``). This simulates the reality of most machine " -"learning projects today: each organization has their own data and trains " -"models only on this internal data:" +"Congratulations! You've successfully built and run your first federated " +"XGBoost system. The AUC values can be checked in " +":code:`metrics_distributed`. One can see that the average AUC increases " +"over FL rounds." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:406 +#: ../../source/tutorial-quickstart-xgboost.rst:590 msgid "" -"Training the simple CNN on our CIFAR-10 split for 5 epochs should result " -"in a test set accuracy of about 41%, which is not good, but at the same " -"time, it doesn't really matter for the purposes of this tutorial. The " -"intent was just to show a simplistic centralized training pipeline that " -"sets the stage for what comes next - federated learning!" +"The full `source code `_ for this example can be found in :code:`examples" +"/xgboost-quickstart`." 
msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:418 -msgid "Step 2: Federated Learning with Flower" +#: ../../source/tutorial-quickstart-xgboost.rst:594 +msgid "Comprehensive Federated XGBoost" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:420 +#: ../../source/tutorial-quickstart-xgboost.rst:596 msgid "" -"Step 1 demonstrated a simple centralized training pipeline. All data was " -"in one place (i.e., a single ``trainloader`` and a single ``valloader``)." -" Next, we'll simulate a situation where we have multiple datasets in " -"multiple organizations and where we train a model over these " -"organizations using federated learning." +"Now that you have known how federated XGBoost work with Flower, it's time" +" to run some more comprehensive experiments by customising the " +"experimental settings. In the xgboost-comprehensive example (`full code " +"`_), we provide more options to define various experimental" +" setups, including aggregation strategies, data partitioning and " +"centralised/distributed evaluation. We also support :doc:`Flower " +"simulation ` making it easy to simulate large " +"client cohorts in a resource-aware manner. Let's take a look!" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:432 -msgid "Updating model parameters" +#: ../../source/tutorial-quickstart-xgboost.rst:603 +msgid "Cyclic training" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:434 +#: ../../source/tutorial-quickstart-xgboost.rst:605 msgid "" -"In federated learning, the server sends the global model parameters to " -"the client, and the client updates the local model with the parameters " -"received from the server. It then trains the model on the local data " -"(which changes the model parameters locally) and sends the " -"updated/changed model parameters back to the server (or, alternatively, " -"it sends just the gradients back to the server, not the full model " -"parameters)." +"In addition to bagging aggregation, we offer a cyclic training scheme, " +"which performs FL in a client-by-client fashion. Instead of aggregating " +"multiple clients, there is only one single client participating in the " +"training per round in the cyclic training scenario. The trained local " +"XGBoost trees will be passed to the next client as an initialised model " +"for next round's boosting." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:436 +#: ../../source/tutorial-quickstart-xgboost.rst:609 msgid "" -"We need two helper functions to update the local model with parameters " -"received from the server and to get the updated model parameters from the" -" local model: ``set_parameters`` and ``get_parameters``. The following " -"two functions do just that for the PyTorch model above." +"To do this, we first customise a :code:`ClientManager` in " +":code:`server_utils.py`:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:438 +#: ../../source/tutorial-quickstart-xgboost.rst:649 msgid "" -"The details of how this works are not really important here (feel free to" -" consult the PyTorch documentation if you want to learn more). In " -"essence, we use ``state_dict`` to access PyTorch model parameter tensors." 
-" The parameter tensors are then converted to/from a list of NumPy " -"ndarray's (which Flower knows how to serialize/deserialize):" +"The customised :code:`ClientManager` samples all available clients in " +"each FL round based on the order of connection to the server. Then, we " +"define a new strategy :code:`FedXgbCyclic` in " +":code:`flwr.server.strategy.fedxgb_cyclic.py`, in order to sequentially " +"select only one client in given round and pass the received model to next" +" client." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:466 -msgid "Implementing a Flower client" +#: ../../source/tutorial-quickstart-xgboost.rst:690 +msgid "" +"Unlike the original :code:`FedAvg`, we don't perform aggregation here. " +"Instead, we just make a copy of the received client model as global model" +" by overriding :code:`aggregate_fit`." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:468 +#: ../../source/tutorial-quickstart-xgboost.rst:693 msgid "" -"With that out of the way, let's move on to the interesting part. " -"Federated learning systems consist of a server and multiple clients. In " -"Flower, we create clients by implementing subclasses of " -"``flwr.client.Client`` or ``flwr.client.NumPyClient``. We use " -"``NumPyClient`` in this tutorial because it is easier to implement and " -"requires us to write less boilerplate." +"Also, the customised :code:`configure_fit` and :code:`configure_evaluate`" +" methods ensure the clients to be sequentially selected given FL round:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:470 +#: ../../source/tutorial-quickstart-xgboost.rst:757 +msgid "Customised data partitioning" +msgstr "" + +#: ../../source/tutorial-quickstart-xgboost.rst:759 msgid "" -"To implement the Flower client, we create a subclass of " -"``flwr.client.NumPyClient`` and implement the three methods " -"``get_parameters``, ``fit``, and ``evaluate``:" +"In :code:`dataset.py`, we have a function :code:`instantiate_partitioner`" +" to instantiate the data partitioner based on the given " +":code:`num_partitions` and :code:`partitioner_type`. Currently, we " +"provide four supported partitioner type to simulate the uniformity/non-" +"uniformity in data quantity (uniform, linear, square, exponential)." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:472 -msgid "``get_parameters``: Return the current local model parameters" +#: ../../source/tutorial-quickstart-xgboost.rst:790 +msgid "Customised centralised/distributed evaluation" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:473 +#: ../../source/tutorial-quickstart-xgboost.rst:792 msgid "" -"``fit``: Receive model parameters from the server, train the model " -"parameters on the local data, and return the (updated) model parameters " -"to the server" +"To facilitate centralised evaluation, we define a function in " +":code:`server_utils.py`:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:474 +#: ../../source/tutorial-quickstart-xgboost.rst:824 msgid "" -"``evaluate``: Receive model parameters from the server, evaluate the " -"model parameters on the local data, and return the evaluation result to " -"the server" +"This function returns a evaluation function which instantiates a " +":code:`Booster` object and loads the global model weights to it. 
The " +"evaluation is conducted by calling :code:`eval_set()` method, and the " +"tested AUC value is reported." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:476 +#: ../../source/tutorial-quickstart-xgboost.rst:827 msgid "" -"We mentioned that our clients will use the previously defined PyTorch " -"components for model training and evaluation. Let's see a simple Flower " -"client implementation that brings everything together:" +"As for distributed evaluation on the clients, it's same as the quick-" +"start example by overriding the :code:`evaluate()` method insides the " +":code:`XgbClient` class in :code:`client_utils.py`." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:513 -msgid "" -"Our class ``FlowerClient`` defines how local training/evaluation will be " -"performed and allows Flower to call the local training/evaluation through" -" ``fit`` and ``evaluate``. Each instance of ``FlowerClient`` represents a" -" *single client* in our federated learning system. Federated learning " -"systems have multiple clients (otherwise, there's not much to federate), " -"so each client will be represented by its own instance of " -"``FlowerClient``. If we have, for example, three clients in our workload," -" then we'd have three instances of ``FlowerClient``. Flower calls " -"``FlowerClient.fit`` on the respective instance when the server selects a" -" particular client for training (and ``FlowerClient.evaluate`` for " -"evaluation)." +#: ../../source/tutorial-quickstart-xgboost.rst:831 +msgid "Flower simulation" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:517 -msgid "Using the Virtual Client Engine" +#: ../../source/tutorial-quickstart-xgboost.rst:832 +msgid "" +"We also provide an example code (:code:`sim.py`) to use the simulation " +"capabilities of Flower to simulate federated XGBoost training on either a" +" single machine or a cluster of machines." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:519 +#: ../../source/tutorial-quickstart-xgboost.rst:866 msgid "" -"In this notebook, we want to simulate a federated learning system with 10" -" clients on a single machine. This means that the server and all 10 " -"clients will live on a single machine and share resources such as CPU, " -"GPU, and memory. Having 10 clients would mean having 10 instances of " -"``FlowerClient`` in memory. Doing this on a single machine can quickly " -"exhaust the available memory resources, even if only a subset of these " -"clients participates in a single round of federated learning." +"After importing all required packages, we define a :code:`main()` " +"function to perform the simulation process:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:521 +#: ../../source/tutorial-quickstart-xgboost.rst:921 msgid "" -"In addition to the regular capabilities where server and clients run on " -"multiple machines, Flower, therefore, provides special simulation " -"capabilities that create ``FlowerClient`` instances only when they are " -"actually necessary for training or evaluation. To enable the Flower " -"framework to create clients when necessary, we need to implement a " -"function called ``client_fn`` that creates a ``FlowerClient`` instance on" -" demand. 
Flower calls ``client_fn`` whenever it needs an instance of one " -"particular client to call ``fit`` or ``evaluate`` (those instances are " -"usually discarded after use, so they should not keep any local state). " -"Clients are identified by a client ID, or short ``cid``. The ``cid`` can " -"be used, for example, to load different local data partitions for " -"different clients, as can be seen below:" +"We first load the dataset and perform data partitioning, and the pre-" +"processed data is stored in a :code:`list`. After the simulation begins, " +"the clients won't need to pre-process their partitions again." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:556 -msgid "Starting the training" +#: ../../source/tutorial-quickstart-xgboost.rst:924 +msgid "Then, we define the strategies and other hyper-parameters:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:558 +#: ../../source/tutorial-quickstart-xgboost.rst:975 msgid "" -"We now have the class ``FlowerClient`` which defines client-side " -"training/evaluation and ``client_fn`` which allows Flower to create " -"``FlowerClient`` instances whenever it needs to call ``fit`` or " -"``evaluate`` on one particular client. The last step is to start the " -"actual simulation using ``flwr.simulation.start_simulation``." +"After that, we start the simulation by calling " +":code:`fl.simulation.start_simulation`:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:560 +#: ../../source/tutorial-quickstart-xgboost.rst:995 msgid "" -"The function ``start_simulation`` accepts a number of arguments, amongst " -"them the ``client_fn`` used to create ``FlowerClient`` instances, the " -"number of clients to simulate (``num_clients``), the number of federated " -"learning rounds (``num_rounds``), and the strategy. The strategy " -"encapsulates the federated learning approach/algorithm, for example, " -"*Federated Averaging* (FedAvg)." +"One of key parameters for :code:`start_simulation` is :code:`client_fn` " +"which returns a function to construct a client. We define it as follows:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:562 +#: ../../source/tutorial-quickstart-xgboost.rst:1038 +msgid "Arguments parser" +msgstr "" + +#: ../../source/tutorial-quickstart-xgboost.rst:1040 msgid "" -"Flower has a number of built-in strategies, but we can also use our own " -"strategy implementations to customize nearly all aspects of the federated" -" learning approach. For this example, we use the built-in ``FedAvg`` " -"implementation and customize it using a few basic parameters. The last " -"step is the actual call to ``start_simulation`` which - you guessed it - " -"starts the simulation:" +"In :code:`utils.py`, we define the arguments parsers for clients, server " +"and simulation, allowing users to specify different experimental " +"settings. Let's first see the sever side:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:608 -msgid "Behind the scenes" +#: ../../source/tutorial-quickstart-xgboost.rst:1086 +msgid "" +"This allows user to specify training strategies / the number of total " +"clients / FL rounds / participating clients / clients for evaluation, and" +" evaluation fashion. Note that with :code:`--centralised-eval`, the sever" +" will do centralised evaluation and all functionalities for client " +"evaluation will be disabled." 
msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:610 -msgid "So how does this work? How does Flower execute this simulation?" +#: ../../source/tutorial-quickstart-xgboost.rst:1090 +msgid "Then, the argument parser on client side:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:612 -#, python-format +#: ../../source/tutorial-quickstart-xgboost.rst:1144 msgid "" -"When we call ``start_simulation``, we tell Flower that there are 10 " -"clients (``num_clients=10``). Flower then goes ahead an asks the " -"``FedAvg`` strategy to select clients. ``FedAvg`` knows that it should " -"select 100% of the available clients (``fraction_fit=1.0``), so it goes " -"ahead and selects 10 random clients (i.e., 100% of 10)." +"This defines various options for client data partitioning. Besides, " +"clients also have an option to conduct evaluation on centralised test set" +" by setting :code:`--centralised-eval`, as well as an option to perform " +"scaled learning rate based on the number of clients by setting :code" +":`--scaled-lr`." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:614 -msgid "" -"Flower then asks the selected 10 clients to train the model. When the " -"server receives the model parameter updates from the clients, it hands " -"those updates over to the strategy (*FedAvg*) for aggregation. The " -"strategy aggregates those updates and returns the new global model, which" -" then gets used in the next round of federated learning." +#: ../../source/tutorial-quickstart-xgboost.rst:1148 +msgid "We also have an argument parser for simulation:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:626 -msgid "Where's the accuracy?" +#: ../../source/tutorial-quickstart-xgboost.rst:1226 +msgid "This integrates all arguments for both client and server sides." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:628 -msgid "" -"You may have noticed that all metrics except for ``losses_distributed`` " -"are empty. Where did the ``{\"accuracy\": float(accuracy)}`` go?" +#: ../../source/tutorial-quickstart-xgboost.rst:1229 +msgid "Example commands" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:630 +#: ../../source/tutorial-quickstart-xgboost.rst:1231 msgid "" -"Flower can automatically aggregate losses returned by individual clients," -" but it cannot do the same for metrics in the generic metrics dictionary " -"(the one with the ``accuracy`` key). Metrics dictionaries can contain " -"very different kinds of metrics and even key/value pairs that are not " -"metrics at all, so the framework does not (and can not) know how to " -"handle these automatically." +"To run a centralised evaluated experiment with bagging strategy on 5 " +"clients with exponential distribution for 50 rounds, we first start the " +"server as below:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:632 -msgid "" -"As users, we need to tell the framework how to handle/aggregate these " -"custom metrics, and we do so by passing metric aggregation functions to " -"the strategy. The strategy will then call these functions whenever it " -"receives fit or evaluate metrics from clients. The two possible functions" -" are ``fit_metrics_aggregation_fn`` and " -"``evaluate_metrics_aggregation_fn``." 
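The get-started entries above mention converting the PyTorch :code:`state_dict` to and from a list of NumPy ndarrays. A minimal sketch of the two helpers named in the text, :code:`get_parameters` and :code:`set_parameters`::

    from collections import OrderedDict
    from typing import List

    import numpy as np
    import torch
    import torch.nn as nn

    def get_parameters(net: nn.Module) -> List[np.ndarray]:
        # Turn every tensor in the model's state_dict into a NumPy ndarray
        return [val.cpu().numpy() for _, val in net.state_dict().items()]

    def set_parameters(net: nn.Module, parameters: List[np.ndarray]) -> None:
        # Rebuild a state_dict from the received ndarrays and load it into the model
        params_dict = zip(net.state_dict().keys(), parameters)
        state_dict = OrderedDict({k: torch.tensor(v) for k, v in params_dict})
        net.load_state_dict(state_dict, strict=True)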
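Building on those helpers, the same entries describe a :code:`FlowerClient`, a :code:`client_fn` keyed by :code:`cid`, and a call to :code:`start_simulation`. The sketch below assumes that :code:`Net`, :code:`train`, :code:`test`, :code:`trainloaders`, :code:`valloaders` and :code:`DEVICE` are the objects defined earlier in that tutorial; the client and round counts are illustrative::

    import flwr as fl

    class FlowerClient(fl.client.NumPyClient):
        def __init__(self, net, trainloader, valloader):
            self.net = net
            self.trainloader = trainloader
            self.valloader = valloader

        def get_parameters(self, config):
            return get_parameters(self.net)

        def fit(self, parameters, config):
            set_parameters(self.net, parameters)
            train(self.net, self.trainloader, epochs=1)      # train() from the tutorial
            return get_parameters(self.net), len(self.trainloader.dataset), {}

        def evaluate(self, parameters, config):
            set_parameters(self.net, parameters)
            loss, accuracy = test(self.net, self.valloader)  # test() from the tutorial
            return float(loss), len(self.valloader.dataset), {"accuracy": float(accuracy)}

    def client_fn(cid: str) -> FlowerClient:
        # The cid selects this client's local data partition
        net = Net().to(DEVICE)
        return FlowerClient(net, trainloaders[int(cid)], valloaders[int(cid)])

    strategy = fl.server.strategy.FedAvg(fraction_fit=1.0, fraction_evaluate=0.5)

    fl.simulation.start_simulation(
        client_fn=client_fn,
        num_clients=10,
        config=fl.server.ServerConfig(num_rounds=5),
        strategy=strategy,
    )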
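For the federated XGBoost bagging server described in the quickstart entries above, a hedged sketch follows. It assumes the clients report an :code:`AUC` metric and that :code:`FedXgbBagging` accepts the usual :code:`FedAvg` keyword arguments (the text states it inherits from :code:`FedAvg`); the client counts and number of rounds are illustrative::

    import flwr as fl
    from flwr.server.strategy import FedXgbBagging

    def evaluate_metrics_aggregation(eval_metrics):
        # Weighted average of the AUC values reported by the clients
        total_num = sum(num for num, _ in eval_metrics)
        auc_aggregated = (
            sum(metrics["AUC"] * num for num, metrics in eval_metrics) / total_num
        )
        return {"AUC": auc_aggregated}

    strategy = FedXgbBagging(
        fraction_fit=1.0,
        min_fit_clients=2,
        min_available_clients=2,
        fraction_evaluate=1.0,
        min_evaluate_clients=2,
        evaluate_metrics_aggregation_fn=evaluate_metrics_aggregation,
    )

    fl.server.start_server(
        server_address="0.0.0.0:8080",
        config=fl.server.ServerConfig(num_rounds=5),
        strategy=strategy,
    )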
+#: ../../source/tutorial-quickstart-xgboost.rst:1238 +msgid "Then, on each client terminal, we start the clients:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:634 -msgid "" -"Let's create a simple weighted averaging function to aggregate the " -"``accuracy`` metric we return from ``evaluate``:" +#: ../../source/tutorial-quickstart-xgboost.rst:1244 +msgid "To run the same experiment with Flower simulation:" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:660 +#: ../../source/tutorial-quickstart-xgboost.rst:1250 msgid "" -"The only thing left to do is to tell the strategy to call this function " -"whenever it receives evaluation metric dictionaries from the clients:" +"The full `code `_ for this comprehensive example can be found in" +" :code:`examples/xgboost-comprehensive`." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:697 -msgid "" -"We now have a full system that performs federated training and federated " -"evaluation. It uses the ``weighted_average`` function to aggregate custom" -" evaluation metrics and calculates a single ``accuracy`` metric across " -"all clients on the server side." +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:9 +msgid "Build a strategy from scratch" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:699 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:11 msgid "" -"The other two categories of metrics (``losses_centralized`` and " -"``metrics_centralized``) are still empty because they only apply when " -"centralized evaluation is being used. Part two of the Flower tutorial " -"will cover centralized evaluation." +"Welcome to the third part of the Flower federated learning tutorial. In " +"previous parts of this tutorial, we introduced federated learning with " +"PyTorch and Flower (`part 1 `__) and we learned how strategies " +"can be used to customize the execution on both the server and the clients" +" (`part 2 `__)." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:711 -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:351 -msgid "Final remarks" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:13 +msgid "" +"In this notebook, we'll continue to customize the federated learning " +"system we built previously by creating a custom version of FedAvg (again," +" using `Flower `__ and `PyTorch " +"`__)." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:713 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:15 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:16 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:15 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:15 msgid "" -"Congratulations, you just trained a convolutional neural network, " -"federated over 10 clients! With that, you understand the basics of " -"federated learning with Flower. The same approach you've seen can be used" -" with other machine learning frameworks (not just PyTorch) and tasks (not" -" just CIFAR-10 images classification), for example NLP with Hugging Face " -"Transformers or speech with SpeechBrain." 
+"`Star Flower on GitHub `__ ⭐️ and join " +"the Flower community on Slack to connect, ask questions, and get help: " +"`Join Slack `__ 🌼 We'd love to hear from " +"you in the ``#introductions`` channel! And if anything is unclear, head " +"over to the ``#questions`` channel." msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:715 -msgid "" -"In the next notebook, we're going to cover some more advanced concepts. " -"Want to customize your strategy? Initialize parameters on the server " -"side? Or evaluate the aggregated model on the server side? We'll cover " -"all this and more in the next tutorial." +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:17 +msgid "Let's build a new ``Strategy`` from scratch!" msgstr "" -#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:733 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:29 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:29 +msgid "Preparation" +msgstr "" + +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:31 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:32 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:31 msgid "" -"The `Flower Federated Learning Tutorial - Part 2 " -"`__ goes into more depth about strategies and all " -"the advanced things you can build with them." +"Before we begin with the actual code, let's make sure that we have " +"everything we need." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:9 -msgid "Use a federated learning strategy" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:43 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:44 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:43 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:43 +msgid "Installing dependencies" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:11 -msgid "" -"Welcome to the next part of the federated learning tutorial. In previous " -"parts of this tutorial, we introduced federated learning with PyTorch and" -" Flower (`part 1 `__)." +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:45 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:46 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:45 +msgid "First, we install the necessary packages:" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:13 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:65 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:66 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:65 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:65 msgid "" -"In this notebook, we'll begin to customize the federated learning system " -"we built in the introductory notebook (again, using `Flower " -"`__ and `PyTorch `__)." +"Now that we have all dependencies installed, we can import everything we " +"need for this tutorial:" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:17 -msgid "Let's move beyond FedAvg with Flower strategies!" 
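The metric-aggregation entries above name :code:`evaluate_metrics_aggregation_fn` and a weighted averaging function for the :code:`accuracy` metric returned from :code:`evaluate`. A minimal sketch of such a function and of passing it to :code:`FedAvg`; the sampling fractions are illustrative::

    from typing import List, Tuple

    import flwr as fl
    from flwr.common import Metrics

    def weighted_average(metrics: List[Tuple[int, Metrics]]) -> Metrics:
        # Weight each client's accuracy by its number of examples, then normalise
        accuracies = [num_examples * m["accuracy"] for num_examples, m in metrics]
        examples = [num_examples for num_examples, _ in metrics]
        return {"accuracy": sum(accuracies) / sum(examples)}

    strategy = fl.server.strategy.FedAvg(
        fraction_fit=1.0,
        fraction_evaluate=0.5,
        evaluate_metrics_aggregation_fn=weighted_average,
    )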
+#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:101 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:102 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:101 +msgid "" +"It is possible to switch to a runtime that has GPU acceleration enabled " +"(on Google Colab: ``Runtime > Change runtime type > Hardware acclerator: " +"GPU > Save``). Note, however, that Google Colab is not always able to " +"offer GPU acceleration. If you see an error related to GPU availability " +"in one of the following sections, consider switching back to CPU-based " +"execution by setting ``DEVICE = torch.device(\"cpu\")``. If the runtime " +"has GPU acceleration enabled, you should see the output ``Training on " +"cuda``, otherwise it'll say ``Training on cpu``." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:309 -msgid "Strategy customization" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:114 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:115 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:114 +msgid "Data loading" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:311 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:116 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:116 msgid "" -"So far, everything should look familiar if you've worked through the " -"introductory notebook. With that, we're ready to introduce a number of " -"new features." +"Let's now load the CIFAR-10 training and test set, partition them into " +"ten smaller datasets (each split into training and validation set), and " +"wrap everything in their own ``DataLoader``. We introduce a new parameter" +" ``num_clients`` which allows us to call ``load_datasets`` with different" +" numbers of clients." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:323 -msgid "Server-side parameter **initialization**" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:167 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:168 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:167 +msgid "Model training/evaluation" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:325 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:169 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:170 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:169 msgid "" -"Flower, by default, initializes the global model by asking one random " -"client for the initial parameters. In many cases, we want more control " -"over parameter initialization though. 
Flower therefore allows you to " -"directly pass the initial parameters to the Strategy:" +"Let's continue with the usual model definition (including " +"``set_parameters`` and ``get_parameters``), training and test functions:" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:370 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:258 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:258 +msgid "Flower client" +msgstr "" + +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:260 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:260 msgid "" -"Passing ``initial_parameters`` to the ``FedAvg`` strategy prevents Flower" -" from asking one of the clients for the initial parameters. If we look " -"closely, we can see that the logs do not show any calls to the " -"``FlowerClient.get_parameters`` method." +"To implement the Flower client, we (again) create a subclass of " +"``flwr.client.NumPyClient`` and implement the three methods " +"``get_parameters``, ``fit``, and ``evaluate``. Here, we also pass the " +"``cid`` to the client and use it log additional details:" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:382 -msgid "Starting with a customized strategy" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:308 +msgid "Let's test what we have so far before we continue:" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:384 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:339 +msgid "Build a Strategy from scratch" +msgstr "" + +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:341 msgid "" -"We've seen the function ``start_simulation`` before. It accepts a number " -"of arguments, amongst them the ``client_fn`` used to create " -"``FlowerClient`` instances, the number of clients to simulate " -"``num_clients``, the number of rounds ``num_rounds``, and the strategy." +"Let’s overwrite the ``configure_fit`` method such that it passes a higher" +" learning rate (potentially also other hyperparameters) to the optimizer " +"of a fraction of the clients. We will keep the sampling of the clients as" +" it is in ``FedAvg`` and then change the configuration dictionary (one of" +" the ``FitIns`` attributes)." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:386 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:507 msgid "" -"The strategy encapsulates the federated learning approach/algorithm, for " -"example, ``FedAvg`` or ``FedAdagrad``. 
Let's try to use a different " -"strategy this time:" +"The only thing left is to use the newly created custom Strategy " +"``FedCustom`` when starting the experiment:" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:424 -msgid "Server-side parameter **evaluation**" +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:534 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:932 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:697 +msgid "Recap" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:426 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:536 msgid "" -"Flower can evaluate the aggregated model on the server-side or on the " -"client-side. Client-side and server-side evaluation are similar in some " -"ways, but different in others." +"In this notebook, we’ve seen how to implement a custom strategy. A custom" +" strategy enables granular control over client node configuration, result" +" aggregation, and more. To define a custom strategy, you only have to " +"overwrite the abstract methods of the (abstract) base class ``Strategy``." +" To make custom strategies even more powerful, you can pass custom " +"functions to the constructor of your new class (``__init__``) and then " +"call these functions whenever needed." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:428 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:550 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:948 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:729 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:715 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:369 msgid "" -"**Centralized Evaluation** (or *server-side evaluation*) is conceptually " -"simple: it works the same way that evaluation in centralized machine " -"learning does. If there is a server-side dataset that can be used for " -"evaluation purposes, then that's great. We can evaluate the newly " -"aggregated model after each round of training without having to send the " -"model to clients. We're also fortunate in the sense that our entire " -"evaluation dataset is available at all times." +"Before you continue, make sure to join the Flower community on Slack: " +"`Join Slack `__" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:430 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:552 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:950 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:731 +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:717 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:371 msgid "" -"**Federated Evaluation** (or *client-side evaluation*) is more complex, " -"but also more powerful: it doesn't require a centralized dataset and " -"allows us to evaluate models over a larger set of data, which often " -"yields more realistic evaluation results. In fact, many scenarios require" -" us to use **Federated Evaluation** if we want to get representative " -"evaluation results at all. 
But this power comes at a cost: once we start " -"to evaluate on the client side, we should be aware that our evaluation " -"dataset can change over consecutive rounds of learning if those clients " -"are not always available. Moreover, the dataset held by each client can " -"also change over consecutive rounds. This can lead to evaluation results " -"that are not stable, so even if we would not change the model, we'd see " -"our evaluation results fluctuate over consecutive rounds." +"There's a dedicated ``#questions`` channel if you need help, but we'd " +"also love to hear who you are in ``#introductions``!" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:433 +#: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:554 msgid "" -"We've seen how federated evaluation works on the client side (i.e., by " -"implementing the ``evaluate`` method in ``FlowerClient``). Now let's see " -"how we can evaluate aggregated model parameters on the server-side:" +"The `Flower Federated Learning Tutorial - Part 4 " +"`__ introduces ``Client``, the flexible API underlying " +"``NumPyClient``." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:490 -msgid "Sending/receiving arbitrary values to/from clients" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:9 +msgid "Customize the client" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:492 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:11 msgid "" -"In some situations, we want to configure client-side execution (training," -" evaluation) from the server-side. One example for that is the server " -"asking the clients to train for a certain number of local epochs. Flower " -"provides a way to send configuration values from the server to the " -"clients using a dictionary. Let's look at an example where the clients " -"receive values from the server through the ``config`` parameter in " -"``fit`` (``config`` is also available in ``evaluate``). The ``fit`` " -"method receives the configuration dictionary through the ``config`` " -"parameter and can then read values from this dictionary. In this example," -" it reads ``server_round`` and ``local_epochs`` and uses those values to " -"improve the logging and configure the number of local training epochs:" +"Welcome to the fourth part of the Flower federated learning tutorial. In " +"the previous parts of this tutorial, we introduced federated learning " +"with PyTorch and Flower (`part 1 `__), we learned how " +"strategies can be used to customize the execution on both the server and " +"the clients (`part 2 `__), and we built our own " +"custom strategy from scratch (`part 3 `__)." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:546 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:14 msgid "" -"So how can we send this config dictionary from server to clients? The " -"built-in Flower Strategies provide way to do this, and it works similarly" -" to the way server-side evaluation works. We provide a function to the " -"strategy, and the strategy calls this function for every round of " -"federated learning:" +"In this notebook, we revisit ``NumPyClient`` and introduce a new " +"baseclass for building clients, simply named ``Client``. 
In previous " +"parts of this tutorial, we've based our client on ``NumPyClient``, a " +"convenience class which makes it easy to work with machine learning " +"libraries that have good NumPy interoperability. With ``Client``, we gain" +" a lot of flexibility that we didn't have before, but we'll also have to " +"do a few things the we didn't have to do before." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:576 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:18 msgid "" -"Next, we'll just pass this function to the FedAvg strategy before " -"starting the simulation:" +"Let's go deeper and see what it takes to move from ``NumPyClient`` to " +"``Client``!" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:613 -msgid "" -"As we can see, the client logs now include the current round of federated" -" learning (which they read from the ``config`` dictionary). We can also " -"configure local training to run for one epoch during the first and second" -" round of federated learning, and then for two epochs during the third " -"round." +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:30 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:29 +msgid "Step 0: Preparation" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:615 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:117 msgid "" -"Clients can also return arbitrary values to the server. To do so, they " -"return a dictionary from ``fit`` and/or ``evaluate``. We have seen and " -"used this concept throughout this notebook without mentioning it " -"explicitly: our ``FlowerClient`` returns a dictionary containing a custom" -" key/value pair as the third return value in ``evaluate``." +"Let's now load the CIFAR-10 training and test set, partition them into " +"ten smaller datasets (each split into training and validation set), and " +"wrap everything in their own ``DataLoader``." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:627 -msgid "Scaling federated learning" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:259 +msgid "Step 1: Revisiting NumPyClient" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:629 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:261 msgid "" -"As a last step in this notebook, let's see how we can use Flower to " -"experiment with a large number of clients." +"So far, we've implemented our client by subclassing " +"``flwr.client.NumPyClient``. The three methods we implemented are " +"``get_parameters``, ``fit``, and ``evaluate``. Finally, we wrap the " +"creation of instances of this class in a function called ``client_fn``:" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:651 -#, python-format +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:309 msgid "" -"We now have 1000 partitions, each holding 45 training and 5 validation " -"examples. Given that the number of training examples on each client is " -"quite small, we should probably train the model a bit longer, so we " -"configure the clients to perform 3 local training epochs. 
We should also " -"adjust the fraction of clients selected for training during each round " -"(we don't want all 1000 clients participating in every round), so we " -"adjust ``fraction_fit`` to ``0.05``, which means that only 5% of " -"available clients (so 50 clients) will be selected for training each " -"round:" +"We've seen this before, there's nothing new so far. The only *tiny* " +"difference compared to the previous notebook is naming, we've changed " +"``FlowerClient`` to ``FlowerNumPyClient`` and ``client_fn`` to " +"``numpyclient_fn``. Let's run it to see the output we get:" msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:699 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:339 msgid "" -"In this notebook, we've seen how we can gradually enhance our system by " -"customizing the strategy, initializing parameters on the server side, " -"choosing a different strategy, and evaluating models on the server-side. " -"That's quite a bit of flexibility with so little code, right?" +"This works as expected, two clients are training for three rounds of " +"federated learning." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:701 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:341 msgid "" -"In the later sections, we've seen how we can communicate arbitrary values" -" between server and clients to fully customize client-side execution. " -"With that capability, we built a large-scale Federated Learning " -"simulation using the Flower Virtual Client Engine and ran an experiment " -"involving 1000 clients in the same workload - all in a Jupyter Notebook!" +"Let's dive a little bit deeper and discuss how Flower executes this " +"simulation. Whenever a client is selected to do some work, " +"``start_simulation`` calls the function ``numpyclient_fn`` to create an " +"instance of our ``FlowerNumPyClient`` (along with loading the model and " +"the data)." msgstr "" -#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:719 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:343 msgid "" -"The `Flower Federated Learning Tutorial - Part 3 " -"`__ shows how to build a fully custom ``Strategy`` " -"from scratch." +"But here's the perhaps surprising part: Flower doesn't actually use the " +"``FlowerNumPyClient`` object directly. Instead, it wraps the object to " +"makes it look like a subclass of ``flwr.client.Client``, not " +"``flwr.client.NumPyClient``. In fact, the Flower core framework doesn't " +"know how to handle ``NumPyClient``'s, it only knows how to handle " +"``Client``'s. ``NumPyClient`` is just a convenience abstraction built on " +"top of ``Client``." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:9 -msgid "What is Federated Learning?" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:345 +msgid "" +"Instead of building on top of ``NumPyClient``, we can directly build on " +"top of ``Client``." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:13 -msgid "" -"In this tutorial, you will learn what federated learning is, build your " -"first system in Flower, and gradually extend it. If you work through all " -"parts of the tutorial, you will be able to build advanced federated " -"learning systems that approach the current state of the art in the field." 
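The entries above describe scaling the simulation to 1000 partitions and sampling only 5% of the clients per round via ``fraction_fit``. A minimal sketch of that configuration, assuming the ``client_fn`` from the earlier entries is already defined (the concrete arguments used in the notebook are not reproduced in this catalog)::

    import flwr as fl

    strategy = fl.server.strategy.FedAvg(
        fraction_fit=0.05,       # sample 5% of 1000 clients (about 50) for training
        fraction_evaluate=0.05,  # likewise for evaluation
        min_fit_clients=50,
        min_evaluate_clients=50,
        min_available_clients=1000,
    )

    fl.simulation.start_simulation(
        client_fn=client_fn,  # assumed: creates a FlowerClient for a given cid
        num_clients=1000,
        config=fl.server.ServerConfig(num_rounds=3),
        strategy=strategy,
    )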
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:357 +msgid "Step 2: Moving from ``NumPyClient`` to ``Client``" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:15 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:359 msgid "" -"🧑‍🏫 This tutorial starts at zero and expects no familiarity with " -"federated learning. Only a basic understanding of data science and Python" -" programming is assumed." +"Let's try to do the same thing using ``Client`` instead of " +"``NumPyClient``." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:17 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:465 msgid "" -"`Star Flower on GitHub `__ ⭐️ and join " -"the open-source Flower community on Slack to connect, ask questions, and " -"get help: `Join Slack `__ 🌼 We'd love to " -"hear from you in the ``#introductions`` channel! And if anything is " -"unclear, head over to the ``#questions`` channel." +"Before we discuss the code in more detail, let's try to run it! Gotta " +"make sure our new ``Client``-based client works, right?" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:19 -msgid "Let's get started!" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:490 +msgid "" +"That's it, we're now using ``Client``. It probably looks similar to what " +"we've done with ``NumPyClient``. So what's the difference?" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:31 -msgid "Classic machine learning" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:492 +msgid "" +"First of all, it's more code. But why? The difference comes from the fact" +" that ``Client`` expects us to take care of parameter serialization and " +"deserialization. For Flower to be able to send parameters over the " +"network, it eventually needs to turn these parameters into ``bytes``. " +"Turning parameters (e.g., NumPy ``ndarray``'s) into raw bytes is called " +"serialization. Turning raw bytes into something more useful (like NumPy " +"``ndarray``'s) is called deserialization. Flower needs to do both: it " +"needs to serialize parameters on the server-side and send them to the " +"client, the client needs to deserialize them to use them for local " +"training, and then serialize the updated parameters again to send them " +"back to the server, which (finally!) deserializes them again in order to " +"aggregate them with the updates received from other clients." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:33 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:495 msgid "" -"Before we begin to discuss federated learning, let us quickly recap how " -"most machine learning works today." +"The only *real* difference between Client and NumPyClient is that " +"NumPyClient takes care of serialization and deserialization for you. It " +"can do so because it expects you to return parameters as NumPy ndarray's," +" and it knows how to handle these. This makes working with machine " +"learning libraries that have good NumPy support (most of them) a breeze." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:35 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:497 msgid "" -"In machine learning, we have a model, and we have data. The model could " -"be a neural network (as depicted here), or something else, like classical" -" linear regression." 
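The entries above contrast ``NumPyClient`` with the lower-level ``Client``, which exchanges ``FitIns``/``FitRes`` objects and leaves parameter (de)serialization to you. As a rough sketch only, assuming the ``set_parameters``, ``get_parameters`` and ``train`` helpers from the earlier entries, the ``fit`` method of such a client could look like this::

    import flwr as fl
    from flwr.common import (
        Code,
        FitIns,
        FitRes,
        Status,
        ndarrays_to_parameters,
        parameters_to_ndarrays,
    )

    class FlowerClient(fl.client.Client):
        # get_parameters and evaluate are omitted here for brevity
        def __init__(self, net, trainloader):
            self.net = net
            self.trainloader = trainloader

        def fit(self, ins: FitIns) -> FitRes:
            # Deserialize the global parameters received from the server
            ndarrays = parameters_to_ndarrays(ins.parameters)
            set_parameters(self.net, ndarrays)           # assumed helper
            train(self.net, self.trainloader, epochs=1)  # assumed helper
            # Serialize the locally updated parameters before returning them
            parameters_updated = ndarrays_to_parameters(get_parameters(self.net))
            return FitRes(
                status=Status(code=Code.OK, message="Success"),
                parameters=parameters_updated,
                num_examples=len(self.trainloader.dataset),
                metrics={},
            )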
+"In terms of API, there's one major difference: all methods in Client take" +" exactly one argument (e.g., ``FitIns`` in ``Client.fit``) and return " +"exactly one value (e.g., ``FitRes`` in ``Client.fit``). The methods in " +"``NumPyClient`` on the other hand have multiple arguments (e.g., " +"``parameters`` and ``config`` in ``NumPyClient.fit``) and multiple return" +" values (e.g., ``parameters``, ``num_example``, and ``metrics`` in " +"``NumPyClient.fit``) if there are multiple things to handle. These " +"``*Ins`` and ``*Res`` objects in ``Client`` wrap all the individual " +"values you're used to from ``NumPyClient``." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:41 -msgid "|31e4b1afa87c4b968327bbeafbf184d4|" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:510 +msgid "Step 3: Custom serialization" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:109 -msgid "Model and data" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:512 +msgid "" +"Here we will explore how to implement custom serialization with a simple " +"example." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:47 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:514 msgid "" -"We train the model using the data to perform a useful task. A task could " -"be to detect objects in images, transcribe an audio recording, or play a " -"game like Go." +"But first what is serialization? Serialization is just the process of " +"converting an object into raw bytes, and equally as important, " +"deserialization is the process of converting raw bytes back into an " +"object. This is very useful for network communication. Indeed, without " +"serialization, you could not just a Python object through the internet." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:53 -msgid "|c9d935b4284e4c389a33d86b33e07c0a|" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:516 +msgid "" +"Federated Learning relies heavily on internet communication for training " +"by sending Python objects back and forth between the clients and the " +"server. This means that serialization is an essential part of Federated " +"Learning." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:111 -msgid "Train model using data" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:518 +msgid "" +"In the following section, we will write a basic example where instead of " +"sending a serialized version of our ``ndarray``\\ s containing our " +"parameters, we will first convert the ``ndarray`` into sparse matrices, " +"before sending them. This technique can be used to save bandwidth, as in " +"certain cases where the weights of a model are sparse (containing many 0 " +"entries), converting them to a sparse matrix can greatly improve their " +"bytesize." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:59 -msgid "" -"Now, in practice, the training data we work with doesn't originate on the" -" machine we train the model on. It gets created somewhere else." 
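The entries above motivate custom serialization by converting the parameter ``ndarray``\ s into sparse matrices before sending them. The notebook code itself is not part of this catalog, so the following is only a sketch of what such helpers built on ``scipy.sparse`` could look like::

    from io import BytesIO

    import numpy as np
    from scipy.sparse import csr_matrix

    def ndarray_to_sparse_bytes(ndarray: np.ndarray) -> bytes:
        """Serialize an array, using CSR form for 2D (potentially sparse) arrays."""
        buffer = BytesIO()
        if ndarray.ndim == 2:
            sparse = csr_matrix(ndarray)
            np.savez(buffer, indptr=sparse.indptr, indices=sparse.indices,
                     data=sparse.data, shape=np.array(sparse.shape))
        else:
            np.save(buffer, ndarray, allow_pickle=False)
        return buffer.getvalue()

    def sparse_bytes_to_ndarray(data: bytes) -> np.ndarray:
        """Deserialize bytes produced by ndarray_to_sparse_bytes into a dense array."""
        loaded = np.load(BytesIO(data), allow_pickle=False)
        if isinstance(loaded, np.lib.npyio.NpzFile):
            sparse = csr_matrix((loaded["data"], loaded["indices"], loaded["indptr"]),
                                shape=tuple(loaded["shape"]))
            return sparse.toarray()
        return loaded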
+#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:521 +msgid "Our custom serialization/deserialization functions" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:61 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:523 msgid "" -"It originates on a smartphone by the user interacting with an app, a car " -"collecting sensor data, a laptop receiving input via the keyboard, or a " -"smart speaker listening to someone trying to sing a song." +"This is where the real serialization/deserialization will happen, " +"especially in ``ndarray_to_sparse_bytes`` for serialization and " +"``sparse_bytes_to_ndarray`` for deserialization." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:67 -msgid "|00727b5faffb468f84dd1b03ded88638|" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:525 +msgid "" +"Note that we imported the ``scipy.sparse`` library in order to convert " +"our arrays." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:113 -msgid "Data on a phone" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:613 +msgid "Client-side" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:73 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:615 msgid "" -"What's also important to mention, this \"somewhere else\" is usually not " -"just one place, it's many places. It could be several devices all running" -" the same app. But it could also be several organizations, all generating" -" data for the same task." -msgstr "" - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:79 -msgid "|daf0cf0ff4c24fd29439af78416cf47b|" +"To be able to serialize our ``ndarray``\\ s into sparse parameters, we " +"will just have to call our custom functions in our " +"``flwr.client.Client``." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:115 -msgid "Data is on many devices" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:617 +msgid "" +"Indeed, in ``get_parameters`` we need to serialize the parameters we got " +"from our network using our custom ``ndarrays_to_sparse_parameters`` " +"defined above." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:85 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:619 msgid "" -"So to use machine learning, or any kind of data analysis, the approach " -"that has been used in the past was to collect all data on a central " -"server. This server can be somewhere in a data center, or somewhere in " -"the cloud." +"In ``fit``, we first need to deserialize the parameters coming from the " +"server using our custom ``sparse_parameters_to_ndarrays`` and then we " +"need to serialize our local results with " +"``ndarrays_to_sparse_parameters``." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:91 -msgid "|9f093007080d471d94ca90d3e9fde9b6|" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:621 +msgid "" +"In ``evaluate``, we will only need to deserialize the global parameters " +"with our custom function." 
msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:117 -msgid "Central data collection" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:725 +msgid "Server-side" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:97 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:727 msgid "" -"Once all the data is collected in one place, we can finally use machine " -"learning algorithms to train our model on the data. This is the machine " -"learning approach that we've basically always relied on." -msgstr "" - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:103 -msgid "|46a26e6150e0479fbd3dfd655f36eb13|" -msgstr "" - -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:119 -msgid "Central model training" +"For this example, we will just use ``FedAvg`` as a strategy. To change " +"the serialization and deserialization here, we only need to reimplement " +"the ``evaluate`` and ``aggregate_fit`` functions of ``FedAvg``. The other" +" functions of the strategy will be inherited from the super class " +"``FedAvg``." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:130 -msgid "Challenges of classical machine learning" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:729 +msgid "As you can see only one line as change in ``evaluate``:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:132 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:735 msgid "" -"The classic machine learning approach we've just seen can be used in some" -" cases. Great examples include categorizing holiday photos, or analyzing " -"web traffic. Cases, where all the data is naturally available on a " -"centralized server." +"And for ``aggregate_fit``, we will first deserialize every result we " +"received:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:138 -msgid "|3daba297595c4c7fb845d90404a6179a|" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:744 +msgid "And then serialize the aggregated result:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:173 -msgid "Centralized possible" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:903 +msgid "We can now run our custom serialization example!" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:144 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:934 msgid "" -"But the approach can not be used in many other cases. Cases, where the " -"data is not available on a centralized server, or cases where the data " -"available on one server is not enough to train a good model." +"In this part of the tutorial, we've seen how we can build clients by " +"subclassing either ``NumPyClient`` or ``Client``. ``NumPyClient`` is a " +"convenience abstraction that makes it easier to work with machine " +"learning libraries that have good NumPy interoperability. ``Client`` is a" +" more flexible abstraction that allows us to do things that are not " +"possible in ``NumPyClient``. In order to do so, it requires us to handle " +"parameter serialization and deserialization ourselves." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:150 -msgid "|5769874fa9c4455b80b2efda850d39d7|" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:952 +msgid "" +"This is the final part of the Flower tutorial (for now!), " +"congratulations! 
You're now well equipped to understand the rest of the " +"documentation. There are many topics we didn't cover in the tutorial, we " +"recommend the following resources:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:175 -msgid "Centralized impossible" +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:954 +msgid "`Read Flower Docs `__" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:156 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:955 msgid "" -"There are many reasons why the classic centralized machine learning " -"approach does not work for a large number of highly important real-world " -"use cases. Those reasons include:" +"`Check out Flower Code Examples " +"`__" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:158 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:956 msgid "" -"**Regulations**: GDPR (Europe), CCPA (California), PIPEDA (Canada), LGPD " -"(Brazil), PDPL (Argentina), KVKK (Turkey), POPI (South Africa), FSS " -"(Russia), CDPR (China), PDPB (India), PIPA (Korea), APPI (Japan), PDP " -"(Indonesia), PDPA (Singapore), APP (Australia), and other regulations " -"protect sensitive data from being moved. In fact, those regulations " -"sometimes even prevent single organizations from combining their own " -"users' data for artificial intelligence training because those users live" -" in different parts of the world, and their data is governed by different" -" data protection regulations." +"`Use Flower Baselines for your research " +"`__" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:160 +#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:957 msgid "" -"**User preference**: In addition to regulation, there are use cases where" -" users just expect that no data leaves their device, ever. If you type " -"your passwords and credit card info into the digital keyboard of your " -"phone, you don't expect those passwords to end up on the server of the " -"company that developed that keyboard, do you? In fact, that use case was " -"the reason federated learning was invented in the first place." +"`Watch Flower Summit 2023 videos `__" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:161 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:9 +msgid "Get started with Flower" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:11 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:11 +msgid "Welcome to the Flower federated learning tutorial!" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:13 msgid "" -"**Data volume**: Some sensors, like cameras, produce such a high data " -"volume that it is neither feasible nor economic to collect all the data " -"(due to, for example, bandwidth or communication efficiency). Think about" -" a national rail service with hundreds of train stations across the " -"country. If each of these train stations is outfitted with a number of " -"security cameras, the volume of raw on-device data they produce requires " -"incredibly powerful and exceedingly expensive infrastructure to process " -"and store. And most of the data isn't even useful." +"In this notebook, we'll build a federated learning system using Flower, " +"`Flower Datasets `__ and PyTorch. In " +"part 1, we use PyTorch for the model training pipeline and data loading. 
" +"In part 2, we continue to federate the PyTorch-based pipeline using " +"Flower." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:164 -msgid "Examples where centralized machine learning does not work include:" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:17 +msgid "Let's get stated!" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:166 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:31 msgid "" -"Sensitive healthcare records from multiple hospitals to train cancer " -"detection models" +"Before we begin with any actual code, let's make sure that we have " +"everything we need." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:167 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:45 msgid "" -"Financial information from different organizations to detect financial " -"fraud" +"Next, we install the necessary packages for PyTorch (``torch`` and " +"``torchvision``), Flower Datasets (``flwr-datasets``) and Flower " +"(``flwr``):" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:168 -msgid "Location data from your electric car to make better range prediction" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:105 +msgid "" +"It is possible to switch to a runtime that has GPU acceleration enabled " +"(on Google Colab: ``Runtime > Change runtime type > Hardware accelerator:" +" GPU > Save``). Note, however, that Google Colab is not always able to " +"offer GPU acceleration. If you see an error related to GPU availability " +"in one of the following sections, consider switching back to CPU-based " +"execution by setting ``DEVICE = torch.device(\"cpu\")``. If the runtime " +"has GPU acceleration enabled, you should see the output ``Training on " +"cuda``, otherwise it'll say ``Training on cpu``." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:169 -msgid "End-to-end encrypted messages to train better auto-complete models" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:118 +msgid "Loading the data" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:171 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:120 msgid "" -"The popularity of privacy-enhancing systems like the `Brave " -"`__ browser or the `Signal `__ " -"messenger shows that users care about privacy. In fact, they choose the " -"privacy-enhancing version over other alternatives, if such an alternative " -"exists. But what can we do to apply machine learning and data science to " -"these cases to utilize private data? After all, these are all areas that " -"would benefit significantly from recent advances in AI." +"Federated learning can be applied to many different types of tasks across" +" different domains. In this tutorial, we introduce federated learning by " +"training a simple convolutional neural network (CNN) on the popular " +"CIFAR-10 dataset. CIFAR-10 can be used to train image classifiers that " +"distinguish between images from ten different classes: 'airplane', " +"'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', and " +"'truck'." 
msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:186 -msgid "Federated learning" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:131 +msgid "" +"We simulate having multiple datasets from multiple organizations (also " +"called the \"cross-silo\" setting in federated learning) by splitting the" +" original CIFAR-10 dataset into multiple partitions. Each partition will " +"represent the data from a single organization. We're doing this purely " +"for experimentation purposes, in the real world there's no need for data " +"splitting because each organization already has their own data (so the " +"data is naturally partitioned)." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:188 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:133 msgid "" -"Federated learning simply reverses this approach. It enables machine " -"learning on distributed data by moving the training to the data, instead " -"of moving the data to the training. Here's the single-sentence " -"explanation:" +"Each organization will act as a client in the federated learning system. " +"So having ten organizations participate in a federation means having ten " +"clients connected to the federated learning server." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:190 -msgid "Central machine learning: move the data to the computation" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:144 +msgid "" +"Let's now create the Federated Dataset abstraction that from ``flwr-" +"datasets`` that partitions the CIFAR-10. We will create small training " +"and test set for each edge device and wrap each of them into a PyTorch " +"``DataLoader``:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:191 -msgid "Federated (machine) learning: move the computation to the data" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:198 +msgid "" +"We now have a list of ten training sets and ten validation sets " +"(``trainloaders`` and ``valloaders``) representing the data of ten " +"different organizations. Each ``trainloader``/``valloader`` pair contains" +" 4500 training examples and 500 validation examples. There's also a " +"single ``testloader`` (we did not split the test set). Again, this is " +"only necessary for building research or educational systems, actual " +"federated learning systems have their data naturally distributed across " +"multiple partitions." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:193 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:201 msgid "" -"By doing so, it enables us to use machine learning (and other data " -"science approaches) in areas where it wasn't possible before. We can now " -"train excellent medical AI models by enabling different hospitals to work" -" together. We can solve financial fraud by training AI models on the data" -" of different financial institutions. We can build novel privacy-" -"enhancing applications (such as secure messaging) that have better built-" -"in AI than their non-privacy-enhancing alternatives. And those are just a" -" few of the examples that come to mind. As we deploy federated learning, " -"we discover more and more areas that can suddenly be reinvented because " -"they now have access to vast amounts of previously inaccessible data." 
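The entries above describe partitioning CIFAR-10 into ten client datasets with ``flwr-datasets`` and wrapping each split in a PyTorch ``DataLoader``. A condensed, illustrative sketch (the ``img`` column name, batch size, and split ratio are assumptions rather than values taken from this catalog)::

    from flwr_datasets import FederatedDataset
    from torch.utils.data import DataLoader
    from torchvision.transforms import Compose, Normalize, ToTensor

    NUM_CLIENTS = 10
    fds = FederatedDataset(dataset="cifar10", partitioners={"train": NUM_CLIENTS})

    transform = Compose([ToTensor(), Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

    def apply_transforms(batch):
        batch["img"] = [transform(img) for img in batch["img"]]
        return batch

    trainloaders, valloaders = [], []
    for partition_id in range(NUM_CLIENTS):
        partition = fds.load_partition(partition_id, "train")
        split = partition.train_test_split(test_size=0.1)  # 90/10 train/validation
        split = split.with_transform(apply_transforms)
        trainloaders.append(DataLoader(split["train"], batch_size=32, shuffle=True))
        valloaders.append(DataLoader(split["test"], batch_size=32))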
+"Let's take a look at the first batch of images and labels in the first " +"training set (i.e., ``trainloaders[0]``) before we move on:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:196 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:240 msgid "" -"So how does federated learning work, exactly? Let's start with an " -"intuitive explanation." +"The output above shows a random batch of images from the first " +"``trainloader`` in our list of ten ``trainloaders``. It also prints the " +"labels associated with each image (i.e., one of the ten possible labels " +"we've seen above). If you run the cell again, you should see another " +"batch of images." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:199 -msgid "Federated learning in five steps" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:252 +msgid "Step 1: Centralized Training with PyTorch" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:202 -msgid "Step 0: Initialize global model" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:263 +msgid "" +"Next, we're going to use PyTorch to define a simple convolutional neural " +"network. This introduction assumes basic familiarity with PyTorch, so it " +"doesn't cover the PyTorch-related aspects in full detail. If you want to " +"dive deeper into PyTorch, we recommend `DEEP LEARNING WITH PYTORCH: A 60 " +"MINUTE BLITZ " +"`__." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:204 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:275 +msgid "Defining the model" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:277 msgid "" -"We start by initializing the model on the server. This is exactly the " -"same in classic centralized learning: we initialize the model parameters," -" either randomly or from a previously saved checkpoint." +"We use the simple CNN described in the `PyTorch tutorial " +"`__:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:210 -msgid "|ba47ffb421814b0f8f9fa5719093d839|" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:314 +msgid "Let's continue with the usual training and test functions:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:307 -msgid "Initialize global model" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:374 +msgid "Training the model" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:217 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:376 msgid "" -"Step 1: Send model to a number of connected organizations/devices (client" -" nodes)" +"We now have all the basic building blocks we need: a dataset, a model, a " +"training function, and a test function. Let's put them together to train " +"the model on the dataset of one of our organizations " +"(``trainloaders[0]``). This simulates the reality of most machine " +"learning projects today: each organization has their own data and trains " +"models only on this internal data:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:219 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:406 msgid "" -"Next, we send the parameters of the global model to the connected client " -"nodes (think: edge devices like smartphones or servers belonging to " -"organizations). 
This is to ensure that each participating node starts " -"their local training using the same model parameters. We often use only a" -" few of the connected nodes instead of all nodes. The reason for this is " -"that selecting more and more client nodes has diminishing returns." +"Training the simple CNN on our CIFAR-10 split for 5 epochs should result " +"in a test set accuracy of about 41%, which is not good, but at the same " +"time, it doesn't really matter for the purposes of this tutorial. The " +"intent was just to show a simplistic centralized training pipeline that " +"sets the stage for what comes next - federated learning!" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:225 -msgid "|aeac5bf79cbf497082e979834717e01b|" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:418 +msgid "Step 2: Federated Learning with Flower" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:309 -msgid "Send global model" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:420 +msgid "" +"Step 1 demonstrated a simple centralized training pipeline. All data was " +"in one place (i.e., a single ``trainloader`` and a single ``valloader``)." +" Next, we'll simulate a situation where we have multiple datasets in " +"multiple organizations and where we train a model over these " +"organizations using federated learning." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:232 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:432 +msgid "Updating model parameters" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:434 msgid "" -"Step 2: Train model locally on the data of each organization/device " -"(client node)" +"In federated learning, the server sends the global model parameters to " +"the client, and the client updates the local model with the parameters " +"received from the server. It then trains the model on the local data " +"(which changes the model parameters locally) and sends the " +"updated/changed model parameters back to the server (or, alternatively, " +"it sends just the gradients back to the server, not the full model " +"parameters)." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:234 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:436 msgid "" -"Now that all (selected) client nodes have the latest version of the " -"global model parameters, they start the local training. They use their " -"own local dataset to train their own local model. They don't train the " -"model until full convergence, but they only train for a little while. " -"This could be as little as one epoch on the local data, or even just a " -"few steps (mini-batches)." +"We need two helper functions to update the local model with parameters " +"received from the server and to get the updated model parameters from the" +" local model: ``set_parameters`` and ``get_parameters``. The following " +"two functions do just that for the PyTorch model above." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:240 -msgid "|ce27ed4bbe95459dba016afc42486ba2|" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:438 +msgid "" +"The details of how this works are not really important here (feel free to" +" consult the PyTorch documentation if you want to learn more). In " +"essence, we use ``state_dict`` to access PyTorch model parameter tensors." 
+" The parameter tensors are then converted to/from a list of NumPy " +"ndarray's (which Flower knows how to serialize/deserialize):" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:311 -msgid "Train on local data" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:466 +msgid "Implementing a Flower client" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:247 -msgid "Step 3: Return model updates back to the server" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:468 +msgid "" +"With that out of the way, let's move on to the interesting part. " +"Federated learning systems consist of a server and multiple clients. In " +"Flower, we create clients by implementing subclasses of " +"``flwr.client.Client`` or ``flwr.client.NumPyClient``. We use " +"``NumPyClient`` in this tutorial because it is easier to implement and " +"requires us to write less boilerplate." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:249 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:470 msgid "" -"After local training, each client node has a slightly different version " -"of the model parameters they originally received. The parameters are all " -"different because each client node has different examples in its local " -"dataset. The client nodes then send those model updates back to the " -"server. The model updates they send can either be the full model " -"parameters or just the gradients that were accumulated during local " -"training." +"To implement the Flower client, we create a subclass of " +"``flwr.client.NumPyClient`` and implement the three methods " +"``get_parameters``, ``fit``, and ``evaluate``:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:255 -msgid "|ae94a7f71dda443cbec2385751427d41|" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:472 +msgid "``get_parameters``: Return the current local model parameters" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:313 -msgid "Send model updates" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:473 +msgid "" +"``fit``: Receive model parameters from the server, train the model " +"parameters on the local data, and return the (updated) model parameters " +"to the server" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:262 -msgid "Step 4: Aggregate model updates into a new global model" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:474 +msgid "" +"``evaluate``: Receive model parameters from the server, evaluate the " +"model parameters on the local data, and return the evaluation result to " +"the server" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:264 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:476 msgid "" -"The server receives model updates from the selected client nodes. If it " -"selected 100 client nodes, it now has 100 slightly different versions of " -"the original global model, each trained on the local data of one client. " -"But didn't we want to have one model that contains the learnings from the" -" data of all 100 client nodes?" +"We mentioned that our clients will use the previously defined PyTorch " +"components for model training and evaluation. 
Let's see a simple Flower " +"client implementation that brings everything together:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:266 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:513 msgid "" -"In order to get one single model, we have to combine all the model " -"updates we received from the client nodes. This process is called " -"*aggregation*, and there are many different ways to do it. The most basic" -" way to do it is called *Federated Averaging* (`McMahan et al., 2016 " -"`__), often abbreviated as *FedAvg*. " -"*FedAvg* takes the 100 model updates and, as the name suggests, averages " -"them. To be more precise, it takes the *weighted average* of the model " -"updates, weighted by the number of examples each client used for " -"training. The weighting is important to make sure that each data example " -"has the same \"influence\" on the resulting global model. If one client " -"has 10 examples, and another client has 100 examples, then - without " -"weighting - each of the 10 examples would influence the global model ten " -"times as much as each of the 100 examples." +"Our class ``FlowerClient`` defines how local training/evaluation will be " +"performed and allows Flower to call the local training/evaluation through" +" ``fit`` and ``evaluate``. Each instance of ``FlowerClient`` represents a" +" *single client* in our federated learning system. Federated learning " +"systems have multiple clients (otherwise, there's not much to federate), " +"so each client will be represented by its own instance of " +"``FlowerClient``. If we have, for example, three clients in our workload," +" then we'd have three instances of ``FlowerClient``. Flower calls " +"``FlowerClient.fit`` on the respective instance when the server selects a" +" particular client for training (and ``FlowerClient.evaluate`` for " +"evaluation)." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:273 -msgid "|e61fce4d43d243e7bb08bdde97d81ce6|" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:517 +msgid "Using the Virtual Client Engine" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:315 -msgid "Aggregate model updates" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:519 +msgid "" +"In this notebook, we want to simulate a federated learning system with 10" +" clients on a single machine. This means that the server and all 10 " +"clients will live on a single machine and share resources such as CPU, " +"GPU, and memory. Having 10 clients would mean having 10 instances of " +"``FlowerClient`` in memory. Doing this on a single machine can quickly " +"exhaust the available memory resources, even if only a subset of these " +"clients participates in a single round of federated learning." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:280 -msgid "Step 5: Repeat steps 1 to 4 until the model converges" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:521 +msgid "" +"In addition to the regular capabilities where server and clients run on " +"multiple machines, Flower, therefore, provides special simulation " +"capabilities that create ``FlowerClient`` instances only when they are " +"actually necessary for training or evaluation. To enable the Flower " +"framework to create clients when necessary, we need to implement a " +"function called ``client_fn`` that creates a ``FlowerClient`` instance on" +" demand. 
Flower calls ``client_fn`` whenever it needs an instance of one " +"particular client to call ``fit`` or ``evaluate`` (those instances are " +"usually discarded after use, so they should not keep any local state). " +"Clients are identified by a client ID, or short ``cid``. The ``cid`` can " +"be used, for example, to load different local data partitions for " +"different clients, as can be seen below:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:282 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:556 +msgid "Starting the training" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:558 msgid "" -"Steps 1 to 4 are what we call a single round of federated learning. The " -"global model parameters get sent to the participating client nodes (step " -"1), the client nodes train on their local data (step 2), they send their " -"updated models to the server (step 3), and the server then aggregates the" -" model updates to get a new version of the global model (step 4)." +"We now have the class ``FlowerClient`` which defines client-side " +"training/evaluation and ``client_fn`` which allows Flower to create " +"``FlowerClient`` instances whenever it needs to call ``fit`` or " +"``evaluate`` on one particular client. The last step is to start the " +"actual simulation using ``flwr.simulation.start_simulation``." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:284 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:560 msgid "" -"During a single round, each client node that participates in that " -"iteration only trains for a little while. This means that after the " -"aggregation step (step 4), we have a model that has been trained on all " -"the data of all participating client nodes, but only for a little while. " -"We then have to repeat this training process over and over again to " -"eventually arrive at a fully trained model that performs well across the " -"data of all client nodes." +"The function ``start_simulation`` accepts a number of arguments, amongst " +"them the ``client_fn`` used to create ``FlowerClient`` instances, the " +"number of clients to simulate (``num_clients``), the number of federated " +"learning rounds (``num_rounds``), and the strategy. The strategy " +"encapsulates the federated learning approach/algorithm, for example, " +"*Federated Averaging* (FedAvg)." msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:289 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:562 msgid "" -"Congratulations, you now understand the basics of federated learning. " -"There's a lot more to discuss, of course, but that was federated learning" -" in a nutshell. In later parts of this tutorial, we will go into more " -"detail. Interesting questions include: How can we select the best client " -"nodes that should participate in the next round? What's the best way to " -"aggregate model updates? How can we handle failing client nodes " -"(stragglers)?" +"Flower has a number of built-in strategies, but we can also use our own " +"strategy implementations to customize nearly all aspects of the federated" +" learning approach. For this example, we use the built-in ``FedAvg`` " +"implementation and customize it using a few basic parameters. 
The last " +"step is the actual call to ``start_simulation`` which - you guessed it - " +"starts the simulation:" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:294 +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:608 +msgid "Behind the scenes" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:610 +msgid "So how does this work? How does Flower execute this simulation?" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:612 +#, python-format msgid "" -"Just like we can train a model on the decentralized data of different " -"client nodes, we can also evaluate the model on that data to receive " -"valuable metrics. This is called federated evaluation, sometimes " -"abbreviated as FE. In fact, federated evaluation is an integral part of " -"most federated learning systems." +"When we call ``start_simulation``, we tell Flower that there are 10 " +"clients (``num_clients=10``). Flower then goes ahead an asks the " +"``FedAvg`` strategy to select clients. ``FedAvg`` knows that it should " +"select 100% of the available clients (``fraction_fit=1.0``), so it goes " +"ahead and selects 10 random clients (i.e., 100% of 10)." +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:614 +msgid "" +"Flower then asks the selected 10 clients to train the model. When the " +"server receives the model parameter updates from the clients, it hands " +"those updates over to the strategy (*FedAvg*) for aggregation. The " +"strategy aggregates those updates and returns the new global model, which" +" then gets used in the next round of federated learning." +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:626 +msgid "Where's the accuracy?" msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:297 -msgid "Federated analytics" -msgstr "" +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:628 +msgid "" +"You may have noticed that all metrics except for ``losses_distributed`` " +"are empty. Where did the ``{\"accuracy\": float(accuracy)}`` go?" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:630 +msgid "" +"Flower can automatically aggregate losses returned by individual clients," +" but it cannot do the same for metrics in the generic metrics dictionary " +"(the one with the ``accuracy`` key). Metrics dictionaries can contain " +"very different kinds of metrics and even key/value pairs that are not " +"metrics at all, so the framework does not (and can not) know how to " +"handle these automatically." +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:632 +msgid "" +"As users, we need to tell the framework how to handle/aggregate these " +"custom metrics, and we do so by passing metric aggregation functions to " +"the strategy. The strategy will then call these functions whenever it " +"receives fit or evaluate metrics from clients. The two possible functions" +" are ``fit_metrics_aggregation_fn`` and " +"``evaluate_metrics_aggregation_fn``." 
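A minimal sketch of the simulation setup described above, assuming a recent Flower 1.x release with the simulation extra installed (the toy ``FlowerClient`` below merely stands in for the tutorial's real client, and the ``num_clients``/``num_rounds`` values are illustrative)::

    import flwr as fl
    import numpy as np

    class FlowerClient(fl.client.NumPyClient):
        """Toy stand-in for the tutorial's FlowerClient."""
        def get_parameters(self, config):
            return [np.zeros(3)]
        def fit(self, parameters, config):
            # Pretend to train: return updated parameters, num_examples, metrics.
            return [p + 1.0 for p in parameters], 5, {}
        def evaluate(self, parameters, config):
            # Return loss, num_examples, and a custom metrics dictionary.
            return 0.5, 5, {"accuracy": 0.9}

    def client_fn(cid: str):
        # One fresh instance per call; the cid would normally select a data partition.
        return FlowerClient().to_client()

    strategy = fl.server.strategy.FedAvg(fraction_fit=1.0)

    history = fl.simulation.start_simulation(
        client_fn=client_fn,
        num_clients=10,
        config=fl.server.ServerConfig(num_rounds=3),
        strategy=strategy,
    )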
+msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:634 +msgid "" +"Let's create a simple weighted averaging function to aggregate the " +"``accuracy`` metric we return from ``evaluate``:" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:660 +msgid "" +"The only thing left to do is to tell the strategy to call this function " +"whenever it receives evaluation metric dictionaries from the clients:" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:697 +msgid "" +"We now have a full system that performs federated training and federated " +"evaluation. It uses the ``weighted_average`` function to aggregate custom" +" evaluation metrics and calculates a single ``accuracy`` metric across " +"all clients on the server side." +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:699 +msgid "" +"The other two categories of metrics (``losses_centralized`` and " +"``metrics_centralized``) are still empty because they only apply when " +"centralized evaluation is being used. Part two of the Flower tutorial " +"will cover centralized evaluation." +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:711 +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:351 +msgid "Final remarks" +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:713 +msgid "" +"Congratulations, you just trained a convolutional neural network, " +"federated over 10 clients! With that, you understand the basics of " +"federated learning with Flower. The same approach you've seen can be used" +" with other machine learning frameworks (not just PyTorch) and tasks (not" +" just CIFAR-10 images classification), for example NLP with Hugging Face " +"Transformers or speech with SpeechBrain." +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:715 +msgid "" +"In the next notebook, we're going to cover some more advanced concepts. " +"Want to customize your strategy? Initialize parameters on the server " +"side? Or evaluate the aggregated model on the server side? We'll cover " +"all this and more in the next tutorial." +msgstr "" + +#: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:733 +msgid "" +"The `Flower Federated Learning Tutorial - Part 2 " +"`__ goes into more depth about strategies and all " +"the advanced things you can build with them." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:9 +msgid "Use a federated learning strategy" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:11 +msgid "" +"Welcome to the next part of the federated learning tutorial. In previous " +"parts of this tutorial, we introduced federated learning with PyTorch and" +" Flower (`part 1 `__)." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:13 +msgid "" +"In this notebook, we'll begin to customize the federated learning system " +"we built in the introductory notebook (again, using `Flower " +"`__ and `PyTorch `__)." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:17 +msgid "Let's move beyond FedAvg with Flower strategies!" 
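A hedged sketch of the weighted averaging idea referred to above, assuming Flower 1.x where ``FedAvg`` accepts an ``evaluate_metrics_aggregation_fn`` argument::

    from typing import List, Tuple

    import flwr as fl
    from flwr.common import Metrics

    def weighted_average(metrics: List[Tuple[int, Metrics]]) -> Metrics:
        # Each entry is (num_examples, metrics_dict) reported by one client's evaluate().
        accuracies = [n * m["accuracy"] for n, m in metrics]
        examples = [n for n, _ in metrics]
        # Weight every client's accuracy by the number of examples it evaluated on.
        return {"accuracy": sum(accuracies) / sum(examples)}

    strategy = fl.server.strategy.FedAvg(
        evaluate_metrics_aggregation_fn=weighted_average,
    )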
+msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:309 +msgid "Strategy customization" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:311 +msgid "" +"So far, everything should look familiar if you've worked through the " +"introductory notebook. With that, we're ready to introduce a number of " +"new features." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:323 +msgid "Server-side parameter **initialization**" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:325 +msgid "" +"Flower, by default, initializes the global model by asking one random " +"client for the initial parameters. In many cases, we want more control " +"over parameter initialization though. Flower therefore allows you to " +"directly pass the initial parameters to the Strategy:" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:370 +msgid "" +"Passing ``initial_parameters`` to the ``FedAvg`` strategy prevents Flower" +" from asking one of the clients for the initial parameters. If we look " +"closely, we can see that the logs do not show any calls to the " +"``FlowerClient.get_parameters`` method." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:382 +msgid "Starting with a customized strategy" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:384 +msgid "" +"We've seen the function ``start_simulation`` before. It accepts a number " +"of arguments, amongst them the ``client_fn`` used to create " +"``FlowerClient`` instances, the number of clients to simulate " +"``num_clients``, the number of rounds ``num_rounds``, and the strategy." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:386 +msgid "" +"The strategy encapsulates the federated learning approach/algorithm, for " +"example, ``FedAvg`` or ``FedAdagrad``. Let's try to use a different " +"strategy this time:" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:424 +msgid "Server-side parameter **evaluation**" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:426 +msgid "" +"Flower can evaluate the aggregated model on the server-side or on the " +"client-side. Client-side and server-side evaluation are similar in some " +"ways, but different in others." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:428 +msgid "" +"**Centralized Evaluation** (or *server-side evaluation*) is conceptually " +"simple: it works the same way that evaluation in centralized machine " +"learning does. If there is a server-side dataset that can be used for " +"evaluation purposes, then that's great. We can evaluate the newly " +"aggregated model after each round of training without having to send the " +"model to clients. We're also fortunate in the sense that our entire " +"evaluation dataset is available at all times." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:430 +msgid "" +"**Federated Evaluation** (or *client-side evaluation*) is more complex, " +"but also more powerful: it doesn't require a centralized dataset and " +"allows us to evaluate models over a larger set of data, which often " +"yields more realistic evaluation results. 
In fact, many scenarios require" +" us to use **Federated Evaluation** if we want to get representative " +"evaluation results at all. But this power comes at a cost: once we start " +"to evaluate on the client side, we should be aware that our evaluation " +"dataset can change over consecutive rounds of learning if those clients " +"are not always available. Moreover, the dataset held by each client can " +"also change over consecutive rounds. This can lead to evaluation results " +"that are not stable, so even if we would not change the model, we'd see " +"our evaluation results fluctuate over consecutive rounds." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:433 +msgid "" +"We've seen how federated evaluation works on the client side (i.e., by " +"implementing the ``evaluate`` method in ``FlowerClient``). Now let's see " +"how we can evaluate aggregated model parameters on the server-side:" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:490 +msgid "Sending/receiving arbitrary values to/from clients" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:492 +msgid "" +"In some situations, we want to configure client-side execution (training," +" evaluation) from the server-side. One example for that is the server " +"asking the clients to train for a certain number of local epochs. Flower " +"provides a way to send configuration values from the server to the " +"clients using a dictionary. Let's look at an example where the clients " +"receive values from the server through the ``config`` parameter in " +"``fit`` (``config`` is also available in ``evaluate``). The ``fit`` " +"method receives the configuration dictionary through the ``config`` " +"parameter and can then read values from this dictionary. In this example," +" it reads ``server_round`` and ``local_epochs`` and uses those values to " +"improve the logging and configure the number of local training epochs:" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:546 +msgid "" +"So how can we send this config dictionary from server to clients? The " +"built-in Flower Strategies provide way to do this, and it works similarly" +" to the way server-side evaluation works. We provide a function to the " +"strategy, and the strategy calls this function for every round of " +"federated learning:" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:576 +msgid "" +"Next, we'll just pass this function to the FedAvg strategy before " +"starting the simulation:" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:613 +msgid "" +"As we can see, the client logs now include the current round of federated" +" learning (which they read from the ``config`` dictionary). We can also " +"configure local training to run for one epoch during the first and second" +" round of federated learning, and then for two epochs during the third " +"round." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:615 +msgid "" +"Clients can also return arbitrary values to the server. To do so, they " +"return a dictionary from ``fit`` and/or ``evaluate``. 
We have seen and " +"used this concept throughout this notebook without mentioning it " +"explicitly: our ``FlowerClient`` returns a dictionary containing a custom" +" key/value pair as the third return value in ``evaluate``." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:627 +msgid "Scaling federated learning" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:629 +msgid "" +"As a last step in this notebook, let's see how we can use Flower to " +"experiment with a large number of clients." +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:651 +#, python-format +msgid "" +"We now have 1000 partitions, each holding 45 training and 5 validation " +"examples. Given that the number of training examples on each client is " +"quite small, we should probably train the model a bit longer, so we " +"configure the clients to perform 3 local training epochs. We should also " +"adjust the fraction of clients selected for training during each round " +"(we don't want all 1000 clients participating in every round), so we " +"adjust ``fraction_fit`` to ``0.05``, which means that only 5% of " +"available clients (so 50 clients) will be selected for training each " +"round:" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:699 +msgid "" +"In this notebook, we've seen how we can gradually enhance our system by " +"customizing the strategy, initializing parameters on the server side, " +"choosing a different strategy, and evaluating models on the server-side. " +"That's quite a bit of flexibility with so little code, right?" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:701 +msgid "" +"In the later sections, we've seen how we can communicate arbitrary values" +" between server and clients to fully customize client-side execution. " +"With that capability, we built a large-scale Federated Learning " +"simulation using the Flower Virtual Client Engine and ran an experiment " +"involving 1000 clients in the same workload - all in a Jupyter Notebook!" +msgstr "" + +#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:719 +msgid "" +"The `Flower Federated Learning Tutorial - Part 3 " +"`__ shows how to build a fully custom ``Strategy`` from " +"scratch." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:9 +msgid "What is Federated Learning?" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:13 +msgid "" +"In this tutorial, you will learn what federated learning is, build your " +"first system in Flower, and gradually extend it. If you work through all " +"parts of the tutorial, you will be able to build advanced federated " +"learning systems that approach the current state of the art in the field." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:15 +msgid "" +"🧑‍🏫 This tutorial starts at zero and expects no familiarity with " +"federated learning. Only a basic understanding of data science and Python" +" programming is assumed." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:17 +msgid "" +"`Star Flower on GitHub `__ ⭐️ and join " +"the open-source Flower community on Slack to connect, ask questions, and " +"get help: `Join Slack `__ 🌼 We'd love to " +"hear from you in the ``#introductions`` channel! 
And if anything is " +"unclear, head over to the ``#questions`` channel." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:19 +msgid "Let's get started!" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:31 +msgid "Classic machine learning" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:33 +msgid "" +"Before we begin to discuss federated learning, let us quickly recap how " +"most machine learning works today." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:35 +msgid "" +"In machine learning, we have a model, and we have data. The model could " +"be a neural network (as depicted here), or something else, like classical" +" linear regression." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:41 +msgid "|80cc4ef4b3224771af9b191bd64bd76f|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:109 +msgid "Model and data" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:47 +msgid "" +"We train the model using the data to perform a useful task. A task could " +"be to detect objects in images, transcribe an audio recording, or play a " +"game like Go." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:53 +msgid "|c9e448ddf27c4d05a9c0c9dbf6dd3c9c|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:111 +msgid "Train model using data" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:59 +msgid "" +"Now, in practice, the training data we work with doesn't originate on the" +" machine we train the model on. It gets created somewhere else." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:61 +msgid "" +"It originates on a smartphone by the user interacting with an app, a car " +"collecting sensor data, a laptop receiving input via the keyboard, or a " +"smart speaker listening to someone trying to sing a song." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:67 +msgid "|9326cea66a0c4775a1e86cd758874ad9|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:113 +msgid "Data on a phone" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:73 +msgid "" +"What's also important to mention, this \"somewhere else\" is usually not " +"just one place, it's many places. It could be several devices all running" +" the same app. But it could also be several organizations, all generating" +" data for the same task." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:79 +msgid "|a1df0dedc616406fb2ac37a9d8ecf899|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:115 +msgid "Data is on many devices" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:85 +msgid "" +"So to use machine learning, or any kind of data analysis, the approach " +"that has been used in the past was to collect all data on a central " +"server. This server can be somewhere in a data center, or somewhere in " +"the cloud." 
+msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:91 +msgid "|56c16903e53c47099d73bcf8b39171cd|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:117 +msgid "Central data collection" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:97 +msgid "" +"Once all the data is collected in one place, we can finally use machine " +"learning algorithms to train our model on the data. This is the machine " +"learning approach that we've basically always relied on." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:103 +msgid "|6b392ec5b09a4b4a983e98d2081095a7|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:119 +msgid "Central model training" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:130 +msgid "Challenges of classical machine learning" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:132 +msgid "" +"The classic machine learning approach we've just seen can be used in some" +" cases. Great examples include categorizing holiday photos, or analyzing " +"web traffic. Cases, where all the data is naturally available on a " +"centralized server." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:138 +msgid "|de1b6f37ef3f4f5bad9a01cb0c809806|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:173 +msgid "Centralized possible" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:144 +msgid "" +"But the approach can not be used in many other cases. Cases, where the " +"data is not available on a centralized server, or cases where the data " +"available on one server is not enough to train a good model." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:150 +msgid "|dab5666c4cf646f2983441af0dab3e21|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:175 +msgid "Centralized impossible" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:156 +msgid "" +"There are many reasons why the classic centralized machine learning " +"approach does not work for a large number of highly important real-world " +"use cases. Those reasons include:" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:158 +msgid "" +"**Regulations**: GDPR (Europe), CCPA (California), PIPEDA (Canada), LGPD " +"(Brazil), PDPL (Argentina), KVKK (Turkey), POPI (South Africa), FSS " +"(Russia), CDPR (China), PDPB (India), PIPA (Korea), APPI (Japan), PDP " +"(Indonesia), PDPA (Singapore), APP (Australia), and other regulations " +"protect sensitive data from being moved. In fact, those regulations " +"sometimes even prevent single organizations from combining their own " +"users' data for artificial intelligence training because those users live" +" in different parts of the world, and their data is governed by different" +" data protection regulations." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:160 +msgid "" +"**User preference**: In addition to regulation, there are use cases where" +" users just expect that no data leaves their device, ever. If you type " +"your passwords and credit card info into the digital keyboard of your " +"phone, you don't expect those passwords to end up on the server of the " +"company that developed that keyboard, do you? 
In fact, that use case was " +"the reason federated learning was invented in the first place." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:161 +msgid "" +"**Data volume**: Some sensors, like cameras, produce such a high data " +"volume that it is neither feasible nor economic to collect all the data " +"(due to, for example, bandwidth or communication efficiency). Think about" +" a national rail service with hundreds of train stations across the " +"country. If each of these train stations is outfitted with a number of " +"security cameras, the volume of raw on-device data they produce requires " +"incredibly powerful and exceedingly expensive infrastructure to process " +"and store. And most of the data isn't even useful." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:164 +msgid "Examples where centralized machine learning does not work include:" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:166 +msgid "" +"Sensitive healthcare records from multiple hospitals to train cancer " +"detection models" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:167 +msgid "" +"Financial information from different organizations to detect financial " +"fraud" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:168 +msgid "Location data from your electric car to make better range prediction" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:169 +msgid "End-to-end encrypted messages to train better auto-complete models" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:171 +msgid "" +"The popularity of privacy-enhancing systems like the `Brave " +"`__ browser or the `Signal `__ " +"messenger shows that users care about privacy. In fact, they choose the " +"privacy-enhancing version over other alternatives, if such an alternative" +" exists. But what can we do to apply machine learning and data science to" +" these cases to utilize private data? After all, these are all areas that" +" would benefit significantly from recent advances in AI." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:186 +msgid "Federated learning" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:188 +msgid "" +"Federated learning simply reverses this approach. It enables machine " +"learning on distributed data by moving the training to the data, instead " +"of moving the data to the training. Here's the single-sentence " +"explanation:" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:190 +msgid "Central machine learning: move the data to the computation" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:191 +msgid "Federated (machine) learning: move the computation to the data" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:193 +msgid "" +"By doing so, it enables us to use machine learning (and other data " +"science approaches) in areas where it wasn't possible before. We can now " +"train excellent medical AI models by enabling different hospitals to work" +" together. We can solve financial fraud by training AI models on the data" +" of different financial institutions. We can build novel privacy-" +"enhancing applications (such as secure messaging) that have better built-" +"in AI than their non-privacy-enhancing alternatives. 
And those are just a" +" few of the examples that come to mind. As we deploy federated learning, " +"we discover more and more areas that can suddenly be reinvented because " +"they now have access to vast amounts of previously inaccessible data." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:196 +msgid "" +"So how does federated learning work, exactly? Let's start with an " +"intuitive explanation." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:199 +msgid "Federated learning in five steps" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:202 +msgid "Step 0: Initialize global model" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:204 +msgid "" +"We start by initializing the model on the server. This is exactly the " +"same in classic centralized learning: we initialize the model parameters," +" either randomly or from a previously saved checkpoint." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:210 +msgid "|1d73c61ed0e34484bc5f4cb2b86996c1|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:307 +msgid "Initialize global model" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:217 +msgid "" +"Step 1: Send model to a number of connected organizations/devices (client" +" nodes)" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:219 +msgid "" +"Next, we send the parameters of the global model to the connected client " +"nodes (think: edge devices like smartphones or servers belonging to " +"organizations). This is to ensure that each participating node starts " +"their local training using the same model parameters. We often use only a" +" few of the connected nodes instead of all nodes. The reason for this is " +"that selecting more and more client nodes has diminishing returns." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:225 +msgid "|ecce7ba27b174ddf906ee9c12cc9c545|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:309 +msgid "Send global model" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:232 +msgid "" +"Step 2: Train model locally on the data of each organization/device " +"(client node)" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:234 +msgid "" +"Now that all (selected) client nodes have the latest version of the " +"global model parameters, they start the local training. They use their " +"own local dataset to train their own local model. They don't train the " +"model until full convergence, but they only train for a little while. " +"This could be as little as one epoch on the local data, or even just a " +"few steps (mini-batches)." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:240 +msgid "|30eee0b0ca684a8d9187380a5f71d6af|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:311 +msgid "Train on local data" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:247 +msgid "Step 3: Return model updates back to the server" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:249 +msgid "" +"After local training, each client node has a slightly different version " +"of the model parameters they originally received. 
The parameters are all " +"different because each client node has different examples in its local " +"dataset. The client nodes then send those model updates back to the " +"server. The model updates they send can either be the full model " +"parameters or just the gradients that were accumulated during local " +"training." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:255 +msgid "|22e8fb88ba204b04b61212b2460e6b48|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:313 +msgid "Send model updates" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:262 +msgid "Step 4: Aggregate model updates into a new global model" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:264 +msgid "" +"The server receives model updates from the selected client nodes. If it " +"selected 100 client nodes, it now has 100 slightly different versions of " +"the original global model, each trained on the local data of one client. " +"But didn't we want to have one model that contains the learnings from the" +" data of all 100 client nodes?" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:266 +msgid "" +"In order to get one single model, we have to combine all the model " +"updates we received from the client nodes. This process is called " +"*aggregation*, and there are many different ways to do it. The most basic" +" way to do it is called *Federated Averaging* (`McMahan et al., 2016 " +"`__), often abbreviated as *FedAvg*. " +"*FedAvg* takes the 100 model updates and, as the name suggests, averages " +"them. To be more precise, it takes the *weighted average* of the model " +"updates, weighted by the number of examples each client used for " +"training. The weighting is important to make sure that each data example " +"has the same \"influence\" on the resulting global model. If one client " +"has 10 examples, and another client has 100 examples, then - without " +"weighting - each of the 10 examples would influence the global model ten " +"times as much as each of the 100 examples." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:273 +msgid "|5d53a3f539644cd5a4ba28696421b01a|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:315 +msgid "Aggregate model updates" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:280 +msgid "Step 5: Repeat steps 1 to 4 until the model converges" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:282 +msgid "" +"Steps 1 to 4 are what we call a single round of federated learning. The " +"global model parameters get sent to the participating client nodes (step " +"1), the client nodes train on their local data (step 2), they send their " +"updated models to the server (step 3), and the server then aggregates the" +" model updates to get a new version of the global model (step 4)." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:284 +msgid "" +"During a single round, each client node that participates in that " +"iteration only trains for a little while. This means that after the " +"aggregation step (step 4), we have a model that has been trained on all " +"the data of all participating client nodes, but only for a little while. 
" +"We then have to repeat this training process over and over again to " +"eventually arrive at a fully trained model that performs well across the " +"data of all client nodes." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:289 +msgid "" +"Congratulations, you now understand the basics of federated learning. " +"There's a lot more to discuss, of course, but that was federated learning" +" in a nutshell. In later parts of this tutorial, we will go into more " +"detail. Interesting questions include: How can we select the best client " +"nodes that should participate in the next round? What's the best way to " +"aggregate model updates? How can we handle failing client nodes " +"(stragglers)?" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:294 +msgid "" +"Just like we can train a model on the decentralized data of different " +"client nodes, we can also evaluate the model on that data to receive " +"valuable metrics. This is called federated evaluation, sometimes " +"abbreviated as FE. In fact, federated evaluation is an integral part of " +"most federated learning systems." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:297 +msgid "Federated analytics" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:299 +msgid "" +"In many cases, machine learning isn't necessary to derive value from " +"data. Data analysis can yield valuable insights, but again, there's often" +" not enough data to get a clear answer. What's the average age at which " +"people develop a certain type of health condition? Federated analytics " +"enables such queries over multiple client nodes. It is usually used in " +"conjunction with other privacy-enhancing technologies like secure " +"aggregation to prevent the server from seeing the results submitted by " +"individual client nodes." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:305 +msgid "" +"Differential privacy (DP) is often mentioned in the context of Federated " +"Learning. It is a privacy-preserving method used when analyzing and " +"sharing statistical data, ensuring the privacy of individual " +"participants. DP achieves this by adding statistical noise to the model " +"updates, ensuring any individual participants’ information cannot be " +"distinguished or re-identified. This technique can be considered an " +"optimization that provides a quantifiable privacy protection measure." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:326 +msgid "Flower" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:328 +msgid "" +"Federated learning, federated evaluation, and federated analytics require" +" infrastructure to move machine learning models back and forth, train and" +" evaluate them on local data, and then aggregate the updated models. " +"Flower provides the infrastructure to do exactly that in an easy, " +"scalable, and secure way. In short, Flower presents a unified approach to" +" federated learning, analytics, and evaluation. It allows the user to " +"federate any workload, any ML framework, and any programming language." 
+msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:334 +msgid "|6887fea9613d4dff8c9aae62a1f207e2|" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:340 +msgid "" +"Flower federated learning server and client nodes (car, scooter, personal" +" computer, roomba, and phone)" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:353 +msgid "" +"Congratulations, you just learned the basics of federated learning and " +"how it relates to the classic (centralized) machine learning!" +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:355 +msgid "" +"In the next part of this tutorial, we are going to build a first " +"federated learning system with Flower." +msgstr "" + +#: ../../source/tutorial-series-what-is-federated-learning.ipynb:373 +msgid "" +"The `Flower Federated Learning Tutorial - Part 1 " +"`__ shows how to build a simple federated learning system " +"with PyTorch and Flower." +msgstr "" + +#~ msgid "" +#~ "Configuring and setting up the " +#~ ":code:`Dockerfile` as well the configuration" +#~ " for the devcontainer can be a " +#~ "bit more involved. The good thing " +#~ "is you want have to do it. " +#~ "Usually it should be enough to " +#~ "install Docker on your system and " +#~ "ensure its available on your command " +#~ "line. Additionally, install the `VSCode " +#~ "Containers Extension `_." +#~ msgstr "" + +#~ msgid "" +#~ "``flwr = { path = " +#~ "\"../../dist/flwr-1.0.0-py3-none-any.whl\" }`` " +#~ "(without extras)" +#~ msgstr "" + +#~ msgid "" +#~ "``flwr = { path = " +#~ "\"../../dist/flwr-1.0.0-py3-none-any.whl\", extras =" +#~ " [\"simulation\"] }`` (with extras)" +#~ msgstr "" + +#~ msgid "Upload the whl (e.g., ``flwr-1.7.0-py3-none-any.whl``)" +#~ msgstr "" + +#~ msgid "" +#~ "Change ``!pip install -q 'flwr[simulation]'" +#~ " torch torchvision matplotlib`` to ``!pip" +#~ " install -q 'flwr-1.7.0-py3-none-" +#~ "any.whl[simulation]' torch torchvision matplotlib``" +#~ msgstr "" + +#~ msgid "Before the release" +#~ msgstr "" + +#~ msgid "" +#~ "Update the changelog (``changelog.md``) with" +#~ " all relevant changes that happened " +#~ "after the last release. If the " +#~ "last release was tagged ``v1.2.0``, you" +#~ " can use the following URL to " +#~ "see all commits that got merged " +#~ "into ``main`` since then:" +#~ msgstr "" + +#~ msgid "" +#~ "`GitHub: Compare v1.2.0...main " +#~ "`_" +#~ msgstr "" + +#~ msgid "" +#~ "Thank the authors who contributed since" +#~ " the last release. This can be " +#~ "done by running the ``./dev/add-" +#~ "shortlog.sh`` convenience script (it can " +#~ "be ran multiple times and will " +#~ "update the names in the list if" +#~ " new contributors were added in the" +#~ " meantime)." +#~ msgstr "" + +#~ msgid "" +#~ "Update the ``changelog.md`` section header " +#~ "``Unreleased`` to contain the version " +#~ "number and date for the release " +#~ "you are building. Create a pull " +#~ "request with the change." +#~ msgstr "" + +#~ msgid "" +#~ "Tag the release commit with the " +#~ "version number as soon as the PR" +#~ " is merged: ``git tag v0.12.3``, then" +#~ " ``git push --tags``. This will " +#~ "create a draft release on GitHub " +#~ "containing the correct artifacts and the" +#~ " relevant part of the changelog." 
+#~ msgstr "" + +#~ msgid "" +#~ "Note that, in order to build the" +#~ " documentation locally (with ``poetry run" +#~ " make html``, like described below), " +#~ "`Pandoc _` needs " +#~ "to be installed on the system." +#~ msgstr "" + +#~ msgid "" +#~ "If you're familiar with how contributing" +#~ " on GitHub works, you can directly" +#~ " checkout our `getting started guide " +#~ "for contributors `_ and examples " +#~ "of `good first contributions " +#~ "`_." +#~ msgstr "" + +#~ msgid "" +#~ "This will create a `flower/` (or " +#~ "the name of your fork if you " +#~ "renamed it) folder in the current " +#~ "working directory." +#~ msgstr "" + +#~ msgid "Otherwise you can always find this option in the `Branches` page." +#~ msgstr "" + +#~ msgid "" +#~ "Once you click the `Compare & pull" +#~ " request` button, you should see " +#~ "something similar to this:" +#~ msgstr "" + +#~ msgid "Find the source file in `doc/source`" +#~ msgstr "" + +#~ msgid "" +#~ "Make the change in the `.rst` file" +#~ " (beware, the dashes under the title" +#~ " should be the same length as " +#~ "the title itself)" +#~ msgstr "" + +#~ msgid "Change the file name to `save-progress.rst`" +#~ msgstr "" + +#~ msgid "Add a redirect rule to `doc/source/conf.py`" +#~ msgstr "" + +#~ msgid "" +#~ "This will cause a redirect from " +#~ "`saving-progress.html` to `save-progress.html`," +#~ " old links will continue to work." +#~ msgstr "" + +#~ msgid "" +#~ "For the lateral navigation bar to " +#~ "work properly, it is very important " +#~ "to update the `index.rst` file as " +#~ "well. This is where we define the" +#~ " whole arborescence of the navbar." +#~ msgstr "" + +#~ msgid "Find and modify the file name in `index.rst`" +#~ msgstr "" + +#~ msgid "Add CI job to deploy the staging system when the `main` branch changes" +#~ msgstr "" + +#~ msgid "`Python 3.7 `_ or above" +#~ msgstr "" + +#~ msgid "" +#~ "First, clone the `Flower repository " +#~ "`_ from GitHub::" +#~ msgstr "" + +#~ msgid "" +#~ "Second, create a virtual environment " +#~ "(and activate it). If you chose to" +#~ " use :code:`pyenv` (with the :code" +#~ ":`pyenv-virtualenv` plugin) and already " +#~ "have it installed , you can use" +#~ " the following convenience script (by " +#~ "default it will use :code:`Python " +#~ "3.8.17`, but you can change it by" +#~ " providing a specific :code:``)::" +#~ msgstr "" + +#~ msgid "" +#~ "If you don't have :code:`pyenv` " +#~ "installed, you can use the following " +#~ "script that will install pyenv, set " +#~ "it up and create the virtual " +#~ "environment (with :code:`Python 3.8.17` by " +#~ "default)::" +#~ msgstr "" + +#~ msgid "" +#~ "Third, install the Flower package in " +#~ "development mode (think :code:`pip install " +#~ "-e`) along with all necessary " +#~ "dependencies::" +#~ msgstr "" + +#~ msgid "" +#~ "Developers could run the full set " +#~ "of Github Actions workflows under their" +#~ " local environment by using `Act " +#~ "_`. Please refer to" +#~ " the installation instructions under the" +#~ " linked repository and run the next" +#~ " command under Flower main cloned " +#~ "repository folder::" +#~ msgstr "" + +#~ msgid "" +#~ "Please note that these components are" +#~ " still experimental, the correct " +#~ "configuration of DP for a specific " +#~ "task is still an unsolved problem." +#~ msgstr "" + +#~ msgid "" +#~ "The distribution of the update norm " +#~ "has been shown to vary from " +#~ "task-to-task and to evolve as " +#~ "training progresses. 
Therefore, we use " +#~ "an adaptive approach [andrew]_ that " +#~ "continuously adjusts the clipping threshold" +#~ " to track a prespecified quantile of" +#~ " the update norm distribution." +#~ msgstr "" + +#~ msgid "" +#~ "We make (and attempt to enforce) a" +#~ " number of assumptions that must be" +#~ " satisfied to ensure that the " +#~ "training process actually realises the " +#~ ":math:`(\\epsilon, \\delta)` guarantees the " +#~ "user has in mind when configuring " +#~ "the setup." +#~ msgstr "" + +#~ msgid "" +#~ "The first two are useful for " +#~ "eliminating a multitude of complications " +#~ "associated with calibrating the noise to" +#~ " the clipping threshold while the " +#~ "third one is required to comply " +#~ "with the assumptions of the privacy " +#~ "analysis." +#~ msgstr "" + +#~ msgid "" +#~ "The first version of our solution " +#~ "was to define a decorator whose " +#~ "constructor accepted, among other things, " +#~ "a boolean valued variable indicating " +#~ "whether adaptive clipping was to be " +#~ "enabled or not. We quickly realized " +#~ "that this would clutter its " +#~ ":code:`__init__()` function with variables " +#~ "corresponding to hyperparameters of adaptive" +#~ " clipping that would remain unused " +#~ "when it was disabled. A cleaner " +#~ "implementation could be achieved by " +#~ "splitting the functionality into two " +#~ "decorators, :code:`DPFedAvgFixed` and " +#~ ":code:`DPFedAvgAdaptive`, with the latter sub-" +#~ " classing the former. The constructors " +#~ "for both classes accept a boolean " +#~ "parameter :code:`server_side_noising`, which, as " +#~ "the name suggests, determines where " +#~ "noising is to be performed." +#~ msgstr "" + +#~ msgid "" +#~ ":code:`aggregate_fit()`: We check whether any" +#~ " of the sampled clients dropped out" +#~ " or failed to upload an update " +#~ "before the round timed out. In " +#~ "that case, we need to abort the" +#~ " current round, discarding any successful" +#~ " updates that were received, and move" +#~ " on to the next one. On the " +#~ "other hand, if all clients responded " +#~ "successfully, we must force the " +#~ "averaging of the updates to happen " +#~ "in an unweighted manner by intercepting" +#~ " the :code:`parameters` field of " +#~ ":code:`FitRes` for each received update " +#~ "and setting it to 1. Furthermore, " +#~ "if :code:`server_side_noising=true`, each update " +#~ "is perturbed with an amount of " +#~ "noise equal to what it would have" +#~ " been subjected to had client-side" +#~ " noising being enabled. This entails " +#~ "*pre*-processing of the arguments to " +#~ "this method before passing them on " +#~ "to the wrappee's implementation of " +#~ ":code:`aggregate_fit()`." +#~ msgstr "" + +#~ msgid "" +#~ "McMahan, H. Brendan, et al. \"Learning" +#~ " differentially private recurrent language " +#~ "models.\" arXiv preprint arXiv:1710.06963 " +#~ "(2017)." +#~ msgstr "" + +#~ msgid "" +#~ "Andrew, Galen, et al. \"Differentially " +#~ "private learning with adaptive clipping.\" " +#~ "Advances in Neural Information Processing " +#~ "Systems 34 (2021): 17455-17466." +#~ msgstr "" + +#~ msgid "" +#~ "The following command can be used " +#~ "to verfiy if Flower was successfully " +#~ "installed. 
If everything worked, it " +#~ "should print the version of Flower " +#~ "to the command line::" +#~ msgstr "" + +#~ msgid "flwr (Python API reference)" +#~ msgstr "" + +#~ msgid "start_client" +#~ msgstr "" + +#~ msgid "start_numpy_client" +#~ msgstr "" + +#~ msgid "start_simulation" +#~ msgstr "" + +#~ msgid "server.start_server" +#~ msgstr "" + +#~ msgid "server.strategy" +#~ msgstr "" + +#~ msgid "server.strategy.Strategy" +#~ msgstr "" + +#~ msgid "server.strategy.FedAvg" +#~ msgstr "" + +#~ msgid "server.strategy.FedAvgM" +#~ msgstr "" + +#~ msgid "server.strategy.FedMedian" +#~ msgstr "" + +#~ msgid "server.strategy.QFedAvg" +#~ msgstr "" + +#~ msgid "server.strategy.FaultTolerantFedAvg" +#~ msgstr "" + +#~ msgid "server.strategy.FedOpt" +#~ msgstr "" + +#~ msgid "server.strategy.FedProx" +#~ msgstr "" + +#~ msgid "server.strategy.FedAdagrad" +#~ msgstr "" + +#~ msgid "server.strategy.FedAdam" +#~ msgstr "" + +#~ msgid "server.strategy.FedYogi" +#~ msgstr "" + +#~ msgid "server.strategy.FedTrimmedAvg" +#~ msgstr "" + +#~ msgid "server.strategy.Krum" +#~ msgstr "" + +#~ msgid "server.strategy.FedXgbNnAvg" +#~ msgstr "" + +#~ msgid "server.strategy.DPFedAvgAdaptive" +#~ msgstr "" + +#~ msgid "server.strategy.DPFedAvgFixed" +#~ msgstr "" + +#~ msgid "" +#~ "**Fix the incorrect return types of " +#~ "Strategy** " +#~ "([#2432](https://github.com/adap/flower/pull/2432/files))" +#~ msgstr "" + +#~ msgid "" +#~ "The types of the return values in" +#~ " the docstrings in two methods " +#~ "(`aggregate_fit` and `aggregate_evaluate`) now " +#~ "match the hint types in the code." +#~ msgstr "" + +#~ msgid "" +#~ "Using the `client_fn`, Flower clients " +#~ "can interchangeably run as standalone " +#~ "processes (i.e. via `start_client`) or " +#~ "in simulation (i.e. via `start_simulation`)" +#~ " without requiring changes to how the" +#~ " client class is defined and " +#~ "instantiated. Calling `start_numpy_client` is " +#~ "now deprecated." +#~ msgstr "" + +#~ msgid "" +#~ "**Update Flower Examples** " +#~ "([#2384](https://github.com/adap/flower/pull/2384)), " +#~ "([#2425](https://github.com/adap/flower/pull/2425))" +#~ msgstr "" + +#~ msgid "" +#~ "**General updates to baselines** " +#~ "([#2301](https://github.com/adap/flower/pull/2301), " +#~ "[#2305](https://github.com/adap/flower/pull/2305), " +#~ "[#2307](https://github.com/adap/flower/pull/2307), " +#~ "[#2327](https://github.com/adap/flower/pull/2327), " +#~ "[#2435](https://github.com/adap/flower/pull/2435))" +#~ msgstr "" + +#~ msgid "" +#~ "**General updates to the simulation " +#~ "engine** ([#2331](https://github.com/adap/flower/pull/2331), " +#~ "[#2447](https://github.com/adap/flower/pull/2447), " +#~ "[#2448](https://github.com/adap/flower/pull/2448))" +#~ msgstr "" + +#~ msgid "" +#~ "**General improvements** " +#~ "([#2309](https://github.com/adap/flower/pull/2309), " +#~ "[#2310](https://github.com/adap/flower/pull/2310), " +#~ "[2313](https://github.com/adap/flower/pull/2313), " +#~ "[#2316](https://github.com/adap/flower/pull/2316), " +#~ "[2317](https://github.com/adap/flower/pull/2317),[#2349](https://github.com/adap/flower/pull/2349)," +#~ " [#2360](https://github.com/adap/flower/pull/2360), " +#~ "[#2402](https://github.com/adap/flower/pull/2402), " +#~ "[#2446](https://github.com/adap/flower/pull/2446))" +#~ msgstr "" + +#~ msgid "" +#~ "`flower-superlink --driver-api-address " +#~ "\"0.0.0.0:8081\" --fleet-api-address " +#~ "\"0.0.0.0:8086\"`" +#~ msgstr "" + +#~ msgid "" +#~ "That's it for the client. 
We only" +#~ " have to implement :code:`Client` or " +#~ ":code:`NumPyClient` and call " +#~ ":code:`fl.client.start_client()`. The string " +#~ ":code:`\"0.0.0.0:8080\"` tells the client " +#~ "which server to connect to. In our" +#~ " case we can run the server and" +#~ " the client on the same machine, " +#~ "therefore we use :code:`\"0.0.0.0:8080\"`. If" +#~ " we run a truly federated workload" +#~ " with the server and clients running" +#~ " on different machines, all that " +#~ "needs to change is the " +#~ ":code:`server_address` we pass to the " +#~ "client." +#~ msgstr "" + +#~ msgid "" +#~ "That's it for the client. We only" +#~ " have to implement :code:`Client` or " +#~ ":code:`NumPyClient` and call " +#~ ":code:`fl.client.start_client()`. The string " +#~ ":code:`\"[::]:8080\"` tells the client which" +#~ " server to connect to. In our " +#~ "case we can run the server and " +#~ "the client on the same machine, " +#~ "therefore we use :code:`\"[::]:8080\"`. If " +#~ "we run a truly federated workload " +#~ "with the server and clients running " +#~ "on different machines, all that needs" +#~ " to change is the :code:`server_address`" +#~ " we point the client at." +#~ msgstr "" + +#~ msgid "" +#~ "Let's build a horizontal federated " +#~ "learning system using XGBoost and " +#~ "Flower!" +#~ msgstr "" + +#~ msgid "" +#~ "Please refer to the `full code " +#~ "example `_ to learn " +#~ "more." +#~ msgstr "" + +#~ msgid "" +#~ "In this notebook, we'll build a " +#~ "federated learning system using Flower " +#~ "and PyTorch. In part 1, we use " +#~ "PyTorch for the model training pipeline" +#~ " and data loading. In part 2, " +#~ "we continue to federate the PyTorch-" +#~ "based pipeline using Flower." +#~ msgstr "" + +#~ msgid "" +#~ "Next, we install the necessary packages" +#~ " for PyTorch (``torch`` and " +#~ "``torchvision``) and Flower (``flwr``):" +#~ msgstr "" + +#~ msgid "" +#~ "Federated learning can be applied to " +#~ "many different types of tasks across " +#~ "different domains. In this tutorial, we" +#~ " introduce federated learning by training" +#~ " a simple convolutional neural network " +#~ "(CNN) on the popular CIFAR-10 dataset." +#~ " CIFAR-10 can be used to train " +#~ "image classifiers that distinguish between " +#~ "images from ten different classes:" +#~ msgstr "" + +#~ msgid "" +#~ "Each organization will act as a " +#~ "client in the federated learning system." +#~ " So having ten organizations participate" +#~ " in a federation means having ten " +#~ "clients connected to the federated " +#~ "learning server:" +#~ msgstr "" + +#~ msgid "" +#~ "Let's now load the CIFAR-10 training " +#~ "and test set, partition them into " +#~ "ten smaller datasets (each split into" +#~ " training and validation set), and " +#~ "wrap the resulting partitions by " +#~ "creating a PyTorch ``DataLoader`` for " +#~ "each of them:" +#~ msgstr "" + +#~ msgid "|ed6498a023f2477a9ccd57ee4514bda4|" +#~ msgstr "" + +#~ msgid "|5a4f742489ac4f819afefdd4dc9ab272|" +#~ msgstr "" + +#~ msgid "|3331c80cd05045f6a56524d8e3e76d0c|" +#~ msgstr "" + +#~ msgid "|4987b26884ec4b2c8f06c1264bcebe60|" +#~ msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:299 -msgid "" -"In many cases, machine learning isn't necessary to derive value from " -"data. Data analysis can yield valuable insights, but again, there's often" -" not enough data to get a clear answer. What's the average age at which " -"people develop a certain type of health condition? 
Federated analytics " -"enables such queries over multiple client nodes. It is usually used in " -"conjunction with other privacy-enhancing technologies like secure " -"aggregation to prevent the server from seeing the results submitted by " -"individual client nodes." -msgstr "" +#~ msgid "|ec8ae2d778aa493a986eb2fa29c220e5|" +#~ msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:303 -msgid "Differential Privacy" -msgstr "" +#~ msgid "|b8949d0669fe4f8eadc9a4932f4e9c57|" +#~ msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:305 -msgid "" -"Differential privacy (DP) is often mentioned in the context of Federated " -"Learning. It is a privacy-preserving method used when analyzing and " -"sharing statistical data, ensuring the privacy of individual " -"participants. DP achieves this by adding statistical noise to the model " -"updates, ensuring any individual participants’ information cannot be " -"distinguished or re-identified. This technique can be considered an " -"optimization that provides a quantifiable privacy protection measure." -msgstr "" +#~ msgid "|94ff30bdcd09443e8488b5f29932a541|" +#~ msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:326 -msgid "Flower" -msgstr "" +#~ msgid "|48dccf1d6d0544bba8917d2783a47719|" +#~ msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:328 -msgid "" -"Federated learning, federated evaluation, and federated analytics require" -" infrastructure to move machine learning models back and forth, train and" -" evaluate them on local data, and then aggregate the updated models. " -"Flower provides the infrastructure to do exactly that in an easy, " -"scalable, and secure way. In short, Flower presents a unified approach to" -" federated learning, analytics, and evaluation. It allows the user to " -"federate any workload, any ML framework, and any programming language." -msgstr "" +#~ msgid "|0366618db96b4f329f0d4372d1150fde|" +#~ msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:334 -msgid "|08cb60859b07461588fe44e55810b050|" -msgstr "" +#~ msgid "|ac80eddc76e6478081b1ca35eed029c0|" +#~ msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:340 -msgid "" -"Flower federated learning server and client nodes (car, scooter, personal" -" computer, roomba, and phone)" -msgstr "" +#~ msgid "|1ac94140c317450e89678db133c7f3c2|" +#~ msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:353 -msgid "" -"Congratulations, you just learned the basics of federated learning and " -"how it relates to the classic (centralized) machine learning!" -msgstr "" +#~ msgid "|f8850c6e96fc4430b55e53bba237a7c0|" +#~ msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:355 -msgid "" -"In the next part of this tutorial, we are going to build a first " -"federated learning system with Flower." -msgstr "" +#~ msgid "|4a368fdd3fc34adabd20a46752a68582|" +#~ msgstr "" -#: ../../source/tutorial-series-what-is-federated-learning.ipynb:373 -msgid "" -"The `Flower Federated Learning Tutorial - Part 1 " -"`__ shows how to build a simple federated learning system " -"with PyTorch and Flower." -msgstr "" +#~ msgid "|40f69c17bb444652a7c8dfe577cd120e|" +#~ msgstr "" #~ msgid "" -#~ "Configuring and setting up the " -#~ ":code:`Dockerfile` as well the configuration" -#~ " for the devcontainer can be a " -#~ "bit more involved. The good thing " -#~ "is you want have to do it. 
" -#~ "Usually it should be enough to " -#~ "install Docker on your system and " -#~ "ensure its available on your command " -#~ "line. Additionally, install the `VSCode " -#~ "Containers Extension `_." +#~ "Please follow the first section on " +#~ "`Run Flower using Docker " +#~ "`_ which covers this" +#~ " step in more detail." #~ msgstr "" #~ msgid "" -#~ "``flwr = { path = " -#~ "\"../../dist/flwr-1.0.0-py3-none-any.whl\" }`` " -#~ "(without extras)" +#~ "Since `Flower 1.5 `_ we have " +#~ "introduced translations to our doc " +#~ "pages, but, as you might have " +#~ "noticed, the translations are often " +#~ "imperfect. If you speak languages other" +#~ " than English, you might be able " +#~ "to help us in our effort to " +#~ "make Federated Learning accessible to as" +#~ " many people as possible by " +#~ "contributing to those translations! This " +#~ "might also be a great opportunity " +#~ "for those wanting to become open " +#~ "source contributors with little prerequistes." #~ msgstr "" #~ msgid "" -#~ "``flwr = { path = " -#~ "\"../../dist/flwr-1.0.0-py3-none-any.whl\", extras =" -#~ " [\"simulation\"] }`` (with extras)" +#~ "You input your translation in the " +#~ "textbox at the top and then, once" +#~ " you are happy with it, you " +#~ "either press ``Save and continue`` (to" +#~ " save the translation and go to " +#~ "the next untranslated string), ``Save " +#~ "and stay`` (to save the translation " +#~ "and stay on the same page), " +#~ "``Suggest`` (to add your translation to" +#~ " suggestions for other users to " +#~ "view), or ``Skip`` (to go to the" +#~ " next untranslated string without saving" +#~ " anything)." #~ msgstr "" -#~ msgid "Upload the whl (e.g., ``flwr-1.7.0-py3-none-any.whl``)" +#~ msgid "" +#~ "The first thing we need to do " +#~ "is to define a message type for" +#~ " the RPC system in :code:`transport.proto`." +#~ " Note that we have to do it " +#~ "for both the request and response " +#~ "messages. For more details on the " +#~ "syntax of proto3, please see the " +#~ "`official documentation `_." #~ msgstr "" #~ msgid "" -#~ "Change ``!pip install -q 'flwr[simulation]'" -#~ " torch torchvision matplotlib`` to ``!pip" -#~ " install -q 'flwr-1.7.0-py3-none-" -#~ "any.whl[simulation]' torch torchvision matplotlib``" +#~ "Source: `Official VSCode documentation " +#~ "`_" #~ msgstr "" -#~ msgid "Before the release" +#~ msgid "" +#~ "`Developing inside a Container " +#~ "`_" #~ msgstr "" #~ msgid "" -#~ "Update the changelog (``changelog.md``) with" -#~ " all relevant changes that happened " -#~ "after the last release. If the " -#~ "last release was tagged ``v1.2.0``, you" -#~ " can use the following URL to " -#~ "see all commits that got merged " -#~ "into ``main`` since then:" +#~ "`Remote development in Containers " +#~ "`_" #~ msgstr "" #~ msgid "" -#~ "`GitHub: Compare v1.2.0...main " -#~ "`_" +#~ "If you are not familiar with " +#~ "Flower Baselines, you should probably " +#~ "check-out our `contributing guide for " +#~ "baselines `_." #~ msgstr "" #~ msgid "" -#~ "Thank the authors who contributed since" -#~ " the last release. This can be " -#~ "done by running the ``./dev/add-" -#~ "shortlog.sh`` convenience script (it can " -#~ "be ran multiple times and will " -#~ "update the names in the list if" -#~ " new contributors were added in the" -#~ " meantime)." +#~ "You should then check out the open" +#~ " `issues " +#~ "`_" +#~ " for baseline requests. 
If you find" +#~ " a baseline that you'd like to " +#~ "work on and that has no assignes," +#~ " feel free to assign it to " +#~ "yourself and start working on it!" #~ msgstr "" #~ msgid "" -#~ "Update the ``changelog.md`` section header " -#~ "``Unreleased`` to contain the version " -#~ "number and date for the release " -#~ "you are building. Create a pull " -#~ "request with the change." +#~ "If you're familiar with how contributing" +#~ " on GitHub works, you can directly" +#~ " checkout our `getting started guide " +#~ "for contributors `_." #~ msgstr "" #~ msgid "" -#~ "Tag the release commit with the " -#~ "version number as soon as the PR" -#~ " is merged: ``git tag v0.12.3``, then" -#~ " ``git push --tags``. This will " -#~ "create a draft release on GitHub " -#~ "containing the correct artifacts and the" -#~ " relevant part of the changelog." +#~ "Git is a distributed version control " +#~ "tool. This allows for an entire " +#~ "codebase's history to be stored and " +#~ "every developer's machine. It is a " +#~ "software that will need to be " +#~ "installed on your local machine, you " +#~ "can follow this `guide " +#~ "`_ to set it up." #~ msgstr "" #~ msgid "" -#~ "Note that, in order to build the" -#~ " documentation locally (with ``poetry run" -#~ " make html``, like described below), " -#~ "`Pandoc _` needs " -#~ "to be installed on the system." +#~ "A fork is a personal copy of " +#~ "a GitHub repository. To create one " +#~ "for Flower, you must navigate to " +#~ "https://github.com/adap/flower (while connected to" +#~ " your GitHub account) and click the" +#~ " ``Fork`` button situated on the top" +#~ " right of the page." #~ msgstr "" #~ msgid "" -#~ "If you're familiar with how contributing" -#~ " on GitHub works, you can directly" -#~ " checkout our `getting started guide " -#~ "for contributors `_ and examples " -#~ "of `good first contributions " -#~ "`_." +#~ "Now we will add an upstream " +#~ "address to our repository. Still in " +#~ "the same directroy, we must run " +#~ "the following command:" #~ msgstr "" #~ msgid "" -#~ "This will create a `flower/` (or " -#~ "the name of your fork if you " -#~ "renamed it) folder in the current " -#~ "working directory." +#~ "This can be achieved by following " +#~ "this `getting started guide for " +#~ "contributors`_ (note that you won't need" +#~ " to clone the repository). Once you" +#~ " are able to write code and " +#~ "test it, you can finally start " +#~ "making changes!" #~ msgstr "" -#~ msgid "Otherwise you can always find this option in the `Branches` page." +#~ msgid "" +#~ "For our documentation, we’ve started to" +#~ " use the `Diàtaxis framework " +#~ "`_." #~ msgstr "" #~ msgid "" -#~ "Once you click the `Compare & pull" -#~ " request` button, you should see " -#~ "something similar to this:" +#~ "Our “How to” guides should have " +#~ "titles that continue the sencence “How" +#~ " to …”, for example, “How to " +#~ "upgrade to Flower 1.0”." #~ msgstr "" -#~ msgid "Find the source file in `doc/source`" +#~ msgid "" +#~ "This issue is about changing the " +#~ "title of a doc from present " +#~ "continious to present simple." #~ msgstr "" #~ msgid "" -#~ "Make the change in the `.rst` file" -#~ " (beware, the dashes under the title" -#~ " should be the same length as " -#~ "the title itself)" +#~ "Let's take the example of “Saving " +#~ "Progress” which we changed to “Save " +#~ "Progress”. Does this pass our check?" 
#~ msgstr "" -#~ msgid "Change the file name to `save-progress.rst`" +#~ msgid "Before: ”How to saving progress” ❌" #~ msgstr "" -#~ msgid "Add a redirect rule to `doc/source/conf.py`" +#~ msgid "After: ”How to save progress” ✅" #~ msgstr "" #~ msgid "" -#~ "This will cause a redirect from " -#~ "`saving-progress.html` to `save-progress.html`," -#~ " old links will continue to work." +#~ "This is a tiny change, but it’ll" +#~ " allow us to test your end-" +#~ "to-end setup. After cloning and " +#~ "setting up the Flower repo, here’s " +#~ "what you should do:" #~ msgstr "" #~ msgid "" -#~ "For the lateral navigation bar to " -#~ "work properly, it is very important " -#~ "to update the `index.rst` file as " -#~ "well. This is where we define the" -#~ " whole arborescence of the navbar." +#~ "Build the docs and check the " +#~ "result: ``_" #~ msgstr "" -#~ msgid "Find and modify the file name in `index.rst`" +#~ msgid "Here’s how to change the file name:" #~ msgstr "" -#~ msgid "Add CI job to deploy the staging system when the `main` branch changes" +#~ msgid "" +#~ "Commit the changes (commit messages are" +#~ " always imperative: “Do something”, in " +#~ "this case “Change …”)" #~ msgstr "" -#~ msgid "`Python 3.7 `_ or above" +#~ msgid "" +#~ "`Good first contributions " +#~ "`_, where you should" +#~ " particularly look into the " +#~ ":code:`baselines` contributions." #~ msgstr "" #~ msgid "" -#~ "First, clone the `Flower repository " -#~ "`_ from GitHub::" +#~ "If the section is completely empty " +#~ "(without any token) or non-existant, " +#~ "the changelog will just contain the " +#~ "title of the PR for the changelog" +#~ " entry, without any description." #~ msgstr "" #~ msgid "" -#~ "Second, create a virtual environment " -#~ "(and activate it). If you chose to" -#~ " use :code:`pyenv` (with the :code" -#~ ":`pyenv-virtualenv` plugin) and already " -#~ "have it installed , you can use" -#~ " the following convenience script (by " -#~ "default it will use :code:`Python " -#~ "3.8.17`, but you can change it by" -#~ " providing a specific :code:``)::" +#~ "Flower uses :code:`pyproject.toml` to manage" +#~ " dependencies and configure development " +#~ "tools (the ones which support it). " +#~ "Poetry is a build tool which " +#~ "supports `PEP 517 " +#~ "`_." #~ msgstr "" #~ msgid "" -#~ "If you don't have :code:`pyenv` " -#~ "installed, you can use the following " -#~ "script that will install pyenv, set " -#~ "it up and create the virtual " -#~ "environment (with :code:`Python 3.8.17` by " -#~ "default)::" +#~ "This tutorial will show you how to" +#~ " use Flower to build a federated " +#~ "version of an existing machine learning" +#~ " workload with `FedBN `_, a federated training strategy" +#~ " designed for non-iid data. We " +#~ "are using PyTorch to train a " +#~ "Convolutional Neural Network(with Batch " +#~ "Normalization layers) on the CIFAR-10 " +#~ "dataset. When applying FedBN, only few" +#~ " changes needed compared to `Example: " +#~ "PyTorch - From Centralized To Federated" +#~ " `_." #~ msgstr "" #~ msgid "" -#~ "Third, install the Flower package in " -#~ "development mode (think :code:`pip install " -#~ "-e`) along with all necessary " -#~ "dependencies::" +#~ "All files are revised based on " +#~ "`Example: PyTorch - From Centralized To" +#~ " Federated `_. 
The " +#~ "only thing to do is modifying the" +#~ " file called :code:`cifar.py`, revised part" +#~ " is shown below:" #~ msgstr "" #~ msgid "" -#~ "Developers could run the full set " -#~ "of Github Actions workflows under their" -#~ " local environment by using `Act " -#~ "_`. Please refer to" -#~ " the installation instructions under the" -#~ " linked repository and run the next" -#~ " command under Flower main cloned " -#~ "repository folder::" +#~ "So far this should all look fairly" +#~ " familiar if you've used PyTorch " +#~ "before. Let's take the next step " +#~ "and use what we've built to create" +#~ " a federated learning system within " +#~ "FedBN, the sytstem consists of one " +#~ "server and two clients." +#~ msgstr "" + +#~ msgid "" +#~ "If you have read `Example: PyTorch " +#~ "- From Centralized To Federated " +#~ "`_, the following" +#~ " parts are easy to follow, onyl " +#~ ":code:`get_parameters` and :code:`set_parameters` " +#~ "function in :code:`client.py` needed to " +#~ "revise. If not, please read the " +#~ "`Example: PyTorch - From Centralized To" +#~ " Federated `_. first." +#~ msgstr "" + +#~ msgid "" +#~ "We can go a bit deeper and " +#~ "see that :code:`server.py` simply launches " +#~ "a server that will coordinate three " +#~ "rounds of training. Flower Servers are" +#~ " very customizable, but for simple " +#~ "workloads, we can start a server " +#~ "using the :ref:`start_server ` function and leave " +#~ "all the configuration possibilities at " +#~ "their default values, as seen below." +#~ msgstr "" + +#~ msgid "Differential privacy" +#~ msgstr "" + +#~ msgid "" +#~ "Flower provides differential privacy (DP) " +#~ "wrapper classes for the easy integration" +#~ " of the central DP guarantees " +#~ "provided by DP-FedAvg into training " +#~ "pipelines defined in any of the " +#~ "various ML frameworks that Flower is " +#~ "compatible with." #~ msgstr "" #~ msgid "" #~ "Please note that these components are" -#~ " still experimental, the correct " +#~ " still experimental; the correct " #~ "configuration of DP for a specific " #~ "task is still an unsolved problem." #~ msgstr "" +#~ msgid "" +#~ "The name DP-FedAvg is misleading " +#~ "since it can be applied on top " +#~ "of any FL algorithm that conforms " +#~ "to the general structure prescribed by" +#~ " the FedOpt family of algorithms." +#~ msgstr "" + +#~ msgid "DP-FedAvg" +#~ msgstr "" + +#~ msgid "" +#~ "DP-FedAvg, originally proposed by " +#~ "McMahan et al. [mcmahan]_ and extended" +#~ " by Andrew et al. [andrew]_, is " +#~ "essentially FedAvg with the following " +#~ "modifications." +#~ msgstr "" + +#~ msgid "" +#~ "**Clipping** : The influence of each " +#~ "client's update is bounded by clipping" +#~ " it. This is achieved by enforcing" +#~ " a cap on the L2 norm of " +#~ "the update, scaling it down if " +#~ "needed." +#~ msgstr "" + +#~ msgid "" +#~ "**Noising** : Gaussian noise, calibrated " +#~ "to the clipping threshold, is added " +#~ "to the average computed at the " +#~ "server." +#~ msgstr "" + #~ msgid "" #~ "The distribution of the update norm " #~ "has been shown to vary from " #~ "task-to-task and to evolve as " -#~ "training progresses. Therefore, we use " -#~ "an adaptive approach [andrew]_ that " -#~ "continuously adjusts the clipping threshold" -#~ " to track a prespecified quantile of" -#~ " the update norm distribution." +#~ "training progresses. 
This variability is " +#~ "crucial in understanding its impact on" +#~ " differential privacy guarantees, emphasizing " +#~ "the need for an adaptive approach " +#~ "[andrew]_ that continuously adjusts the " +#~ "clipping threshold to track a " +#~ "prespecified quantile of the update norm" +#~ " distribution." +#~ msgstr "" + +#~ msgid "Simplifying Assumptions" #~ msgstr "" #~ msgid "" #~ "We make (and attempt to enforce) a" #~ " number of assumptions that must be" #~ " satisfied to ensure that the " -#~ "training process actually realises the " +#~ "training process actually realizes the " #~ ":math:`(\\epsilon, \\delta)` guarantees the " #~ "user has in mind when configuring " #~ "the setup." #~ msgstr "" +#~ msgid "" +#~ "**Fixed-size subsampling** :Fixed-size " +#~ "subsamples of the clients must be " +#~ "taken at each round, as opposed to" +#~ " variable-sized Poisson subsamples." +#~ msgstr "" + +#~ msgid "" +#~ "**Unweighted averaging** : The contributions" +#~ " from all the clients must weighted" +#~ " equally in the aggregate to " +#~ "eliminate the requirement for the server" +#~ " to know in advance the sum of" +#~ " the weights of all clients available" +#~ " for selection." +#~ msgstr "" + +#~ msgid "" +#~ "**No client failures** : The set " +#~ "of available clients must stay constant" +#~ " across all rounds of training. In" +#~ " other words, clients cannot drop out" +#~ " or fail." +#~ msgstr "" + #~ msgid "" #~ "The first two are useful for " #~ "eliminating a multitude of complications " #~ "associated with calibrating the noise to" -#~ " the clipping threshold while the " +#~ " the clipping threshold, while the " #~ "third one is required to comply " #~ "with the assumptions of the privacy " #~ "analysis." #~ msgstr "" +#~ msgid "" +#~ "These restrictions are in line with " +#~ "constraints imposed by Andrew et al. " +#~ "[andrew]_." +#~ msgstr "" + +#~ msgid "Customizable Responsibility for Noise injection" +#~ msgstr "" + +#~ msgid "" +#~ "In contrast to other implementations " +#~ "where the addition of noise is " +#~ "performed at the server, you can " +#~ "configure the site of noise injection" +#~ " to better match your threat model." +#~ " We provide users with the " +#~ "flexibility to set up the training " +#~ "such that each client independently adds" +#~ " a small amount of noise to the" +#~ " clipped update, with the result that" +#~ " simply aggregating the noisy updates " +#~ "is equivalent to the explicit addition" +#~ " of noise to the non-noisy " +#~ "aggregate at the server." +#~ msgstr "" + +#~ msgid "" +#~ "To be precise, if we let :math:`m`" +#~ " be the number of clients sampled " +#~ "each round and :math:`\\sigma_\\Delta` be " +#~ "the scale of the total Gaussian " +#~ "noise that needs to be added to" +#~ " the sum of the model updates, " +#~ "we can use simple maths to show" +#~ " that this is equivalent to each " +#~ "client adding noise with scale " +#~ ":math:`\\sigma_\\Delta/\\sqrt{m}`." +#~ msgstr "" + +#~ msgid "Wrapper-based approach" +#~ msgstr "" + +#~ msgid "" +#~ "Introducing DP to an existing workload" +#~ " can be thought of as adding an" +#~ " extra layer of security around it." +#~ " This inspired us to provide the " +#~ "additional server and client-side logic" +#~ " needed to make the training process" +#~ " differentially private as wrappers for " +#~ "instances of the :code:`Strategy` and " +#~ ":code:`NumPyClient` abstract classes respectively." 
+#~ " This wrapper-based approach has the" +#~ " advantage of being easily composable " +#~ "with other wrappers that someone might" +#~ " contribute to the Flower library in" +#~ " the future, e.g., for secure " +#~ "aggregation. Using Inheritance instead can " +#~ "be tedious because that would require" +#~ " the creation of new sub- classes " +#~ "every time a new class implementing " +#~ ":code:`Strategy` or :code:`NumPyClient` is " +#~ "defined." +#~ msgstr "" + +#~ msgid "Server-side logic" +#~ msgstr "" + #~ msgid "" #~ "The first version of our solution " #~ "was to define a decorator whose " #~ "constructor accepted, among other things, " -#~ "a boolean valued variable indicating " +#~ "a boolean-valued variable indicating " #~ "whether adaptive clipping was to be " #~ "enabled or not. We quickly realized " #~ "that this would clutter its " @@ -18063,6 +20823,34 @@ msgstr "" #~ "noising is to be performed." #~ msgstr "" +#~ msgid "" +#~ "The server-side capabilities required " +#~ "for the original version of DP-" +#~ "FedAvg, i.e., the one which performed" +#~ " fixed clipping, can be completely " +#~ "captured with the help of wrapper " +#~ "logic for just the following two " +#~ "methods of the :code:`Strategy` abstract " +#~ "class." +#~ msgstr "" + +#~ msgid "" +#~ ":code:`configure_fit()` : The config " +#~ "dictionary being sent by the wrapped " +#~ ":code:`Strategy` to each client needs to" +#~ " be augmented with an additional " +#~ "value equal to the clipping threshold" +#~ " (keyed under :code:`dpfedavg_clip_norm`) and," +#~ " if :code:`server_side_noising=true`, another one" +#~ " equal to the scale of the " +#~ "Gaussian noise that needs to be " +#~ "added at the client (keyed under " +#~ ":code:`dpfedavg_noise_stddev`). This entails " +#~ "*post*-processing of the results returned " +#~ "by the wrappee's implementation of " +#~ ":code:`configure_fit()`." +#~ msgstr "" + #~ msgid "" #~ ":code:`aggregate_fit()`: We check whether any" #~ " of the sampled clients dropped out" @@ -18083,7 +20871,7 @@ msgstr "" #~ "is perturbed with an amount of " #~ "noise equal to what it would have" #~ " been subjected to had client-side" -#~ " noising being enabled. This entails " +#~ " noising being enabled. This entails " #~ "*pre*-processing of the arguments to " #~ "this method before passing them on " #~ "to the wrappee's implementation of " @@ -18091,291 +20879,491 @@ msgstr "" #~ msgstr "" #~ msgid "" -#~ "McMahan, H. Brendan, et al. \"Learning" -#~ " differentially private recurrent language " -#~ "models.\" arXiv preprint arXiv:1710.06963 " -#~ "(2017)." +#~ "We can't directly change the aggregation" +#~ " function of the wrapped strategy to" +#~ " force it to add noise to the" +#~ " aggregate, hence we simulate client-" +#~ "side noising to implement server-side" +#~ " noising." +#~ msgstr "" + +#~ msgid "" +#~ "These changes have been put together " +#~ "into a class called :code:`DPFedAvgFixed`, " +#~ "whose constructor accepts the strategy " +#~ "being decorated, the clipping threshold " +#~ "and the number of clients sampled " +#~ "every round as compulsory arguments. The" +#~ " user is expected to specify the " +#~ "clipping threshold since the order of" +#~ " magnitude of the update norms is " +#~ "highly dependent on the model being " +#~ "trained and providing a default value" +#~ " would be misleading. 
The number of" +#~ " clients sampled at every round is" +#~ " required to calculate the amount of" +#~ " noise that must be added to " +#~ "each individual update, either by the" +#~ " server or the clients." +#~ msgstr "" + +#~ msgid "" +#~ "The additional functionality required to " +#~ "facilitate adaptive clipping has been " +#~ "provided in :code:`DPFedAvgAdaptive`, a " +#~ "subclass of :code:`DPFedAvgFixed`. It " +#~ "overrides the above-mentioned methods to" +#~ " do the following." +#~ msgstr "" + +#~ msgid "" +#~ ":code:`configure_fit()` : It intercepts the" +#~ " config dict returned by " +#~ ":code:`super.configure_fit()` to add the " +#~ "key-value pair " +#~ ":code:`dpfedavg_adaptive_clip_enabled:True` to it, " +#~ "which the client interprets as an " +#~ "instruction to include an indicator bit" +#~ " (1 if update norm <= clipping " +#~ "threshold, 0 otherwise) in the results" +#~ " returned by it." +#~ msgstr "" + +#~ msgid "" +#~ ":code:`aggregate_fit()` : It follows a " +#~ "call to :code:`super.aggregate_fit()` with one" +#~ " to :code:`__update_clip_norm__()`, a procedure" +#~ " which adjusts the clipping threshold " +#~ "on the basis of the indicator bits" +#~ " received from the sampled clients." +#~ msgstr "" + +#~ msgid "Client-side logic" +#~ msgstr "" + +#~ msgid "" +#~ "The client-side capabilities required " +#~ "can be completely captured through " +#~ "wrapper logic for just the :code:`fit()`" +#~ " method of the :code:`NumPyClient` abstract" +#~ " class. To be precise, we need " +#~ "to *post-process* the update computed" +#~ " by the wrapped client to clip " +#~ "it, if necessary, to the threshold " +#~ "value supplied by the server as " +#~ "part of the config dictionary. In " +#~ "addition to this, it may need to" +#~ " perform some extra work if either" +#~ " (or both) of the following keys " +#~ "are also present in the dict." +#~ msgstr "" + +#~ msgid "" +#~ ":code:`dpfedavg_noise_stddev` : Generate and " +#~ "add the specified amount of noise " +#~ "to the clipped update." +#~ msgstr "" + +#~ msgid "" +#~ ":code:`dpfedavg_adaptive_clip_enabled` : Augment the" +#~ " metrics dict in the :code:`FitRes` " +#~ "object being returned to the server " +#~ "with an indicator bit, calculated as " +#~ "described earlier." +#~ msgstr "" + +#~ msgid "Performing the :math:`(\\epsilon, \\delta)` analysis" +#~ msgstr "" + +#~ msgid "" +#~ "Assume you have trained for :math:`n`" +#~ " rounds with sampling fraction :math:`q`" +#~ " and noise multiplier :math:`z`. In " +#~ "order to calculate the :math:`\\epsilon` " +#~ "value this would result in for a" +#~ " particular :math:`\\delta`, the following " +#~ "script may be used." +#~ msgstr "" + +#~ msgid "" +#~ "McMahan et al. \"Learning Differentially " +#~ "Private Recurrent Language Models.\" " +#~ "International Conference on Learning " +#~ "Representations (ICLR), 2017." #~ msgstr "" #~ msgid "" #~ "Andrew, Galen, et al. \"Differentially " -#~ "private learning with adaptive clipping.\" " +#~ "Private Learning with Adaptive Clipping.\" " #~ "Advances in Neural Information Processing " -#~ "Systems 34 (2021): 17455-17466." +#~ "Systems (NeurIPS), 2021." #~ msgstr "" #~ msgid "" -#~ "The following command can be used " -#~ "to verfiy if Flower was successfully " -#~ "installed. If everything worked, it " -#~ "should print the version of Flower " -#~ "to the command line::" +#~ "This can be achieved by customizing " +#~ "an existing strategy or by `implementing" +#~ " a custom strategy from scratch " +#~ "`_. 
Here's a nonsensical " +#~ "example that customizes :code:`FedAvg` by " +#~ "adding a custom ``\"hello\": \"world\"`` " +#~ "configuration key/value pair to the " +#~ "config dict of a *single client* " +#~ "(only the first client in the " +#~ "list, the other clients in this " +#~ "round to not receive this \"special\"" +#~ " config value):" #~ msgstr "" -#~ msgid "flwr (Python API reference)" +#~ msgid "" +#~ "More sophisticated implementations can use " +#~ ":code:`configure_fit` to implement custom " +#~ "client selection logic. A client will" +#~ " only participate in a round if " +#~ "the corresponding :code:`ClientProxy` is " +#~ "included in the the list returned " +#~ "from :code:`configure_fit`." #~ msgstr "" -#~ msgid "start_client" +#~ msgid "" +#~ "More sophisticated implementations can use " +#~ ":code:`configure_evaluate` to implement custom " +#~ "client selection logic. A client will" +#~ " only participate in a round if " +#~ "the corresponding :code:`ClientProxy` is " +#~ "included in the the list returned " +#~ "from :code:`configure_evaluate`." #~ msgstr "" -#~ msgid "start_numpy_client" +#~ msgid "" +#~ "`How to run Flower using Docker " +#~ "`_" #~ msgstr "" -#~ msgid "start_simulation" +#~ msgid "" +#~ "Ray Dashboard: ``_" #~ msgstr "" -#~ msgid "server.start_server" +#~ msgid "" +#~ "Ray Metrics: ``_" #~ msgstr "" -#~ msgid "server.strategy" +#~ msgid "Enjoy building more robust and flexible ``ClientApp``s with mods!" #~ msgstr "" -#~ msgid "server.strategy.Strategy" +#~ msgid "" +#~ ":py:obj:`ClientApp `\\ " +#~ "\\(client\\_fn\\[\\, mods\\]\\)" #~ msgstr "" -#~ msgid "server.strategy.FedAvg" +#~ msgid ":py:obj:`flwr.server.driver `\\" #~ msgstr "" -#~ msgid "server.strategy.FedAvgM" +#~ msgid "Flower driver SDK." #~ msgstr "" -#~ msgid "server.strategy.FedMedian" +#~ msgid "driver" #~ msgstr "" -#~ msgid "server.strategy.QFedAvg" +#~ msgid "" +#~ ":py:obj:`start_driver `\\ " +#~ "\\(\\*\\[\\, server\\_address\\, server\\, ...\\]\\)" #~ msgstr "" -#~ msgid "server.strategy.FaultTolerantFedAvg" +#~ msgid "" +#~ ":py:obj:`Driver `\\ " +#~ "\\(\\[driver\\_service\\_address\\, ...\\]\\)" #~ msgstr "" -#~ msgid "server.strategy.FedOpt" +#~ msgid "" +#~ ":py:obj:`GrpcDriver `\\ " +#~ "\\(\\[driver\\_service\\_address\\, ...\\]\\)" #~ msgstr "" -#~ msgid "server.strategy.FedProx" +#~ msgid "`GrpcDriver` provides access to the gRPC Driver API/service." #~ msgstr "" -#~ msgid "server.strategy.FedAdagrad" +#~ msgid ":py:obj:`get_nodes `\\ \\(\\)" #~ msgstr "" -#~ msgid "server.strategy.FedAdam" +#~ msgid "" +#~ ":py:obj:`pull_task_res " +#~ "`\\ \\(task\\_ids\\)" #~ msgstr "" -#~ msgid "server.strategy.FedYogi" +#~ msgid "Get task results." #~ msgstr "" -#~ msgid "server.strategy.FedTrimmedAvg" +#~ msgid "" +#~ ":py:obj:`push_task_ins " +#~ "`\\ " +#~ "\\(task\\_ins\\_list\\)" #~ msgstr "" -#~ msgid "server.strategy.Krum" +#~ msgid "Schedule tasks." #~ msgstr "" -#~ msgid "server.strategy.FedXgbNnAvg" +#~ msgid "GrpcDriver" #~ msgstr "" -#~ msgid "server.strategy.DPFedAvgAdaptive" +#~ msgid ":py:obj:`connect `\\ \\(\\)" #~ msgstr "" -#~ msgid "server.strategy.DPFedAvgFixed" +#~ msgid "Connect to the Driver API." #~ msgstr "" #~ msgid "" -#~ "**Fix the incorrect return types of " -#~ "Strategy** " -#~ "([#2432](https://github.com/adap/flower/pull/2432/files))" +#~ ":py:obj:`create_run " +#~ "`\\ \\(req\\)" +#~ msgstr "" + +#~ msgid "Request for run ID." 
#~ msgstr "" #~ msgid "" -#~ "The types of the return values in" -#~ " the docstrings in two methods " -#~ "(`aggregate_fit` and `aggregate_evaluate`) now " -#~ "match the hint types in the code." +#~ ":py:obj:`disconnect " +#~ "`\\ \\(\\)" +#~ msgstr "" + +#~ msgid "Disconnect from the Driver API." #~ msgstr "" #~ msgid "" -#~ "Using the `client_fn`, Flower clients " -#~ "can interchangeably run as standalone " -#~ "processes (i.e. via `start_client`) or " -#~ "in simulation (i.e. via `start_simulation`)" -#~ " without requiring changes to how the" -#~ " client class is defined and " -#~ "instantiated. Calling `start_numpy_client` is " -#~ "now deprecated." +#~ ":py:obj:`get_nodes `\\" +#~ " \\(req\\)" +#~ msgstr "" + +#~ msgid "Get client IDs." #~ msgstr "" #~ msgid "" -#~ "**Update Flower Examples** " -#~ "([#2384](https://github.com/adap/flower/pull/2384)), " -#~ "([#2425](https://github.com/adap/flower/pull/2425))" +#~ ":py:obj:`pull_task_res " +#~ "`\\ \\(req\\)" #~ msgstr "" #~ msgid "" -#~ "**General updates to baselines** " -#~ "([#2301](https://github.com/adap/flower/pull/2301), " -#~ "[#2305](https://github.com/adap/flower/pull/2305), " -#~ "[#2307](https://github.com/adap/flower/pull/2307), " -#~ "[#2327](https://github.com/adap/flower/pull/2327), " -#~ "[#2435](https://github.com/adap/flower/pull/2435))" +#~ ":py:obj:`push_task_ins " +#~ "`\\ \\(req\\)" #~ msgstr "" #~ msgid "" -#~ "**General updates to the simulation " -#~ "engine** ([#2331](https://github.com/adap/flower/pull/2331), " -#~ "[#2447](https://github.com/adap/flower/pull/2447), " -#~ "[#2448](https://github.com/adap/flower/pull/2448))" +#~ "Optionally specify the type of actor " +#~ "to use. The actor object, which " +#~ "persists throughout the simulation, will " +#~ "be the process in charge of " +#~ "running the clients' jobs (i.e. their" +#~ " `fit()` method)." #~ msgstr "" #~ msgid "" -#~ "**General improvements** " -#~ "([#2309](https://github.com/adap/flower/pull/2309), " -#~ "[#2310](https://github.com/adap/flower/pull/2310), " -#~ "[2313](https://github.com/adap/flower/pull/2313), " -#~ "[#2316](https://github.com/adap/flower/pull/2316), " -#~ "[2317](https://github.com/adap/flower/pull/2317),[#2349](https://github.com/adap/flower/pull/2349)," -#~ " [#2360](https://github.com/adap/flower/pull/2360), " -#~ "[#2402](https://github.com/adap/flower/pull/2402), " -#~ "[#2446](https://github.com/adap/flower/pull/2446))" +#~ "Much effort went into a completely " +#~ "restructured Flower docs experience. The " +#~ "documentation on [flower.ai/docs](flower.ai/docs) is" +#~ " now divided into Flower Framework, " +#~ "Flower Baselines, Flower Android SDK, " +#~ "Flower iOS SDK, and code example " +#~ "projects." #~ msgstr "" #~ msgid "" -#~ "`flower-superlink --driver-api-address " -#~ "\"0.0.0.0:8081\" --fleet-api-address " -#~ "\"0.0.0.0:8086\"`" +#~ "The first preview release of Flower " +#~ "Baselines has arrived! We're kickstarting " +#~ "Flower Baselines with implementations of " +#~ "FedOpt (FedYogi, FedAdam, FedAdagrad), FedBN," +#~ " and FedAvgM. Check the documentation " +#~ "on how to use [Flower " +#~ "Baselines](https://flower.ai/docs/using-baselines.html). " +#~ "With this first preview release we're" +#~ " also inviting the community to " +#~ "[contribute their own " +#~ "baselines](https://flower.ai/docs/contributing-baselines.html)." #~ msgstr "" #~ msgid "" -#~ "That's it for the client. 
We only" -#~ " have to implement :code:`Client` or " -#~ ":code:`NumPyClient` and call " -#~ ":code:`fl.client.start_client()`. The string " -#~ ":code:`\"0.0.0.0:8080\"` tells the client " -#~ "which server to connect to. In our" -#~ " case we can run the server and" -#~ " the client on the same machine, " -#~ "therefore we use :code:`\"0.0.0.0:8080\"`. If" -#~ " we run a truly federated workload" -#~ " with the server and clients running" -#~ " on different machines, all that " -#~ "needs to change is the " -#~ ":code:`server_address` we pass to the " -#~ "client." +#~ "`Quickstart TensorFlow (Tutorial) " +#~ "`_" #~ msgstr "" #~ msgid "" -#~ "That's it for the client. We only" -#~ " have to implement :code:`Client` or " -#~ ":code:`NumPyClient` and call " -#~ ":code:`fl.client.start_client()`. The string " -#~ ":code:`\"[::]:8080\"` tells the client which" -#~ " server to connect to. In our " -#~ "case we can run the server and " -#~ "the client on the same machine, " -#~ "therefore we use :code:`\"[::]:8080\"`. If " -#~ "we run a truly federated workload " -#~ "with the server and clients running " -#~ "on different machines, all that needs" -#~ " to change is the :code:`server_address`" -#~ " we point the client at." +#~ "`Quickstart PyTorch (Tutorial) " +#~ "`_" #~ msgstr "" #~ msgid "" -#~ "Let's build a horizontal federated " -#~ "learning system using XGBoost and " -#~ "Flower!" +#~ "`PyTorch: From Centralized To Federated " +#~ "(Tutorial) `_" #~ msgstr "" #~ msgid "" -#~ "Please refer to the `full code " -#~ "example `_ to learn " -#~ "more." +#~ "The useage examples in `flwr_example` " +#~ "are deprecated and will be removed " +#~ "in the future. New examples are " +#~ "provided as standalone projects in " +#~ "`examples `_." #~ msgstr "" #~ msgid "" -#~ "In this notebook, we'll build a " -#~ "federated learning system using Flower " -#~ "and PyTorch. In part 1, we use " -#~ "PyTorch for the model training pipeline" -#~ " and data loading. In part 2, " -#~ "we continue to federate the PyTorch-" -#~ "based pipeline using Flower." +#~ "`ImageNet-2012 `_ is " +#~ "one of the major computer vision " +#~ "datasets. The Flower ImageNet example " +#~ "uses PyTorch to train a ResNet-18 " +#~ "classifier in a federated learning setup" +#~ " with ten clients." +#~ msgstr "" + +#~ msgid ":fa:`eye,mr-1` Can Flower run on Juptyter Notebooks / Google Colab?" #~ msgstr "" #~ msgid "" -#~ "Next, we install the necessary packages" -#~ " for PyTorch (``torch`` and " -#~ "``torchvision``) and Flower (``flwr``):" +#~ "`Flower meets KOSMoS `_." #~ msgstr "" #~ msgid "" -#~ "Federated learning can be applied to " -#~ "many different types of tasks across " -#~ "different domains. In this tutorial, we" -#~ " introduce federated learning by training" -#~ " a simple convolutional neural network " -#~ "(CNN) on the popular CIFAR-10 dataset." -#~ " CIFAR-10 can be used to train " -#~ "image classifiers that distinguish between " -#~ "images from ten different classes:" +#~ "If you want to check out " +#~ "everything put together, you should " +#~ "check out the full code example: " +#~ "[https://github.com/adap/flower/tree/main/examples/quickstart-" +#~ "huggingface](https://github.com/adap/flower/tree/main/examples" +#~ "/quickstart-huggingface)." #~ msgstr "" #~ msgid "" -#~ "Each organization will act as a " -#~ "client in the federated learning system." 
-#~ " So having ten organizations participate" -#~ " in a federation means having ten " -#~ "clients connected to the federated " -#~ "learning server:" +#~ "First of all, for running the " +#~ "Flower Python server, it is recommended" +#~ " to create a virtual environment and" +#~ " run everything within a `virtualenv " +#~ "`_. " +#~ "For the Flower client implementation in" +#~ " iOS, it is recommended to use " +#~ "Xcode as our IDE." #~ msgstr "" #~ msgid "" -#~ "Let's now load the CIFAR-10 training " -#~ "and test set, partition them into " -#~ "ten smaller datasets (each split into" -#~ " training and validation set), and " -#~ "wrap the resulting partitions by " -#~ "creating a PyTorch ``DataLoader`` for " -#~ "each of them:" +#~ "Since CoreML does not allow the " +#~ "model parameters to be seen before " +#~ "training, and accessing the model " +#~ "parameters during or after the training" +#~ " can only be done by specifying " +#~ "the layer name, we need to know" +#~ " this informations beforehand, through " +#~ "looking at the model specification, " +#~ "which are written as proto files. " +#~ "The implementation can be seen in " +#~ ":code:`MLModelInspect`." #~ msgstr "" -#~ msgid "|ed6498a023f2477a9ccd57ee4514bda4|" +#~ msgid "" +#~ "After we have all of the necessary" +#~ " informations, let's create our Flower " +#~ "client." #~ msgstr "" -#~ msgid "|5a4f742489ac4f819afefdd4dc9ab272|" +#~ msgid "" +#~ "MXNet is no longer maintained and " +#~ "has been moved into `Attic " +#~ "`_. As a " +#~ "result, we would encourage you to " +#~ "use other ML frameworks alongise Flower," +#~ " for example, PyTorch. This tutorial " +#~ "might be removed in future versions " +#~ "of Flower." #~ msgstr "" -#~ msgid "|3331c80cd05045f6a56524d8e3e76d0c|" +#~ msgid "" +#~ "It is recommended to create a " +#~ "virtual environment and run everything " +#~ "within this `virtualenv `_." #~ msgstr "" -#~ msgid "|4987b26884ec4b2c8f06c1264bcebe60|" +#~ msgid "" +#~ "First of all, it is recommended to" +#~ " create a virtual environment and run" +#~ " everything within a `virtualenv " +#~ "`_." #~ msgstr "" -#~ msgid "|ec8ae2d778aa493a986eb2fa29c220e5|" +#~ msgid "Since we want to use scikt-learn, let's go ahead and install it:" #~ msgstr "" -#~ msgid "|b8949d0669fe4f8eadc9a4932f4e9c57|" +#~ msgid "" +#~ "We load the MNIST dataset from " +#~ "`OpenML `_, a popular" +#~ " image classification dataset of " +#~ "handwritten digits for machine learning. " +#~ "The utility :code:`utils.load_mnist()` downloads " +#~ "the training and test data. The " +#~ "training set is split afterwards into" +#~ " 10 partitions with :code:`utils.partition()`." #~ msgstr "" -#~ msgid "|94ff30bdcd09443e8488b5f29932a541|" +#~ msgid "" +#~ "Now that you have known how " +#~ "federated XGBoost work with Flower, it's" +#~ " time to run some more comprehensive" +#~ " experiments by customising the " +#~ "experimental settings. In the xgboost-" +#~ "comprehensive example (`full code " +#~ "`_), we provide more options " +#~ "to define various experimental setups, " +#~ "including aggregation strategies, data " +#~ "partitioning and centralised/distributed evaluation." +#~ " We also support `Flower simulation " +#~ "`_ making it easy to " +#~ "simulate large client cohorts in a " +#~ "resource-aware manner. Let's take a " +#~ "look!" 
#~ msgstr "" -#~ msgid "|48dccf1d6d0544bba8917d2783a47719|" +#~ msgid "|31e4b1afa87c4b968327bbeafbf184d4|" #~ msgstr "" -#~ msgid "|0366618db96b4f329f0d4372d1150fde|" +#~ msgid "|c9d935b4284e4c389a33d86b33e07c0a|" #~ msgstr "" -#~ msgid "|ac80eddc76e6478081b1ca35eed029c0|" +#~ msgid "|00727b5faffb468f84dd1b03ded88638|" #~ msgstr "" -#~ msgid "|1ac94140c317450e89678db133c7f3c2|" +#~ msgid "|daf0cf0ff4c24fd29439af78416cf47b|" #~ msgstr "" -#~ msgid "|f8850c6e96fc4430b55e53bba237a7c0|" +#~ msgid "|9f093007080d471d94ca90d3e9fde9b6|" #~ msgstr "" -#~ msgid "|4a368fdd3fc34adabd20a46752a68582|" +#~ msgid "|46a26e6150e0479fbd3dfd655f36eb13|" #~ msgstr "" -#~ msgid "|40f69c17bb444652a7c8dfe577cd120e|" +#~ msgid "|3daba297595c4c7fb845d90404a6179a|" +#~ msgstr "" + +#~ msgid "|5769874fa9c4455b80b2efda850d39d7|" +#~ msgstr "" + +#~ msgid "|ba47ffb421814b0f8f9fa5719093d839|" +#~ msgstr "" + +#~ msgid "|aeac5bf79cbf497082e979834717e01b|" +#~ msgstr "" + +#~ msgid "|ce27ed4bbe95459dba016afc42486ba2|" +#~ msgstr "" + +#~ msgid "|ae94a7f71dda443cbec2385751427d41|" +#~ msgstr "" + +#~ msgid "|e61fce4d43d243e7bb08bdde97d81ce6|" +#~ msgstr "" + +#~ msgid "|08cb60859b07461588fe44e55810b050|" #~ msgstr "" diff --git a/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po b/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po index f22b74db8896..38ccb5239d30 100644 --- a/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po +++ b/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po @@ -7,18 +7,17 @@ msgid "" msgstr "" "Project-Id-Version: Flower main\n" "Report-Msgid-Bugs-To: \n" -"POT-Creation-Date: 2024-02-13 11:23+0100\n" +"POT-Creation-Date: 2024-03-15 14:32+0000\n" "PO-Revision-Date: 2024-02-19 11:37+0000\n" "Last-Translator: Yan Gao \n" -"Language-Team: Chinese (Simplified) \n" "Language: zh_Hans\n" +"Language-Team: Chinese (Simplified) \n" +"Plural-Forms: nplurals=1; plural=0;\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" -"Plural-Forms: nplurals=1; plural=0;\n" -"X-Generator: Weblate 5.4\n" -"Generated-By: Babel 2.13.1\n" +"Generated-By: Babel 2.14.0\n" #: ../../source/contributor-explanation-architecture.rst:2 msgid "Flower Architecture" @@ -85,9 +84,8 @@ msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:19 msgid "" -"Please follow the first section on `Run Flower using Docker " -"`_ " -"which covers this step in more detail." +"Please follow the first section on :doc:`Run Flower using Docker ` which covers this step in more detail." msgstr "" #: ../../source/contributor-how-to-build-docker-images.rst:23 @@ -293,6 +291,7 @@ msgid "Contribute translations" msgstr "贡献译文" #: ../../source/contributor-how-to-contribute-translations.rst:4 +#, fuzzy msgid "" "Since `Flower 1.5 `_ we have introduced translations to " @@ -301,7 +300,7 @@ msgid "" "to help us in our effort to make Federated Learning accessible to as many" " people as possible by contributing to those translations! This might " "also be a great opportunity for those wanting to become open source " -"contributors with little prerequistes." +"contributors with little prerequisites." 
msgstr "" "从 `Flower 1.5 `_ " @@ -362,8 +361,9 @@ msgid "This is what the interface looks like:" msgstr "这就是界面的样子:" #: ../../source/contributor-how-to-contribute-translations.rst:47 +#, fuzzy msgid "" -"You input your translation in the textbox at the top and then, once you " +"You input your translation in the text box at the top and then, once you " "are happy with it, you either press ``Save and continue`` (to save the " "translation and go to the next untranslated string), ``Save and stay`` " "(to save the translation and stay on the same page), ``Suggest`` (to add " @@ -408,11 +408,11 @@ msgstr "添加新语言" #: ../../source/contributor-how-to-contribute-translations.rst:69 msgid "" "If you want to add a new language, you will first have to contact us, " -"either on `Slack `_, or by opening an " -"issue on our `GitHub repo `_." +"either on `Slack `_, or by opening an issue" +" on our `GitHub repo `_." msgstr "" -"如果您想添加新语言,请先联系我们,可以在 `Slack `_ 上联系,也可以在我们的" -" `GitHub repo `_ 上提交问题。" +"如果您想添加新语言,请先联系我们,可以在 `Slack `_ 上联系,也可以在我们的 " +"`GitHub repo `_ 上提交问题。" #: ../../source/contributor-how-to-create-new-messages.rst:2 msgid "Creating New Messages" @@ -449,12 +449,13 @@ msgid "Message Types for Protocol Buffers" msgstr "协议缓冲区的信息类型" #: ../../source/contributor-how-to-create-new-messages.rst:32 +#, fuzzy msgid "" "The first thing we need to do is to define a message type for the RPC " "system in :code:`transport.proto`. Note that we have to do it for both " "the request and response messages. For more details on the syntax of " -"proto3, please see the `official documentation " -"`_." +"proto3, please see the `official documentation `_." msgstr "" "我们需要做的第一件事是在脚本code:`transport.proto`中定义 RPC " "系统的消息类型。请注意,我们必须对请求信息和响应信息都这样做。有关 proto3 语法的更多详情,请参阅官方文档 " @@ -575,9 +576,10 @@ msgid "" msgstr "工作区文件从本地文件系统加载,或复制或克隆到容器中。扩展在容器内安装和运行,在容器内它们可以完全访问工具、平台和文件系统。这意味着,只需连接到不同的容器,就能无缝切换整个开发环境。" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:11 +#, fuzzy msgid "" "Source: `Official VSCode documentation " -"`_" +"`_" msgstr "来源:`VSCode 官方文档 `_" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:15 @@ -618,18 +620,20 @@ msgid "" msgstr "在某些情况下,您的设置可能更复杂。有关这些情况,请参考以下资料:" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:23 +#, fuzzy msgid "" "`Developing inside a Container " -"`_" msgstr "" "在容器内开发 `_" #: ../../source/contributor-how-to-develop-in-vscode-dev-containers.rst:24 +#, fuzzy msgid "" "`Remote development in Containers " -"`_" +"`_" msgstr "容器中的远程开发 `_" #: ../../source/contributor-how-to-install-development-versions.rst:2 @@ -909,8 +913,8 @@ msgstr "在 ``changelog.md`` 中添加新的 ``Unreleased`` 部分。" #: ../../source/contributor-how-to-release-flower.rst:25 msgid "" -"Merge the pull request on the same day (i.e., before a new nightly release" -" gets published to PyPI)." +"Merge the pull request on the same day (i.e., before a new nightly " +"release gets published to PyPI)." msgstr "在同一天合并拉取请求(即在新版本发布到 PyPI 之前)。" #: ../../source/contributor-how-to-release-flower.rst:28 @@ -923,8 +927,8 @@ msgstr "释放前命名" #: ../../source/contributor-how-to-release-flower.rst:33 msgid "" -"PyPI supports pre-releases (alpha, beta, release candidate). Pre-releases " -"MUST use one of the following naming patterns:" +"PyPI supports pre-releases (alpha, beta, release candidate). 
Pre-releases" +" MUST use one of the following naming patterns:" msgstr "PyPI 支持预发布版本(alpha、beta、release candidate)。预发布版本必须使用以下命名模式之一:" #: ../../source/contributor-how-to-release-flower.rst:35 @@ -1193,8 +1197,8 @@ msgid "" "where to start to increase your chances of getting your PR accepted into " "the Flower codebase." msgstr "" -"我们欢迎为Flower做出代码贡献!然而,要知道从哪里开始并非易事。因此,我们提出" -"了一些建议,告诉您从哪里开始,以增加您的 PR 被 Flower 代码库接受的机会。" +"我们欢迎为Flower做出代码贡献!然而,要知道从哪里开始并非易事。因此,我们提出了一些建议,告诉您从哪里开始,以增加您的 PR 被 Flower" +" 代码库接受的机会。" #: ../../source/contributor-ref-good-first-contributions.rst:11 msgid "Where to start" @@ -1224,33 +1228,33 @@ msgid "Request for Flower Baselines" msgstr "Flower Baselines的申请" #: ../../source/contributor-ref-good-first-contributions.rst:25 +#, fuzzy msgid "" "If you are not familiar with Flower Baselines, you should probably check-" -"out our `contributing guide for baselines `_." +"out our `contributing guide for baselines " +"`_." msgstr "" "如果您对 Flower Baselines 还不熟悉,也许可以看看我们的 `Baselines贡献指南 " "`_。" #: ../../source/contributor-ref-good-first-contributions.rst:27 +#, fuzzy msgid "" "You should then check out the open `issues " "`_" " for baseline requests. If you find a baseline that you'd like to work on" -" and that has no assignes, feel free to assign it to yourself and start " +" and that has no assignees, feel free to assign it to yourself and start " "working on it!" msgstr "" -"然后查看开放的 `issues `_ baseline请求。如" -"果您发现了自己想做的baseline,而它还没有被分配,请随时把它分配给自己,然后开" -"始工作!" +"然后查看开放的 `issues " +"`_" +" baseline请求。如果您发现了自己想做的baseline,而它还没有被分配,请随时把它分配给自己,然后开始工作!" #: ../../source/contributor-ref-good-first-contributions.rst:31 msgid "" "Otherwise, if you don't find a baseline you'd like to work on, be sure to" " open a new issue with the baseline request template!" -msgstr "如果您没有找到想要做的baseline,请务必使用baseline请求模板打开一个新问题(" -"GitHub issue)!" +msgstr "如果您没有找到想要做的baseline,请务必使用baseline请求模板打开一个新问题(GitHub issue)!" #: ../../source/contributor-ref-good-first-contributions.rst:34 msgid "Request for examples" @@ -1261,8 +1265,7 @@ msgid "" "We wish we had more time to write usage examples because we believe they " "help users to get started with building what they want to build. Here are" " a few ideas where we'd be happy to accept a PR:" -msgstr "我们希望有更多的时间来撰写使用示例,因为我们相信这些示例可以帮助用户开始构建" -"他们想要的东西。以下是我们乐意接受 PR 的几个想法:" +msgstr "我们希望有更多的时间来撰写使用示例,因为我们相信这些示例可以帮助用户开始构建他们想要的东西。以下是我们乐意接受 PR 的几个想法:" #: ../../source/contributor-ref-good-first-contributions.rst:40 msgid "Llama 2 fine-tuning, with Hugging Face Transformers and PyTorch" @@ -1330,50 +1333,50 @@ msgid "" msgstr "本指南适用于想参与 Flower,但不习惯为 GitHub 项目贡献的人。" #: ../../source/contributor-tutorial-contribute-on-github.rst:6 +#, fuzzy msgid "" "If you're familiar with how contributing on GitHub works, you can " -"directly checkout our `getting started guide for contributors " -"`_." +"directly checkout our :doc:`getting started guide for contributors " +"`." 
msgstr "" -"如果您熟悉如何在 GitHub 上贡献,可以直接查看我们的 \"贡献者入门指南\" " -"`_ 和 " -"\"优秀的首次贡献示例\" `_。" +"如果您熟悉如何在 GitHub 上贡献,可以直接查看我们的 \"贡献者入门指南\" `_ 和 \"优秀的首次贡献示例\" " +"`_。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:11 +#: ../../source/contributor-tutorial-contribute-on-github.rst:10 msgid "Setting up the repository" msgstr "建立资源库" -#: ../../source/contributor-tutorial-contribute-on-github.rst:22 +#: ../../source/contributor-tutorial-contribute-on-github.rst:21 msgid "**Create a GitHub account and setup Git**" msgstr "**创建 GitHub 账户并设置 Git**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:14 +#: ../../source/contributor-tutorial-contribute-on-github.rst:13 +#, fuzzy msgid "" "Git is a distributed version control tool. This allows for an entire " "codebase's history to be stored and every developer's machine. It is a " "software that will need to be installed on your local machine, you can " -"follow this `guide `_ to set it up." +"follow this `guide `_ to set it up." msgstr "" "Git 是一种分布式版本控制工具。它可以将整个代码库的历史记录保存在每个开发人员的机器上。您需要在本地计算机上安装该软件,可以按照本指南 " "`_ 进行设置。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:17 +#: ../../source/contributor-tutorial-contribute-on-github.rst:16 msgid "" "GitHub, itself, is a code hosting platform for version control and " "collaboration. It allows for everyone to collaborate and work from " "anywhere on remote repositories." msgstr "GitHub 本身是一个用于版本控制和协作的代码托管平台。它允许每个人在任何地方对远程仓库进行协作和工作。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:19 +#: ../../source/contributor-tutorial-contribute-on-github.rst:18 msgid "" "If you haven't already, you will need to create an account on `GitHub " "`_." msgstr "如果还没有,您需要在 `GitHub `_ 上创建一个账户。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:21 +#: ../../source/contributor-tutorial-contribute-on-github.rst:20 msgid "" "The idea behind the generic Git and GitHub workflow boils down to this: " "you download code from a remote repository on GitHub, make changes " @@ -1383,21 +1386,22 @@ msgstr "" "通用的 Git 和 GitHub 工作流程背后的理念可以归结为:从 GitHub 上的远程仓库下载代码,在本地进行修改并使用 Git " "进行跟踪,然后将新的历史记录上传回 GitHub。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:33 +#: ../../source/contributor-tutorial-contribute-on-github.rst:32 msgid "**Forking the Flower repository**" msgstr "**叉花仓库**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:25 +#: ../../source/contributor-tutorial-contribute-on-github.rst:24 +#, fuzzy msgid "" "A fork is a personal copy of a GitHub repository. To create one for " -"Flower, you must navigate to https://github.com/adap/flower (while " +"Flower, you must navigate to ``_ (while " "connected to your GitHub account) and click the ``Fork`` button situated " "on the top right of the page." 
msgstr "" "fork 是 GitHub 仓库的个人副本。要为 Flower 创建一个 fork,您必须导航到 " "https://github.com/adap/flower(同时连接到您的 GitHub 账户),然后点击页面右上方的 ``Fork`` 按钮。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:30 +#: ../../source/contributor-tutorial-contribute-on-github.rst:29 msgid "" "You can change the name if you want, but this is not necessary as this " "version of Flower will be yours and will sit inside your own account " @@ -1407,11 +1411,11 @@ msgstr "" "您可以更改名称,但没有必要,因为这个版本的 Flower " "将是您自己的,并位于您自己的账户中(即,在您自己的版本库列表中)。创建完成后,您会在左上角看到自己的 Flower 版本。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:48 +#: ../../source/contributor-tutorial-contribute-on-github.rst:47 msgid "**Cloning your forked repository**" msgstr "**克隆你的分叉仓库**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:36 +#: ../../source/contributor-tutorial-contribute-on-github.rst:35 msgid "" "The next step is to download the forked repository on your machine to be " "able to make changes to it. On your forked repository page, you should " @@ -1421,28 +1425,28 @@ msgstr "" "下一步是在你的机器上下载分叉版本库,以便对其进行修改。在分叉版本库页面上,首先点击右侧的 \"代码 \"按钮,这样就能复制版本库的 HTTPS " "链接。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:42 +#: ../../source/contributor-tutorial-contribute-on-github.rst:41 msgid "" "Once you copied the \\, you can open a terminal on your machine, " "navigate to the place you want to download the repository to and type:" msgstr "一旦复制了 (),你就可以在你的机器上打开一个终端,导航到你想下载软件源的地方,然后键入:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:48 +#: ../../source/contributor-tutorial-contribute-on-github.rst:47 #, fuzzy msgid "" "This will create a ``flower/`` (or the name of your fork if you renamed " "it) folder in the current working directory." msgstr "这将在当前工作目录下创建一个 `flower/`(如果重命名了,则使用 fork 的名称)文件夹。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:67 +#: ../../source/contributor-tutorial-contribute-on-github.rst:66 msgid "**Add origin**" msgstr "**添加原产地**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:51 +#: ../../source/contributor-tutorial-contribute-on-github.rst:50 msgid "You can then go into the repository folder:" msgstr "然后,您就可以进入存储库文件夹:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:57 +#: ../../source/contributor-tutorial-contribute-on-github.rst:56 msgid "" "And here we will need to add an origin to our repository. The origin is " "the \\ of the remote fork repository. To obtain it, we can do as " @@ -1452,27 +1456,28 @@ msgstr "" "在这里,我们需要为我们的版本库添加一个 origin。origin 是远程 fork 仓库的 " "\\。要获得它,我们可以像前面提到的那样,访问 GitHub 账户上的分叉仓库并复制链接。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:62 +#: ../../source/contributor-tutorial-contribute-on-github.rst:61 msgid "" "Once the \\ is copied, we can type the following command in our " "terminal:" msgstr "一旦复制了 \\ ,我们就可以在终端中键入以下命令:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:91 +#: ../../source/contributor-tutorial-contribute-on-github.rst:90 msgid "**Add upstream**" msgstr "**增加上游**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:70 +#: ../../source/contributor-tutorial-contribute-on-github.rst:69 +#, fuzzy msgid "" "Now we will add an upstream address to our repository. 
Still in the same " -"directroy, we must run the following command:" +"directory, we must run the following command:" msgstr "现在,我们要为版本库添加一个上游地址。还是在同一目录下,我们必须运行以下命令:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:77 +#: ../../source/contributor-tutorial-contribute-on-github.rst:76 msgid "The following diagram visually explains what we did in the previous steps:" msgstr "下图直观地解释了我们在前面步骤中的操作:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:81 +#: ../../source/contributor-tutorial-contribute-on-github.rst:80 msgid "" "The upstream is the GitHub remote address of the parent repository (in " "this case Flower), i.e. the one we eventually want to contribute to and " @@ -1483,110 +1488,111 @@ msgstr "" "上游是父版本库(这里是 Flower)的 GitHub 远程地址,即我们最终要贡献的版本库,因此需要最新的历史记录。origin " "只是我们创建的分叉仓库的 GitHub 远程地址,即我们自己账户中的副本(分叉)。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:85 +#: ../../source/contributor-tutorial-contribute-on-github.rst:84 msgid "" "To make sure our local version of the fork is up-to-date with the latest " "changes from the Flower repository, we can execute the following command:" msgstr "为了确保本地版本的分叉程序与 Flower 代码库的最新更改保持一致,我们可以执行以下命令:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:94 +#: ../../source/contributor-tutorial-contribute-on-github.rst:93 msgid "Setting up the coding environment" msgstr "设置编码环境" -#: ../../source/contributor-tutorial-contribute-on-github.rst:96 +#: ../../source/contributor-tutorial-contribute-on-github.rst:95 +#, fuzzy msgid "" -"This can be achieved by following this `getting started guide for " -"contributors`_ (note that you won't need to clone the repository). Once " -"you are able to write code and test it, you can finally start making " -"changes!" +"This can be achieved by following this :doc:`getting started guide for " +"contributors ` (note " +"that you won't need to clone the repository). Once you are able to write " +"code and test it, you can finally start making changes!" msgstr "您可以按照这份 \"贡献者入门指南\"__(注意,您不需要克隆版本库)来实现这一点。一旦您能够编写代码并进行测试,您就可以开始修改了!" -#: ../../source/contributor-tutorial-contribute-on-github.rst:101 +#: ../../source/contributor-tutorial-contribute-on-github.rst:100 msgid "Making changes" msgstr "做出改变" -#: ../../source/contributor-tutorial-contribute-on-github.rst:103 +#: ../../source/contributor-tutorial-contribute-on-github.rst:102 msgid "" "Before making any changes make sure you are up-to-date with your " "repository:" msgstr "在进行任何更改之前,请确保您的版本库是最新的:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:109 +#: ../../source/contributor-tutorial-contribute-on-github.rst:108 msgid "And with Flower's repository:" msgstr "还有Flower的存储库:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:123 +#: ../../source/contributor-tutorial-contribute-on-github.rst:122 msgid "**Create a new branch**" msgstr "**创建一个新分支**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:116 +#: ../../source/contributor-tutorial-contribute-on-github.rst:115 msgid "" "To make the history cleaner and easier to work with, it is good practice " "to create a new branch for each feature/project that needs to be " "implemented." 
msgstr "为了使历史记录更简洁、更易于操作,为每个需要实现的功能/项目创建一个新分支是个不错的做法。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:119 +#: ../../source/contributor-tutorial-contribute-on-github.rst:118 msgid "" "To do so, just run the following command inside the repository's " "directory:" msgstr "为此,只需在版本库目录下运行以下命令即可:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:126 +#: ../../source/contributor-tutorial-contribute-on-github.rst:125 msgid "**Make changes**" msgstr "**进行修改**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:126 +#: ../../source/contributor-tutorial-contribute-on-github.rst:125 msgid "Write great code and create wonderful changes using your favorite editor!" msgstr "使用您最喜欢的编辑器编写优秀的代码并创建精彩的更改!" -#: ../../source/contributor-tutorial-contribute-on-github.rst:139 +#: ../../source/contributor-tutorial-contribute-on-github.rst:138 msgid "**Test and format your code**" msgstr "**测试并格式化您的代码**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:129 +#: ../../source/contributor-tutorial-contribute-on-github.rst:128 msgid "" "Don't forget to test and format your code! Otherwise your code won't be " "able to be merged into the Flower repository. This is done so the " "codebase stays consistent and easy to understand." msgstr "不要忘记测试和格式化您的代码!否则您的代码将无法并入 Flower 代码库。这样做是为了使代码库保持一致并易于理解。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:132 +#: ../../source/contributor-tutorial-contribute-on-github.rst:131 msgid "To do so, we have written a few scripts that you can execute:" msgstr "为此,我们编写了一些脚本供您执行:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:151 +#: ../../source/contributor-tutorial-contribute-on-github.rst:150 msgid "**Stage changes**" msgstr "**舞台变化**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:142 +#: ../../source/contributor-tutorial-contribute-on-github.rst:141 msgid "" "Before creating a commit that will update your history, you must specify " "to Git which files it needs to take into account." msgstr "在创建更新历史记录的提交之前,必须向 Git 说明需要考虑哪些文件。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:144 +#: ../../source/contributor-tutorial-contribute-on-github.rst:143 msgid "This can be done with:" msgstr "这可以通过:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:150 +#: ../../source/contributor-tutorial-contribute-on-github.rst:149 msgid "" "To check which files have been modified compared to the last version " "(last commit) and to see which files are staged for commit, you can use " "the :code:`git status` command." msgstr "要查看与上一版本(上次提交)相比哪些文件已被修改,以及哪些文件处于提交阶段,可以使用 :code:`git status` 命令。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:161 +#: ../../source/contributor-tutorial-contribute-on-github.rst:160 msgid "**Commit changes**" msgstr "**提交更改**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:154 +#: ../../source/contributor-tutorial-contribute-on-github.rst:153 msgid "" "Once you have added all the files you wanted to commit using :code:`git " "add`, you can finally create your commit using this command:" msgstr "使用 :code:`git add` 添加完所有要提交的文件后,就可以使用此命令创建提交了:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:160 +#: ../../source/contributor-tutorial-contribute-on-github.rst:159 msgid "" "The \\ is there to explain to others what the commit " "does. It should be written in an imperative style and be concise. 
An " @@ -1595,61 +1601,61 @@ msgstr "" " 用于向他人解释提交的作用。它应该以命令式风格书写,并且简明扼要。例如 :code:`git commit " "-m \"Add images to README\"`。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:172 +#: ../../source/contributor-tutorial-contribute-on-github.rst:171 msgid "**Push the changes to the fork**" msgstr "**将更改推送到分叉**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:164 +#: ../../source/contributor-tutorial-contribute-on-github.rst:163 msgid "" "Once we have committed our changes, we have effectively updated our local" " history, but GitHub has no way of knowing this unless we push our " "changes to our origin's remote address:" msgstr "一旦提交了修改,我们就有效地更新了本地历史记录,但除非我们将修改推送到原点的远程地址,否则 GitHub 无法得知:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:171 +#: ../../source/contributor-tutorial-contribute-on-github.rst:170 msgid "" "Once this is done, you will see on the GitHub that your forked repo was " "updated with the changes you have made." msgstr "完成此操作后,您将在 GitHub 上看到您的分叉仓库已根据您所做的更改进行了更新。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:175 +#: ../../source/contributor-tutorial-contribute-on-github.rst:174 msgid "Creating and merging a pull request (PR)" msgstr "创建和合并拉取请求 (PR)" -#: ../../source/contributor-tutorial-contribute-on-github.rst:206 +#: ../../source/contributor-tutorial-contribute-on-github.rst:205 msgid "**Create the PR**" msgstr "**创建 PR**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:178 +#: ../../source/contributor-tutorial-contribute-on-github.rst:177 msgid "" "Once you have pushed changes, on the GitHub webpage of your repository " "you should see the following message:" msgstr "推送更改后,在仓库的 GitHub 网页上应该会看到以下信息:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:182 +#: ../../source/contributor-tutorial-contribute-on-github.rst:181 #, fuzzy msgid "Otherwise you can always find this option in the ``Branches`` page." msgstr "否则,您可以在 \"分支 \"页面找到该选项。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:184 +#: ../../source/contributor-tutorial-contribute-on-github.rst:183 #, fuzzy msgid "" "Once you click the ``Compare & pull request`` button, you should see " "something similar to this:" msgstr "点击 \"比较和拉取请求 \"按钮后,您应该会看到类似下面的内容:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:188 +#: ../../source/contributor-tutorial-contribute-on-github.rst:187 msgid "At the top you have an explanation of which branch will be merged where:" msgstr "在顶部,你可以看到关于哪个分支将被合并的说明:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:192 +#: ../../source/contributor-tutorial-contribute-on-github.rst:191 msgid "" "In this example you can see that the request is to merge the branch " "``doc-fixes`` from my forked repository to branch ``main`` from the " "Flower repository." msgstr "在这个例子中,你可以看到请求将我分叉的版本库中的分支 ``doc-fixes`` 合并到 Flower 版本库中的分支 ``main``。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:194 +#: ../../source/contributor-tutorial-contribute-on-github.rst:193 msgid "" "The input box in the middle is there for you to describe what your PR " "does and to link it to existing issues. We have placed comments (that " @@ -1657,7 +1663,7 @@ msgid "" "process." msgstr "中间的输入框供您描述 PR 的作用,并将其与现有问题联系起来。我们在此放置了注释(一旦 PR 打开,注释将不会显示),以指导您完成整个过程。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:197 +#: ../../source/contributor-tutorial-contribute-on-github.rst:196 msgid "" "It is important to follow the instructions described in comments. 
For " "instance, in order to not break how our changelog system works, you " @@ -1666,167 +1672,175 @@ msgid "" ":ref:`changelogentry` appendix." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:201 +#: ../../source/contributor-tutorial-contribute-on-github.rst:200 msgid "" "At the bottom you will find the button to open the PR. This will notify " "reviewers that a new PR has been opened and that they should look over it" " to merge or to request changes." msgstr "在底部,您可以找到打开 PR 的按钮。这将通知审核人员新的 PR 已经打开,他们应该查看该 PR 以进行合并或要求修改。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:204 +#: ../../source/contributor-tutorial-contribute-on-github.rst:203 msgid "" "If your PR is not yet ready for review, and you don't want to notify " "anyone, you have the option to create a draft pull request:" msgstr "如果您的 PR 尚未准备好接受审核,而且您不想通知任何人,您可以选择创建一个草案拉取请求:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:209 +#: ../../source/contributor-tutorial-contribute-on-github.rst:208 msgid "**Making new changes**" msgstr "**作出新的改变**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:209 +#: ../../source/contributor-tutorial-contribute-on-github.rst:208 msgid "" "Once the PR has been opened (as draft or not), you can still push new " "commits to it the same way we did before, by making changes to the branch" " associated with the PR." msgstr "一旦 PR 被打开(无论是否作为草案),你仍然可以像以前一样,通过修改与 PR 关联的分支来推送新的提交。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:231 +#: ../../source/contributor-tutorial-contribute-on-github.rst:230 msgid "**Review the PR**" msgstr "**审查 PR**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:212 +#: ../../source/contributor-tutorial-contribute-on-github.rst:211 msgid "" "Once the PR has been opened or once the draft PR has been marked as " "ready, a review from code owners will be automatically requested:" msgstr "一旦 PR 被打开或 PR 草案被标记为就绪,就会自动要求代码所有者进行审核:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:216 +#: ../../source/contributor-tutorial-contribute-on-github.rst:215 msgid "" "Code owners will then look into the code, ask questions, request changes " "or validate the PR." msgstr "然后,代码所有者会查看代码、提出问题、要求修改或验证 PR。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:218 +#: ../../source/contributor-tutorial-contribute-on-github.rst:217 msgid "Merging will be blocked if there are ongoing requested changes." msgstr "如果有正在进行的更改请求,合并将被阻止。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:222 +#: ../../source/contributor-tutorial-contribute-on-github.rst:221 msgid "" "To resolve them, just push the necessary changes to the branch associated" " with the PR:" msgstr "要解决这些问题,只需将必要的更改推送到与 PR 关联的分支即可:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:226 +#: ../../source/contributor-tutorial-contribute-on-github.rst:225 msgid "And resolve the conversation:" msgstr "并解决对话:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:230 +#: ../../source/contributor-tutorial-contribute-on-github.rst:229 msgid "" "Once all the conversations have been resolved, you can re-request a " "review." 
msgstr "一旦所有对话都得到解决,您就可以重新申请审核。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:251 +#: ../../source/contributor-tutorial-contribute-on-github.rst:250 msgid "**Once the PR is merged**" msgstr "**一旦 PR 被合并**" -#: ../../source/contributor-tutorial-contribute-on-github.rst:234 +#: ../../source/contributor-tutorial-contribute-on-github.rst:233 msgid "" "If all the automatic tests have passed and reviewers have no more changes" " to request, they can approve the PR and merge it." msgstr "如果所有自动测试都已通过,且审核员不再需要修改,他们就可以批准 PR 并将其合并。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:238 +#: ../../source/contributor-tutorial-contribute-on-github.rst:237 msgid "" "Once it is merged, you can delete the branch on GitHub (a button should " "appear to do so) and also delete it locally by doing:" msgstr "合并后,您可以在 GitHub 上删除该分支(会出现一个删除按钮),也可以在本地删除该分支:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:245 +#: ../../source/contributor-tutorial-contribute-on-github.rst:244 msgid "Then you should update your forked repository by doing:" msgstr "然后,你应该更新你的分叉仓库:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:254 +#: ../../source/contributor-tutorial-contribute-on-github.rst:253 msgid "Example of first contribution" msgstr "首次捐款实例" -#: ../../source/contributor-tutorial-contribute-on-github.rst:257 +#: ../../source/contributor-tutorial-contribute-on-github.rst:256 msgid "Problem" msgstr "问题" -#: ../../source/contributor-tutorial-contribute-on-github.rst:259 +#: ../../source/contributor-tutorial-contribute-on-github.rst:258 +#, fuzzy msgid "" -"For our documentation, we’ve started to use the `Diàtaxis framework " +"For our documentation, we've started to use the `Diàtaxis framework " "`_." msgstr "对于我们的文档,我们已经开始使用 \"Diàtaxis 框架 `_\"。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:261 +#: ../../source/contributor-tutorial-contribute-on-github.rst:260 +#, fuzzy msgid "" -"Our “How to” guides should have titles that continue the sencence “How to" -" …”, for example, “How to upgrade to Flower 1.0”." +"Our \"How to\" guides should have titles that continue the sentence \"How" +" to …\", for example, \"How to upgrade to Flower 1.0\"." msgstr "我们的 \"如何 \"指南的标题应延续 \"如何...... \"的句式,例如 \"如何升级到 Flower 1.0\"。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:263 +#: ../../source/contributor-tutorial-contribute-on-github.rst:262 msgid "" "Most of our guides do not follow this new format yet, and changing their " "title is (unfortunately) more involved than one might think." msgstr "我们的大多数指南还没有采用这种新格式,而更改其标题(不幸的是)比人们想象的要复杂得多。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:265 +#: ../../source/contributor-tutorial-contribute-on-github.rst:264 +#, fuzzy msgid "" -"This issue is about changing the title of a doc from present continious " +"This issue is about changing the title of a doc from present continuous " "to present simple." msgstr "这个问题是关于将文档标题从现在进行时改为现在进行时。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:267 +#: ../../source/contributor-tutorial-contribute-on-github.rst:266 +#, fuzzy msgid "" -"Let's take the example of “Saving Progress” which we changed to “Save " -"Progress”. Does this pass our check?" +"Let's take the example of \"Saving Progress\" which we changed to \"Save " +"Progress\". Does this pass our check?" msgstr "以 \"保存进度 \"为例,我们将其改为 \"保存进度\"。这是否通过了我们的检查?" 
-#: ../../source/contributor-tutorial-contribute-on-github.rst:269 -msgid "Before: ”How to saving progress” ❌" +#: ../../source/contributor-tutorial-contribute-on-github.rst:268 +#, fuzzy +msgid "Before: \"How to saving progress\" ❌" msgstr "之前: \"如何保存进度\" ❌" -#: ../../source/contributor-tutorial-contribute-on-github.rst:271 -msgid "After: ”How to save progress” ✅" +#: ../../source/contributor-tutorial-contribute-on-github.rst:270 +#, fuzzy +msgid "After: \"How to save progress\" ✅" msgstr "之后: \"如何保存进度\"✅" -#: ../../source/contributor-tutorial-contribute-on-github.rst:274 +#: ../../source/contributor-tutorial-contribute-on-github.rst:273 msgid "Solution" msgstr "解决方案" -#: ../../source/contributor-tutorial-contribute-on-github.rst:276 +#: ../../source/contributor-tutorial-contribute-on-github.rst:275 +#, fuzzy msgid "" -"This is a tiny change, but it’ll allow us to test your end-to-end setup. " -"After cloning and setting up the Flower repo, here’s what you should do:" +"This is a tiny change, but it'll allow us to test your end-to-end setup. " +"After cloning and setting up the Flower repo, here's what you should do:" msgstr "这只是一个很小的改动,但可以让我们测试你的端到端设置。克隆并设置好 Flower repo 后,你应该这样做:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:278 +#: ../../source/contributor-tutorial-contribute-on-github.rst:277 #, fuzzy msgid "Find the source file in ``doc/source``" msgstr "在 `doc/source` 中查找源文件" -#: ../../source/contributor-tutorial-contribute-on-github.rst:279 +#: ../../source/contributor-tutorial-contribute-on-github.rst:278 #, fuzzy msgid "" "Make the change in the ``.rst`` file (beware, the dashes under the title " "should be the same length as the title itself)" msgstr "在 `.rst` 文件中进行修改(注意,标题下的破折号应与标题本身的长度相同)" -#: ../../source/contributor-tutorial-contribute-on-github.rst:280 +#: ../../source/contributor-tutorial-contribute-on-github.rst:279 +#, fuzzy msgid "" -"Build the docs and check the result: ``_" msgstr "" "构建文档并检查结果: ``_" -#: ../../source/contributor-tutorial-contribute-on-github.rst:283 +#: ../../source/contributor-tutorial-contribute-on-github.rst:282 msgid "Rename file" msgstr "重命名文件" -#: ../../source/contributor-tutorial-contribute-on-github.rst:285 +#: ../../source/contributor-tutorial-contribute-on-github.rst:284 msgid "" "You might have noticed that the file name still reflects the old wording." 
" If we just change the file, then we break all existing links to it - it " @@ -1836,32 +1850,33 @@ msgstr "" "您可能已经注意到,文件名仍然反映了旧的措辞。如果我们只是更改文件,那么就会破坏与该文件的所有现有链接--" "避免这种情况是***重要的,破坏链接会损害我们的搜索引擎排名。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:288 -msgid "Here’s how to change the file name:" +#: ../../source/contributor-tutorial-contribute-on-github.rst:287 +#, fuzzy +msgid "Here's how to change the file name:" msgstr "下面是更改文件名的方法:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:290 +#: ../../source/contributor-tutorial-contribute-on-github.rst:289 #, fuzzy msgid "Change the file name to ``save-progress.rst``" msgstr "将文件名改为`save-progress.rst`" -#: ../../source/contributor-tutorial-contribute-on-github.rst:291 +#: ../../source/contributor-tutorial-contribute-on-github.rst:290 #, fuzzy msgid "Add a redirect rule to ``doc/source/conf.py``" msgstr "在 `doc/source/conf.py` 中添加重定向规则" -#: ../../source/contributor-tutorial-contribute-on-github.rst:293 +#: ../../source/contributor-tutorial-contribute-on-github.rst:292 #, fuzzy msgid "" "This will cause a redirect from ``saving-progress.html`` to ``save-" "progress.html``, old links will continue to work." msgstr "这将导致从 `saving-progress.html` 重定向到 `save-progress.html`,旧链接将继续工作。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:296 +#: ../../source/contributor-tutorial-contribute-on-github.rst:295 msgid "Apply changes in the index file" msgstr "应用索引文件中的更改" -#: ../../source/contributor-tutorial-contribute-on-github.rst:298 +#: ../../source/contributor-tutorial-contribute-on-github.rst:297 #, fuzzy msgid "" "For the lateral navigation bar to work properly, it is very important to " @@ -1869,49 +1884,50 @@ msgid "" "arborescence of the navbar." msgstr "要使横向导航栏正常工作,更新 `index.rst` 文件也非常重要。我们就是在这里定义整个导航栏的结构。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:301 +#: ../../source/contributor-tutorial-contribute-on-github.rst:300 #, fuzzy msgid "Find and modify the file name in ``index.rst``" msgstr "查找并修改 `index.rst` 中的文件名" -#: ../../source/contributor-tutorial-contribute-on-github.rst:304 +#: ../../source/contributor-tutorial-contribute-on-github.rst:303 msgid "Open PR" msgstr "开放式 PR" -#: ../../source/contributor-tutorial-contribute-on-github.rst:306 +#: ../../source/contributor-tutorial-contribute-on-github.rst:305 +#, fuzzy msgid "" -"Commit the changes (commit messages are always imperative: “Do " -"something”, in this case “Change …”)" +"Commit the changes (commit messages are always imperative: \"Do " +"something\", in this case \"Change …\")" msgstr "提交更改(提交信息总是命令式的:\"做某事\",这里是 \"更改......\")" -#: ../../source/contributor-tutorial-contribute-on-github.rst:307 +#: ../../source/contributor-tutorial-contribute-on-github.rst:306 msgid "Push the changes to your fork" msgstr "将更改推送到分叉" -#: ../../source/contributor-tutorial-contribute-on-github.rst:308 +#: ../../source/contributor-tutorial-contribute-on-github.rst:307 msgid "Open a PR (as shown above)" msgstr "打开 PR(如上图所示)" -#: ../../source/contributor-tutorial-contribute-on-github.rst:309 +#: ../../source/contributor-tutorial-contribute-on-github.rst:308 msgid "Wait for it to be approved!" msgstr "等待审批!" -#: ../../source/contributor-tutorial-contribute-on-github.rst:310 +#: ../../source/contributor-tutorial-contribute-on-github.rst:309 msgid "Congrats! 🥳 You're now officially a Flower contributor!" msgstr "祝贺你 🥳 您现在正式成为 \"Flower \"贡献者!" 
-#: ../../source/contributor-tutorial-contribute-on-github.rst:314 +#: ../../source/contributor-tutorial-contribute-on-github.rst:313 msgid "How to write a good PR title" msgstr "如何撰写好的公关标题" -#: ../../source/contributor-tutorial-contribute-on-github.rst:316 +#: ../../source/contributor-tutorial-contribute-on-github.rst:315 msgid "" "A well-crafted PR title helps team members quickly understand the purpose" " and scope of the changes being proposed. Here's a guide to help you " "write a good GitHub PR title:" msgstr "一个精心撰写的公关标题能帮助团队成员迅速了解所提修改的目的和范围。以下指南可帮助您撰写一个好的 GitHub PR 标题:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:318 +#: ../../source/contributor-tutorial-contribute-on-github.rst:317 msgid "" "1. Be Clear and Concise: Provide a clear summary of the changes in a " "concise manner. 1. Use Actionable Verbs: Start with verbs like \"Add,\" " @@ -1924,63 +1940,63 @@ msgstr "" "\"等动词来表明目的。1. 包含相关信息: 提及受影响的功能或模块以了解上下文。1. 简短:避免冗长的标题,以方便阅读。1. " "使用正确的大小写和标点符号: 遵守语法规则,以确保清晰。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:324 +#: ../../source/contributor-tutorial-contribute-on-github.rst:323 msgid "" "Let's start with a few examples for titles that should be avoided because" " they do not provide meaningful information:" msgstr "让我们先举例说明几个应该避免使用的标题,因为它们不能提供有意义的信息:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:326 +#: ../../source/contributor-tutorial-contribute-on-github.rst:325 msgid "Implement Algorithm" msgstr "执行算法" -#: ../../source/contributor-tutorial-contribute-on-github.rst:327 +#: ../../source/contributor-tutorial-contribute-on-github.rst:326 msgid "Database" msgstr "数据库" -#: ../../source/contributor-tutorial-contribute-on-github.rst:328 +#: ../../source/contributor-tutorial-contribute-on-github.rst:327 msgid "Add my_new_file.py to codebase" msgstr "在代码库中添加 my_new_file.py" -#: ../../source/contributor-tutorial-contribute-on-github.rst:329 +#: ../../source/contributor-tutorial-contribute-on-github.rst:328 msgid "Improve code in module" msgstr "改进模块中的代码" -#: ../../source/contributor-tutorial-contribute-on-github.rst:330 +#: ../../source/contributor-tutorial-contribute-on-github.rst:329 msgid "Change SomeModule" msgstr "更改 SomeModule" -#: ../../source/contributor-tutorial-contribute-on-github.rst:332 +#: ../../source/contributor-tutorial-contribute-on-github.rst:331 msgid "" "Here are a few positive examples which provide helpful information " "without repeating how they do it, as that is already visible in the " "\"Files changed\" section of the PR:" msgstr "这里有几个正面的例子,提供了有用的信息,但没有重复他们是如何做的,因为在 PR 的 \"已更改文件 \"部分已经可以看到:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:334 +#: ../../source/contributor-tutorial-contribute-on-github.rst:333 msgid "Update docs banner to mention Flower Summit 2023" msgstr "更新文件横幅,提及 2023 年 Flower 峰会" -#: ../../source/contributor-tutorial-contribute-on-github.rst:335 +#: ../../source/contributor-tutorial-contribute-on-github.rst:334 msgid "Remove unnecessary XGBoost dependency" msgstr "移除不必要的 XGBoost 依赖性" -#: ../../source/contributor-tutorial-contribute-on-github.rst:336 +#: ../../source/contributor-tutorial-contribute-on-github.rst:335 msgid "Remove redundant attributes in strategies subclassing FedAvg" msgstr "删除 FedAvg 子类化策略中的多余属性" -#: ../../source/contributor-tutorial-contribute-on-github.rst:337 +#: ../../source/contributor-tutorial-contribute-on-github.rst:336 #, fuzzy msgid "Add CI job to deploy the staging system when the ``main`` branch changes" msgstr "添加 CI 作业,以便在 \"主 
\"分支发生变化时部署暂存系统" -#: ../../source/contributor-tutorial-contribute-on-github.rst:338 +#: ../../source/contributor-tutorial-contribute-on-github.rst:337 msgid "" "Add new amazing library which will be used to improve the simulation " "engine" msgstr "添加新的惊人库,用于改进模拟引擎" -#: ../../source/contributor-tutorial-contribute-on-github.rst:342 +#: ../../source/contributor-tutorial-contribute-on-github.rst:341 #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:548 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:946 #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:727 @@ -1989,153 +2005,154 @@ msgstr "添加新的惊人库,用于改进模拟引擎" msgid "Next steps" msgstr "接下来的步骤" -#: ../../source/contributor-tutorial-contribute-on-github.rst:344 +#: ../../source/contributor-tutorial-contribute-on-github.rst:343 msgid "" "Once you have made your first PR, and want to contribute more, be sure to" " check out the following :" msgstr "一旦您完成了第一份 PR,并希望做出更多贡献,请务必查看以下内容:" -#: ../../source/contributor-tutorial-contribute-on-github.rst:346 +#: ../../source/contributor-tutorial-contribute-on-github.rst:345 +#, fuzzy msgid "" -"`Good first contributions `_, where you should particularly look " -"into the :code:`baselines` contributions." +":doc:`Good first contributions `, where you should particularly look into the " +":code:`baselines` contributions." msgstr "" "`优秀的首次贡献 `_,在这里你应该特别看看 :code:`baselines` 的贡献。" -#: ../../source/contributor-tutorial-contribute-on-github.rst:350 +#: ../../source/contributor-tutorial-contribute-on-github.rst:349 #: ../../source/fed/0000-20200102-fed-template.md:60 msgid "Appendix" msgstr "附录" -#: ../../source/contributor-tutorial-contribute-on-github.rst:355 +#: ../../source/contributor-tutorial-contribute-on-github.rst:354 #, fuzzy msgid "Changelog entry" msgstr "更新日志" -#: ../../source/contributor-tutorial-contribute-on-github.rst:357 +#: ../../source/contributor-tutorial-contribute-on-github.rst:356 msgid "" "When opening a new PR, inside its description, there should be a " "``Changelog entry`` header." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:359 +#: ../../source/contributor-tutorial-contribute-on-github.rst:358 msgid "" "Above this header you should see the following comment that explains how " "to write your changelog entry:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:361 +#: ../../source/contributor-tutorial-contribute-on-github.rst:360 msgid "" "Inside the following 'Changelog entry' section, you should put the " "description of your changes that will be added to the changelog alongside" " your PR title." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:364 +#: ../../source/contributor-tutorial-contribute-on-github.rst:363 msgid "" -"If the section is completely empty (without any token) or non-existant, " +"If the section is completely empty (without any token) or non-existent, " "the changelog will just contain the title of the PR for the changelog " "entry, without any description." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:367 +#: ../../source/contributor-tutorial-contribute-on-github.rst:366 msgid "" "If the section contains some text other than tokens, it will use it to " "add a description to the change." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:369 +#: ../../source/contributor-tutorial-contribute-on-github.rst:368 msgid "" "If the section contains one of the following tokens it will ignore any " "other text and put the PR under the corresponding section of the " "changelog:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:371 +#: ../../source/contributor-tutorial-contribute-on-github.rst:370 msgid " is for classifying a PR as a general improvement." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:373 +#: ../../source/contributor-tutorial-contribute-on-github.rst:372 msgid " is to not add the PR to the changelog" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:375 +#: ../../source/contributor-tutorial-contribute-on-github.rst:374 msgid " is to add a general baselines change to the PR" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:377 +#: ../../source/contributor-tutorial-contribute-on-github.rst:376 msgid " is to add a general examples change to the PR" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:379 +#: ../../source/contributor-tutorial-contribute-on-github.rst:378 msgid " is to add a general sdk change to the PR" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:381 +#: ../../source/contributor-tutorial-contribute-on-github.rst:380 msgid " is to add a general simulations change to the PR" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:383 +#: ../../source/contributor-tutorial-contribute-on-github.rst:382 msgid "Note that only one token should be used." msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:385 +#: ../../source/contributor-tutorial-contribute-on-github.rst:384 msgid "" "Its content must have a specific format. We will break down what each " "possibility does:" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:387 +#: ../../source/contributor-tutorial-contribute-on-github.rst:386 msgid "" "If the ``### Changelog entry`` section contains nothing or doesn't exist," " the following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:391 +#: ../../source/contributor-tutorial-contribute-on-github.rst:390 msgid "" "If the ``### Changelog entry`` section contains a description (and no " "token), the following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:397 +#: ../../source/contributor-tutorial-contribute-on-github.rst:396 msgid "" "If the ``### Changelog entry`` section contains ````, nothing will " "change in the changelog." 
msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:399 +#: ../../source/contributor-tutorial-contribute-on-github.rst:398 msgid "" "If the ``### Changelog entry`` section contains ````, the " "following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:403 +#: ../../source/contributor-tutorial-contribute-on-github.rst:402 msgid "" "If the ``### Changelog entry`` section contains ````, the " "following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:407 +#: ../../source/contributor-tutorial-contribute-on-github.rst:406 msgid "" "If the ``### Changelog entry`` section contains ````, the " "following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:411 +#: ../../source/contributor-tutorial-contribute-on-github.rst:410 msgid "" "If the ``### Changelog entry`` section contains ````, the following " "text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:415 +#: ../../source/contributor-tutorial-contribute-on-github.rst:414 msgid "" "If the ``### Changelog entry`` section contains ````, the " "following text will be added to the changelog::" msgstr "" -#: ../../source/contributor-tutorial-contribute-on-github.rst:419 +#: ../../source/contributor-tutorial-contribute-on-github.rst:418 msgid "" "Note that only one token must be provided, otherwise, only the first " "action (in the order listed above), will be performed." @@ -2167,10 +2184,11 @@ msgid "(Optional) `pyenv-virtualenv ` msgstr "(可选) `pyenv-virtualenv `_" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:12 +#, fuzzy msgid "" "Flower uses :code:`pyproject.toml` to manage dependencies and configure " "development tools (the ones which support it). Poetry is a build tool " -"which supports `PEP 517 `_." +"which supports `PEP 517 `_." msgstr "" "Flower 使用 :code:`pyproject.toml` 来管理依赖关系和配置开发工具(支持它的)。Poetry 是一种支持 `PEP " "517 `_ 的构建工具。" @@ -2348,15 +2366,16 @@ msgid "Example: FedBN in PyTorch - From Centralized To Federated" msgstr "示例: PyTorch 中的 FedBN - 从集中式到联邦式" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:4 +#, fuzzy msgid "" "This tutorial will show you how to use Flower to build a federated " "version of an existing machine learning workload with `FedBN " "`_, a federated training strategy " "designed for non-iid data. We are using PyTorch to train a Convolutional " "Neural Network(with Batch Normalization layers) on the CIFAR-10 dataset. " -"When applying FedBN, only few changes needed compared to `Example: " -"PyTorch - From Centralized To Federated `_." +"When applying FedBN, only few changes needed compared to :doc:`Example: " +"PyTorch - From Centralized To Federated `." msgstr "" "本教程将向您展示如何使用 Flower 为现有的机器学习框架构建一个联邦学习的版本,并使用 \"FedBN `_\"(一种针对非 iid 数据设计的联邦训练策略)。我们使用 PyTorch 在 CIFAR-10 " @@ -2370,11 +2389,12 @@ msgid "Centralized Training" msgstr "集中式训练" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:10 +#, fuzzy msgid "" -"All files are revised based on `Example: PyTorch - From Centralized To " -"Federated `_. The only thing to do is modifying the file called " -":code:`cifar.py`, revised part is shown below:" +"All files are revised based on :doc:`Example: PyTorch - From Centralized " +"To Federated `. 
The only " +"thing to do is modifying the file called :code:`cifar.py`, revised part " +"is shown below:" msgstr "" "所有文件均根据 `示例: PyTorch -从集中式到联邦式 `_。唯一要做的就是修改名为 :code:`cifar.py` " @@ -2392,11 +2412,12 @@ msgid "You can now run your machine learning workload:" msgstr "现在,您可以运行您的机器学习工作了:" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:47 +#, fuzzy msgid "" "So far this should all look fairly familiar if you've used PyTorch " "before. Let's take the next step and use what we've built to create a " -"federated learning system within FedBN, the sytstem consists of one " -"server and two clients." +"federated learning system within FedBN, the system consists of one server" +" and two clients." msgstr "" "到目前为止,如果您以前使用过 PyTorch,这一切看起来应该相当熟悉。让我们进行下一步,使用我们所构建的内容在 FedBN " "中创建一个联邦学习系统,该系统由一个服务器和两个客户端组成。" @@ -2407,14 +2428,14 @@ msgid "Federated Training" msgstr "联邦培训" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:53 +#, fuzzy msgid "" -"If you have read `Example: PyTorch - From Centralized To Federated " -"`_, the following parts are easy to follow, onyl " -":code:`get_parameters` and :code:`set_parameters` function in " -":code:`client.py` needed to revise. If not, please read the `Example: " -"PyTorch - From Centralized To Federated `_. first." +"If you have read :doc:`Example: PyTorch - From Centralized To Federated " +"`, the following parts are" +" easy to follow, only :code:`get_parameters` and :code:`set_parameters` " +"function in :code:`client.py` needed to revise. If not, please read the " +":doc:`Example: PyTorch - From Centralized To Federated `. first." msgstr "" "如果你读过 `示例: PyTorch - 从集中式到联邦式 `_,下面的部分就很容易理解了,只需要修改 " @@ -3004,8 +3025,8 @@ msgid "" "Implementing a Flower *client* basically means implementing a subclass of" " either :code:`flwr.client.Client` or :code:`flwr.client.NumPyClient`. " "Our implementation will be based on :code:`flwr.client.NumPyClient` and " -"we'll call it :code:`MNISTClient`. :code:`NumPyClient` is slightly easier " -"to implement than :code:`Client` if you use a framework with good NumPy " +"we'll call it :code:`MNISTClient`. :code:`NumPyClient` is slightly easier" +" to implement than :code:`Client` if you use a framework with good NumPy " "interoperability (like PyTorch or MXNet) because it avoids some of the " "boilerplate that would otherwise be necessary. :code:`MNISTClient` needs " "to implement four methods, two methods for getting/setting model " @@ -3223,8 +3244,8 @@ msgid "" "Implementing a Flower *client* basically means implementing a subclass of" " either :code:`flwr.client.Client` or :code:`flwr.client.NumPyClient`. " "Our implementation will be based on :code:`flwr.client.NumPyClient` and " -"we'll call it :code:`CifarClient`. :code:`NumPyClient` is slightly easier " -"to implement than :code:`Client` if you use a framework with good NumPy " +"we'll call it :code:`CifarClient`. :code:`NumPyClient` is slightly easier" +" to implement than :code:`Client` if you use a framework with good NumPy " "interoperability (like PyTorch or TensorFlow/Keras) because it avoids " "some of the boilerplate that would otherwise be necessary. 
" ":code:`CifarClient` needs to implement four methods, two methods for " @@ -3389,13 +3410,15 @@ msgid "" msgstr "在服务器辅助脚本 *run-server.sh* 中,你可以找到以下代码,这些代码基本上都是运行 :code:`server.py` 的代码" #: ../../source/example-walkthrough-pytorch-mnist.rst:78 +#, fuzzy msgid "" "We can go a bit deeper and see that :code:`server.py` simply launches a " "server that will coordinate three rounds of training. Flower Servers are " "very customizable, but for simple workloads, we can start a server using " -"the :ref:`start_server ` function and " -"leave all the configuration possibilities at their default values, as " -"seen below." +"the `start_server `_ function " +"and leave all the configuration possibilities at their default values, as" +" seen below." msgstr "" "我们可以再深入一点,:code:`server.py` 只是启动了一个服务器,该服务器将协调三轮训练。Flower " "服务器是非常容易修改的,但对于简单的工作,我们可以使用 :ref:`start_server