Merge branch 'intro-flower-superexec-plugin' of https://github.com/pa…nh99/NVFlare into intro-flower-superexec-plugin
panh99 committed Sep 10, 2024
2 parents 58d7a4f + 1fde0ed commit 162030e
Showing 212 changed files with 3,443 additions and 2,209 deletions.
14 changes: 7 additions & 7 deletions docs/examples/hello_fedavg_numpy.rst

@@ -61,7 +61,7 @@ The ``fedavg_script_executor_hello-numpy.py`` script builds the job with the Job

Define a FedJob
^^^^^^^^^^^^^^^^
-:class:`FedJob<nvflare.job_config.fed_job.FedJob>` allows you to generate job configurations in a Pythonic way. It is initialized with the
+:class:`FedJob<nvflare.job_config.api.FedJob>` allows you to generate job configurations in a Pythonic way. It is initialized with the
name for the job, which will also be used as the directory name if the job is exported.

.. code-block:: python
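The body of this code block is collapsed in the diff; a minimal sketch of what the initialization likely looks like (the job name here is an assumption, not taken from the diff):

```python
from nvflare.job_config.api import FedJob

# The name is also used as the directory name if the job is exported.
# "hello-fedavg-numpy" is a hypothetical name for illustration.
job = FedJob(name="hello-fedavg-numpy")
```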
@@ -73,7 +73,7 @@ name for the job, which will also be used as the directory name if the job is ex
Define the Controller Workflow
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Define the controller workflow and send to server. We use :class:`FedAvg<nvflare.app_common.workflows.fedavg.FedAvg>` and specify the number of
-clients and rounds, then use the :func:`to<nvflare.job_config.fed_job.FedJob.to>` routine to send the component to the server for the job.
+clients and rounds, then use the :func:`to<nvflare.job_config.api.FedJob.to>` routine to send the component to the server for the job.

.. code-block:: python
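The collapsed body presumably constructs the controller and sends it to the server; a sketch under assumed values (two clients, three rounds, and the job name are all hypothetical):

```python
from nvflare.app_common.workflows.fedavg import FedAvg
from nvflare.job_config.api import FedJob

job = FedJob(name="hello-fedavg-numpy")  # name is an assumption

# Hypothetical values: 2 clients, 3 rounds.
controller = FedAvg(num_clients=2, num_rounds=3)
# Send the controller component to the server for this job.
job.to(controller, "server")
```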
@@ -91,7 +91,7 @@ Add Clients
Next, we can use the :class:`ScriptExecutor<nvflare.app_common.executors.script_executor.ScriptExecutor>` and send it to each of the
clients to run our training script. We will examine the training script ``hello-numpy_fl.py`` in the next main section.

-The :func:`to<nvflare.job_config.fed_job.FedJob.to>` routine sends the component to the specified client for the job. Here, our clients
+The :func:`to<nvflare.job_config.api.FedJob.to>` routine sends the component to the specified client for the job. Here, our clients
are named "site-0" and "site-1" and we are using the same training script for both.

.. code-block:: python
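The collapsed body presumably assigns an executor to each client; a sketch in which the ``task_script_path`` parameter name is an assumption based on typical Job API usage (the client names "site-0" and "site-1" come from the surrounding text):

```python
from nvflare.app_common.executors.script_executor import ScriptExecutor
from nvflare.job_config.api import FedJob

job = FedJob(name="hello-fedavg-numpy")  # name is an assumption

# Send the same training script to clients "site-0" and "site-1".
for i in range(2):
    executor = ScriptExecutor(task_script_path="hello-numpy_fl.py")
    job.to(executor, f"site-{i}")
```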
@@ -109,22 +109,22 @@ are named "site-0" and "site-1" and we are using the same training script for bo
Optionally Export the Job or Run in Simulator
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-With all the components needed for the job, you can export the job to a directory with :func:`export<nvflare.job_config.fed_job.FedJob.export>`
+With all the components needed for the job, you can export the job to a directory with :func:`export<nvflare.job_config.api.FedJob.export>`
if you want to look at what is built and configured for each client. You can use the exported job to submit it to a real NVFlare deployment
using the :ref:`FLARE Console <operating_nvflare>` or :ref:`flare_api`.

.. code-block:: python

   job.export_job("/tmp/nvflare/jobs/job_config")
-This is optional if you just want to run the job in a simulator environment directly, as :class:`FedJob<nvflare.job_config.fed_job.FedJob>` has
-a :func:`simulator_run<nvflare.job_config.fed_job.FedJob.simulator_run>` function.
+This is optional if you just want to run the job in a simulator environment directly, as :class:`FedJob<nvflare.job_config.api.FedJob>` has
+a :func:`simulator_run<nvflare.job_config.api.FedJob.simulator_run>` function.

.. code-block:: python

   job.simulator_run("/tmp/nvflare/jobs/workdir")
-The results are saved in the specified directory provided as an argument to the :func:`simulator_run<nvflare.job_config.fed_job.FedJob.simulator_run>` function.
+The results are saved in the specified directory provided as an argument to the :func:`simulator_run<nvflare.job_config.api.FedJob.simulator_run>` function.


NVIDIA FLARE Client Training Script
4 changes: 2 additions & 2 deletions docs/examples/hello_pt_job_api.rst

@@ -140,7 +140,7 @@ NVIDIA FLARE Server & Application
---------------------------------
In this example, the server runs :class:`FedAvg<nvflare.app_common.workflows.fedavg.FedAvg>` with the default settings.

-If you export the job with the :func:`export<nvflare.job_config.fed_job.FedJob.export>` function, you will see the
+If you export the job with the :func:`export<nvflare.job_config.api.FedJob.export>` function, you will see the
configurations for the server and each client. The server configuration is ``config_fed_server.json`` in the config folder
in app_server:

@@ -206,7 +206,7 @@ in app_server:
This is automatically created by the Job API. The server application configuration leverages NVIDIA FLARE built-in components.

Note that ``persistor`` points to ``PTFileModelPersistor``. This is automatically configured when the model SimpleNetwork is added
-to the server with the :func:`to<nvflare.job_config.fed_job.FedJob.to>` function. The Job API detects that the model is a PyTorch model
+to the server with the :func:`to<nvflare.job_config.api.FedJob.to>` function. The Job API detects that the model is a PyTorch model
and automatically configures :class:`PTFileModelPersistor<nvflare.app_opt.pt.file_model_persistor.PTFileModelPersistor>`
and :class:`PTFileModelLocator<nvflare.app_opt.pt.file_model_locator.PTFileModelLocator>`.
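The detection described above can be sketched as follows; ``SimpleNetwork`` here is a stand-in ``nn.Module`` (the real one is defined in the example's training code), and the job name is an assumption:

```python
import torch.nn as nn

from nvflare.job_config.api import FedJob


class SimpleNetwork(nn.Module):
    """Stand-in for the example's model definition."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)


job = FedJob(name="hello-pt")  # job name is an assumption
# Because SimpleNetwork is a torch.nn.Module, the Job API configures
# PTFileModelPersistor and PTFileModelLocator for the server automatically.
job.to(SimpleNetwork(), "server")
```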

4 changes: 2 additions & 2 deletions docs/examples/hello_tf_job_api.rst

@@ -117,7 +117,7 @@ NVIDIA FLARE Server & Application
---------------------------------
In this example, the server runs :class:`FedAvg<nvflare.app_common.workflows.fedavg.FedAvg>` with the default settings.

-If you export the job with the :func:`export<nvflare.job_config.fed_job.FedJob.export>` function, you will see the
+If you export the job with the :func:`export<nvflare.job_config.api.FedJob.export>` function, you will see the
configurations for the server and each client. The server configuration is ``config_fed_server.json`` in the config folder
in app_server:

@@ -167,7 +167,7 @@ in app_server:
This is automatically created by the Job API. The server application configuration leverages NVIDIA FLARE built-in components.

Note that ``persistor`` points to ``TFModelPersistor``. This is automatically configured when the model is added
-to the server with the :func:`to<nvflare.job_config.fed_job.FedJob.to>` function. The Job API detects that the model is a TensorFlow model
+to the server with the :func:`to<nvflare.job_config.api.FedJob.to>` function. The Job API detects that the model is a TensorFlow model
and automatically configures :class:`TFModelPersistor<nvflare.app_opt.tf.model_persistor.TFModelPersistor>`.
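Analogously for TensorFlow, a hedged sketch (the model architecture and the job name are assumptions for illustration only):

```python
import tensorflow as tf

from nvflare.job_config.api import FedJob

job = FedJob(name="hello-tf")  # job name is an assumption
# A hypothetical Keras model; any tf.keras model would be detected the same way.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(10,))])
# A TensorFlow model triggers automatic TFModelPersistor configuration.
job.to(model, "server")
```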

