
Add the Bootstrapped Dual Policy Iteration algorithm for discrete action spaces #35

Open · wants to merge 2 commits into master
Conversation

steckdenis

This sb3_contrib pull request follows an issue opened in stable-baselines3.

Description

This pull request adds the Bootstrapped Dual Policy Iteration algorithm to stable-baselines3-contrib, with documentation and updated unit tests (I made them pass by replacing logger.record, as used in the SB3 algorithms, with self.logger.record, as used in the other SB3-Contrib algorithms).

The original BDPI paper is https://arxiv.org/abs/1903.04193. The main reason I propose to add BDPI to stable-baselines3-contrib is that it is quite different from the other algorithms: it heavily focuses on sample-efficiency at the cost of compute-efficiency (which is nice for slowly-sampled robotic tasks). The main results in the paper show that BDPI outperforms several state-of-the-art RL algorithms on many environments (Table is a hard-to-explore environment, and Hallway comes from gym-miniworld and is a 3D environment):

(figure: bdpi_results, comparison results from the BDPI paper)

I have reproduced these results with the BDPI implementation proposed in this PR, on LunarLander. PPO, DQN and A2C were run using the default hyper-parameters from rl-baselines3-zoo (the tuned ones, I suppose?), for 8 random seeds each. The BDPI curve is also the result of 8 random seeds. I apologize for the truncated runs: BDPI and DQN only ran for 100K time-steps (for BDPI this was due to time constraints, as it takes about an hour per run to perform 100K time-steps):

(figure: bdpi_stablebaselines3, learning curves of BDPI, PPO, DQN and A2C on LunarLander)

Types of changes

  • Bug fix: none
  • New feature: Policies for the BDPI actor and critics, with documentation. Supported policies are MlpPolicy, CnnPolicy and MultiInputPolicy.
  • New feature: The BDPI algorithm, with a simple (non-invasive in the code) multi-processing approach to help with the compute requirements. I aimed to write the code in a way that is as easy to understand as possible (a minimal usage sketch is given after this list).
  • Breaking change: none
  • Tests: The unit tests have been updated for BDPI, to the best of my knowledge. Every test with BDPI passes, and there is no regression.
  • Documentation: I have updated the main page, TOC and Examples. I have written a page dedicated to the BDPI algorithm. Once compiled, the documentation looks nice on my machine, but I'm quite a novice Sphinx user.
  • My changes to rl-baselines3-zoo (hyper-parameter definition, and tuned hyper-parameters for LunarLander) are here: https://github.com/steckdenis/rl-baselines3-zoo
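
For reference, a minimal usage sketch of the new algorithm (assuming it is exported as sb3_contrib.BDPI, like the other contrib algorithms; everything beyond the policy and environment is left at its defaults, so this is not the PR's exact API surface):

```python
# Minimal usage sketch, assuming BDPI is exported from sb3_contrib like TQC/QR-DQN.
# Hyper-parameters beyond `policy` and `env` are left at their defaults here.
import gym

from sb3_contrib import BDPI

env = gym.make("LunarLander-v2")  # BDPI targets discrete action spaces
model = BDPI("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)

obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```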

Checklist:

  • I've read the CONTRIBUTION guide (required)
  • The functionality/performance matches that of the source (required for new training algorithms or training-related features).
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have included an example of using the feature (required for new features).
  • I have included baseline results: I'm not sure if my plot above counts as baseline results, or if there is something more specific to do?
  • I have updated the documentation accordingly.
  • I have updated the changelog accordingly: There is no alpha version in the changelog that I can easily update (only Release 1.1.0, which is finalized). I would need help creating a new alpha version after 1.1.0.
  • I have reformatted the code using make format (required)
  • I have checked the codestyle using make check-codestyle and make lint (required)
  • I have ensured make pytest and make type both pass. (required)

Large experience buffer sizes lead to warnings about memory allocations and, on the GitHub CI, to memory-allocation failures. So, small experience buffers are important.
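
For illustration, this is the kind of test-style configuration that statement refers to, with a deliberately tiny replay buffer (the buffer_size and learning_starts names are assumed to follow the other SB3 off-policy algorithms; the values are only chosen to keep CI memory usage low):

```python
# Sketch of a CI-friendly configuration: tiny replay buffer and a very short run.
# Parameter names are assumed to match the other SB3 off-policy algorithms.
from sb3_contrib import BDPI

model = BDPI("MlpPolicy", "CartPole-v1", buffer_size=250, learning_starts=100)
model.learn(total_timesteps=300)
```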


.. note::
  Non-array spaces such as ``Dict`` or ``Tuple`` are not currently supported by any algorithm.
Member
It seems that we forgot to update the contrib doc when adding support for Dict obs; the correct formulation should be the one from https://github.com/DLR-RM/stable-baselines3/blob/master/docs/guide/algos.rst

@@ -10,11 +10,12 @@ Name ``Box`` ``Discrete`` ``MultiDiscrete`` ``MultiBinary`` Multi Processing
============ =========== ============ ================= =============== ================
TQC          ✔️          ❌           ❌                ❌              ❌
QR-DQN       ❌          ✔️           ❌                ❌              ❌
BDPI         ❌          ✔️           ❌                ❌              ✔️
Member
it says "multiprocessing" but I only see tests with one environment in the code...
and you probably need DLR-RM/stable-baselines3#439 to make it work with multiple envs.

Author
I may have misunderstood what "Multiprocessing" means in this document, and what PPO in stable-baselines is doing.

BDPI distributes its training updates over several processes, even if only one environment is used. To me, this is multi-processing, comparable to PPO using MPI to distribute compute. But if PPO uses multiple environments to be able to do multiprocessing, then I understand that "Multiprocessing" in the documentation means "compatible with multiple envs", not just "fast because it uses several processes".

Should I add a note, or a second column to distinguish "multiple envs" from "multi-processing with one env"?
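
To illustrate the pattern described above (purely a toy sketch of the distribution idea, not BDPI's actual update rule and not the PR's code; all names and values are hypothetical):

```python
# Toy sketch: per-critic updates dispatched to a pool of worker processes,
# while a single environment is used for data collection elsewhere.
# This only illustrates the distribution pattern, not BDPI's update rule.
from concurrent.futures import ProcessPoolExecutor

import numpy as np


def train_critic(args):
    # Toy "update": move the critic's weight vector toward its batch mean.
    weights, batch = args
    return weights + 0.1 * (batch.mean(axis=0) - weights)


def update_all_critics(critic_weights, batches, n_processes=4):
    # Each critic gets its own mini-batch; updates run in separate processes.
    with ProcessPoolExecutor(max_workers=n_processes) as pool:
        return list(pool.map(train_critic, zip(critic_weights, batches)))


if __name__ == "__main__":
    critics = [np.zeros(8) for _ in range(16)]            # number of critics is illustrative
    batches = [np.random.randn(256, 8) for _ in critics]  # one sampled batch per critic
    critics = update_all_critics(critics, batches)
```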

Member
> To me, this is multi-processing, comparable to PPO that uses MPI to distribute compute.

PPO with MPI distributes env and training compute (and is currently not implemented in SB3).

> then I understand that "Multiprocessing" in the documentation means "compatible with multiple envs"

Yes, that's the meaning (because we can use SubProcEnv to distribute data collection).

> Should I add a note, or a second column to distinguish "multiple envs" from "multi-processing with one env"?

No, I think we already have enough columns in this table.
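
For context, this is what the "Multi Processing" column refers to in practice: distributing data collection over several environment processes with the standard SB3 vectorized-environment API (whether BDPI supports this is exactly the open question above):

```python
# Standard SB3 multiprocessing of data collection: several copies of the
# environment run in separate processes via SubprocVecEnv.
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.vec_env import SubprocVecEnv

if __name__ == "__main__":
    vec_env = make_vec_env("LunarLander-v2", n_envs=4, vec_env_cls=SubprocVecEnv)
    observations = vec_env.reset()  # one row of observations per worker environment
```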

LunarLander
^^^^^^^^^^^

Results for BDPI are available in `this Github issue <https://github.com/DLR-RM/stable-baselines3/issues/499>`_.
Member
As you are aiming for sample efficiency, I would prefer a comparison to DQN and QR-DQN (with tuned hyperparameters; results are already linked in the documentation: #13).

Regarding which envs to compare to, please do at least the classic control ones + 2 Atari games (Pong, Breakout) using the zoo, so we can compare the results with QR-DQN and DQN.

Member
I would also like a comparison of the trade-off between sample efficiency and training time (how much longer does it take to train?).


# Update the critic (code taken from DQN)
with th.no_grad():
    qvA = criticA(replay_data.next_observations)
Member
Please use meaningful variable names.
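
For illustration, a sketch of how that snippet might read with descriptive names (a paraphrase of the usual DQN-style target computation, not the PR's actual code; the critic, replay_data and gamma are assumed from the surrounding training method):

```python
# Sketch with descriptive names; a paraphrase of the usual DQN target, not the PR's code.
import torch as th

gamma = 0.99  # discount factor (assumed; taken from the algorithm's hyper-parameters)

with th.no_grad():
    # Q-values predicted by the critic for the next observations
    next_q_values = critic(replay_data.next_observations)
    # Greedy backup: value of the best action in the next state
    next_q_values, _ = next_q_values.max(dim=1, keepdim=True)
    # One-step TD target, zeroed at episode termination
    target_q_values = replay_data.rewards + (1 - replay_data.dones) * gamma * next_q_values
```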
