README last updated on: 02/19/2018
Reinforcement learning framework and algorithms implemented in PyTorch.
Some implemented algorithms:
- Temporal Difference Models (TDMs)
- Deep Deterministic Policy Gradient (DDPG)
- (Double) Deep Q-Network (DQN)
- Soft Actor Critic (SAC)
- Twin Delayed Deep Deterministic Policy Gradient (TD3)
To get started, check out the example scripts in the `examples/` directory.
Install and use the included Anaconda environment:
```
$ conda env create -f docker/rlkit/rlkit-env.yml
$ source activate rlkit
(rlkit) $ python examples/ddpg.py
```
There is also a GPU version in `docker/rlkit_gpu`:
```
$ conda env create -f docker/rlkit_gpu/rlkit-env.yml
$ source activate rlkit-gpu
(rlkit-gpu) $ python examples/ddpg.py
```
NOTE: these Anaconda environments use MuJoCo 1.5 and gym 0.10.5, unlike previous versions.
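As a quick sanity check that these versions resolved correctly, you can run a short script like the one below. This snippet is ours, not part of rlkit; any MuJoCo task registered in gym 0.10.5, such as `HalfCheetah-v2`, will do.

```python
# Sanity check (not part of rlkit): confirm gym, mujoco-py, and PyTorch
# all import and run inside the activated conda environment.
import gym
import torch

env = gym.make('HalfCheetah-v2')  # any MuJoCo env exercises mujoco-py
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
print('obs shape:', obs.shape)
print('CUDA available:', torch.cuda.is_available())
```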
For an even more portable solution, try using the Docker image provided in `docker/rlkit_gpu`. The Anaconda environment should be enough, but this Docker image addresses some of the rendering issues that may arise when using MuJoCo 1.5 with GPUs.
To use the GPU docker image, you will need a GPU and nvidia-docker installed.
Note that you'll need to get your own MuJoCo key if you want to use MuJoCo.
During training, the results will be saved to a folder under

```
LOCAL_LOG_DIR/<exp_prefix>/<foldername>
```

- `LOCAL_LOG_DIR` is the directory set by `rlkit.launchers.config.LOCAL_LOG_DIR`. The default name is 'output'.
- `<exp_prefix>` is given to `setup_logger`.
- `<foldername>` is auto-generated based off of `exp_prefix`.
- Inside this folder, you should see a file called `params.pkl`. To visualize a policy, run
```
(rlkit) $ python scripts/sim_policy.py LOCAL_LOG_DIR/<exp_prefix>/<foldername>/params.pkl
```
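If you want to load a snapshot yourself rather than going through the script, the rough idea is sketched below. The snapshot keys (`'policy'`, `'env'`), the use of joblib, and the `get_action` interface are assumptions about how the snapshot is laid out; check `scripts/sim_policy.py` in your checkout for the authoritative version.

```python
# A minimal sketch of loading and rolling out a saved policy.
# ASSUMPTION: params.pkl is a joblib-pickled dict with 'policy' and 'env'
# entries; verify against scripts/sim_policy.py before relying on this.
import joblib

data = joblib.load('LOCAL_LOG_DIR/<exp_prefix>/<foldername>/params.pkl')
policy = data['policy']
env = data['env']

obs = env.reset()
for _ in range(1000):
    action, _ = policy.get_action(obs)  # assumed policy interface
    obs, reward, done, _ = env.step(action)
    env.render()
    if done:
        obs = env.reset()
```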
If you have rllab installed, you can also visualize the results using rllab's viskit, described at the bottom of the rllab documentation. tl;dr, run

```
python rllab/viskit/frontend.py LOCAL_LOG_DIR/<exp_prefix>/
```
Alternatively, if you don't want to clone all of rllab, there is a standalone repository containing only viskit. Then you can similarly visualize results with

```
python viskit/viskit/frontend.py LOCAL_LOG_DIR/<exp_prefix>/
```
To visualize a TDM policy, run
```
(rlkit) $ python scripts/sim_tdm_policy.py LOCAL_LOG_DIR/<exp_prefix>/<foldername>/params.pkl
```
The SAC implementation provided here only uses a Gaussian policy, rather than the Gaussian mixture model described in the original SAC paper.
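For intuition, a single-Gaussian SAC policy head looks roughly like the sketch below. This is an illustrative PyTorch snippet of ours, not rlkit's actual class: it samples with the reparameterization trick, squashes the action through tanh, and applies the change-of-variables correction to the log-probability.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

class TanhGaussianPolicySketch(nn.Module):
    """Illustrative single-Gaussian (non-mixture) policy head for SAC."""

    def __init__(self, obs_dim, action_dim, hidden_dim=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden_dim, action_dim)
        self.log_std = nn.Linear(hidden_dim, action_dim)

    def forward(self, obs):
        h = self.trunk(obs)
        dist = Normal(self.mean(h), self.log_std(h).clamp(-20, 2).exp())
        pre_tanh = dist.rsample()      # reparameterization trick
        action = torch.tanh(pre_tanh)  # squash actions into [-1, 1]
        # Change-of-variables correction for the tanh squashing.
        log_prob = dist.log_prob(pre_tanh) - torch.log(1 - action.pow(2) + 1e-6)
        return action, log_prob.sum(dim=-1)
```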
A lot of the coding infrastructure is based on rllab. The serialization and logger code are basically a carbon copy of the rllab versions.
The Dockerfile is based on the OpenAI mujoco-py Dockerfile.