A commented and documented implementation of MuZero, based on the Google DeepMind paper and the associated pseudocode. It is designed to be easily adaptable to any game or reinforcement learning environment (such as Gym). You only need to edit the game file with the parameters and the game class; please refer to the documentation and the examples.
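Concretely, adapting a new game usually comes down to a single game file holding two pieces: a configuration object with the hyperparameters and a game class wrapping the environment. The following is only a minimal sketch of that idea, assuming a classic Gym-style CartPole; the names `MuZeroConfig`, `Game`, and the fields and method signatures shown are illustrative assumptions, so use the provided game files as the authoritative template.

```python
# Hypothetical sketch of a game file; the real files in games/ define a richer
# config and derive Game from the repository's abstract base class.
import gym


class MuZeroConfig:
    """Per-game hyperparameters (an illustrative subset, not the full config)."""

    def __init__(self):
        self.observation_shape = (1, 1, 4)  # shape of the observation fed to the network
        self.action_space = list(range(2))  # indices of all possible actions
        self.players = list(range(1))       # single-player game
        self.max_moves = 500                # cap on episode length during self-play


class Game:
    """Wraps the environment behind the interface the trainer expects (assumed)."""

    def __init__(self, seed=None):
        self.env = gym.make("CartPole-v1")

    def reset(self):
        # Return the initial observation of a new episode
        return self.env.reset()

    def step(self, action):
        # Apply the action; return (observation, reward, done)
        observation, reward, done, _ = self.env.step(action)
        return observation, reward, done

    def legal_actions(self):
        # In CartPole both actions are always legal
        return list(range(2))

    def render(self):
        self.env.render()
```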
MuZero is a model-based reinforcement learning algorithm and the successor to AlphaZero. It learns to master games without being told their rules: given only the set of available actions, it builds its own model of the game and learns to play from it. It is at least as efficient as similar algorithms such as AlphaZero, SimPLe, and World Models. See How it works.
Features:
- Fully connected network in PyTorch
- Multi-threaded with Ray
- CPU/GPU support
- TensorBoard real-time monitoring
- Single and multiplayer mode
- Commented and documented
- Easily adaptable for new games
- Examples of board and Gym games (see list below)
- Pretrained weights available
Further improvements:
- Human vs. MuZero tracking in TensorBoard
- Residual Network
- Atari games
- Reanalyze (see the appendix of the paper)
- Windows support (workaround by ihexx)
All performance metrics are tracked and displayed in real time in TensorBoard:
Testing Lunar Lander:
Games already implemented:
- Cartpole
- Lunar Lander
- Connect4
Installation:
```bash
cd muzero-general
pip install -r requirements.txt
```
To train a model, edit the end of muzero.py:
```python
muzero = MuZero("cartpole")
muzero.train()
```
Then run:
```bash
python muzero.py
```
To visualize the training results, run in a new terminal:
```bash
tensorboard --logdir ./
```
To test a trained model, edit the end of muzero.py:
```python
muzero = MuZero("cartpole")
muzero.load_model()
muzero.test()
```
Then run:
```bash
python muzero.py
```
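To evaluate the pretrained weights mentioned above rather than your own latest checkpoint, load_model can presumably be pointed at a saved model file. A minimal sketch, assuming load_model accepts a checkpoint path argument (the keyword name and file location below are illustrative; check muzero.py for the exact signature):

```python
muzero = MuZero("cartpole")
# checkpoint_path and the path below are assumptions; verify against
# the load_model signature in muzero.py
muzero.load_model(checkpoint_path="pretrained/cartpole/model.checkpoint")
muzero.test()
```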
Authors:
- Werner Duvaud
- Aurèle Hainaut
- Paul Lenoir