NOTE: I am currently busy with other tasks, so this project is on hiatus. Development will resume soon.
rl is a fully Rust-native reinforcement learning library whose goal is to provide a unified RL development experience, doing for RL what libraries like PyTorch did for deep learning. By leveraging Rust's powerful type system and the burn deep learning library, rl lets users reuse production-ready state-of-the-art (SoTA) algorithms with arbitrary environments, state spaces, and action spaces.
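As a rough illustration of what "arbitrary environments, state spaces, and action spaces" can look like in Rust, the sketch below defines a minimal environment trait with associated types and a toy implementation. The trait, method, and type names here (`Environment`, `reset`, `step`, `Corridor`) are hypothetical and are not taken from rl's actual API.

```rust
/// Hypothetical environment trait: state and action types are associated
/// types, so an agent written against this trait works with any environment.
/// (Illustrative only; rl's real interfaces may differ.)
trait Environment {
    type State;
    type Action;

    /// Reset the environment and return the initial state.
    fn reset(&mut self) -> Self::State;

    /// Apply an action; return the next state, the reward, and whether the
    /// episode has terminated.
    fn step(&mut self, action: &Self::Action) -> (Self::State, f32, bool);
}

/// Toy corridor environment: the agent starts at position 0 and must reach
/// position 10 by moving left or right.
struct Corridor {
    position: i32,
}

enum Move {
    Left,
    Right,
}

impl Environment for Corridor {
    type State = i32;
    type Action = Move;

    fn reset(&mut self) -> i32 {
        self.position = 0;
        self.position
    }

    fn step(&mut self, action: &Move) -> (i32, f32, bool) {
        self.position += match action {
            Move::Left => -1,
            Move::Right => 1,
        };
        let done = self.position >= 10;
        let reward = if done { 1.0 } else { -0.01 };
        (self.position, reward, done)
    }
}

fn main() {
    let mut env = Corridor { position: 0 };
    let mut state = env.reset();
    let mut done = false;
    while !done {
        // A real agent would sample actions from a learned policy; here we
        // always move right just to exercise the interface.
        let (next_state, reward, finished) = env.step(&Move::Right);
        println!("state {state} -> {next_state}, reward {reward}");
        state = next_state;
        done = finished;
    }
    println!("episode finished at state {state}");
}
```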
This project also aims to provide a clean platform for experimenting with new RL algorithms. By combining burn's deep learning features with the sub-algorithms and components that rl provides, users can create, test, and benchmark their own experimental agents without starting from scratch.
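Continuing the hypothetical sketch above, an experimental agent could be written against the same generic interface, so a single episode loop can benchmark any agent/environment pair. Again, `Agent`, `act`, `observe`, and `run_episode` are illustrative names under the assumptions of the previous sketch, not rl's real API.

```rust
/// Hypothetical agent trait, generic over the environment it interacts with.
/// (Illustrative only; rl's real interfaces may differ.)
trait Agent<E: Environment> {
    /// Choose an action for the current state.
    fn act(&mut self, state: &E::State) -> E::Action;

    /// Update the agent from a single transition.
    fn observe(
        &mut self,
        state: &E::State,
        action: &E::Action,
        reward: f32,
        next_state: &E::State,
        done: bool,
    );
}

/// Run one episode of interaction; works with any (agent, environment) pair
/// that implements the traits above, and returns the total reward collected.
fn run_episode<E: Environment, A: Agent<E>>(env: &mut E, agent: &mut A) -> f32 {
    let mut state = env.reset();
    let mut total_reward = 0.0;
    loop {
        let action = agent.act(&state);
        let (next_state, reward, done) = env.step(&action);
        agent.observe(&state, &action, reward, &next_state, done);
        total_reward += reward;
        state = next_state;
        if done {
            return total_reward;
        }
    }
}
```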
Currently, rl is in its early stages. Contributors are more than welcome! The project's goals include:
- High-performance, production-ready implementations of all SoTA RL algorithms
- Detailed logging and training visualization TUI (see image below)
- Maximum extensibility for creating and testing new experimental algorithms
- Gym environments
- A comfortable learning experience for those new to RL
- General RL peripherals and utility functions