This repository contains the ROS implementation of the GP-MPPI control strategy, an online learning-based control strategy that integrates the state-of-the-art sampling-based Model Predictive Path Integral (MPPI) controller with a local perception model based on Sparse Gaussian Process (SGP).
The key idea is to leverage the learning capability of SGP to construct a variance (uncertainty) surface, enabling the robot to learn about the navigable space surrounding it, identify a set of suggested subgoals, and ultimately recommend the optimal subgoal that minimizes a predefined cost function for the local MPPI planner. MPPI then computes the optimal control sequence that satisfies the robot and collision avoidance constraints. This approach eliminates the need for a global map of the environment or an offline training process.
If you find this code useful in your research, please cite:
Ihab S. Mohamed, Mahmoud Ali, and Lantao Liu. "GP-guided MPPI for Efficient Navigation in Complex Unknown Cluttered Environments." IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023.
Paper: https://arxiv.org/abs/2306.12369
Video: https://youtu.be/et9t8X1wHKI
Bibtex:
@inproceedings{mohamed2023gp,
title={GP-guided MPPI for Efficient Navigation in Complex Unknown Cluttered Environments},
author={Mohamed, Ihab S and Ali, Mahmoud and Liu, Lantao},
booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year={2023}
}
Please refer to the provided guidelines in our log-MPPI repository for detailed instructions.
The `jackal_ros` package contains the common packages for Jackal, including the Gazebo simulator with various world definitions and their ROS launch files, the robot description, messages, and controllers.
The `gp_subgoal` package utilizes the SGP occupancy model to learn the navigable space around the robot, identify a set of suggested subgoals, and recommend the optimal subgoal to the MPPI local planner by minimizing a predefined cost function.
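As a rough sketch of this idea (the cost terms, weights, and candidate format below are illustrative assumptions, not the package's actual implementation), subgoal recommendation can be viewed as picking, among candidate points on the SGP variance surface, the one minimizing a cost that trades off progress toward the global goal against uncertainty:

```python
import math

# Hypothetical sketch of the GP-subgoal recommendation step.
# The cost function and weights are illustrative assumptions only.

def subgoal_cost(subgoal, goal, variance, w_dist=1.0, w_var=2.0):
    """Trade off progress toward the global goal against SGP uncertainty."""
    dist = math.hypot(goal[0] - subgoal[0], goal[1] - subgoal[1])
    return w_dist * dist + w_var * variance

def recommend_subgoal(candidates, goal):
    """Return the candidate (x, y, variance) minimizing the cost."""
    return min(candidates, key=lambda c: subgoal_cost((c[0], c[1]), goal, c[2]))

# Each candidate: (x, y, SGP variance at that point)
candidates = [(1.0, 0.0, 0.9), (0.8, 0.6, 0.1), (0.0, 1.0, 0.5)]
goal = (5.0, 0.0)
print(recommend_subgoal(candidates, goal))  # low-variance candidate wins
```

The recommended point would then be passed to the MPPI planner as its local target.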
The `mppi_control` node encompasses the implementation of the vanilla MPPI, log-MPPI, and GP-MPPI control strategies. It subscribes to: (i) the 2D local costmap generated by the robot's onboard sensor for achieving collision-free navigation, and (ii) the recommended GP-subgoal published by the `gp_subgoal` node. It then publishes the control commands, specifically the linear and angular velocities of the robot.
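For intuition, the core path-integral update shared by these MPPI variants weights sampled control perturbations by the exponentiated negative rollout cost. The following single-input sketch uses a toy cost and hypothetical temperature and noise parameters; it is not the repository's implementation:

```python
import math
import random

# Minimal sketch of the MPPI path-integral update (illustrative only).
# Each rollout perturbs the nominal control, accumulates a cost, and the
# nominal control is shifted by the cost-weighted average perturbation.

def mppi_update(u_nominal, rollout_cost, n_samples=100, sigma=0.5, lam=1.0):
    random.seed(0)
    noises = [random.gauss(0.0, sigma) for _ in range(n_samples)]
    costs = [rollout_cost(u_nominal + eps) for eps in noises]
    beta = min(costs)  # subtract the minimum cost for numerical stability
    weights = [math.exp(-(c - beta) / lam) for c in costs]
    total = sum(weights)
    return u_nominal + sum(w * eps for w, eps in zip(weights, noises)) / total

# Toy cost: prefer a linear velocity near 1.0 m/s
updated = mppi_update(0.0, rollout_cost=lambda u: (u - 1.0) ** 2)
print(updated)
```

The update pulls the nominal control toward low-cost samples; in the real controller this is done over a whole control sequence subject to the robot and collision-avoidance constraints.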
URDF description and Gazebo plugins to simulate Velodyne laser scanners. To enhance the real-time performance of the GP-subgoal recommender node (`gp_subgoal`), we reduced the sample count in `VLP-16.urdf.xacro` from the original 1875 to 500 (namely, `samples:=500`).
- To initiate the Gazebo simulation with Jackal in the provided cluttered environment, execute:

```bash
roslaunch jackal_gazebo world_stage.launch env_name:=maze1
```
`maze1` is a maze-like environment spanning 20 meters by 20 meters, featuring three U-shaped rooms (U1, U2, and U3) along with several additional obstacles. `forest1` is a forest-like environment measuring 50 meters by 50 meters, with a tree density of 0.2 trees per square meter. To create your own forest-like environment, please utilize the forest_gen package.
- To start the GP-subgoal recommender policy, run:

```bash
roslaunch gp_subgoal gp_subgoal_sim.launch
```
- Note that this package is only activated when GP-MPPI is executed, i.e., when the `gp_mppi` argument in the following command is set to `true`.
- To start the MPPI, log-MPPI, or GP-MPPI control strategies, along with the 2D costmap node and the visualization of the system, run:
- For MPPI, run:

```bash
roslaunch mppi_control control_stage.launch normal_dist:=true gp_mppi:=false
```
- For log-MPPI, run:

```bash
roslaunch mppi_control control_stage.launch normal_dist:=false gp_mppi:=false
```
- To execute GP-MPPI with the sample mode (SM), run:

```bash
roslaunch mppi_control control_stage.launch gp_mppi:=true recovery_mode:=false
```
To enable the recovery mode (RM), set `recovery_mode` to `true`.
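The `normal_dist` flag reflects how the control perturbations are sampled: vanilla MPPI draws them from a zero-mean Gaussian, whereas log-MPPI draws them from a normal log-normal (NLN) mixture, giving heavier-tailed exploration. A minimal sketch, with distribution parameters chosen purely for illustration:

```python
import random

# Illustrative sketch of the two sampling policies selected by normal_dist.
# Parameter values are assumptions for demonstration, not the repository's.

def sample_perturbation(normal_dist=True, sigma=0.5, mu_ln=0.0, sigma_ln=0.25):
    if normal_dist:
        # Vanilla MPPI: zero-mean Gaussian noise
        return random.gauss(0.0, sigma)
    # log-MPPI: product of normal and log-normal variables (NLN mixture)
    return random.gauss(0.0, sigma) * random.lognormvariate(mu_ln, sigma_ln)

random.seed(0)
gaussian_noise = [sample_perturbation(True) for _ in range(1000)]
nln_noise = [sample_perturbation(False) for _ in range(1000)]
print(max(map(abs, gaussian_noise)), max(map(abs, nln_noise)))
```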
Ihab S. Mohamed and Mahmoud Ali (e-mail: {mohamedi, alimaa}@iu.edu)
Vehicle Autonomy and Intelligence Lab (VAIL)
Indiana University - Bloomington, USA