# Deep Tracking Control with Lite3

## Main Contribution

To summarize, this project combines a traditional MPC-based, terrain-aware foothold planner with deep reinforcement learning (DRL). The goal is to achieve robust control on extremely risky terrain such as stepping stones.

You can find the modifications in `legged_robot_dtc.py` and `legged_robot_config.py`.

## Foothold Planner

In this project, we adopt a method similar to TAMOLS and the Mini-Cheetah controller.

An estimated nominal foothold is first computed by the formula:

$$ r_i^{cmd} = p_{shoulder, i} + p_{symmetry} + p_{centrifugal} $$

where

$$ p_{shoulder,i} = p_k + R_z(\Psi_k)l_i $$

$$ p_{symmetry} = \frac{t_{stance}}{2}v + k(v - v^{cmd}) $$

The centrifugal term is omitted. $p_k$ is the body position at timestep $k$, $l_i$ is the shoulder position of the $i^{th}$ leg in the local frame, and $R_z(\Psi_k)$ is the yaw rotation matrix that transforms vectors from the body frame into the global frame. $t_{stance}$ is the stance duration and $k=0.03$ is the feedback gain.
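A minimal sketch of this nominal foothold computation; the function names, array shapes, and the default stance duration are illustrative assumptions, not the project's actual API:

```python
import numpy as np

def yaw_rotation_matrix(psi: float) -> np.ndarray:
    """R_z(psi): rotate a vector from the body frame into the global frame."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def nominal_foothold(p_body, psi, l_shoulder, v, v_cmd,
                     t_stance=0.3, k_fb=0.03):
    """r_i^cmd = p_shoulder,i + p_symmetry (centrifugal term omitted).

    p_body:     body position p_k at the current timestep
    l_shoulder: shoulder offset l_i of leg i in the local frame
    v, v_cmd:   measured and commanded base velocity
    """
    p_shoulder = p_body + yaw_rotation_matrix(psi) @ l_shoulder
    p_symmetry = 0.5 * t_stance * v + k_fb * (v - v_cmd)
    return p_shoulder + p_symmetry
```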

However, we choose the final foothold solely from a quantitative score over several criteria (distance to the current foot position, terrain variance/gradient, support area, etc.) rather than by solving an optimization problem.
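A rough sketch of such score-based selection over candidate cells around the nominal foothold; the weights and feature terms below are placeholders covering only a subset of the criteria listed above:

```python
import numpy as np

def select_foothold(candidates, heights, current_foot_pos,
                    w_dist=1.0, w_var=2.0, w_grad=2.0):
    """Pick the candidate cell with the lowest weighted cost.

    candidates: (N, 3) candidate foothold positions near the nominal foothold
    heights:    (N, K) terrain heights sampled in a small patch around
                each candidate, used for the variance/gradient terms
    """
    dist_cost = np.linalg.norm(candidates - current_foot_pos, axis=1)
    var_cost = heights.var(axis=1)                            # local roughness
    grad_cost = np.abs(np.diff(heights, axis=1)).max(axis=1)  # local slope proxy
    score = w_dist * dist_cost + w_var * var_cost + w_grad * grad_cost
    return candidates[np.argmin(score)]
```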

## DRL

We use the Isaac Gym framework with the PPO algorithm, with the following features added:

- Removed the teacher-student framework
- Added a GRU and a CE-net as the terrain encoder; the latent dimension was increased from 64 to 512 (see the sketch after this list)
- TODO: symmetric data augmentation
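A minimal PyTorch sketch of a GRU-based terrain encoder with a 512-dimensional latent, as mentioned above; the scan dimension and layer layout are assumptions, not the project's exact architecture:

```python
import torch
import torch.nn as nn

class TerrainEncoder(nn.Module):
    """Encode a history of heightmap scans into a 512-d latent via a GRU."""

    def __init__(self, scan_dim=187, latent_dim=512):
        super().__init__()
        self.gru = nn.GRU(input_size=scan_dim, hidden_size=latent_dim,
                          batch_first=True)

    def forward(self, scan_history, hidden=None):
        # scan_history: (batch, time, scan_dim)
        out, hidden = self.gru(scan_history, hidden)
        latent = out[:, -1]  # latent corresponding to the most recent step
        return latent, hidden
```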

To integrate the foothold into the DRL pipeline, the position of the planned foothold relative to the foot is fed as an observation to both the actor and critic networks. In addition, a sparse reward term was added that is triggered only at touch-down.
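A hedged sketch of how the foothold could enter the observation and reward, in the style of legged_gym-like per-environment tensors; the function names, buffers, and the scale `sigma` are illustrative, not the actual code in `legged_robot_dtc.py`:

```python
import torch

def foothold_observation(foot_pos, planned_foothold):
    """Relative position of each foot to its planned foothold.

    foot_pos, planned_foothold: (num_envs, num_feet, 3)
    The flattened result is appended to both actor and critic observations.
    """
    rel = planned_foothold - foot_pos
    return rel.reshape(rel.shape[0], -1)

def reward_foothold_tracking(foot_pos, planned_foothold, touchdown_mask,
                             sigma=0.05):
    """Sparse reward: only evaluated for feet that touch down this step."""
    dist = torch.norm(foot_pos - planned_foothold, dim=-1)  # (num_envs, num_feet)
    per_foot = torch.exp(-dist ** 2 / sigma) * touchdown_mask.float()
    return per_foot.sum(dim=-1)
```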

Estimated training time is 10 hours.

## Setup

```bash
pip install -e rsl_rl
pip install -e .
```

## Reference

  1. DTC: Deep Tracking Control