J. Hyeon Park, Wonhyuk Choi, Sunpyo Hong, Hoseong Seo, Joonmo Ahn, Changsu Ha, Heungwoo Han, Junghyun Kwon, and Sungchul Kang, "Hierarchical Action Chunk Transformer: Learning Temporal Multimodality from Demonstrations with Fast Imitation Behavior", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024.
- Jaehyeon Park ([email protected])
- Wonhyuk Choi ([email protected])
conda create -n robot_action_learner python=3.8
conda activate robot_action_learner
pip install -r requirements.txt
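Optionally, you can sanity-check the environment before training. The snippet below is an assumed quick check (not part of the released scripts) that verifies PyTorch imports and that a GPU is visible for the `--gpu` flag used later:

```python
# Assumed sanity check, not part of the repository:
# confirms PyTorch is installed and CUDA devices are visible.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```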
python ./train.py \
--exp_dir ./experiment \
--data_root ./data_samples \
--data_config ./configs/dataset/srrc_dual_frankas/stack_cups.gin \
--model_config ./configs/model/hact_vq/srrc_dual_frankas/base.gin \
--task_config ./configs/task/hact_vq/srrc_dual_frankas/base_local.gin \
--gpu 0
`model.pt` will be created in `exp_dir`.
python ./serve.py --exp_dir ./experiment
`agent.pt` will be created in `exp_dir`.
import torch

# Load the exported TorchScript agent and run it in a Gym-style environment (`env`).
agent = torch.jit.load("./experiment/agent.pt")
agent = agent.eval()

obs = env.reset()
done, t = False, 0
while not done:
    action = agent.forward(obs, timestep=t)
    obs, reward, done, truncated, info = env.step(action)
    t += 1
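For multi-episode evaluation, a minimal wrapper around the loop above might look like the sketch below. It assumes the same Gym-style `env` object and the exported `./experiment/agent.pt`; the `rollout` helper and the episode count are illustrative and not part of the released code.

```python
import torch

def rollout(agent, env, max_steps=1000):
    """Run one episode with the TorchScript agent and return the episode return."""
    obs = env.reset()
    total_reward, done, t = 0.0, False, 0
    while not done and t < max_steps:
        action = agent.forward(obs, timestep=t)
        obs, reward, done, truncated, info = env.step(action)
        total_reward += reward
        t += 1
    return total_reward

agent = torch.jit.load("./experiment/agent.pt").eval()
returns = [rollout(agent, env) for _ in range(10)]  # env: your robot or simulation environment
print("mean return over 10 episodes:", sum(returns) / len(returns))
```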
- We will release the simulation environment (Viper-X) and datasets soon.
This code is implemented on top of ACT; several components are adopted or modified from ACT.
All members of our remarkable robotics team: Joonmo Ahn, Rakjoon Chung, Changsu Ha, Heungwoo Han, Sunpyo Hong, Jaesik Jang, Rijeong Kang, Hosang Lee, Dongwoo Park, Hoseong Seo, Jaemin Yoon (in alphabetical order).