This repository implements an end-to-end reinforcement learning (RL) framework for singulating and picking objects one by one from random clutter. We present a novel way to incorporate object interaction into policy learning, together with a gripper designed for this technique that can change its relative digit lengths.
- Universal Robots UR10
- Robotiq 140mm Adaptive parallel-jaw gripper
- RealSense Camera L515
- Extendable finger for realizing the adjustable finger length. The CAD model can be found here.
The code is built with Python 3.6. Dependencies are listed in [requirements.yaml] and can be installed via Anaconda by running:
conda env create -n learn_interaction -f requirements.yaml
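Then activate the environment before running any of the scripts below:

conda activate learn_interaction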
If you want to train your own model, run:

python main.py --play-only=False
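Note that `--play-only` takes a textual boolean, and plain `type=bool` in argparse would treat any non-empty string as true. If you adapt the entry point yourself, a small converter is the usual pattern; the sketch below is our illustration, not necessarily how main.py defines the flag:

```python
import argparse

def str2bool(value):
    """Convert common textual booleans ("True", "false", "1", ...) to bool."""
    if isinstance(value, bool):
        return value
    if value.lower() in ("true", "t", "yes", "1"):
        return True
    if value.lower() in ("false", "f", "no", "0"):
        return False
    raise argparse.ArgumentTypeError(f"expected a boolean, got {value!r}")

parser = argparse.ArgumentParser()
# --play-only=False runs training; --play-only=True only evaluates.
parser.add_argument("--play-only", type=str2bool, default=True)
args = parser.parse_args()
print("play-only mode:", args.play_only)
```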
We provide a testing script to evaluate our trained model in simulation. The following command runs the test on three trained objects and reports the average grasp success rate:

python main.py --play-only=True
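The reported metric is simply successful grasps divided by attempts for each object. As an illustration only, a tally might look like this (the object names, trial count, and `run_episode` stub below are hypothetical):

```python
import random

NUM_TRIALS = 50  # assumed number of evaluation episodes per object

def run_episode(object_name):
    # Stand-in for one simulated grasp attempt; the real script rolls out
    # the trained policy and returns True on a successful grasp.
    return random.random() < 0.8

for obj in ["object_a", "object_b", "object_c"]:  # three trained objects
    successes = sum(run_episode(obj) for _ in range(NUM_TRIALS))
    print(f"{obj}: grasp success rate = {successes / NUM_TRIALS:.2%}")
```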
Here we provide the steps to test our method on a real robot.
Robot control
The robot is controlled via this Python software.
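For reference, a minimal connectivity check assuming the linked package exposes a urx-style interface (the library choice and robot IP below are our assumptions; adapt them to your setup):

```python
import urx

# Hypothetical robot IP on the local network; replace with your UR10's address.
robot = urx.Robot("192.168.1.100")
try:
    print("current joint angles:", robot.getj())
    # Small, slow joint-space move to the current pose as a sanity check.
    robot.movej(robot.getj(), acc=0.1, vel=0.1)
finally:
    robot.close()
```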
Camera setup
To deploy the RealSense L515 camera:
- Download and install the librealsense SDK 2.0
- Our camera settings can be found in
real/640X480_L_short_default.json
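If you access the camera from Python, a minimal capture loop with the pyrealsense2 wrapper might look as follows; this is a sketch only, with the 640x480 streams mirroring the settings file name, and loading the JSON preset itself is device-specific and omitted here:

```python
import pyrealsense2 as rs

# Configure depth and color streams at 640x480, matching the settings file.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()
    print("depth frame:", depth.get_width(), "x", depth.get_height())
finally:
    pipeline.stop()
```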
Start testing
Then run the following commands to start testing:
cd real
python test_in_real.py
For any technical issues, please contact: Chao Zhao ([email protected]).