See the gripper in action, learning to rotate the valve by 90 degrees:
| Exploration Phase | During Training | Final Policy |
|---|---|---|
The repository was tested using Python 3.11.4 on a machine running Ubuntu 22.04.2 LTS. It relies on PyTorch with NVIDIA CUDA GPU support. Instructions for enabling CUDA in PyTorch can be found here.
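As a quick sanity check that PyTorch can see the GPU before training, something like the following can be run (this assumes `torch` is already installed):

```python
import torch

# Confirm that PyTorch was built with CUDA support and can see an NVIDIA GPU.
print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available:  {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU device:      {torch.cuda.get_device_name(0)}")
```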
A comprehensive list of dependencies is available in `requirements.txt`. Ensure that the hardware components for the gripper are turned on and connected to the machine.
- Clone the repository using `git clone`.
- Run `pip3 install -r requirements.txt` in the root directory of the package.
- Install the two CARES libraries as instructed in CARES Lib and the CARES Reinforcement Learning Package.
- Create a folder named `your_folder_name` to store config files. To get started, copy and paste the files in `scripts/config_examples` into the folder (a scripted version of this copy step is sketched below). For a guide on changing configs, see the wiki.
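The copy step above can also be scripted. A minimal sketch, assuming the repository root as the working directory and `your_folder_name` as a placeholder for your config folder:

```python
import shutil
from pathlib import Path

# Placeholder name for your local config folder (use any name you like).
config_dir = Path("your_folder_name")

# Copy the example configs shipped with the repository as a starting point.
shutil.copytree("scripts/config_examples", config_dir, dirs_exist_ok=True)
print(f"Copied example configs into {config_dir.resolve()}")
```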
Consult the repository's wiki for a guide on how to use the package.
The current setup uses Dynamixel XL-320 servo motors (4 for the Two-Finger and 9 for the Three-Finger Gripper), which are controlled via a U2D2. Below is an example of the wiring setup for the three-fingered gripper. This can easily be adapted to other configurations, with a maximum of 4 daisy chains coming from the U2D2.
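For reference, the XL-320 servos can be driven directly with the Dynamixel SDK, although the package itself communicates with them through CARES Lib. Below is a minimal sketch, not the package's own code, assuming the U2D2 enumerates as `/dev/ttyUSB0`, one servo with ID 1, and the XL-320 factory-default baud rate:

```python
from dynamixel_sdk import PortHandler, PacketHandler  # pip install dynamixel-sdk

# Assumed values: port name, servo ID and baud rate depend on your wiring and config.
DEVICE_NAME = "/dev/ttyUSB0"   # serial port of the U2D2
BAUD_RATE = 1_000_000          # XL-320 factory default
DXL_ID = 1                     # ID of one servo in the daisy chain

# XL-320 control table addresses (Protocol 2.0).
ADDR_TORQUE_ENABLE = 24
ADDR_GOAL_POSITION = 30
ADDR_PRESENT_POSITION = 37

port = PortHandler(DEVICE_NAME)
packet = PacketHandler(2.0)
port.openPort()
port.setBaudRate(BAUD_RATE)

# Enable torque and command the mid-range position (0-1023 over ~300 degrees).
packet.write1ByteTxRx(port, DXL_ID, ADDR_TORQUE_ENABLE, 1)
packet.write2ByteTxRx(port, DXL_ID, ADDR_GOAL_POSITION, 512)

# Read back the present position.
position, _, _ = packet.read2ByteTxRx(port, DXL_ID, ADDR_PRESENT_POSITION)
print(f"Servo {DXL_ID} present position: {position}")

port.closePort()
```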
A list of items required to build the grippers can be found in Grippers BOM.
3D printed parts for both grippers can be found in Two-Finger STL and Three-Finger STL.
You can specify the folder to save results in using the `local_results_path` argument; otherwise, it defaults to `{home_path}/gripper_training`. The folder containing the results is named according to the following convention:

`{date}_{robot_id}_{environment_type}_{observation_type}_{seed}_{algorithm}`
Results are stored in the standardised CARES RL format.
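As an illustration of the naming convention (not code from the package), a results folder name might be assembled like this; the field values below are placeholders only:

```python
from datetime import datetime

# Placeholder values purely for illustration; the package fills these from its own config.
date = datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
robot_id = 0
environment_type = "rotation"
observation_type = "servo"
seed = 10
algorithm = "TD3"

folder_name = f"{date}_{robot_id}_{environment_type}_{observation_type}_{seed}_{algorithm}"
print(folder_name)  # e.g. 2024_01_01_12_00_00_0_rotation_servo_10_TD3
```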