This repository has been archived by the owner on Jun 17, 2023. It is now read-only.
I got data from one of the depth cameras on Tahoma and was able to test both the fully manual and the more automated grasping algorithms. They needed a bit of tuning, but most of it was resizing and downsampling the point cloud into the expected formats; the algorithms themselves worked pretty well without many changes.
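The actual preprocessing code isn't shown here, but the downsampling step it describes is typically a voxel-grid filter. A minimal NumPy sketch, assuming an (N, 3) array of points and a hypothetical `voxel_downsample` helper (the 0.05 m voxel size is an illustrative choice, not the tuned value):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Downsample an (N, 3) point cloud by averaging the points in each voxel."""
    # Map each point to an integer voxel index
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Find unique voxels and which voxel each point falls into
    _, inverse, counts = np.unique(voxel_idx, axis=0,
                                   return_inverse=True, return_counts=True)
    # Sum the points in each voxel, then divide by the count to get centroids
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Example: reduce a dense random cloud with 0.05 m voxels
cloud = np.random.rand(5000, 3)
down = voxel_downsample(cloud, 0.05)
```

Libraries like Open3D provide the same operation (`voxel_down_sample`), which is likely preferable on real sensor data.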
Blue is the fully manual algorithm (grey dots are where the user clicked)
Red is the automatic grasping algorithm (the user clicks on the center of the object and specifies a direction)
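The issue doesn't include the algorithm internals, but the automatic variant's input (a clicked object center plus a direction) naturally maps to a grasp pose whose approach axis is that direction. A hedged sketch of building such a pose, where `grasp_frame` and the axis convention (z = approach) are assumptions for illustration:

```python
import numpy as np

def grasp_frame(center, direction):
    """Build a 4x4 homogeneous pose whose z-axis is the approach direction."""
    z = np.asarray(direction, dtype=float)
    z = z / np.linalg.norm(z)
    # Seed with any vector not parallel to z, then orthonormalize
    seed = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(seed, z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x, y, z
    T[:3, 3] = center
    return T

# Example: approach an object at (0.4, 0.0, 0.2) straight down
pose = grasp_frame([0.4, 0.0, 0.2], [0.0, 0.0, -1.0])
```

The remaining rotation about the approach axis (gripper roll) is left free here; a real planner would pick it from the local point-cloud geometry.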
Even without running the code on the real robot, testing & tuning on example grasps would be extremely useful.
@mayacakmak mentioned that we want to focus specifically on bagged and deformable objects.