Commit c8fce4c

ROS1 fixes for no output for RGBD hand gesture recognition (opendr-eu#343)
* Change from TimeSynchronizer to ApproximateTimeSynchronizer
* Update rgbd-hand-gesture-learner.md: add information about gesture class and learner output ID correspondence
* Upload hand gesture examples
* Delete hand_gesture_example.png
* Upload hand gesture image with modified image name
* Update rgbd-hand-gesture-learner.md: add hand gesture example image

Co-authored-by: Kostas Tsampazis <[email protected]>
1 parent a427fd2 commit c8fce4c

3 files changed: +25 -3 lines changed

3 files changed

+25
-3
lines changed
Loading

docs/reference/rgbd-hand-gesture-learner.md

+14
@@ -2,6 +2,20 @@
 
 The *rgbd_hand_gesture_learner* module contains the *RgbdHandGestureLearner* class, which inherits from the abstract class *Learner*.
 
+In the table below you can find the gesture classes and their corresponding IDs:
+
+| **ID** | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
+|:------:|:------:|:-----:|:----:|:----:|:--------------:|:--------------:|:----:|:---:|:-----:|:-----:|:---:|:----:|:-----:|:-------:|:---:|:-----:|
+| Class | COLLAB | Eight | Five | Four | Horiz HBL, HFR | Horiz HFL, HBR | Nine | One | Punch | Seven | Six | Span | Three | TimeOut | Two | XSign |
+
+The naming convention of the gesture classes is as follows:
+- V is used for vertical gestures, while H is used for horizontal gestures.
+- F identifies the version of the gesture where the front of the hand is facing the camera, while B identifies the version where the back of the hand is facing the camera.
+- R is used for right-hand gestures, while L is used for left-hand gestures.
+
+Below is an illustration of the hand gestures; the image is copied from [[1]](#dataset).
+![Hand gesture examples](images/hand_gesture_examples.png)
+
 ### Class RgbdHandGestureLearner
 Bases: `opendr.engine.learners.Learner`
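As an aside (not part of the diff above), a minimal sketch of how the ID/class table added to the documentation could be used in code; the `GESTURE_CLASSES` list and the `gesture_name` helper are hypothetical and simply transcribe the table:

```python
# Hypothetical helper: transcribes the ID/class table from the documentation so
# that the learner's predicted class ID can be turned into a readable gesture name.
GESTURE_CLASSES = [
    "COLLAB", "Eight", "Five", "Four", "Horiz HBL, HFR", "Horiz HFL, HBR",
    "Nine", "One", "Punch", "Seven", "Six", "Span", "Three", "TimeOut",
    "Two", "XSign",
]


def gesture_name(class_id):
    """Return the gesture name for a predicted class ID in the range 0-15."""
    return GESTURE_CLASSES[class_id]
```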

projects/opendr_ws/src/perception/scripts/rgbd_hand_gesture_recognition.py

+11 -3
@@ -34,7 +34,7 @@ class RgbdHandGestureNode:
 
     def __init__(self, input_rgb_image_topic="/kinect2/qhd/image_color_rect",
                  input_depth_image_topic="/kinect2/qhd/image_depth_rect",
-                 output_gestures_topic="/opendr/gestures", device="cuda"):
+                 output_gestures_topic="/opendr/gestures", device="cuda", delay=0.1):
         """
         Creates a ROS Node for gesture recognition from RGBD. Assuming that the following drivers have been installed:
         https://github.com/OpenKinect/libfreenect2 and https://github.com/code-iai/iai_kinect2.
@@ -46,10 +46,13 @@ def __init__(self, input_rgb_image_topic="/kinect2/qhd/image_color_rect",
         :type output_gestures_topic: str
         :param device: device on which we are running inference ('cpu' or 'cuda')
         :type device: str
+        :param delay: the delay (in seconds) with which the RGB and depth messages can be synchronized
+        :type delay: float
         """
 
         self.input_rgb_image_topic = input_rgb_image_topic
         self.input_depth_image_topic = input_depth_image_topic
+        self.delay = delay
 
         self.gesture_publisher = rospy.Publisher(output_gestures_topic, Classification2D, queue_size=10)
 
@@ -75,7 +78,8 @@ def listen(self):
         image_sub = message_filters.Subscriber(self.input_rgb_image_topic, ROS_Image, queue_size=1, buff_size=10000000)
         depth_sub = message_filters.Subscriber(self.input_depth_image_topic, ROS_Image, queue_size=1, buff_size=10000000)
         # synchronize image and depth data topics
-        ts = message_filters.TimeSynchronizer([image_sub, depth_sub], 10)
+        ts = message_filters.ApproximateTimeSynchronizer([image_sub, depth_sub], queue_size=10, slop=self.delay,
+                                                         allow_headerless=True)
         ts.registerCallback(self.callback)
 
         rospy.loginfo("RGBD gesture recognition node started!")
@@ -137,6 +141,8 @@ def preprocess(self, rgb_image, depth_image):
                         type=str, default="/opendr/gestures")
     parser.add_argument("--device", help="Device to use (cpu, cuda)", type=str, default="cuda",
                         choices=["cuda", "cpu"])
+    parser.add_argument("--delay", help="The delay (in seconds) with which RGB message and "
+                                        "depth message can be synchronized", type=float, default=0.1)
 
     args = parser.parse_args()
 
@@ -156,5 +162,7 @@ def preprocess(self, rgb_image, depth_image):
 
     gesture_node = RgbdHandGestureNode(input_rgb_image_topic=args.input_rgb_image_topic,
                                        input_depth_image_topic=args.input_depth_image_topic,
-                                       output_gestures_topic=args.output_gestures_topic, device=device)
+                                       output_gestures_topic=args.output_gestures_topic, device=device,
+                                       delay=args.delay)
+
     gesture_node.listen()
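For context, a self-contained sketch of the synchronization pattern this fix relies on: unlike `TimeSynchronizer`, which only fires when the RGB and depth timestamps match exactly, `ApproximateTimeSynchronizer` matches messages whose timestamps differ by at most `slop` seconds. The topic names and the 0.1 s slop below are taken from the diff above; the node name and the callback body are placeholders, not part of the commit.

```python
#!/usr/bin/env python
# Minimal sketch: synchronize an RGB and a depth stream with a time tolerance
# (`slop`) instead of requiring exactly equal timestamps.
import rospy
import message_filters
from sensor_msgs.msg import Image as ROS_Image


def callback(rgb_msg, depth_msg):
    # Placeholder: the real node converts both images and runs the gesture learner here.
    rospy.loginfo("Received a synchronized RGB/depth pair")


if __name__ == "__main__":
    rospy.init_node("rgbd_sync_example", anonymous=True)
    rgb_sub = message_filters.Subscriber("/kinect2/qhd/image_color_rect", ROS_Image)
    depth_sub = message_filters.Subscriber("/kinect2/qhd/image_depth_rect", ROS_Image)
    # slop=0.1 mirrors the node's new `delay` parameter (default 0.1 s).
    ts = message_filters.ApproximateTimeSynchronizer([rgb_sub, depth_sub],
                                                     queue_size=10, slop=0.1,
                                                     allow_headerless=True)
    ts.registerCallback(callback)
    rospy.spin()
```

If the RGB and depth streams are still not matched, the node's new `--delay` argument can be raised (e.g. `--delay 0.2`) to widen the matching window, at the cost of looser RGB/depth alignment.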
