[ROS2] Separate MoveToMouth into Two Actions #127

Closed
amalnanavati opened this issue Nov 13, 2023 · 0 comments · Fixed by #128

Currently, the MoveToMouth tree moves to the staging location, waits to detect the mouth, and then moves to the user. The challenge is that while the robot is detecting the face, the user cannot tell what is going on (the app just shows "thinking"), and hence cannot tell what they should do to facilitate successful robot behavior (e.g., move their head, teleoperate the arm down, etc.).

Instead, we should split bite transfer into two separate action calls:

  • MoveToStagingConfiguration should move to the staging configuration.
  • Then, the app should subscribe to face detection and display the face detection stream for the user.
  • MoveToMouth should take in the results of face detection and move to the mouth.
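The three steps above can be sketched as follows. This is a minimal sketch with hypothetical stand-in names (StubRobot, StubApp, bite_transfer); the real system would use ROS 2 action clients for MoveToStagingConfiguration and MoveToMouth and a face-detection topic subscription in the app.

```python
class StubRobot:
    """Hypothetical stand-in that records the actions called on it, in order."""
    def __init__(self):
        self.calls = []

    def move_to_staging_configuration(self):
        self.calls.append("MoveToStagingConfiguration")

    def move_to_mouth(self, detection):
        self.calls.append(f"MoveToMouth({detection})")


class StubApp:
    """Hypothetical stand-in for the app's face-detection display."""
    def __init__(self):
        self.streaming = False

    def display_face_detection_stream(self):
        self.streaming = True


def bite_transfer(robot, app, latest_detection):
    # 1. Moving to staging is its own action call, so the app can show
    #    meaningful status instead of a generic "thinking" screen.
    robot.move_to_staging_configuration()
    # 2. The app subscribes to face detection and shows the stream, so
    #    the user knows to move their head or teleoperate the arm down.
    app.display_face_detection_stream()
    # 3. MoveToMouth takes the face-detection result as input.
    robot.move_to_mouth(latest_detection)


robot, app = StubRobot(), StubApp()
bite_transfer(robot, app, "mouth_pose")
# robot.calls == ["MoveToStagingConfiguration", "MoveToMouth(mouth_pose)"]
```

The key design point is that the pause between the two motions is driven by the app (and hence visible to the user) rather than hidden inside a single monolithic action.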

For robustness, MoveToMouth should have some fallbacks in case the detected face is stale, and it should transform the detected mouth pose into the base frame at the timestamp in the message, in case the robot has moved since detection.
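The staleness fallback could look like the sketch below. The names (STALENESS_THRESHOLD_S, is_face_detection_stale, select_mouth_target) and the threshold value are hypothetical; a real implementation would use rclpy's Time/Duration types and tf2 to transform the pose into the base frame at the message timestamp.

```python
STALENESS_THRESHOLD_S = 0.5  # assumed tolerance; tune for the real system


def is_face_detection_stale(msg_stamp_s: float, now_s: float,
                            threshold_s: float = STALENESS_THRESHOLD_S) -> bool:
    """Return True if the face-detection message is too old to act on."""
    return (now_s - msg_stamp_s) > threshold_s


def select_mouth_target(detected_mouth, msg_stamp_s, now_s, fallback_target):
    """Fall back to a default target when the detection is missing or stale.

    In the real tree, the non-stale branch would also transform
    `detected_mouth` into the base frame at `msg_stamp_s` (e.g., via a
    tf2 transform looked up at that timestamp), since the robot may
    have moved since the detection was published.
    """
    if detected_mouth is None or is_face_detection_stale(msg_stamp_s, now_s):
        return fallback_target
    return detected_mouth


# A 2.0 s-old detection exceeds the threshold, so the fallback is used.
target = select_mouth_target("mouth_pose", msg_stamp_s=10.0, now_s=12.0,
                             fallback_target="staging_pose")
# target == "staging_pose"
```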

@amalnanavati amalnanavati self-assigned this Nov 13, 2023
@amalnanavati amalnanavati linked a pull request Nov 13, 2023 that will close this issue