
Support in ROS2 #2

Open
SteveMacenski opened this issue Feb 26, 2021 · 18 comments
@SteveMacenski

Hi,

I notice only monocular is supported in ROS2 (while ROS1 has all 3). This is something my colleague and I can help implement as part of the Nav2 / ROS2 integration work. We'll likely have other features / things we want to expose (like adding a feature map publishing topic, among others), but I figured this is a good place to get the ball rolling.

@mirellameelo
Contributor

@SteveMacenski, my last PR modifies files to add stereo support for ROS2. I'd appreciate it if you could review it. I used @ymd-stella's work as a base and followed the same approach.
I think it would be a good idea to tag the next steps with respect to ROS2.
Thanks in advance.

@SteveMacenski
Author

@mirellameelo I think you should split your PR into 2. Pushing too much into one PR slows things down dramatically. One PR for doc updates and another for the big feature update would be best, so that @ymd-stella can review quickly.

@SteveMacenski
Author

SteveMacenski commented Mar 2, 2021

A concern I have here is that the ROS wrapper isn't making use of ROS time, so there are going to be issues synchronizing the outputs of OpenVSLAM with the rest of a ROS-based system, since the output poses and transforms may not be stamped from the same clock. I think all times (for mono as well) should be based on the ROS clock provided by the node. In many cases it will still be chrono time, but not in simulation, or if someone wants their nodes to use a specific clock.

Similarly with logging: everything uses std::cout rather than the node's logger (which also logs to a file for reviewing any errors later).

https://github.com/OpenVSLAM-Community/openvslam/tree/main/ros/2/src/publisher I wonder if it wouldn't also make sense to delete these publisher directories. Images are obviously provided by ROS camera drivers, so these seem unnecessary specifically for ROS.
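To illustrate the clock concern above, here is a minimal sketch in plain Python (hypothetical names, not the actual wrapper code): outputs are stamped from whatever clock the node injects, so simulation or custom clocks work transparently instead of hard-coding chrono/system time.

```python
import time

class WallClock:
    """Default clock: system time, analogous to the current chrono path."""
    def now(self):
        return time.time()

class SimClock:
    """Stand-in for a /clock-driven simulation-time source."""
    def __init__(self, t=0.0):
        self.t = t
    def now(self):
        return self.t

def stamp_pose(pose, clock):
    # The pose is stamped from the injected clock,
    # never directly from the system clock.
    return {"pose": pose, "stamp": clock.now()}

sim = SimClock(t=42.0)
msg = stamp_pose((1.0, 2.0, 0.0), sim)
print(msg["stamp"])  # 42.0
```

In rclcpp terms, this corresponds to stamping messages from the node-provided clock rather than `std::chrono`, which respects the `use_sim_time` parameter.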

@mirellameelo
Contributor

Thanks for replying. I agree regarding the timers and logging; I'll change them.
Although there are ROS2 packages responsible for publishing frames, I think it is worth keeping these "publishers" from a didactic point of view. They are aligned with the examples for running in ROS, so I believe they are a practical and easy way to do at least a test, especially for new users, besides serving as a reference for adding new publishers for different dataset patterns.

@SteveMacenski
Author

For those in the Nav2 Slack group: we're discussing on the mapping and localization channel how to port to ROS2 and expose everything necessary.

@SteveMacenski SteveMacenski changed the title Support Stereo and RGBD in ROS2 Support in ROS2 Mar 4, 2021
@SteveMacenski
Author

Also working on making the parameters ROS-y so that people can modify the OpenVSLAM settings like any other ROS node

@AlexeyMerzlyakov
Contributor

AlexeyMerzlyakov commented Mar 5, 2021

Copying the task plan from Slack discussion:

[In progress by @mirellameelo in stella-cv/stella_vslam#41] task1: Support for stereo cameras in ROS2 OpenVSLAM
task2: Support for RGB-D cameras in ROS2 OpenVSLAM
[Done by @mirellameelo in a repository fork] task3: We also need to add TF2 here. Currently, nav_msgs::msg::Odometry pose messages are being published; I think we additionally need a full "map->odom" frame transform for ROS2 SLAM/localization nodes.
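A minimal sketch of the "map->odom" arithmetic in plain Python (SE(2) only, hypothetical function names; a real node would use tf2 with full 3D transforms): the published correction is the estimated map->base pose composed with the inverse of odometry's odom->base transform.

```python
import math

def compose(a, b):
    """Compose two SE(2) transforms (x, y, theta): result = a * b."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(t):
    """Inverse of an SE(2) transform."""
    x, y, th = t
    return (-x * math.cos(th) - y * math.sin(th),
             x * math.sin(th) - y * math.cos(th),
            -th)

def map_to_odom(map_to_base, odom_to_base):
    # T_map_odom = T_map_base * inverse(T_odom_base):
    # the SLAM correction that closes the TF tree map -> odom -> base.
    return compose(map_to_base, invert(odom_to_base))
```

When the odometry and SLAM estimates agree, the correction is the identity; drift in odometry shows up as a nonzero map->odom offset.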

Also, rectification for calibrated cameras is implemented only for ROS1, so we have to:
task4: Add camera image rectification to the ROS2 slam/localization nodes.
Here are a few open questions: How could we verify the camera calibrations? Can we use a Gazebo simulator, say with the TurtleBot3 model?
Does this model support simulating non-calibrated cameras?
Real H/W verification also would not hurt.
All of these questions are in scope for this task.

After all nodes are prepared, this might be an extra activity:
task5: Unify what we've done. This is required to get one ROS2 node acting in either SLAM or localization mode (selected by a ROS2 parameter), and supporting all previously elaborated camera types (mono/stereo/rgbd).

Currently, OpenVSLAM has only an internally used map database with no ROS2 representation. Therefore we need:
task6: Investigate and extract an Occupancy Grid map (and/or PCL map reconstruction) from OpenVSLAM internals.
This could be divided into 2 parts (@SteveMacenski suggestion):

  • A) expose the feature map to the user as a pointcloud (probably XYZ, color, probably the point of origin, and any other associated metadata), and
  • B) use it to compute rough occupancy information in a separate node in Nav2. So these are really 2 tasks interfacing over a topic.
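Part B could be sketched roughly like this (plain Python, hypothetical names; a real node would subscribe to the pointcloud topic and publish nav_msgs/OccupancyGrid): project the sparse 3D feature points into a coarse 2D grid, marking cells occupied only for points inside a robot-relevant height band.

```python
import math

def features_to_occupancy(points, resolution=0.05, z_band=(0.1, 1.5)):
    """Project sparse 3D feature points (x, y, z) into a 2D grid.
    Cells containing at least one point inside the height band become
    occupied (100); all other cells stay unknown (-1), following the
    nav_msgs/OccupancyGrid value convention."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, min_y = min(xs), min(ys)
    width = int(math.floor((max(xs) - min_x) / resolution)) + 1
    height = int(math.floor((max(ys) - min_y) / resolution)) + 1
    grid = [[-1] * width for _ in range(height)]
    for x, y, z in points:
        if z_band[0] <= z <= z_band[1]:
            col = int((x - min_x) / resolution)
            row = int((y - min_y) / resolution)
            grid[row][col] = 100
    return grid, (min_x, min_y)  # grid plus its origin in the map frame
```

A real implementation would need to handle sparsity (a few features per cell is weak evidence) and free-space raycasting from the camera origin, but the projection step is the core of it.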

I have a personal interest in this task, since I previously worked on OccGrid map generation in a visual SLAM system and have experience working with maps (map server redesigns and costmap filter development).

task7: Based on the tasks above, a documentation update for OpenVSLAM should probably be prepared.

@ymd-stella
Contributor

The ROS wrappers have been migrated to an independent repository https://github.com/OpenVSLAM-Community/openvslam_ros.
Please contribute there; if you want to create a new ROS issue, please create it in openvslam_ros.
Issues that already exist can be left where they are.
The ros directory will be removed after confirming that there is no problem and updating the documentation.

@SteveMacenski
Author

SteveMacenski commented Mar 8, 2021

@ymd-stella should you remove the ros/ directory in openvslam then?

I see a main branch; what version is that? I think you can delete ros, since noetic-devel is the last ROS1 release, so there's nothing to add to it later. For now, I'd also recommend removing foxy-devel until we have something working on ros2 worth releasing to foxy (e.g. it's a lot of effort to keep those in sync while we're still developing).

Can you transfer this issue to there?

@ymd-stella ymd-stella transferred this issue from stella-cv/stella_vslam Mar 8, 2021
@ymd-stella
Contributor

The current ROS version of openvslam_ros works on releases even older than Noetic, as long as the dependent OpenCV versions are resolved correctly. Therefore, I will delete the noetic-devel branch and keep the ros branch.

@SteveMacenski
Author

What's the main branch in this context?

@ymd-stella
Contributor

ymd-stella commented Mar 8, 2021

main was the default branch.
The ros2 branch is now the default. (I deleted the old ros2 branch and renamed main to ros2.)

@SteveMacenski
Author

@AlexeyMerzlyakov please provide a concise, bulleted list of things to do to get from where we are to a full release of a functioning, ROS2-integrable OpenVSLAM system (support for RGBD, stereo, ROS timing, parameters, calibration, documentation, testing, CI, release, occupancy grid, etc.), so that we can explicitly see the status of all of the ongoing and future subtasks.

@AlexeyMerzlyakov
Contributor

AlexeyMerzlyakov commented Mar 16, 2021

Sure, the scope of current activities is listed below (it will be updated dynamically as tasks are completed or appear):

Development items

Testing

  • Testing of major features, including Stereo, RGBD, localization in prior map, and timing (make sure we didn't break anything)
  • CPU/memory benchmarking
  • Full-system testing in a reasonably sized environment on a robot: record a dataset, then localize sometime later in different lighting conditions as a demo (office building, etc.)

Documentation

  • Documentation update for OpenVSLAM based on above tasks

@AlexeyMerzlyakov
Contributor

AlexeyMerzlyakov commented Mar 22, 2021

  • testing of major features, including Stereo, RGBD, localization in a prior map, and timing (make sure we didn't break anything)
  • TF transformation parameterized by a publish_tf bool parameter (default true)
  • CPU/memory benchmarking
  • parameterize topic names / frames
  • full-system testing in a reasonably sized environment on a robot: record a dataset, then localize sometime later in different lighting conditions as a demo (office building, etc.)

These items are clear. Agree.
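For the "parameterize topic names / frames" item, a node parameter file could look roughly like this (all parameter names here are illustrative, not the actual openvslam_ros parameters):

```yaml
/openvslam:
  ros__parameters:
    publish_tf: true           # toggle the map->odom TF broadcast
    map_frame: "map"
    odom_frame: "odom"
    camera_frame: "camera_link"
    odom_topic: "camera_pose"  # output nav_msgs/Odometry topic name
```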

  • Publish Odometry or Pose message for fusion into R_L if fusing with multiple SLAM/localization systems

Could you please give more context on this? Currently, the pose (nav_msgs::msg::Odometry) is being published by openvslam-ros in the "camera_link" frame. Did you mean adding a TF reader to apply arbitrary robot geometry transforms and publish the pose in the "odom" frame?

  • Localization mode respond to a relocalization request from topic (like AMCL does with initial_pose)

Should we always re-localize a robot when initial_pose messages arrive, or only in the case when tracking is lost?

@SteveMacenski
Author

Currently, the pose (nav_msgs::msg::Odometry) is being published by openvslam-ros in a "camera_link" frame.

That's fine then, as long as it's not removed and the frame of the camera is actually representative of the input topic's frame ID.

Should we always re-localize a robot when initial_pose messages arrive

Yes; if something arrives on that topic, it came from the user, so it should be followed.
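The policy agreed here can be sketched in a few lines (plain Python, hypothetical API; the real node would subscribe to the initial_pose topic): an explicit pose from the user is always accepted, regardless of whether tracking is currently lost.

```python
class Relocalizer:
    """Sketch of the initial_pose policy: user input is always honored."""
    def __init__(self):
        self.requested_pose = None
        self.tracking_lost = False

    def on_initial_pose(self, pose):
        # Always accept: this topic only ever carries explicit user input,
        # so we do not gate it on self.tracking_lost.
        self.requested_pose = pose

    def pending_relocalization(self):
        return self.requested_pose is not None

r = Relocalizer()
r.on_initial_pose((1.0, 2.0, 0.0))
```

This mirrors AMCL's behavior with initial_pose: the filter is re-seeded whenever the user publishes a pose.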

@AlexeyMerzlyakov
Contributor

AlexeyMerzlyakov commented Mar 23, 2021

Thanks for the notes! Updated #2 (comment) with the items above.

@SteveMacenski
Author

How fast is the pose estimated in OpenVSLAM (e.g. how often do we update the pose, and is it worth updating the map->odom transform each time)?

This topic came up in the Nav2 WG meeting. Traditionally we update slowly, around 1 Hz or so for lidar SLAM, so we have enough new features to get a real lock on the changes, and we rely on odometry for local positioning in between.

But VSLAM can update much, much faster which begs the question:

  • Should we buffer some measurements, filter them, and publish at a reduced rate, which would lag the system a little?
  • Or should we publish to TF at the super fast rate, which might cause jitter and reduce the benefit of smooth odometry?
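The first option could be sketched like this (plain Python, hypothetical names; a real node would broadcast via tf2 and average orientations properly rather than naively): buffer the high-rate VSLAM poses and emit one averaged pose per period.

```python
class ThrottledPosePublisher:
    """Buffer high-rate poses; publish one averaged pose per period."""
    def __init__(self, period=1.0):
        self.period = period
        self.buffer = []       # (x, y) poses since last publish
        self.last_pub = None   # stamp of the last publish
        self.published = []    # (stamp, (x, y)) outputs, stand-in for TF

    def add(self, stamp, x, y):
        self.buffer.append((x, y))
        if self.last_pub is None:
            self.last_pub = stamp
        if stamp - self.last_pub >= self.period:
            # Average the buffered positions and emit a single update.
            n = len(self.buffer)
            avg = (sum(p[0] for p in self.buffer) / n,
                   sum(p[1] for p in self.buffer) / n)
            self.published.append((stamp, avg))
            self.buffer.clear()
            self.last_pub = stamp

pub = ThrottledPosePublisher(period=1.0)
for i in range(31):            # ~30 Hz input for one second
    pub.add(i / 30.0, 1.0, 2.0)
```

Note that naive averaging only works for positions; orientations would need quaternion or angle-wrap-aware averaging, and the lag introduced is roughly half the period.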
