Marker integration #242
I'm interested in VSLAM with a specified scale and in this feature. I have read #241 and wonder how the scale information, that is, the physical size of the ArUco marker, is employed. Thank you for your cooperation.
In the first attempt, distance constraints were added between the corners of the markers, but adding such constraints was computationally inefficient. Instead, the corners of the markers are now fixed during optimization, which determines the scale.
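To make the scale mechanism concrete, here is a minimal sketch (not stella_vslam's actual code; `square_marker_corners` and the corner ordering are illustrative assumptions) of how the corner positions of a square marker of known physical width pin down the metric scale:

```python
import numpy as np

def square_marker_corners(width):
    """Corners of a square marker of the given physical width,
    expressed in the marker's own coordinate system (z = 0 plane).
    Hypothetical helper mirroring the idea behind marker_model/base.cc;
    the actual corner ordering in stella_vslam may differ."""
    half = width / 2.0
    return np.array([
        [-half,  half, 0.0],   # top-left
        [ half,  half, 0.0],   # top-right
        [ half, -half, 0.0],   # bottom-right
        [-half, -half, 0.0],   # bottom-left
    ])

corners = square_marker_corners(0.2)  # a 20 cm marker
# Fixing these corner positions during bundle adjustment pins the side
# length, and hence the metric scale of the whole map.
```

Because the side length of the fixed square is expressed in physical units, every landmark and keyframe adjusted relative to it inherits that scale.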
- `stella_vslam/src/stella_vslam/marker_model/base.cc`, lines 15 to 18 at 3d178c0
- `stella_vslam/src/stella_vslam/data/marker2d.cc`, lines 10 to 18 at 3d178c0
- `stella_vslam/src/stella_vslam/optimize/global_bundle_adjuster.cc`, lines 129 to 164 at c7feabd
- `stella_vslam/src/stella_vslam/optimize/local_bundle_adjuster.cc`, lines 213 to 252 at 3d178c0
Thank you very much for your rapid response and detailed description. I understand how the scale information is employed.
By the way, without a size-known marker, is there any rule that determines the unit length of the estimated trajectory in this software? For example, I have worked with a Structure-from-Motion package that used the rule that the translation vector between the first and second cameras is a unit vector.
See `stella_vslam/src/stella_vslam/module/initializer.cc`, lines 267 to 277 at f5cbbf5.
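For readers without the code at hand, here is a rough sketch of one common normalization convention used by monocular initializers (rescaling the initial map so the median landmark depth is 1). This is an assumption for illustration, not necessarily the exact logic in `initializer.cc`:

```python
import numpy as np

def normalize_scale(trans_cw, landmarks):
    """Illustrative sketch of monocular scale normalization: rescale the
    initial map so the median landmark depth (z in the camera frame)
    becomes 1. Returns the rescaled translation and landmarks.
    Hypothetical helper, not stella_vslam's actual implementation."""
    depths = np.asarray([lm[2] for lm in landmarks])
    scale = 1.0 / np.median(depths)
    return trans_cw * scale, [np.asarray(lm) * scale for lm in landmarks]
```

Either convention (unit baseline or unit median depth) only fixes an arbitrary unit; it does not make the map metric the way a size-known marker does.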
Thank you again, and sorry for my laziness!
Reverted due to an incorrect update method: multiple updates were applied to the same marker. Additionally, if markers are fixed during optimization, their positions should not be updated after optimization. (In particular, updating them after local BA makes the mapping unstable.) During loop closing, however, the marker positions do need to be updated. The plan is as follows:
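The loop-closing part of the plan can be sketched as follows: keep each marker rigidly attached to a reference keyframe, so that when loop closure corrects the keyframe pose, the marker pose is carried along. This is a hypothetical sketch of the idea (function name and the use of 4x4 homogeneous transforms are my assumptions):

```python
import numpy as np

def update_marker_after_loop_closing(pose_wk_new, pose_kw_old, pose_wm_old):
    """Keep the marker rigidly attached to its reference keyframe.
    All arguments are 4x4 homogeneous transforms:
      pose_wk_new: corrected keyframe->world pose after loop closing
      pose_kw_old: old world->keyframe pose
      pose_wm_old: old marker->world pose
    The relative transform keyframe->marker is preserved:
        pose_km     = pose_kw_old @ pose_wm_old
        pose_wm_new = pose_wk_new @ pose_km
    """
    pose_km = pose_kw_old @ pose_wm_old
    return pose_wk_new @ pose_km
```

Updating the marker exactly once, through its single reference keyframe, also avoids the "multiple updates to one marker" problem mentioned above.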
Hi @ymd-stella
I'm unsure how to go from a bearing to world coordinates. I took ten 3D points from the same images and computed an alignment between the bearings and the 3D points in world coordinates. ("Rigidly (+scale) aligns two point clouds with known point-to-point correspondences with least-squares error.") From that I get a rotation matrix and a translation, which I then use to transform my marker points. However, the results are inaccurate. How can I go from a bearing to world coordinates if I'm in Python? I have trans_wc, rot_wc, trans_cw and rot_cw. Thank you in advance!
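One likely source of error here: a bearing is a unit direction in the camera frame, so it cannot be aligned directly against world points; it must first be back-projected with a depth. A minimal sketch, assuming the usual convention p_w = R_wc p_c + t_wc (the function name is illustrative):

```python
import numpy as np

def bearing_to_world(bearing, depth, rot_wc, trans_wc):
    """Back-project a unit bearing vector (camera frame) with a known
    distance along the ray into world coordinates:
        p_w = R_wc * (bearing * depth) + t_wc
    'depth' must come from elsewhere (e.g. the associated landmark);
    a bearing alone has no scale."""
    p_c = np.asarray(bearing) * depth
    return rot_wc @ p_c + np.asarray(trans_wc)
```

With rot_wc and trans_wc taken from the keyframe pose, the back-projected points can then be used for the point-cloud alignment.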
Hello @ymd-stella, I noticed that the marker integration was reverted due to some issues with the update method. I would like to know if the main branch of the library currently contains the marker integration and if it is working well. If not, could you please provide some guidance on what needs to be added or changed to align with the UcoSLAM algorithm?
@AmineDh98 Please set the MarkerModel parameter to enable detection. |
Hi @ymd-stella, thank you for your previous answer.
To incorporate markers into the existing message file format, I propose adding a new JSON object or array within the existing JSON structure. The `map_database::to_json` function could iterate over the markers and store their positions in this new object or array, similar to how keyframes and landmarks are currently handled. For `map_database::from_json`, the parsing logic would need to be updated to correctly extract and create marker objects from the JSON representation. I believe this addition would enhance the functionality of the stella_vslam library and provide valuable information about marker positions in the VSLAM frame, enabling further analysis and visualization. (I had another suggestion as well, related to ROS, which I am using.) I would greatly appreciate any guidance or suggestions on how to implement this feature effectively while maintaining compatibility with the existing codebase.
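To illustrate the kind of layout being proposed, here is a sketch of a possible `markers` entry. All field names are hypothetical, not stella_vslam's actual schema:

```python
import json

# Hypothetical layout for the proposed "markers" entry; the key/field
# names ("markers", "corner_pos_w") are illustrative assumptions only.
markers_json = {
    "markers": {
        "0": {                       # marker ID as key, like keyframes/landmarks
            "corner_pos_w": [        # 4 corners in world coordinates
                [0.0, 0.0, 0.0],
                [0.2, 0.0, 0.0],
                [0.2, 0.2, 0.0],
                [0.0, 0.2, 0.0],
            ],
        },
    },
}
serialized = json.dumps(markers_json)
```

Keying by marker ID mirrors how keyframes and landmarks are keyed, which keeps the serialization and parsing logic symmetric.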
No particular problem is found from the snippet. Pull requests are welcome. Saved maps will no longer be compatible, but I think it is sufficient to create conversion scripts if needed.
I had never thought about it. Is there any benefit to addressing it? I do not intend to work on it, as I am not currently inconvenienced.
Hi @ymd-stella, when I use the marker integration, the result in the socket viewer visualizer is bad, and the problem seems to be the scale (the keyframes are too far from each other and the camera moves very fast). What should I write in the yml file? What physical unit does the yml file use: meters, centimeters, millimeters...?
This library is not tied to a specific system of units. As long as the settings do not contradict each other, there is no problem. However, the visualizers are tuned for meters, so using meters is recommended.
Hello @ymd-stella, I'm currently working with @AmineDh98 on the marker integration. We are interested in getting the marker positions with high precision in order to transform the point cloud and camera trajectory into a coordinate system in which the marker positions are known. In the following, I would like to describe the results of the discussions I had with @AmineDh98 and ask you (or someone else who is familiar with the code) to confirm whether this is valid. Note that the explanations are not intended for @ymd-stella or other developers familiar with the code, as I'm sure you already know all of this; they are meant to make sure we understood correctly and to help other users follow the code involved in this issue.

**Approach as intended by @ymd-stella**

ArUco marker detection is already implemented.

**Current approach to computing the marker positions in the world coordinate system**

In the current implementation, the marker positions in the world frame are computed in the following way:

The idea (which is not yet implemented) is then to store exactly one reference keyframe for each marker (i.e. the first keyframe that observed the marker). The marker position in the world CS would then be updated during global bundle adjustment whenever this reference keyframe is changed. This approach allows the correct scale of the map to be determined, as the marker dimensions are set in its own CS and only transformed. The scale information is then employed implicitly during local bundle adjustment, where the landmarks and keyframes are adjusted in such a way that the marker corner positions stay fixed in the world CS.

**Problems with this approach**

*Noisy initial marker pose estimation*

A problem with this approach is that it relies on the initial PnP estimate of the marker pose w.r.t. the camera, which is never changed after it is computed upon the first detection of the marker. This pose estimate is potentially noisy, and even if the reference keyframe poses are correct, the marker positions in the world CS might still be wrong, as their computation assumes a noisy initial PnP estimate.

*Fixing several markers during local bundle adjustment (not sure about this)*

Another potential problem with this approach is the presence of several markers: note that all marker positions are fixed during the local bundle adjustment (see the `local_bundle_adjuster.cc` snippet referenced above). Given that all initial estimates for the markers in world coordinates are potentially noisy (due to the aforementioned PnP solver and the fact that the marker-to-camera transform is never adjusted), this might result in inconsistent/noisy map information during the local bundle adjustment. Even when the marker positions are updated during a global bundle adjustment, the noise of the initial PnP marker pose estimation will still be present and contribute to a suboptimal local bundle adjustment.
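Our understanding of the current computation, written out as a sketch (function name and signature are ours; the pose conventions are assumptions based on reading the code):

```python
import numpy as np

def marker_corners_in_world(corners_m, rot_cm, trans_cm, rot_wc, trans_wc):
    """Sketch of the current approach as we understand it: PnP gives the
    marker->camera transform (rot_cm, trans_cm) at first detection;
    composing it with the reference keyframe's camera->world pose
    (rot_wc, trans_wc) yields world coordinates:
        p_w = R_wc (R_cm p_m + t_cm) + t_wc
    Any noise in the one-shot PnP estimate propagates directly into p_w."""
    corners_m = np.asarray(corners_m)
    p_c = (rot_cm @ corners_m.T).T + trans_cm
    return (rot_wc @ p_c.T).T + trans_wc
```

Since (rot_cm, trans_cm) is estimated once and never refined, this composition is exactly where the noisy-PnP problem described above enters.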
**Suggested alternative approach**

We instead suggest including the markers in the local bundle adjustment and treating them just like any other landmark (except that we can identify reference points in the VSLAM point cloud that allow us to transform it and the camera trajectory). Note that including the marker corners in the local bundle adjustment is also the approach chosen in UcoSLAM (except that in UcoSLAM, more weight is given to the marker positions, depending on the number of valid markers). This comes with two advantages:

It also comes with two disadvantages:

We accept these disadvantages for the following reason: for us, the markers are a tool to transform the camera trajectory and point cloud from the VSLAM coordinate system into a coordinate system in which the marker positions are exactly known. We compute this Sim3 transform using the Kabsch algorithm and can hence tolerate inconsistent marker corner positions, as the algorithm minimizes the root mean squared deviation of the reference points and should be robust against a few outliers (at least when the noise is Gaussian distributed). As the algorithm also provides the scale, we do not depend on the map being precisely scaled.

**Where we need help**

We would like to implement the approach described above (we are already working on it, with promising intermediate results). However, we want to make sure that we understood the current approach correctly, that the problems we identified are indeed problems, and that we have not overlooked a detail. We can report that the current approach leads to worse results than our approach of including the marker corners in the local bundle adjustment. We believe this is due to the points described above, but of course it could be for different reasons. We can also provide the data that led to our assumptions (it was generated in Unreal Engine, using AirSim to control a drone in a virtual environment with known marker positions).
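For reference, the Sim3 alignment mentioned above (Kabsch with Umeyama's scale extension) can be sketched as follows; this is a generic textbook implementation, not code from our branch:

```python
import numpy as np

def kabsch_umeyama(src, dst):
    """Least-squares similarity (Sim3) alignment of corresponding point
    sets, following Umeyama's extension of the Kabsch algorithm.
    Returns scale s, rotation R, translation t with dst ~ s * R @ src + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                        # avoid reflections
    R = U @ S @ Vt
    var_s = (src_c ** 2).sum() / len(src)     # variance of the source set
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Feeding the estimated marker corners (src) and their surveyed positions (dst) into this routine gives the transform from the VSLAM frame into the known coordinate system, including scale.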
It looks fine to me. I tried to add a constraint on the shape, but gave up due to the difficulty of weighting. I do not have enough time to spend on this feature. The same goes for updating the marker poses.
I understand @AmineDh98 and @aschnerring's suggested alternative approach and their acceptance of its disadvantages.
But I think global BA is also an important step compared to local BA as far as accuracy is concerned: when loop closure happens, the landmark points are corrected but the marker poses are not, so a huge error is possible, which I am experiencing in my use case. I feel markers should be included in global BA, and for even more accuracy, in both local and global BA. What are your thoughts?
I am sorry for the confusing message above; what I meant is that the present approach does not correct the marker poses after loop closure. @ymd-stella, can you give any idea regarding the weighting of markers? Currently I am using this in global BA:
Data/marker.cc
I am storing a reference keyframe for the marker in keyframe_inserter.cc, but I am not sure how to use the reference keyframe in global BA. Additionally, I am storing the marker data in the MessagePack.
@ymd-stella, if you have a good way to constrain the shape by weighting, I request you to please integrate it. I have been trying different weighting schemes as well, but the results differ from video to video.
What issue is the feature request related to?
The application of monocular Visual SLAM is limited because the scale is not known. Marker integration is an inexpensive solution to this problem without the need for additional cameras. This feature is inspired by UcoSLAM.
In addition, by applying marker integration to an equirectangular model, robust Visual SLAM with scale can be realized.
Describe the solution you'd like
See draft.
How to achieve this
Additional context