Message format for RadarDetection #2
Conversation
For Position - did any of the …
For velocity - Did any of the …
For the rest - I approve as is
Is expressing the uncertainty of the velocity component important?
Uncertainty will be quite important for leveraging the data into a fused estimate of the situation. I'd echo my comment in #3 about using this to validate the new upper triangular covariance representation.
@tfoote I would agree that uncertainty is good, but only one of the radar examples to date used it, so we didn't know if that was something the sensors could reasonably provide. We could add the covariance matrix, but if functionally no drivers implement it, is it worth having? That's the open question. If you think the answer is still yes, we should add it.
My thought would be that the covariance would at least represent the precision of the specific model, and not so much a function of the individual readings. I'm not familiar with the accuracy statistics for the different models of radars, but if the measurement errors are insignificant this could be avoided. I think, though, that there might be a large difference between the $150 radar's accuracy and the $100k radar's accuracy that would be significant when doing sensor fusion. If the uncertainty is not built into the message, then algorithms have to be parameterized to make assumptions about the uncertainty or be hardcoded, which means they won't be robust to changing the sensor. Optimally the message is fully self-contained: you can feed hybrid streams of readings from different-quality sensors and the fusion can do the right thing with each reading based on its associated uncertainty. So I'm not in a position to say whether it's necessary, but I would like to know if the uncertainty changes significantly between models, in which case it's notably more important.
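To illustrate why per-reading uncertainty matters for fusion, here is a minimal sketch (not part of the proposed interface; all sensor names, readings, and variances are made up) of inverse-variance weighting of two position readings of very different quality:

```python
import numpy as np

# Hypothetical 1D position readings from two radars of very different quality.
cheap_pos, cheap_var = 10.3, 4.0        # low-cost radar: noisy
precise_pos, precise_var = 10.05, 0.01  # high-end radar: precise

# Inverse-variance (information) weighting: the fused estimate automatically
# leans on the more certain sensor, with no per-sensor tuning in the algorithm.
weights = np.array([1.0 / cheap_var, 1.0 / precise_var])
readings = np.array([cheap_pos, precise_pos])
fused = np.sum(weights * readings) / np.sum(weights)
fused_var = 1.0 / np.sum(weights)

print(f"fused position: {fused:.3f} m, variance: {fused_var:.4f} m^2")
```

If the uncertainty is carried in the message itself, the same fusion code works unchanged whichever sensor produced the reading.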
I agree with all of that, but it seems few existing packages included it. Then again, others not following best conventions for completeness isn't an excuse for this message not to. I think that's a good argument. Aaron, any objections to that? Note to future readers: I hope that some drivers later use it if it's in the standard message. But given the state of things in our analysis of existing tools, I wouldn't recommend any applications expect a properly populated covariance matrix from the sensors themselves.
@radarAaron I think the way you propose computing size would be the same as the covariance, but I don't think that will be true of all vendors, so we should separate the concepts. If they happen to be the same, then it makes it easier to fill in the message 😄
I think we still need the covariance matrices. @tfoote which elements do you think we should include them for?
Toggling to get CI to run.
@radarAaron pinging for updates on covariance / info.
The more I think about it, the less I think adding covariance at this stage is a good idea. This sort of uncertainty information does not naturally come out of radar data (signal-to-noise ratios are a more natural expression of uncertainty, but are not so useful in this context). What is available really comes down to the higher-level algorithms implemented by vendors. I think we are better off getting started with something we know will work, and considering what expressions of uncertainty would make sense much further down the track once we have experience working with radar in a ROS setting. We have already noted that RadarTrack might make sense as a higher-level ObjectTrack which is applicable for many sensors, but there is a lot of learning to be done between now and then.
I still believe that a covariance is a necessary item for understanding any error associated with the data. If some drivers don't implement it, I suppose that's up to them, but we shouldn't leave it out just because it's currently not commonly implemented. It could be important information for some people, and adding it later would require a new message definition.
# This message is not meant to be used alone but as part of a stamped or array message.

# Radar detections are the discrete clusterings of points
# Detections which are filtered into moving objects are referred to as Tracks
Could this description be copied to #3, too?
# Note: The z component of these fields is ignored for 2D tracking.
geometry_msgs/Point position    # x, y, z coordinates of the centroid of the object being tracked.
geometry_msgs/Vector3 velocity  # The velocity of the object in each spatial dimension.
geometry_msgs/Vector3 size      # The object size (m) in the x, y, z dimensions.
Is it even possible to actually extract 3D dimension information from a radar detection? Isn't what you see in the radar more like 2D dimensions? Because of occlusions, you can probably never correctly estimate "depth"; you only have its lower bound.
You can get 3D positional data from Radar (at least with FMCW Radar). It really comes down to the configuration of the antennas. Radar can estimate the reflection in the Z plane in a similar way to the X plane.
You are right in that radar works on reflections, so it can't really access the 3Dness of solid objects; however, depending on the object, you can get some data out. Each type of object has a kind of "depth signature" based on the way it interacts with the reflections, and thin objects (like glass) cause a partial reflection while still allowing other objects behind them to be detected.
@radarAaron Based on the discussion in #3 regarding uncertainty, would you mind adding the same …
I wonder now, with some time, if we should drop this message and just have radar returns (the raw data) and radar tracks (the tracked object instances). I suggested initially having this so the major stages of radar data processing would have some topic to publish for debug and potential use, but the more I think about it, the more I think the tracked data is really all that's needed. I can go either way on this one. Someone could use the tracks message to contain this detection information, though semantically I dislike having two of the same type representing fundamentally different internal concepts (and "tracks" would be an incorrect name for its actual contents). Or the need for detections but not tracks is overblown and not required. 🤷
@SteveMacenski The problem is that some radars can only return tracks and not raw data (e.g. the Aptiv ESR) and some can only return raw detections/reflections and not tracks (e.g. the Aptiv MRR).
So for the ESR case, we have the tracks message (separate from this detections message). For the MRR case, we have the raw data message, but I suppose also having just the grouped, non-tracked detection would be valuable. OK, I agree it's a good idea to keep this as a terminal message for those cases. Looking over the …
@SteveMacenski If you're OK having fields that will never be used by one or the other. For example, a Detection will never be able to measure any sort of lateral movement relative to the radar, but a Track sometimes will. Conversely, a Track will never contain a return amplitude (because a Track is defined as tracking multiple returns over time) but a Detection always will. Some fields are shared between the two, but there are also some which are unique to each type.
Now I'm confused. I don't see any amplitude field here in RadarDetection... Otherwise, I agree that some kind of uncertainty or covariance for all fields in RadarDetection would make sense. If you keep just 3DOF position, size, and linear velocity here, the covariance matrices won't be too large (6 numbers each if using the triangular matrix representation, or maybe even just the diagonals?).
Also, it'd make sense to me to require that the unobserved components of velocity (or size) are set to NaN and not zero. On the other hand, the third component of position should be set to zero in 2D mode to designate that the measurements lie in the plane of measurement (but the corresponding velocity component should rather be NaN unless you can really measure velocity in this unobserved dimension).
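As a rough illustration of the two points above, the sketch below packs a symmetric 3x3 covariance into the 6-element upper-triangular form mentioned in the previous comment and uses NaN for the unobserved z velocity in 2D mode. The layout and helper names are illustrative only, not the actual radar_msgs definition.

```python
import math
import numpy as np

def pack_upper_triangular(cov3x3):
    """Pack a symmetric 3x3 covariance into 6 numbers (row-major upper triangle)."""
    c = np.asarray(cov3x3)
    return [c[0, 0], c[0, 1], c[0, 2], c[1, 1], c[1, 2], c[2, 2]]

def unpack_upper_triangular(packed):
    """Rebuild the full symmetric 3x3 covariance from the 6 packed numbers."""
    xx, xy, xz, yy, yz, zz = packed
    return np.array([[xx, xy, xz],
                     [xy, yy, yz],
                     [xz, yz, zz]])

# 2D-mode example: the z position is reported as lying in the sensor plane (0.0),
# while the unobserved z velocity is NaN rather than a misleading zero.
position = (12.4, -3.1, 0.0)
velocity = (1.8, 0.2, math.nan)
position_covariance = pack_upper_triangular(np.diag([0.04, 0.04, 0.0]))

print(position, velocity, position_covariance)
print(unpack_upper_triangular(position_covariance))
```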
@peci1 I don't see what that has to do with the message representation. You're welcome to populate them and use them as you like. @JWhitleyWork I think you might be conflating the "returns" with the "detections" in this nomenclature. A single radar return is a different message from a post-processed detection (a clump of readings that a system has defined as a discrete detected object), which is in turn different from a track, which smooths measurements of detections over time.
And this is where you can start forgetting about general algorithms... I know that half of developers will not read the comments anyway (not a complaint or offence, just an observation), but if there is a chance to make things more straightforward, I'd go for it.
As much as I'd love to have a completely standardized message format that can be used for general algorithms, given the current state of affairs, I can't see this being a reality. From the initial research (see link in the original issue) there is a huge variation in what radar sensors/drivers output and it will probably be a rare case that all fields could be completely populated.
I think that seems sensible. I can add this to the message notes. Is ".NAN" the correct nomenclature for this in ROS messages?
Personally, I think dropping RadarDetection and just using RadarReturn and RadarTrack would be sufficient. Looking again at the original research, if RadarDetection did not exist, the existing radar messages that were most similar to 'detection' could be made to (mostly) fit either detection or track. The main conceptual difference between a RadarDetection and a RadarTrack as they have been conceived of here is the (UU)ID field, which is used to correlate objects between frames. If the sensor does not provide tracking information, then each UUID should be unique to indicate as much. Other fields in RadarTrack could be set to NaN if they are not supported by the sensor (which will often be the case). RadarTrack then becomes the 'best case' message type which hardware vendors should strive to meet.
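A minimal sketch of that convention, assuming a placeholder class rather than the real RadarTrack definition: a driver for a sensor with no tracker stamps each detection with a fresh UUID and leaves unsupported fields as NaN.

```python
import math
import uuid
from dataclasses import dataclass, field

@dataclass
class FakeRadarTrack:
    """Placeholder standing in for the proposed RadarTrack message."""
    track_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    position: tuple = (math.nan, math.nan, math.nan)
    velocity: tuple = (math.nan, math.nan, math.nan)
    acceleration: tuple = (math.nan, math.nan, math.nan)  # rarely provided by hardware

# A sensor without a tracker emits a unique UUID per detection, signalling to
# consumers that objects cannot be correlated between frames.
detections = [FakeRadarTrack(position=(r, 0.0, 0.0), velocity=(v, math.nan, math.nan))
              for r, v in [(12.0, 1.5), (30.5, -0.2)]]
assert len({d.track_id for d in detections}) == len(detections)
```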
@peci1 you missed my point. You're welcome to only populate certain values in the covariance matrix, but we're going to make sure our systems are compatible with REP 103, which clearly lays out the format for covariance information. If you choose not to use it for your applications, that's not something I can control, but the standards exist and will be followed by this interface and populated by users accordingly. What you're suggesting is against ROS standards. Good comments in the messages are required to have clear meaning. You may have NaNs, though that is not consistent with how any other ROS message expects things, so I don't think that's a good idea. Consistency is more important. @JWhitleyWork can you verify your meaning of "detection"? I think Aaron agrees that this RadarDetection message might not be required if we have RadarTrack and RadarReturn (which is what I think you mean by "detection"), unless you mean that a clustered but non-smoothed RadarDetection is needed separately from those other two concepts.
@SteveMacenski @peci1 @radarAaron I misunderstood the meaning of Detection as presented here. I agree that just Track and Return are enough with these definitions. I have never interacted with a radar that returns a cluster of returns as a "Detection" - only those that return raw returns or tracked objects ("Tracks"). I think we can include enough information in the Track message to subsume the idea of "Detections."
Ok, I think that closes this message out. We can revisit this topic if someone comments that they feel this is a necessary item to include.
The proposed message format for RadarDetection.
Prior to proposing this format, research was done into the radar data message formats that exist out in the wild. The details of this analysis can be found here: https://github.com/radarAaron/radar_msgs/blob/master/ROS%20Message%20format%20comparison.xlsx
Summary
Four drivers appear to output something akin to a detection.
Position in 3D space
Each driver outputs position, however each uses a different format. Two use polar coordinates and two Cartesian. Only 2 of the 4 drivers contained any sort of uncertainty information.
Used position formats:
Velocity
Each driver outputs different velocity information; some contain directional information and some do not. Only one outputs any sort of uncertainty information. These could be reconciled into a Vector3 type, with any velocity components not supported by the specific driver set to zero. The question of how to handle uncertainty remains.
Used velocity formats:
Size/Shape
Only one driver outputs size information. This is output as RCS (radar cross section).
Acceleration
No driver exposes acceleration
Additional Fields
Drivers also output the following fields: Amplitude, dynamic property, class_type
Conclusion
There is no consensus on how the position of objects is reported, or even what coordinate system to use. The call was made to report position using Cartesian coordinates.
Velocity could be reconciled into a Vector3 type, with the velocity components not supported by specific drivers set to zero (see the sketch after this list).
There is no consensus about how the size of a shape is reported; the only existing option is RCS. The call was made to represent 'size' as an ovaloid shape based on the distribution of points in 3D space.
It is unclear how uncertainty should be presented.
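The sketch below illustrates the driver-side conversion implied by the conclusions above, assuming a 2D radar that reports range, azimuth, and range rate (the function name and all numbers are illustrative, not from any specific driver):

```python
import math

def detection_from_polar(range_m, azimuth_rad, range_rate_mps):
    """Convert a 2D polar radar reading into the agreed Cartesian representation."""
    position = (range_m * math.cos(azimuth_rad),
                range_m * math.sin(azimuth_rad),
                0.0)  # 2D sensor: the measurement lies in the sensor plane
    # Only the radial component of velocity is observed; unsupported components
    # are set to zero per the Conclusion (the thread above debates NaN instead).
    velocity = (range_rate_mps * math.cos(azimuth_rad),
                range_rate_mps * math.sin(azimuth_rad),
                0.0)
    return position, velocity

print(detection_from_polar(25.0, math.radians(10.0), -1.2))
```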
Notes
Object detections are usually based on a point cloud. The point cloud shape is somewhat object-specific and does not necessarily cover the full size of the object (e.g. points tend to concentrate towards the center). The 'size' format chosen here is based on the distribution of points in 3D space. An alternative approach for expressing the size would be as a covariance matrix.
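As a rough sketch of that 'size' approach, the snippet below estimates per-axis extents from the spread of a detection's point cluster; the 2-sigma scale factor and the use of per-axis standard deviation are assumptions for illustration, not part of the proposal.

```python
import numpy as np

def size_from_points(points_xyz, n_sigma=2.0):
    """Estimate an axis-aligned ovaloid 'size' from the spread of cluster points.

    points_xyz: (N, 3) array of points belonging to one detection. The result
    tends to underestimate the true object size because radar points
    concentrate towards the center of the object.
    """
    pts = np.asarray(points_xyz, dtype=float)
    return 2.0 * n_sigma * pts.std(axis=0)  # approximate extent along x, y, z

cluster = np.array([[10.1, 0.2, 0.0],
                    [10.4, -0.1, 0.1],
                    [10.2, 0.0, -0.1],
                    [10.3, 0.3, 0.0]])
print(size_from_points(cluster))  # metres in x, y, z
```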
Questions