You mention three coordinate systems: camera_coord, vehicle_coord, and ground_coord. In the JSON file the "xyz" values are in vehicle_coord, and the model's output is in ground_coord, so a transform via "R_vg" and "R_vc" is needed. Am I right? If so, how do you define vehicle_coord?
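To make my understanding concrete, here is a minimal sketch of how I would apply such an extrinsic transform to the "xyz" points (the helper function and the sample transform below are my own illustration, not code from the repo):

```python
import numpy as np

def transform_points(points_xyz, T):
    """Apply a 4x4 homogeneous transform T to an (N, 3) array of points."""
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])  # (N, 4) homogeneous coords
    return (T @ homo.T).T[:, :3]                     # back to (N, 3)

# Made-up pure-translation transform, just to show the mechanics.
T = np.eye(4)
T[0:3, 3] = [1.0, 2.0, 3.0]
print(transform_points(np.array([[0.0, 0.0, 0.0]]), T))  # → [[1. 2. 3.]]
```

I assume "R_vg" and "R_vc" would each be plugged in as `T` here (possibly inverted, depending on which direction they map).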
What is the `# transformation from apollo camera to openlane camera` code used for?
What is the line `cam_extrinsics[0:2, 3] = 0.0` used for?
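If I read it correctly, that line zeroes the x and y translation of the camera extrinsic while keeping the z component (presumably the camera height). A minimal sketch of its effect, with a made-up extrinsic matrix:

```python
import numpy as np

# Made-up 4x4 extrinsic: rotation left as identity,
# translation (tx, ty, tz) = (1.5, -0.3, 2.1) purely for illustration.
cam_extrinsics = np.eye(4)
cam_extrinsics[0:3, 3] = [1.5, -0.3, 2.1]

cam_extrinsics[0:2, 3] = 0.0  # the line in question: drop the x/y translation

# Only the z translation (height) survives.
print(cam_extrinsics[0:3, 3])  # → [0.  0.  2.1]
```

Is the intent to discard the planar offset so that only the camera height above ground matters?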
Could you please draw a picture explaining all the coordinate systems used in the code?
Thanks very much!
I visualized the data in the JSON file and found that it contradicts the explanation in issue #4. In the first picture below, the blue points are the raw points from the JSON file, and the red points are the blue points transformed by the extrinsics matrix in the JSON file. So according to issue #4, the blue points should be in the vehicle coordinate system and the red points in the camera coordinate system. But that is not a standard camera coordinate system, since its z-axis points up. The second picture below is the RGB image, where the red points are drawn according to the 'uv' information in the JSON file.
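For reference, this is how I reproduced the 'uv' overlay: a standard pinhole projection of camera-frame points through the intrinsic matrix (the `K` and the sample point below are made up, not values from the JSON file):

```python
import numpy as np

def project_to_uv(points_cam, K):
    """Project (N, 3) camera-frame points to (N, 2) pixel coords, pinhole model."""
    uvw = (K @ points_cam.T).T       # (N, 3): [u*z, v*z, z]
    return uvw[:, :2] / uvw[:, 2:3]  # divide by depth

# Made-up intrinsics: focal length 1000 px, principal point (960, 540).
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
print(project_to_uv(np.array([[1.0, 0.5, 10.0]]), K))  # → [[1060.  590.]]
```

Note this projection assumes the camera convention with z pointing forward (along the optical axis), which is exactly why the transformed red points above look wrong to me.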