Replies: 1 comment
There's a question on Stack Overflow too.
I have 2 depth cameras in well-known positions. I want to register the two generated point clouds "manually" by providing the transformation matrix for one of them myself. How do I do that? What I'd like to do is say: "hey, I know my cameras are here and there in space and they look at this point, give me the transformation matrix to align them".
Ideally this is my plan:
The point both cameras look at will be my reference, so it sits at position (0, 0, 0) (this is the lookat vector). Both cameras will be perfectly level so that the up vector is exactly (0, 1, 0), and they are at positions (x, y, z) in space, roughly at opposite corners of a square room, equally distant from the reference point and at the same height (with tolerances of 10-20 cm for both). Let's say the room is 4 x 4 meters and the cameras are at a height of 3 m, so they will be at positions (2, 3, 2) and (-2, 3, -2).
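Concretely, I imagine computing each camera-to-world transform from (position, lookat, up). A minimal numpy sketch of that idea, assuming the usual x-right / y-down / z-forward camera frame (if the SDK's frame is y-up / z-backward instead, the axes below would need flipping):

```python
import numpy as np

def camera_to_world(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a 4x4 camera-to-world transform from camera position, look-at
    point and world up vector (x-right / y-down / z-forward camera frame)."""
    eye = np.asarray(eye, dtype=float)
    target = np.asarray(target, dtype=float)
    up = np.asarray(up, dtype=float)

    fwd = target - eye
    fwd /= np.linalg.norm(fwd)            # camera +Z axis in world coordinates
    right = np.cross(fwd, up)
    right /= np.linalg.norm(right)        # camera +X axis in world coordinates
    down = np.cross(fwd, right)           # camera +Y axis in world coordinates

    T = np.eye(4)
    T[:3, :3] = np.column_stack([right, down, fwd])  # rotation: camera axes as columns
    T[:3, 3] = eye                                   # translation: camera position
    return T

# Cameras at opposite corners of the room, both looking at the reference point.
T1 = camera_to_world(eye=(2.0, 3.0, 2.0), target=(0.0, 0.0, 0.0))
T2 = camera_to_world(eye=(-2.0, 3.0, -2.0), target=(0.0, 0.0, 0.0))

# A point cloud recorded in camera coordinates (N x 3) maps into the shared
# world frame with: world_pts = (T[:3, :3] @ cam_pts.T).T + T[:3, 3]
```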
Is there a built-in function to do this? Maybe I've been lazy with my research and didn't find it. Also, is there a general approach for this kind of situation? It seems to me this should be quite a common setup, but with my lack of knowledge I haven't been able to find a high-level solution, and instead I'm relying on calculating the affine transformation matrices "by hand".
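What I'm hoping for is something like: apply those hand-computed extrinsics as an initial guess and let a local refinement absorb the 10-20 cm placement tolerance. A sketch of that idea using Open3D's `registration_icp`, purely as an example of an off-the-shelf refinement step (the file names and the 0.3 m correspondence distance are placeholders, and `T1`/`T2` are the transforms from the sketch above):

```python
import numpy as np
import open3d as o3d

# Load the two clouds (paths are placeholders).
pcd1 = o3d.io.read_point_cloud("cam1.ply")
pcd2 = o3d.io.read_point_cloud("cam2.ply")

# Move both clouds into the shared world frame with the hand-computed extrinsics.
pcd1.transform(T1)
pcd2.transform(T2)

# Refine cloud 2 onto cloud 1 with point-to-point ICP; the 0.3 m correspondence
# distance should comfortably cover the 10-20 cm placement tolerance.
result = o3d.pipelines.registration.registration_icp(
    pcd2, pcd1, 0.3, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
pcd2.transform(result.transformation)
```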