
Inference on own data #56

Open
jb455 opened this issue Mar 29, 2023 · 2 comments
jb455 commented Mar 29, 2023

Hi,
I'm trying to run inference using the pretrained model on my own data which is gathered from a RealSense camera.
I've updated run_example.py as suggested, setting the rgb and depth paths and the camera parameters using the intrinsics obtained from the librealsense API, but the output normal.png is not as expected:

[attached image: normal.png]

I have checked my depth data; when I deproject to a point cloud it looks fine.
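For reference, the deprojection I use is just the standard pinhole model (a minimal sketch; `fx`, `fy`, `cx`, `cy` stand in for the librealsense intrinsics values):

```python
import numpy as np

def deproject(depth_m, fx, fy, cx, cy):
    """Deproject a depth map (in metres) to an N x 3 point cloud
    using the standard pinhole camera model."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```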

My question:

I save my depth data as raw uint16 values in a binary file, then read it back into a NumPy array where you use cv2.imread, i.e. `depth_image = np.fromfile("depthdata.bin", dtype=np.uint16).reshape(height, width)` (note `np.uint16`, since `dtype="short"` would give signed int16). Are raw depth values like this OK, or should I convert to a disparity map or something similar first before passing them to SNE? There's no such step for your sample data, but I don't know what steps you take to create the sample depth image from your raw depth data.

Thanks for sharing this project and for any help you can offer :)

jb455 commented Mar 29, 2023

Another thought: how noisy can the depth data be? If the model is trained on synthetic data, are the depth values assumed to be 'perfect', or does the model tolerate bumps and holes?

@PatrickLowin

Did you find a solution to your problem?
