
The observation images regenerated by the dataset_states_to_obs script are upside down. #203

Open
pumfish opened this issue Oct 31, 2024 · 2 comments


@pumfish

pumfish commented Oct 31, 2024

Hi, I'm using robomimic to regenerate demonstrations with dataset_states_to_obs.py, following https://robomimic.github.io/docs/datasets/robosuite.html#extracting-observations-from-mujoco-states
But I found that the regenerated images, e.g., frontview_image, are upside down.
I have set `macros.IMAGE_CONVENTION = "opencv"`, but it has no effect. Is there something I haven't set up correctly?
Below is my script:

```
python dataset_states_to_obs.py --dataset ${DATA_DIR} \
    --output_name "add_agentview_demo.hdf5" \
    --done_mode 0 --shaped \
    --camera_names frontview agentview --depth \
    --camera_height 256 --camera_width 256
```
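One thing worth double-checking (an assumption on my part, not a confirmed fix): `IMAGE_CONVENTION` lives in robosuite's own `macros` module, so it has to be set on `robosuite.macros` before the environment is created; setting a similarly named variable elsewhere will not affect rendering. If the images still come out vertically flipped after extraction, a post-hoc workaround is to reverse the row axis of the arrays, e.g.:

```python
import numpy as np

def flip_vertical(imgs: np.ndarray) -> np.ndarray:
    """Reverse the row (height) axis of a batch of images.

    `imgs` stands for an array loaded from the output HDF5, e.g.
    f["data/demo_0/obs/frontview_image"][:] with shape (T, H, W, C).
    (Illustrative helper, not part of the robomimic API.)
    """
    return imgs[:, ::-1]
```

If you go this route, apply the same flip to the RGB and depth arrays together so they stay pixel-aligned.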

By the way, I also found that the depth values differ from the original. For example, the original frontview_depth values range from 0.992 to 0.997, but the regenerated depth values range from 1.3 to 3.8.

How can I solve these issues?

@amandlek
Member

What version and branch of robosuite and robomimic are you on?

As for the depth value, we convert the normalized values into real-valued depth that corresponds to distances here, since this is the desired behavior for most applications. You can toggle that behavior if you'd like to go with the normalized values.
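For reference, the normalized-to-metric conversion is the standard inverse of the OpenGL depth projection, and robosuite applies an equivalent transform in its camera utilities. A minimal sketch, with `znear`/`zfar` standing in for the camera clipping planes:

```python
import numpy as np

def real_depth_from_normalized(depth: np.ndarray, znear: float, zfar: float) -> np.ndarray:
    """Convert a normalized MuJoCo/OpenGL depth buffer (values in [0, 1])
    into metric distances along the camera's viewing axis.

    Standard inverse of the OpenGL depth projection; a sketch of the kind
    of conversion robosuite performs, not its exact code.
    """
    return znear / (1.0 - depth * (1.0 - znear / zfar))
```

This maps a normalized value of 0 to `znear` and 1 to `zfar`, which is why values clustered near 1.0 in the raw buffer (like 0.992–0.997) spread out to much larger metric distances after conversion.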

@pumfish
Author

pumfish commented Nov 1, 2024

Thanks for your reply.
The robosuite version is 1.4.1, and robomimic is 0.3.0.
When I generate point clouds from the inverted images and the normal images, I observe misalignment even though the depth values have been converted to real values. Could the inverted images be the cause of the misalignment in the point clouds?
The point cloud synthesis code is based on https://github.com/haonan16/Stow/blob/b3d3045a64992a190bbad9f25b14e83b45d2ae8b/perception/sample.py#L46-L120
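For what it's worth, a vertical flip would indeed misalign a back-projected point cloud: pinhole back-projection assigns each pixel (u, v) a ray via the intrinsics, so flipping the image rows without adjusting the principal point sends every pixel down the wrong ray. A minimal sketch of the back-projection (illustrative parameter names, not the Stow repo's code):

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project an (H, W) metric depth map into camera-frame 3-D
    points using a pinhole intrinsics model.

    Returns an (H, W, 3) array of (x, y, z) coordinates.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)
```

If the depth map is flipped relative to the convention the intrinsics assume, the `v - cy` term is mirrored, which shows up as exactly the kind of misalignment you describe.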
