test.py partial settings not working #8
Sorry about that. Can you copy-paste the following code into line 236 in crowd_sim_pred_real_gst.py (right before plt.pause(0.1))?
Also, in
The code for saving slides has been added in the save_slides branch. The output figures are saved in
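For illustration, here is a minimal sketch of how per-step figures might be written to disk from a matplotlib-based render loop; the function name, directory layout, and dpi are assumptions, and the actual implementation in the save_slides branch may differ:

```python
# Hypothetical sketch only; the real save_slides code may differ.
import os

def save_render_frame(fig, save_dir, episode_idx, step_idx):
    """Save the current render figure as one 'slide' per simulation step."""
    episode_dir = os.path.join(save_dir, str(episode_idx))
    os.makedirs(episode_dir, exist_ok=True)
    # Zero-padded step index keeps the saved frames sorted on disk.
    fig.savefig(os.path.join(episode_dir, f'{step_idx:04d}.png'), dpi=150)
```

Calling something like this right before plt.pause(0.1) would produce one image per rendered step.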
Awesome, everything you said is correct! What should I do if I want to change the visualization style? Specifically, I want to add a dotted line along the path where the robot moves, so that the robot's trajectory is more intuitive.
To achieve this, you can use some data structure to save all of the robot's positions before the episode ends, then draw a line between every pair of adjacent positions.
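A minimal, self-contained sketch of that idea (the helper name, axis handle, and styling are assumptions, not the repo's actual rendering code):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()  # in the repo this would be the existing render Axes
robot_traj = []           # append the robot's (px, py) once per step; clear it at episode reset

def draw_robot_trajectory(ax, traj, color='tab:blue'):
    """Draw a dotted line connecting every pair of adjacent robot positions."""
    if len(traj) > 1:
        xs, ys = zip(*traj)
        ax.plot(xs, ys, linestyle=':', color=color, linewidth=1.5)

# Example usage inside the render loop:
# robot_traj.append((robot.px, robot.py))
# draw_robot_trajectory(ax, robot_traj)
```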
Thank you! But where is the script for the visualization file
The code is in another repo. I just added you as a collaborator. The code is in
Sorry, I thought we were talking about another paper. I just added you to the correct repo. I believe the code is in the "adds saving episode trajectory plot capability" commit on the "render_traj" branch. And again, please don't overwrite anything in the repo.
I think the visualization code in the CrowdNav_Prediction_AttnGraph project may be in crowd_sim.py. But it doesn't seem to work when I make simple changes to the visualization, such as changing the color on line 758
Sorry about the late reply. The CrowdNav_Prediction repo is indeed the development version of our ICRA 2023 paper, not the ICRA 2021 paper.
Hello, do you still remember me? I still have some issues with test.py. When I set save_slides=True, the folder trained_models/GST_predictor_rand/social_eval/41665 where the results should be saved is empty. There is also a problem with the --test_case parameter: when I set it to any other positive number, the test still runs for 500 episodes. I'm confused and made some modifications to evaluation.py, but they don't work. Do you have any suggestions? Maybe the uploaded file is incomplete.
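For what it's worth, here is a hypothetical sketch of how an evaluation loop could honor a --test_case argument and run a single episode instead of all 500; the function names, reset signature, and policy API below are assumptions, not the repo's actual evaluation.py:

```python
import argparse

def evaluate(env, policy, num_episodes=500, test_case=None):
    # Run either the single requested test case or the full test set.
    cases = [test_case] if test_case is not None else range(num_episodes)
    for case in cases:
        obs = env.reset(phase='test', test_case=case)   # assumed reset signature
        done = False
        while not done:
            action = policy.predict(obs)                 # assumed policy API
            obs, reward, done, info = env.step(action)

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--test_case', type=int, default=-1,
                        help='run only this test case; -1 runs all episodes')
    args = parser.parse_args()
    # evaluate(env, policy, test_case=None if args.test_case < 0 else args.test_case)
```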