Question about the paper/implementation #3
Hello,
Results for TQC+DroQ look interesting! However, we do not plan to expand this repository and intend to keep it frozen to ensure the reproducibility of the results reported in the paper.
Thanks for the swift answer =)
Given how fast the implementation is, it would make sense to even try UTD > 20, no? Btw, what makes it so fast? Jax only, or additional special tricks? Did you consider running the training for longer than 20 minutes, or does it plateau/break? (let's say 1h for the easiest setup)
Our laptop could only run training with UTD=20 in real time, so we didn't try larger values :) Yes, it's just jax.jit. Otherwise, it's a vanilla implementation without any additional engineering. In the wild, we were constrained by the battery capacity :) With more training, it gets better and better.
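To make the jax.jit point concrete, here is a minimal sketch, not the repo's actual code, of how a jitted `lax.scan` can fuse all UTD gradient updates per environment step into a single compiled call. The linear model, loss, and parameter names are placeholders standing in for the real SAC/DroQ update:

```python
import jax
import jax.numpy as jnp

UTD = 20  # gradient updates per environment step

def loss_fn(params, batch):
    # Placeholder loss: linear regression standing in for the critic update.
    x, y = batch
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@jax.jit
def utd_update(params, batches, lr=1e-3):
    # `batches` stacks UTD minibatches on a leading axis; lax.scan runs all
    # of them inside one XLA program, with no Python overhead in between.
    def one_update(p, batch):
        grads = jax.grad(loss_fn)(p, batch)
        return jax.tree_util.tree_map(lambda w, g: w - lr * g, p, grads), None
    params, _ = jax.lax.scan(one_update, params, batches)
    return params

# Usage: sample UTD minibatches from the replay buffer, stack them, then run
# all 20 updates in a single jitted call per environment step.
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
params = {"w": jnp.zeros((4, 1)), "b": jnp.zeros((1,))}
batches = (jax.random.normal(k1, (UTD, 32, 4)), jax.random.normal(k2, (UTD, 32, 1)))
params = utd_update(params, batches)
```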
Alright... still curious to see what it could do in the simplest setting (indoor, no battery, flat ground). FYI, I created a small report for the runs I did today with TQC ;) https://wandb.ai/araffin/a1/reports/TQC-with-DropQ-config-on-walk-in-the-park-env--VmlldzoyNTQxMzgz
As a follow-up, I've got a working version of TQC + DroQ in JAX here (borrowed some code from your implementation ;)): vwxyzjn/cleanrl#272
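For readers unfamiliar with DroQ: the core idea is a Q-network with a small dropout rate plus layer normalization, which is what keeps high-UTD training stable. A minimal Flax sketch of the idea (an illustration, not code from either repo):

```python
import jax.numpy as jnp
import flax.linen as nn

class DroQCritic(nn.Module):
    """Sketch of a DroQ-style Q-network: small dropout + layer norm in each
    hidden layer, so many gradient updates per step don't overfit the critic."""
    hidden: int = 256
    dropout_rate: float = 0.01

    @nn.compact
    def __call__(self, obs, act, training: bool = True):
        x = jnp.concatenate([obs, act], axis=-1)
        for _ in range(2):
            x = nn.Dense(self.hidden)(x)
            x = nn.Dropout(self.dropout_rate, deterministic=not training)(x)
            x = nn.LayerNorm()(x)
            x = nn.relu(x)
        return nn.Dense(1)(x)  # scalar Q-value

# Usage (dropout needs its own RNG stream at apply time):
# params = DroQCritic().init(key, obs, act, training=False)
# q = DroQCritic().apply(params, obs, act, rngs={"dropout": drop_key})
```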
@araffin Morning, I was reading the paper and trying to understand where simulation played a part in the training. I understood the paper to be promoting the idea of training in the real environment rather than in simulation, cutting out the sim -> real step. But it then mentions using MuJoCo and modelling the A1, so I am assuming they perhaps initialised the training in MuJoCo and then fine-tuned in the real environment?
No no, they trained on the real robot only. I managed to reproduce the experiment: https://araffin.github.io/slides/design-real-rl-experiments/#/13/2
Nice work! But I'm then confused as to how they incorporated the simulation (specifically MuJoCo) in their training/research. Was it simply to draw a comparison?
Hello,
thanks for sharing and open-sourcing the work.
After a quick read of the paper, I had several questions:
How do you ensure you are not breaking the robot when sending high-frequency commands (e.g., with a larger value for motor damping)?
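For context, here is a purely hypothetical sketch of the kind of safeguard this question is about: low-pass filtering the policy's high-frequency targets and relying on a PD damping gain (kd) to dissipate jerky motion. None of the class, gains, or limits below come from the repo; they are illustrative only:

```python
import numpy as np

class SafePDCommand:
    """Hypothetical safeguard: smooth high-frequency policy targets, then
    compute a damped, torque-limited PD command."""

    def __init__(self, kp=40.0, kd=5.0, alpha=0.8, tau_max=33.5):
        self.kp, self.kd = kp, kd  # a larger kd damps fast oscillations
        self.alpha = alpha         # low-pass filter coefficient for targets
        self.tau_max = tau_max     # (assumed) per-motor torque limit
        self.filtered = None

    def __call__(self, q_target, q, dq):
        # Exponential moving average so jerky targets change only gradually.
        if self.filtered is None:
            self.filtered = q_target
        self.filtered = self.alpha * self.filtered + (1 - self.alpha) * q_target
        tau = self.kp * (self.filtered - q) - self.kd * dq
        return np.clip(tau, -self.tau_max, self.tau_max)
```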
I have a working implementation of TQC + DroQ using Stable-Baselines3 that I can also share ;) (I can do a PR on request, and it will probably be part of SB3 soon). A minimal usage sketch follows the links below.
SB3 branch: https://github.com/DLR-RM/stable-baselines3/tree/feat/dropq
SB3 contrib branch: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib/tree/feat/dropq
Training script: https://github.com/araffin/walk_in_the_park/blob/feat/sb3/train_sb3.py
EDIT: SBX = SB3 + Jax is available here: https://github.com/araffin/sbx (with TQC, DroQ and SAC-N)
W&B example run: https://wandb.ai/araffin/a1/runs/2ln32rqx?workspace=user-araffin
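As a rough usage sketch, assuming stock sb3-contrib (the dropout/layer-norm critics live in the feat/dropq branches above, whose exact kwargs may differ), a DroQ-style high update-to-data ratio can be approximated with TQC's `gradient_steps`:

```python
from sb3_contrib import TQC

# Sketch only: plain sb3-contrib TQC with a DroQ-style UTD ratio.
model = TQC(
    "MlpPolicy",
    "Pendulum-v1",                    # stand-in env for a quick smoke test
    top_quantiles_to_drop_per_net=2,  # TQC's quantile truncation
    train_freq=1,
    gradient_steps=20,                # UTD = 20, as in the walk-in-the-park runs
    learning_starts=100,
    verbose=1,
)
model.learn(total_timesteps=5_000)
```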