From the associated paper, it seems that hardware latencies can play a big part in the overall performance of the policy. I noticed that in `eval_real.py` there is an `action_exec_latency` variable used to filter actions inferred from the policy. At the same time, there is a separate latency value, `robot_action_latency`, used in `bimanul_umi_env.py`. What is the difference between these two latency values? Should they be set to the same value?
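To make the question concrete, here is a minimal sketch of the kind of latency-based action filtering the question refers to. All names (`filter_executable_actions`, the parameters) are hypothetical and do not mirror the actual code in `eval_real.py`; the idea is simply that any action whose target timestamp falls inside the execution-latency window cannot reach the robot in time and is dropped:

```python
def filter_executable_actions(timestamps, actions, action_exec_latency, now):
    """Hypothetical sketch: drop actions scheduled too soon to execute.

    An action targeted at time t can only be executed if t is at least
    `action_exec_latency` seconds in the future, since inference and
    transport consume that much wall-clock time before the robot moves.
    """
    return [
        (t, a)
        for t, a in zip(timestamps, actions)
        if t > now + action_exec_latency
    ]


# Example: with 0.15 s of execution latency measured from now = 0.0,
# the action targeted at t = 0.1 is unreachable and gets filtered out.
kept = filter_executable_actions(
    timestamps=[0.1, 0.3, 0.5],
    actions=["a0", "a1", "a2"],
    action_exec_latency=0.15,
    now=0.0,
)
print(kept)  # [(0.3, 'a1'), (0.5, 'a2')]
```

A separate latency term (like `robot_action_latency` inside the environment) would instead be used when *scheduling* the surviving waypoints, shifting their send times earlier so they arrive on schedule; whether the two values should coincide depends on which delays each one is meant to compensate.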
I hope a UMI2 can be developed that is easier to use and assemble, and that it can adopt "Scaling Up and Distilling Down: Language-Guided Robot Skill Acquisition" for language control.