I am trying to use centralized_critic in training. I used the --centralized-critic arg while training and the training went well, but when I try to visualize the generated policies I get an error:
Loading checkpoint from /home/karuturi.t/blue_fujie/karuturi.t/MAgrid_repos/multigrid/scripts/saved/empty8x8/PPO_2024-02-22_17-21-13_CC/PPO_MultiGrid-Empty-8x8-v0_aff78_00000_0_2024-02-22_17-21-13/checkpoint_000005
Traceback (most recent call last):
File "/blue/fujie/karuturi.t/MAgrid_repos/multigrid/scripts/visualize.py", line 144, in
algorithm.restore(str(checkpoint))
File "/home/karuturi.t/blue_fujie/karuturi.t/conda/envs/multigrid/lib/python3.10/site-packages/ray/tune/trainable/trainable.py", line 577, in restore
self.load_checkpoint(checkpoint_dir)
File "/home/karuturi.t/blue_fujie/karuturi.t/conda/envs/multigrid/lib/python3.10/site-packages/ray/rllib/algorithms/algorithm.py", line 2342, in load_checkpoint
self.__setstate__(checkpoint_data)
File "/home/karuturi.t/blue_fujie/karuturi.t/conda/envs/multigrid/lib/python3.10/site-packages/ray/rllib/algorithms/algorithm.py", line 2794, in __setstate__
self.workers.local_worker().set_state(state["worker"])
File "/home/karuturi.t/blue_fujie/karuturi.t/conda/envs/multigrid/lib/python3.10/site-packages/ray/rllib/evaluation/rollout_worker.py", line 1463, in set_state
self.policy_map[pid].set_state(policy_state)
File "/home/karuturi.t/blue_fujie/karuturi.t/conda/envs/multigrid/lib/python3.10/site-packages/ray/rllib/policy/torch_mixins.py", line 111, in set_state
super().set_state(state)
File "/home/karuturi.t/blue_fujie/karuturi.t/conda/envs/multigrid/lib/python3.10/site-packages/ray/rllib/policy/torch_policy_v2.py", line 1083, in set_state
o.load_state_dict(optim_state_dict)
File "/home/karuturi.t/blue_fujie/karuturi.t/conda/envs/multigrid/lib/python3.10/site-packages/torch/_compile.py", line 24, in inner
return torch._dynamo.disable(fn, recursive)(*args, **kwargs)
File "/home/karuturi.t/blue_fujie/karuturi.t/conda/envs/multigrid/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 489, in _fn
return fn(*args, **kwargs)
File "/home/karuturi.t/blue_fujie/karuturi.t/conda/envs/multigrid/lib/python3.10/site-packages/torch/optim/optimizer.py", line 747, in load_state_dict
raise ValueError("loaded state dict contains a parameter group "
ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group
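For context: the ValueError comes from torch.optim.Optimizer.load_state_dict, which refuses to restore state when the saved parameter groups don't match the parameters of the optimizer that was just constructed. That is what you would expect if the checkpoint was written by the centralized-critic policy (a different set of model parameters) while visualize.py builds a plain PPO policy. A minimal, self-contained sketch of the same failure, with two hypothetical models standing in for the two policy variants:

```python
import torch

# Stand-ins for the two policy variants: "large" plays the role of the
# centralized-critic policy (more parameters), "small" the plain PPO
# policy that visualize.py builds.
small = torch.nn.Linear(4, 2)
large = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Linear(8, 2))

opt_small = torch.optim.Adam(small.parameters())
opt_large = torch.optim.Adam(large.parameters())

# Restoring the "large" optimizer state into the "small" optimizer fails
# with the same ValueError as in the traceback above, because the single
# parameter group contains a different number of parameters.
try:
    opt_small.load_state_dict(opt_large.state_dict())
except ValueError as err:
    print(err)
```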
I am able to visualize policies trained without centralized_critic using visualize.py.
So, should I make changes in centralized_critic.py when training, or changes in visualize.py so that it matches the setup used by centralized_critic?
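The likely cause is that visualize.py builds the algorithm with the default PPO configuration, so the freshly built optimizer's parameter groups don't match the ones saved from the centralized-critic run; building the algorithm in visualize.py with the same centralized-critic configuration used for training (however centralized_critic.py constructs it) before algorithm.restore() should let the restore succeed. If you only need rollouts for visualization, another option is to bypass visualize.py's algorithm construction and load the policies directly from the checkpoint. A sketch, assuming standard Ray 2.x RLlib APIs and a hypothetical policy id:

```python
from ray.rllib.policy.policy import Policy

# Algorithm checkpoint directory produced by the --centralized-critic run
# (the path printed as "Loading checkpoint from ..." above).
checkpoint = "<path/to/checkpoint_000005>"

# Policy.from_checkpoint rebuilds each policy from the spec stored in the
# checkpoint itself (spaces, config, model) rather than from visualize.py's
# default PPO config, so the restored state should match what was saved.
# For an algorithm checkpoint it returns a dict: policy_id -> Policy.
policies = Policy.from_checkpoint(checkpoint)
policy = policies["policy_0"]  # hypothetical id; inspect policies.keys()

# Actions for rendering can then be computed per observation:
# action, _, _ = policy.compute_single_action(obs)
```

Note that any custom model or policy classes registered by centralized_critic.py during training still need to be imported/registered before the checkpoint is loaded, otherwise deserializing the policy itself can fail.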