Thanks for sharing this!
I ran the code with env-id: Walker2d-v2 and num_of_workers: 2.
However, the workers seem unable to get through the Actor network's forward pass. No error is shown; the process just stays in a running state and never produces results.
It hangs inside `nn.Module.__call__`, called from model.py:

```python
def __call__(self, *input, **kwargs):
    for hook in self._forward_pre_hooks.values():
        hook(self, input)
    if torch.jit._tracing:
        result = self._slow_forward(*input, **kwargs)
    else:
        result = self.forward(*input, **kwargs)
    for hook in self._forward_hooks.values():
        hook_result = hook(self, input, result)
        if hook_result is not None:
            raise RuntimeError(
                "forward hooks should never return any values, but '{}'"
                "didn't return None".format(hook))
    if len(self._backward_hooks) > 0:
        var = result
        while not isinstance(var, torch.Tensor):
            if isinstance(var, dict):
                var = next((v for v in var.values() if isinstance(v, torch.Tensor)))
            else:
                var = var[0]
        grad_fn = var.grad_fn
        if grad_fn is not None:
            for hook in self._backward_hooks.values():
                wrapper = functools.partial(hook, self)
                functools.update_wrapper(wrapper, hook)
                grad_fn.register_hook(wrapper)
    return result
```
This is my first time using multiprocessing, and I don't know what's wrong with it.
Thank you for your help.
@jiameij Sorry for the late reply. This is a thread-blocking problem, and I have solved it: you need to add `os.environ['OMP_NUM_THREADS'] = '1'`. I will migrate the code to pytorch-0.4.1 in the next few weeks. Hope this helps.
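For anyone hitting the same hang: a minimal sketch of the fix is below. The key point is that `os.environ['OMP_NUM_THREADS'] = '1'` must run before `torch` is imported in the entry script, so OpenMP is never initialized with multiple threads in the parent process. The `Actor` stand-in, the observation size of 17, and the worker function are placeholders for illustration, not the repository's actual code.

```python
import os

# Must be set BEFORE importing torch, so each forked worker runs its
# forward passes single-threaded and does not deadlock in OpenMP.
os.environ['OMP_NUM_THREADS'] = '1'

import torch
import torch.multiprocessing as mp
import torch.nn as nn


def worker(rank, model):
    # Walker2d-v2 observations are 17-dimensional (assumption for this sketch).
    obs = torch.randn(1, 17)
    with torch.no_grad():
        action = model(obs)  # previously hung here without OMP_NUM_THREADS=1
    print('worker', rank, 'got action of shape', tuple(action.shape))


if __name__ == '__main__':
    actor = nn.Linear(17, 6)  # hypothetical stand-in for the Actor network
    actor.share_memory()      # share parameters with the workers
    procs = [mp.Process(target=worker, args=(r, actor)) for r in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

An alternative with the same effect is calling `torch.set_num_threads(1)` at the top of each worker, but the environment variable covers the parent process as well.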