PPO dual clip #37
base: master
Conversation
Hello @MehdiZouitine ,
Thanks for the PR. I've added some comments, mostly about formatting. In addition, could you please add some unit tests for this implementation? (Simple ones, testing for some edge cases.)
I am not familiar with Dual-Clip PPO; do you have experiments showing its efficacy?
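A minimal edge-case test of the kind requested could look like the sketch below. The `dual_clip` keyword on `policy_loss` is an assumption taken from this PR (the final signature may differ); the rest uses the existing cherry API.

```python
import torch as th
from cherry.algorithms import ppo

def test_policy_loss_dual_clip_bounds_negative_advantages():
    # Edge case: a very large ratio paired with a negative advantage.
    # Plain PPO keeps min(r * A, clip(r) * A) = r * A here, which is
    # unbounded; the dual clip should floor the objective at dual_clip * A.
    new_log_probs = th.zeros(1, requires_grad=True)
    old_log_probs = th.full((1,), -5.0)  # ratio = exp(5) >> 1 + clip
    advantages = th.full((1,), -1.0)
    loss = ppo.policy_loss(new_log_probs, old_log_probs, advantages,
                           clip=0.1, dual_clip=3.0)  # dual_clip: hypothetical kwarg
    # With the floor at dual_clip * A = -3, the negated loss is at most 3.
    assert loss.item() <= 3.0 + 1e-6
```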
 **References**

-1. Schulman et al. 2017. “Proximal Policy Optimization Algorithms.” arXiv [cs.LG].
+1. Deheng Ye et al. 2020. “Mastering Complex Control in MOBA Games with Deep Reinforcement Learning.” arXiv:1912.09729.
Please keep the original reference -- you can add the new one as well.
""" | ||
[[Source]](https://github.com/seba-1511/cherry/blob/master/cherry/algorithms/ppo.py) | ||
|
||
**Description** | ||
|
||
The clipped policy loss of Proximal Policy Optimization. | ||
The dual clipped policy loss of Dual-Clip Proximal Policy Optimization. |
Please add to the original description, while keeping the original message.
msg = "new_values, old_values, and rewards must have equal size." | ||
assert new_values.size() == old_values.size() == rewards.size(), msg | ||
if debug.IS_DEBUGGING: | ||
if old_values.requires_grad: | ||
debug.logger.warning('PPO:state_value_loss: old_values.requires_grad is True.') | ||
debug.logger.warning( | ||
"PPO:state_value_loss: old_values.requires_grad is True." | ||
) | ||
if rewards.requires_grad: | ||
debug.logger.warning('PPO:state_value_loss: rewards.requires_grad is True.') | ||
debug.logger.warning("PPO:state_value_loss: rewards.requires_grad is True.") | ||
if not new_values.requires_grad: | ||
debug.logger.warning('PPO:state_value_loss: new_values.requires_grad is False.') | ||
loss = (rewards - new_values)**2 | ||
debug.logger.warning( | ||
"PPO:state_value_loss: new_values.requires_grad is False." | ||
) | ||
loss = (rewards - new_values) ** 2 | ||
clipped_values = old_values + (new_values - old_values).clamp(-clip, clip) | ||
clipped_loss = (rewards - clipped_values)**2 | ||
clipped_loss = (rewards - clipped_values) ** 2 |
Those lines shouldn't be modified.
-msg = 'new_log_probs, old_log_probs and advantages must have equal size.'
-assert new_log_probs.size() == old_log_probs.size() == advantages.size(),\
-    msg
+msg = "new_log_probs, old_log_probs and advantages must have equal size."
+assert new_log_probs.size() == old_log_probs.size() == advantages.size(), msg
 if debug.IS_DEBUGGING:
     if old_log_probs.requires_grad:
-        debug.logger.warning('PPO:policy_loss: old_log_probs.requires_grad is True.')
+        debug.logger.warning(
+            "PPO:policy_loss: old_log_probs.requires_grad is True."
+        )
     if advantages.requires_grad:
-        debug.logger.warning('PPO:policy_loss: advantages.requires_grad is True.')
+        debug.logger.warning("PPO:policy_loss: advantages.requires_grad is True.")
     if not new_log_probs.requires_grad:
-        debug.logger.warning('PPO:policy_loss: new_log_probs.requires_grad is False.')
+        debug.logger.warning(
+            "PPO:policy_loss: new_log_probs.requires_grad is False."
+        )
Those lines shouldn't be modified.
Description
I added a new option to the PPO loss. This option enables Dual-Clip PPO (https://arxiv.org/pdf/1912.09729.pdf). It matters in complex environments (MOBA games, StarCraft, and multi-agent settings), because trajectories can be sampled from various sources of policies: the importance ratios can then become very large, and for negative advantages the standard clipped objective is unbounded.
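For reviewers unfamiliar with the method, here is a minimal sketch of the dual-clip objective in PyTorch. The tensor names mirror the existing `policy_loss` signature, and the `dual_clip` coefficient (the paper uses c = 3) is the new knob; this illustrates the technique and is not the exact code in this PR.

```python
import torch as th

def dual_clip_policy_loss(new_log_probs, old_log_probs, advantages,
                          clip=0.1, dual_clip=3.0):
    # Standard PPO clipped surrogate (Schulman et al. 2017).
    ratios = th.exp(new_log_probs - old_log_probs)
    surrogate = th.min(
        ratios * advantages,
        ratios.clamp(1.0 - clip, 1.0 + clip) * advantages,
    )
    # Dual clip (Ye et al. 2020): for negative advantages, bound the
    # surrogate from below by dual_clip * advantages, so a stale sample
    # with a huge ratio cannot dominate the gradient.
    dual = th.max(surrogate, dual_clip * advantages)
    objective = th.where(advantages < 0, dual, surrogate)
    return -objective.mean()
```

With purely on-policy data the ratios stay near 1 and the dual clip is inactive; it only engages on stale samples whose ratio exceeds `dual_clip`, which is exactly the off-policy regime the paper targets.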
Contribution Checklist
If your contribution modifies code in the core library (not docs, tests, or examples), please fill in the following checklist.