PPO attention net (GTrXLNet) #176
base: master
Conversation
a forgotten file?
Yep, my bad
- Convert numpy array to torch array (for evaluation)
- Remove model call when episode starts (memory dimension and features sequence are not always the same)
- Add assertion sanity check on batch_size and n_steps (as in PPO)
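For context, here is a minimal sketch of the kind of batch_size / n_steps consistency check mentioned in the last item, modelled on what SB3's PPO does in its constructor. It is illustrative only; the helper name `check_batch_size` and the exact wording are my own, and the actual check in this PR may be placed and phrased differently.

```python
import warnings


def check_batch_size(n_steps: int, n_envs: int, batch_size: int) -> None:
    """Sanity-check that the mini-batch size is compatible with the rollout buffer size."""
    assert batch_size > 1, "`batch_size` must be greater than 1."
    # The rollout buffer holds n_steps * n_envs transitions per update.
    buffer_size = n_steps * n_envs
    if buffer_size % batch_size != 0:
        warnings.warn(
            f"You have specified a mini-batch size of {batch_size}, "
            f"but the rollout buffer holds {buffer_size} transitions "
            f"(n_steps={n_steps} * n_envs={n_envs}), "
            "so some mini-batches will be truncated."
        )
```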
Hey @RemiG3, I hope everything is going well. 👋 I've been following the development of the attention PPO feature, and I'm really excited about the progress being made! Could you provide an update on the current status of this feature? I'd love to know where it stands and whether there is anything new since your last comment. I came across this example you shared:

```python
# Imports for the environment are implied by the original snippet.
import gym
from stable_baselines3.common.vec_env import DummyVecEnv

from sb3_contrib.ppo_attention.ppo_attention import AttentionPPO
from sb3_contrib.ppo_attention.policies import MlpAttnPolicy

VE = DummyVecEnv([lambda: gym.make("CartPole-v1")])

model = AttentionPPO(
    "MlpAttnPolicy",
    VE,
    n_steps=240,
    learning_rate=0.0003,
    verbose=1,
    batch_size=12,
    ent_coef=0.03,
    vf_coef=0.5,
    seed=1,
    n_epochs=10,
    max_grad_norm=1,
    gae_lambda=0.95,
    gamma=0.99,
    device="cpu",
    policy_kwargs=dict(
        net_arch=dict(pi=[64, 32], vf=[64, 32]),
    ),
)
```

Does it still work like this? If there is any example available to better understand how this feature is implemented, or if it is already possible to test a prototype, I would be incredibly grateful for any information. Thank you so much for the hard work you're putting into this. Many thanks, and I'm eagerly looking forward to your response. 🚀
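In case it helps anyone trying the branch: assuming `AttentionPPO` keeps the standard SB3 interface, training and a quick evaluation loop would follow the usual pattern. The methods below are the standard SB3 ones, not anything specific to this PR, and a memory-based policy may additionally need hidden state and episode-start arguments in `predict()` (as `RecurrentPPO` does).

```python
# Hypothetical usage, continuing the snippet above and assuming the standard SB3 API.
model.learn(total_timesteps=100_000)

obs = VE.reset()
for _ in range(1_000):
    # Plain form of predict(); an attention/memory policy may also require
    # passing the recurrent state and episode starts, as in RecurrentPPO.
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = VE.step(action)
```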
In iGibson, I compared the three algorithms PPO, RecurrentPPO, and AttentionPPO. Unfortunately, even when I change the GTrXL network parameters, it performs poorly and requires more training time.
Description
Add PPO attention network (GTrXLNet, paper: Stabilizing Transformers for Reinforcement Learning).
Comparisons still need to be made (for example, against the RLlib implementation).
closes #165
Note: I have cleaned up most of the code, but it's still under development.
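For readers unfamiliar with the architecture: the key idea of GTrXL is to replace the residual connections of a Transformer-XL block with GRU-style gating (combined with identity-map reordering), which is what stabilizes training in RL. The snippet below is a minimal sketch of that gating mechanism written from the paper's equations; it is not the code added by this PR, and the class name `GRUGate` and the `bias_init` argument are my own.

```python
import torch
import torch.nn as nn


class GRUGate(nn.Module):
    """GRU-style gating used by GTrXL in place of a residual connection."""

    def __init__(self, dim: int, bias_init: float = 2.0):
        super().__init__()
        self.w_r = nn.Linear(dim, dim, bias=False)
        self.u_r = nn.Linear(dim, dim, bias=False)
        self.w_z = nn.Linear(dim, dim, bias=False)
        self.u_z = nn.Linear(dim, dim, bias=False)
        self.w_g = nn.Linear(dim, dim, bias=False)
        self.u_g = nn.Linear(dim, dim, bias=False)
        # Positive bias keeps z close to 0 at initialisation,
        # so the gated block starts out close to an identity map.
        self.bias_z = nn.Parameter(torch.full((dim,), bias_init))

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # x: skip-path input of the sublayer, y: sublayer (attention or MLP) output
        r = torch.sigmoid(self.w_r(y) + self.u_r(x))
        z = torch.sigmoid(self.w_z(y) + self.u_z(x) - self.bias_z)
        h_hat = torch.tanh(self.w_g(y) + self.u_g(r * x))
        return (1.0 - z) * x + z * h_hat


# Quick check: at initialisation the output stays close to the skip input x.
gate = GRUGate(dim=64)
x = torch.randn(8, 64)  # sublayer input
y = torch.randn(8, 64)  # sublayer output
out = gate(x, y)
```

The gate bias initialised above zero is what the paper identifies as important: it makes each transformer block act nearly as an identity function early in training, which avoids the instability of vanilla transformers in RL.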
Context
Types of changes
Checklist:
- I have reformatted the code using `make format` (required)
- I have checked the codestyle using `make check-codestyle` and `make lint` (required)
- I have ensured `make pytest` and `make type` both pass. (required)

Note: we are using a maximum length of 127 characters per line.