PPO2 is generally more stable and gives better results than TRPO. However, there is no version of GAIL that uses PPO. Could we have this feature in the repo? (I was trying to dabble with a PPO-GAIL anyway.)
Thanks
Hello,
We are now focusing on the migration to tf2 (cf #366 ) so I would avoid adding new algorithms now.
We plan (cf #576) to drop GAIL support, but you should still be able to use it via this repo, which is maintained by @AdamGleave and depends on Stable Baselines.
Btw, @AdamGleave what algorithms are supported with GAIL? (maybe PPO is already supported)
imitation uses PPO by default in its GAIL (and AIRL) implementation, but the RL algorithm is configurable by specifying a different `model_class` here. We haven't tested with non-PPO algorithms, but others should work OK.