Impressive work! I was wondering: what would it take to convert this to continuous control? Break the time step down to the frame rate and decide every frame what force to apply, along which axis and in which direction? Is this doable with DQN?
Sorry for the late answer; I had strange notification settings.
Extending our agents to continuous action spaces is straightforward: apply the same changes to our algorithm that the continuous-action variant of DQN applies to the original. The result is DDPG, which is already implemented in keras-rl.
So you would change the action space in the AirGym environment to a continuous one and use keras-rl's DDPG, as sketched below.
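Here is a minimal sketch of what that wiring might look like. It assumes keras-rl's `DDPGAgent` API and a hypothetical `ContinuousAirGymEnv` whose action is a 3-D force vector in a `Box` space; the observation shape, placeholder dynamics, network sizes, and hyperparameters are all assumptions you would replace with your real AirGym setup.

```python
import numpy as np
import gym
from gym import spaces

from keras.models import Sequential, Model
from keras.layers import Dense, Flatten, Input, Concatenate
from keras.optimizers import Adam

from rl.agents import DDPGAgent
from rl.memory import SequentialMemory
from rl.random import OrnsteinUhlenbeckProcess


class ContinuousAirGymEnv(gym.Env):
    """Hypothetical stand-in for a continuous AirGym environment:
    the action is a 3-D force vector applied every frame."""

    def __init__(self):
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(3,))
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(10,))
        self._state = np.zeros(10, dtype=np.float32)

    def reset(self):
        self._state = np.zeros(10, dtype=np.float32)
        return self._state

    def step(self, action):
        # Placeholder dynamics: a real AirGym env would apply the force to
        # the simulated vehicle and compute a task-specific reward.
        reward = -float(np.sum(np.square(action)))
        return self._state, reward, False, {}


env = ContinuousAirGymEnv()
nb_actions = env.action_space.shape[0]

# Actor network: observation -> continuous action, bounded to [-1, 1] by tanh.
actor = Sequential()
actor.add(Flatten(input_shape=(1,) + env.observation_space.shape))
actor.add(Dense(64, activation='relu'))
actor.add(Dense(64, activation='relu'))
actor.add(Dense(nb_actions, activation='tanh'))

# Critic network: (action, observation) -> Q-value.
action_input = Input(shape=(nb_actions,), name='action_input')
observation_input = Input(shape=(1,) + env.observation_space.shape, name='observation_input')
x = Concatenate()([action_input, Flatten()(observation_input)])
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
x = Dense(1, activation='linear')(x)
critic = Model(inputs=[action_input, observation_input], outputs=x)

# DDPG agent with Ornstein-Uhlenbeck noise on the actions for exploration.
memory = SequentialMemory(limit=100000, window_length=1)
random_process = OrnsteinUhlenbeckProcess(size=nb_actions, theta=0.15, mu=0.0, sigma=0.3)
agent = DDPGAgent(nb_actions=nb_actions, actor=actor, critic=critic,
                  critic_action_input=action_input, memory=memory,
                  nb_steps_warmup_critic=100, nb_steps_warmup_actor=100,
                  random_process=random_process, gamma=0.99, target_model_update=1e-3)
agent.compile(Adam(lr=1e-3), metrics=['mae'])
agent.fit(env, nb_steps=50000, visualize=False, verbose=1)
```

The structure mirrors keras-rl's own DDPG example: a tanh-bounded actor, a critic that takes both the action and the observation as inputs, and Ornstein-Uhlenbeck noise for exploration in the continuous action space.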