TD-based Advantage Actor Critic

The Advantage Actor Critic (A2C) algorithm is widely used in reinforcement learning. Conventionally, vanilla A2C is implemented in a Monte Carlo setting, where full episode returns are used to estimate the advantage. This repository implements an off-policy A2C trained in a Temporal Difference (TD) setting to solve the CartPole-v0 problem.
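
At the heart of a TD-based A2C update is the one-step advantage estimate A(s, a) ≈ r + γ·V(s′) − V(s), which weights the policy gradient and also defines the regression target for the critic. Below is a minimal sketch of that update in PyTorch; the network sizes, learning rate, and variable names are illustrative assumptions and not necessarily those used in this repository.

```python
# Illustrative sketch of a one-step TD actor-critic update
# (hyperparameters and architectures are assumptions, not the repo's values).
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, n_actions, gamma = 4, 2, 0.99   # CartPole-v0 observation/action sizes

actor  = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

def td_a2c_update(states, actions, rewards, next_states, dones):
    """One TD-based A2C update on a batch of transitions."""
    values      = critic(states).squeeze(-1)
    next_values = critic(next_states).squeeze(-1).detach()

    # One-step TD target and advantage: A = r + gamma * V(s') - V(s)
    td_targets = rewards + gamma * (1.0 - dones) * next_values
    advantages = (td_targets - values).detach()

    log_probs  = F.log_softmax(actor(states), dim=-1)
    log_prob_a = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)

    actor_loss  = -(log_prob_a * advantages).mean()   # policy gradient weighted by advantage
    critic_loss = F.mse_loss(values, td_targets)      # value regression toward the TD target

    optimizer.zero_grad()
    (actor_loss + critic_loss).backward()
    optimizer.step()
```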

Highlighted features

Replay buffer

Makes more efficient use of past experience by storing transitions and training the model on the same stored experience multiple times.
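
A minimal sketch of such a buffer, assuming uniform sampling from a fixed-capacity deque (class and method names are illustrative, not necessarily the repository's):

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)   # oldest transitions are dropped first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Each stored transition can be drawn many times across updates,
        # which is what allows past experience to be reused.
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```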

Target network

Allows more stable learning of the value network: TD targets are predicted by a frozen copy of the value network, so the targets stay consistent while the online network is being updated.
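
Continuing the sketch above, one common way to do this is to keep a frozen copy of the online critic and re-synchronise it only every N gradient steps; the hard-update scheme and interval below are assumptions, not necessarily what this repository uses.

```python
import copy
import torch

target_critic = copy.deepcopy(critic)   # `critic` is the online value network from the sketch above
for p in target_critic.parameters():
    p.requires_grad_(False)              # the target network never receives gradients

def td_target(rewards, next_states, dones, gamma=0.99):
    # TD targets come from the frozen copy, so they stay consistent
    # while the online critic is being trained against them.
    with torch.no_grad():
        next_values = target_critic(next_states).squeeze(-1)
    return rewards + gamma * (1.0 - dones) * next_values

def maybe_sync_target(step, every=200):
    # Hard update: copy the online critic's weights into the target network
    # every `every` gradient steps.
    if step % every == 0:
        target_critic.load_state_dict(critic.state_dict())
```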

Importance sampling

Allows the policy to learn from experience generated by other policies (i.e. distributions different from the current policy). The importance sampling ratio acts as a per-sample weight that corrects the gradient for the current policy, since the training data were generated by a different (behaviour) policy.
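
A minimal sketch of an importance-sampling-weighted actor loss, assuming the behaviour policy's log-probabilities are stored alongside each transition (the clipping threshold is also an illustrative assumption):

```python
import torch
import torch.nn.functional as F

def is_weighted_actor_loss(actor, states, actions, advantages, behaviour_log_probs):
    """behaviour_log_probs: log pi_b(a|s) recorded when the transition was generated."""
    log_probs  = F.log_softmax(actor(states), dim=-1)
    log_prob_a = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)

    # Importance sampling ratio rho = pi_current(a|s) / pi_behaviour(a|s),
    # detached so it only reweights the gradient of the current policy.
    rho = torch.exp(log_prob_a.detach() - behaviour_log_probs)
    rho = rho.clamp(max=10.0)   # optional clipping to limit variance

    return -(rho * log_prob_a * advantages).mean()
```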

Results

An optimal policy is found after a few hundred episodes of training. The episode rewards obtained during training are shown below:

[plot of episode rewards]
