The Advantage Actor-Critic (A2C) algorithm is widely used in reinforcement learning. Vanilla A2C is conventionally implemented in the Monte Carlo setting. This repository implements an off-policy A2C trained in the Temporal Difference (TD) setting to solve the CartPole problem. The key advantages of this design are:
- More efficient use of past experience: the model can be trained on the same stored transitions multiple times.
- More stable learning of the value network: TD targets are predicted by a frozen copy of the value network, so they stay consistent between updates.
- Learning from experience generated by different policies (i.e. distributions other than the current policy): since the training data come from a behaviour policy, the importance sampling ratio acts as a weight that correctly adjusts the gradient for the current policy (see the sketch after this list).
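As a rough illustration, here is a minimal sketch of a single update step combining these ideas: a TD(0) target from a frozen target value network and an importance-sampling-weighted policy gradient. The sketch assumes PyTorch; the network, optimizer, and batch names (`policy_net`, `target_value_net`, `behaviour_log_probs`, etc.) are hypothetical and not the exact code in this repository.

```python
# Sketch only: off-policy A2C update with a frozen target value network
# and an importance sampling correction (hypothetical names throughout).
import torch
import torch.nn.functional as F

def off_policy_a2c_update(policy_net, value_net, target_value_net,
                          policy_opt, value_opt, batch, gamma=0.99):
    """One update on a batch of stored transitions.

    batch: dict of tensors with keys
        states (N, obs_dim), actions (N,), rewards (N,),
        next_states (N, obs_dim), dones (N,),
        behaviour_log_probs (N,)  # log pi_behaviour(a|s) recorded at collection time
    """
    states, actions = batch["states"], batch["actions"]
    rewards, next_states, dones = batch["rewards"], batch["next_states"], batch["dones"]

    # TD(0) target predicted by the frozen target value network
    with torch.no_grad():
        td_target = rewards + gamma * (1.0 - dones) * target_value_net(next_states).squeeze(-1)

    values = value_net(states).squeeze(-1)
    advantages = (td_target - values).detach()

    # Importance sampling ratio pi_current(a|s) / pi_behaviour(a|s)
    log_probs = torch.distributions.Categorical(logits=policy_net(states)).log_prob(actions)
    is_ratio = torch.exp(log_probs - batch["behaviour_log_probs"]).detach()

    # IS-weighted policy gradient loss and TD value loss
    policy_loss = -(is_ratio * log_probs * advantages).mean()
    value_loss = F.mse_loss(values, td_target)

    policy_opt.zero_grad(); policy_loss.backward(); policy_opt.step()
    value_opt.zero_grad(); value_loss.backward(); value_opt.step()

    return policy_loss.item(), value_loss.item()
```

Replaying the same stored transitions through such an update several times, and periodically copying the value network's weights into the frozen target network, is what provides the experience reuse and consistent TD targets listed above.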
An optimal policy is found after a few hundred episodes. See the rewards obtained below: