When running the book's DDPG reference code, I observed that the actor loss keeps rising as training goes on, while the critic loss fluctuates up and down. Isn't the actor loss defined as the negative Q value? If that value keeps growing, doesn't it mean the Q value of the chosen action is getting smaller and smaller, which is the opposite of our goal of maximizing the action's Q value? So, in the end, should the algorithm be judged by whether the reward goes up?
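Your reading of the loss is right. Below is a minimal sketch of a typical DDPG update step in PyTorch; the names (`ddpg_update`, `actor`, `critic`, `target_actor`, `target_critic`) are placeholders and not necessarily what the book's reference code uses, but the actor loss is indeed the negative of the critic's current Q estimate:

```python
import torch
import torch.nn.functional as F

# Minimal sketch of one DDPG update; network/optimizer names are
# placeholders, not necessarily those in the book's reference code.
def ddpg_update(batch, actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, gamma=0.99):
    states, actions, rewards, next_states, dones = batch

    # Critic loss: TD error against the target networks.
    with torch.no_grad():
        next_q = target_critic(next_states, target_actor(next_states))
        td_target = rewards + gamma * (1 - dones) * next_q
    critic_loss = F.mse_loss(critic(states, actions), td_target)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor loss: the negative Q value of the actor's action, so
    # minimizing it maximizes the critic's current Q estimate. Its
    # absolute value shifts whenever the critic changes, so it is not
    # a reliable training metric on its own.
    actor_loss = -critic(states, actor(states)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    return actor_loss.item(), critic_loss.item()
```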
Look at the reward curve. The absolute value of the actor loss probably does not mean much by itself: that loss is essentially the critic's output, and the critic network keeps changing, so its Q-value estimates may be badly off at first and only get corrected gradually. In general, reinforcement-learning training behaves differently from other deep-learning tasks; what ultimately matters is whether the reward converges and stabilizes.
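One way to make "look at the reward" concrete is to smooth the per-episode returns and watch that curve instead of the losses. The helper below is a hypothetical sketch, not part of the book's code; `episode_returns` is assumed to be a list of undiscounted returns, one per training episode:

```python
import numpy as np

# Hypothetical helper: judge training by the episode returns rather than
# by the raw actor/critic losses.
def smoothed_returns(episode_returns, window=10):
    returns = np.asarray(episode_returns, dtype=np.float64)
    if len(returns) < window:
        return returns
    kernel = np.ones(window) / window
    return np.convolve(returns, kernel, mode="valid")

# If the smoothed curve rises and then flattens out, training is working,
# even if the actor loss keeps drifting because the critic's scale changes.
```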