Questions about MQRNN:

Relevant code:

Shouldn't `ypred_rho - yf` be changed to `yf - ypred_rho`? Also, according to the quantile loss formula, shouldn't the losses be summed across horizons first and then averaged? I see that the code calls `loss.mean()` directly afterwards, which averages over the horizons as well.

One more question: within a single epoch, if every sampled batch triggers its own `loss.backward()`, what is the point of the `args.step_per_epoch` parameter? Wouldn't simply increasing `args.num_epoches` achieve the same thing? In my own implementation I found that averaging the losses over the `args.step_per_epoch` steps and then calling `loss.backward()` once on the aggregate also works reasonably well. What is your take on this?
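For concreteness, a minimal PyTorch sketch of the two aggregations the question contrasts (the function name `quantile_loss` and the toy shapes are mine, not from the repo). Summing over horizons and then averaging over samples differs from a plain `loss.mean()` only by a constant factor equal to the number of horizons, which the learning rate can absorb:

```python
import torch

def quantile_loss(y_pred, y_true, q):
    # pinball loss per element: max(q*(y - yhat), (q - 1)*(y - yhat))
    diff = y_true - y_pred
    return torch.max(q * diff, (q - 1) * diff)

# toy batch: 4 samples, 8 forecast horizons
y_true, y_pred = torch.randn(4, 8), torch.randn(4, 8)
elem = quantile_loss(y_pred, y_true, q=0.5)

loss_mean_all = elem.mean()                  # mean over samples and horizons
loss_sum_then_mean = elem.sum(dim=1).mean()  # sum over horizons, then mean over samples

# the two differ only by the number of horizons, so the gradients differ only in scale
assert torch.allclose(loss_sum_then_mean, elem.size(1) * loss_mean_all)
```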
Re: whether `ypred_rho - yf` should be `yf - ypred_rho`: either works, because the code uses a `(q - 1)` factor, so the flipped sign is absorbed.
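To make that concrete, a quick check (my own sketch, not the repo's code) that flipping the residual yields an identical pinball loss once the signs on the `q` and `(q - 1)` factors flip with it:

```python
import torch

q = 0.9
y_true, y_pred = torch.randn(16), torch.randn(16)

# formulation A: residual as (y_true - y_pred)
u = y_true - y_pred
loss_a = torch.max(q * u, (q - 1) * u)

# formulation B: residual as (y_pred - y_true); the (q - 1) factor absorbs the flip
v = y_pred - y_true
loss_b = torch.max(-q * v, (1 - q) * v)

assert torch.allclose(loss_a, loss_b)  # identical pinball loss either way
```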
The code above performs a loss update for every batch. If you update only once per epoch instead, the parameters change only after a full pass through the data, which makes the updates less efficient; it is not as good as updating on each step's batch.
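A sketch of the two update schedules under discussion (the model, sampler, and loss below are toy placeholders, not the repo's API). Variant 1 takes `step_per_epoch` optimizer steps per epoch; variant 2 takes one, and additionally must keep every step's graph alive until the single backward:

```python
import torch

# toy setup so the loop skeletons actually run; all names are placeholders
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
num_epoches, step_per_epoch = 2, 10

def sample_batch():
    x = torch.randn(32, 4)
    return x, x.sum(dim=1, keepdim=True)

def model_loss(x, y):
    return torch.nn.functional.mse_loss(model(x), y)

# variant 1 (what the reply describes): one parameter update per sampled
# batch, i.e. step_per_epoch updates per epoch
for epoch in range(num_epoches):
    for step in range(step_per_epoch):
        loss = model_loss(*sample_batch())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# variant 2 (what the question proposes): average the step losses, then a
# single backward/step per epoch, i.e. only one update per epoch
for epoch in range(num_epoches):
    losses = [model_loss(*sample_batch()) for _ in range(step_per_epoch)]
    optimizer.zero_grad()
    torch.stack(losses).mean().backward()
    optimizer.step()
```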