This repository has been archived by the owner on Nov 3, 2022. It is now read-only.

Fix Padam unbiased learning rate #315

Open · wants to merge 1 commit into master

Conversation

crowsonkb

The formula for computing Padam's unbiased learning rate lr_t is currently valid only when partial = 1/2, because it was copied unchanged from Adam/AMSGrad, where partial is effectively always 1/2. This PR corrects it so that it is valid for any value of partial.
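For reference, a minimal sketch of the step the PR describes (variable names and defaults are illustrative, not taken from `keras_contrib/optimizers/padam.py`): when both bias corrections are folded into an effective learning rate, the exponent on the `(1 - beta_2**t)` term has to match the exponent applied to the second-moment estimate, so the usual `sqrt` form is correct only when `partial = 1/2`.

```python
def padam_step(param, grad, m, v, t, lr=0.1, beta_1=0.9, beta_2=0.999,
               partial=0.125, epsilon=1e-8):
    """One illustrative Padam-style update (hypothetical helper, not the PR's code)."""
    m = beta_1 * m + (1.0 - beta_1) * grad
    v = beta_2 * v + (1.0 - beta_2) * grad ** 2
    # Adam/AMSGrad fold the bias corrections into lr_t via sqrt(1 - beta_2**t),
    # which implicitly assumes the denominator exponent is 1/2.  For a general
    # `partial`, the exponent in the correction has to match:
    lr_t = lr * (1.0 - beta_2 ** t) ** partial / (1.0 - beta_1 ** t)
    param = param - lr_t * m / (v ** partial + epsilon)
    return param, m, v
```

With `partial = 0.5` this `lr_t` reduces exactly to the familiar `lr * sqrt(1 - beta_2**t) / (1 - beta_1**t)` form used by Adam/AMSGrad.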

MFreidank (Contributor) commented Feb 4, 2019

@crowsonkb I am not convinced that this is actually necessary. The implementation of the original authors does not do this; in fact, it seems to me as if they explicitly changed their implementation to undo the changes in your PR. Notice also that we do use the partial term in our final update. Can you provide more details on whether, and if so why, you still think this is useful?
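For context, a small numeric sketch of where the exponent matters (illustrative values only; it assumes, without verifying here, that the reference implementation applies the bias corrections to `m` and `v` directly rather than folding them into `lr_t`). The `partial` in the final update's denominator alone does not absorb the correction; a folded `lr_t` matches the explicit form only when its exponent also equals `partial`:

```python
# Illustrative numbers only; epsilon omitted for clarity.
lr, beta_1, beta_2, partial, t = 0.1, 0.9, 0.999, 0.125, 10
param, m, v = 1.0, 0.05, 0.0025

# Explicit bias correction applied to the moment estimates themselves:
m_hat = m / (1.0 - beta_1 ** t)               # bias-corrected first moment
v_hat = v / (1.0 - beta_2 ** t)               # bias-corrected second moment
explicit = param - lr * m_hat / v_hat ** partial

# Folding the corrections into an effective learning rate gives the same step
# only if the exponent on (1 - beta_2**t) matches `partial`, as this PR proposes;
# a fixed sqrt() there is equivalent only when partial == 1/2.
lr_t = lr * (1.0 - beta_2 ** t) ** partial / (1.0 - beta_1 ** t)
folded = param - lr_t * m / v ** partial

assert abs(explicit - folded) < 1e-12
```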

@bot-of-gabrieldemarmiesse

Hello, I'm a bot! I can make mistakes, notify gabrieldemarmiesse if I made one.

I see that you modified the following file:

  • keras_contrib/optimizers/padam.py

The owner of that file is @MFreidank

@MFreidank could you please take a look at it whenever you have the time and add a review? Thank you in advance for the help.

gabrieldemarmiesse (Contributor)

@MFreidank please ignore the bot's message; my bot is up and running and was just triggered for this PR.

MFreidank (Contributor) commented Jun 22, 2019

@gabrieldemarmiesse
This has been stale since my comment from three months ago, so I suggest closing it unless you find a flaw with my arguments above.
