What does a constant Gradient norm indicate? #1027

Closed
cseveren opened this issue Jan 24, 2023 · 2 comments

@cseveren

What does it signify when the Gradient norm in the output stays constant (stuck, not changing) across iterations when using NewtonTrustRegion? As an example, below is the output from the first two iterations of a minimization problem run with NewtonTrustRegion. My initial point is the output of a prior round of optimization, which also got stuck at this same Function value and Gradient norm.

Iter     Function value   Gradient norm 
     0     4.925179e+06     1.951972e+03
 * time: 0.00021505355834960938
     1     4.925179e+06     1.951972e+03
 * time: 0.0005660057067871094

Some more detail: if I switch to LBFGS, the optimization successfully continues (the Function value decreases), but gradient methods are of course slow, so it would be ideal to switch back to NewtonTrustRegion. Even if I let LBFGS run for a while so that it finds a moderately different candidate minimizer, the same stuck behavior with a constant Gradient norm re-emerges when I switch back to NewtonTrustRegion.

I would provide code, but it is complicated, involves a lot of data, and the issue only occurs in some of the models I've run. I'm really just hoping for some intuition about which options/tuning parameters to adjust to bounce out of difficult spots. I have already tried allow_f_increases=true; that did not solve the issue.
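
For reference, here is a minimal sketch of the switching pattern described above (the objective f and starting point x0 below are simple placeholders, not the actual model):

using Optim

# Placeholder objective and starting point; the real model is too large to post.
f(x) = sum(abs2, x)          # stand-in for the actual objective
x0   = randn(10)             # stand-in for the actual initial point

# First-order pass with LBFGS (this one keeps making progress for me).
res_lbfgs = optimize(f, x0, LBFGS(),
                     Optim.Options(show_trace = true, iterations = 500);
                     autodiff = :forward)

# Switch back to NewtonTrustRegion from the LBFGS candidate; this is where the
# Gradient norm stops changing.
res_ntr = optimize(f, Optim.minimizer(res_lbfgs), NewtonTrustRegion(),
                   Optim.Options(show_trace = true, allow_f_increases = true);
                   autodiff = :forward)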

Excellent package, many thanks.

@cseveren
Author

cseveren commented Feb 7, 2023

Because this may not be an Optim issue, I also posted here:

https://discourse.julialang.org/t/gradient-norm-does-not-change-in-optim-using-autodiff/94215

@pkofod
Member

pkofod commented Apr 29, 2024

No progress would be my bet. If you can provide more information, I can reopen :)

@pkofod pkofod closed this as completed Apr 29, 2024