Modify damping and add lambda_max #65

Open · wants to merge 4 commits into master
Conversation

@Nic2020 (Member) commented Feb 5, 2025

I played with a few difficult simulations and found that these changes were helpful. In particular, the following setting was able to solve some very problematic simulations:

StateSpaceEcon.StackedTimeSolver.damping_br81(; delta=0.3, rateK=1.2, lambda_min=0.05, lambda_max=1.0)

I'd be happy to discuss these changes and find the best way to package them.

Thank you.
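For intuition, here is a minimal sketch of how a growth factor would interact with the proposed lambda_max cap. The helper name grow_lambda is hypothetical, not part of StateSpaceEcon; the values echo the settings above.

```julia
# Hypothetical helper: grow λ by a factor each step, but never exceed λ_max.
grow_lambda(λ, λ_growth, λ_max) = min(λ * λ_growth, λ_max)

grow_lambda(0.05, 1.2, 1.0)  # ≈ 0.06: well below the cap, grows by 20%
grow_lambda(0.9,  1.2, 1.0)  # 1.0: 0.9 * 1.2 = 1.08 would overshoot, so capped
```

Repeated application grows λ geometrically from lambda_min and then holds it at lambda_max.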

@Nic2020 (Member, Author) commented Feb 5, 2025

Just FYI: the tests in ./test/sim_solver.jl now fail, so I will have to adjust them.

@jasonjensen jasonjensen (Contributor) left a comment

Thank you for working on this. I think the idea of slowly increasing lambda has some merit, to escape a local minimum, but in general this would be best implemented as a separately named damping algorithm: in principle, if we provide a reference when implementing a method, that method should faithfully implement the approach in the reference.

We could also set the default growth to 1.0 so that the default behavior implements the original algorithm. That said, the change to the rate at which bigK gets lowered would possibly have to be undone (depending on the original algorithm).

Adding the lambda_max option seems like a good addition.

new_λ = min(λ * λ_growth, λ_max) # Gradually increase λ but cap at λ_max

if abs(norm(F, 2) - nF2) < 1e-12 # Convergence check to break loops
verbose && @info "Solver converged: residual change too small."

The check for verbose should come before the check on the norm, to save processing.

# Armijo test pass => accept the given λ
return true, λ
# Armijo test passed => accept the given λ
new_λ = min(λ * λ_growth, λ_max) # Gradually increase λ but cap at λ_max

We should call this solver something else if we are not in fact implementing the Armijo rule.
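For reference, a minimal sketch of the classical Armijo (sufficient-decrease) test being discussed; the names f, d, slope, and σ are illustrative, not the package's API:

```julia
# Accept step length λ along descent direction d when
#   f(x + λd) ≤ f(x) + σ λ ∇f(x)·d,   with 0 < σ < 1.
armijo_ok(fx, fx_trial, λ, slope; σ = 1e-4) = fx_trial <= fx + σ * λ * slope

# Example with f(x) = x^2 at x = 1, d = -1, so slope = ∇f(1)·d = -2:
f(x) = x^2
armijo_ok(f(1.0), f(1.0 - 1.0), 1.0, -2.0)  # full step to x = 0 is accepted
armijo_ok(f(1.0), f(1.0 + 1.0), 1.0, -2.0)  # a step that increases f is rejected
```

The key point of the rule is that λ is only accepted on sufficient decrease; a rule that instead grows λ after acceptance is a different algorithm.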

bigK = bigK / rateK
# ... and accept given λ
# Lower `bigK` more aggressively when convergence is happening
bigK /= sqrt(rateK)

I'm not sure about this change; why not use a different rateK? Using the square root is actually less aggressive than the previous implementation, since rateK > 1.
I don't have access to the Bank & Rose paper, but here too we should write a differently named damper if we are changing the algorithm. The same goes for the lambda-growth line below.
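A quick numeric check of the point about aggressiveness, with illustrative values only:

```julia
rateK = 1.2
bigK  = 10.0
bigK / rateK        # ≈ 8.33, the original reduction
bigK / sqrt(rateK)  # ≈ 9.13, a *smaller* reduction, since sqrt(rateK) < rateK
```

So for any rateK > 1, dividing by sqrt(rateK) lowers bigK less per step than the original update, not more.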

bigK /= sqrt(rateK)

# If λ is near the lower bound for many steps, slowly increase it
λ = min(λ * λ_growth, λ_max)

It's not quite clear to me why we need to grow lambda here; the BR algorithm already increases lambda when it lowers bigK.
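To illustrate why lowering bigK already raises lambda: Bank–Rose-style damping is commonly stated as λ = 1 / (1 + K·‖F‖), so a smaller K directly yields a larger λ. This is a hedged one-line sketch; the exact form used in the package may differ.

```julia
# Smaller K ⇒ larger λ, for any fixed residual norm.
br_lambda(K, normF) = 1 / (1 + K * normF)

br_lambda(10.0, 1.0)  # ≈ 0.09: large K gives strong damping
br_lambda(1.0,  1.0)  # 0.5: lowering K already increases λ
```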

if nF2_it != it
# first time we're called this iteration
# First time we're called in this iteration
nF2 = norm(F, 2) # store the residual
nF2_it = it
return false, calc_λ()

calc_lambda should respect/enforce lambda_max, or is that option only in the signature to make all the signatures align?
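If the intent is for the cap to be enforced everywhere, one option is to clamp the computed value before returning it. This is a sketch only; calc_λ's real body is not shown in the diff, and the helper name here is hypothetical.

```julia
# Hypothetical: whatever raw value the damping rule computes, keep it
# inside [λ_min, λ_max] before returning it.
capped_λ(raw, λ_min, λ_max) = clamp(raw, λ_min, λ_max)

capped_λ(1.7,  0.05, 1.0)  # 1.0: cap enforced
capped_λ(0.01, 0.05, 1.0)  # 0.05: floor enforced
```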
