
quasi-Newton approximation of Hessian of Lagrangian #129

Closed
MaxenceGollier opened this issue Nov 14, 2024 · 2 comments
@MaxenceGollier

When we compute the Hessian of the Lagrangian for a QuasiNewtonModel, we in fact compute only the Hessian of the objective: the dual variables are discarded.
I don't think this should be the case; there is no reason not to approximate the constraints with a quasi-Newton model as well.
IPOPT, for example, treats each constraint with a quasi-Newton model when quasi-Newton approximations are used for the problem.

function NLPModels.hprod!(
  nlp::QuasiNewtonModel,
  x::AbstractVector,
  y::AbstractVector,
  v::AbstractVector,
  Hv::AbstractVector;
  kwargs...,
)
  # the multipliers `y` are discarded: only the objective part is approximated
  return hprod!(nlp, x, v, Hv; kwargs...)
end

@MaxenceGollier (Author)

Actually, to get the qN approximation of the Lagrangian, I should just push $$\nabla f_{k+1} + J_{k+1}^T y_{k+1} - \nabla f_{k} - J_{k}^T y_{k}$$ to update the approximation at each iterate, instead of $$\nabla f_{k+1} - \nabla f_{k}$$, so this is irrelevant here. I am closing this issue now.
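To make the idea above concrete, here is a minimal sketch (in Python with NumPy, not the Julia API of NLPModels.jl) of a standard BFGS update driven by the Lagrangian gradient difference $\nabla f_{k+1} + J_{k+1}^T y_{k+1} - \nabla f_k - J_k^T y_k$ rather than by $\nabla f_{k+1} - \nabla f_k$ alone. The toy problem, the iterates, and all function names here are made up for illustration; only the update formula itself is standard.

```python
import numpy as np

# Toy problem (hypothetical, for illustration only):
#   min f(x) = x1^4 + x2^2   s.t.   c(x) = x1^2 + x2 - 1 = 0
grad_f = lambda x: np.array([4 * x[0] ** 3, 2 * x[1]])
jac_c = lambda x: np.array([[2 * x[0], 1.0]])  # 1 x 2 Jacobian of c


def grad_lag(x, y):
    """Gradient of the Lagrangian: grad f(x) + J(x)' y."""
    return grad_f(x) + jac_c(x).T @ y


def bfgs_update(B, s, g):
    """Standard BFGS update of B with step s and gradient difference g.
    Skipped when the curvature condition g's > 0 fails, so B stays SPD."""
    sg = g @ s
    if sg <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(g):
        return B
    Bs = B @ s
    return B + np.outer(g, g) / sg - np.outer(Bs, Bs) / (s @ Bs)


# Two consecutive primal-dual iterates (made-up numbers)
x0, y0 = np.array([1.0, 0.5]), np.array([0.1])
x1, y1 = np.array([0.8, 0.4]), np.array([0.2])

s = x1 - x0
# Lagrangian gradient difference, not just grad_f(x1) - grad_f(x0)
g = grad_lag(x1, y1) - grad_lag(x0, y0)

B = bfgs_update(np.eye(2), s, g)
print(np.allclose(B @ s, g))  # the secant condition B s = g holds after the update
```

The point is that only the vector pushed into the update changes; the quasi-Newton operator itself is untouched, which is why no per-constraint approximation is needed.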

@tmigot (Member)

tmigot commented Nov 14, 2024

Actually, @MaxenceGollier, if you have a relatively small example of this, it could be the start of a good tutorial?
