Optimizer options interface #52
I'm not sure what that means. What's "stationarity"?
There's no particular reason this has to be hardcoded, except that I haven't really looked at what these parameters do or how they might fit into a coherent interface for the GRAPE package. If being able to set
That would be something on the long-term TODO list
Probably not… Stuff like that should generally be in the internal "workspace" object.
Yes, definitely. I can do that this weekend if that helps you, as an "experimental feature" (subject to change in future versions if I can make the API more coherent).
Thanks for the quick reply!
Stationarity is just jargon for a condition that encodes whether a point is a stationary point of the underlying optimization problem (in the simplest setting: a condition that encodes how close the gradient is to vanishing), which is of course often used as part of a convergence criterion. Here, the default L-BFGS-B method declares convergence when the norm of the projected gradient drops below pgtol.
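For box constraints, the projected gradient zeroes out any component that pushes against an active bound, and L-BFGS-B's pgtol is a tolerance on the largest absolute component of that vector. A minimal sketch of the idea (the function name and signature here are illustrative, not part of LBFGSB.jl):

```julia
# Projected gradient for box constraints lb .<= x .<= ub:
# a component is zeroed when following the descent direction -g
# would push the corresponding coordinate past an active bound.
function projected_gradient(x, g, lb, ub)
    pg = copy(g)
    for i in eachindex(x)
        if x[i] <= lb[i] && g[i] > 0
            pg[i] = 0.0      # pinned at lower bound, gradient points outward
        elseif x[i] >= ub[i] && g[i] < 0
            pg[i] = 0.0      # pinned at upper bound, gradient points outward
        end
    end
    return pg
end

# L-BFGS-B declares convergence roughly when
#     maximum(abs, projected_gradient(x, g, lb, ub)) <= pgtol
```

At a stationary point, every coordinate either sits in the interior with a vanishing gradient component or presses against a bound the gradient cannot cross, so the projected gradient vanishes.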
I personally only need
If the gradient were carried by this object, I thought I would just be able to define my convergence criterion accordingly. Something along the lines of
Together with what I wrote above, maybe this could already solve my problem? I will look into that. Thanks!
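The kind of user-defined criterion alluded to above might look roughly like the following, assuming a result object that exposes the current gradient. The MockResult struct, its field names, and the check_convergence! helper are all hypothetical stand-ins for illustration, not the current GRAPE.jl API:

```julia
# Hypothetical stand-in for a GrapeResult that carries the gradient.
mutable struct MockResult
    gradient::Vector{Float64}
    converged::Bool
    message::String
end

# User-supplied convergence check: declare convergence once the
# largest gradient component falls below a tight tolerance.
function check_convergence!(result; pgtol=1e-10)
    if maximum(abs, result.gradient) <= pgtol
        result.converged = true
        result.message = "gradient norm below $(pgtol)"
    end
    return result
end
```

A callback in this spirit would let the tolerance live in user code instead of being fixed inside the L-BFGS-B backend.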
Makes sense now!
GRAPE is still very much at a pre-1.0 stage of development, so there's a very low bar both for ad-hoc changes and for breaking changes later on to clean up those ad-hoc changes. So I'll just push out a minor release that makes those parameters accessible.
You're totally right (and I kinda forgot). That makes total sense, and is a very good argument for including the gradient in the result object. So I'll probably end up doing that (at some point).
You can definitely abuse the
Hey,
I would like to solve some quantum control problems via GRAPE, but I need high precision with regard to stationarity. Unfortunately, I can't use Optim.jl because I have bounds on the control drives, and it seems I am also not able to strengthen the convergence criteria of the L-BFGS-B default flexibly enough. I think that is the case, at least? The GrapeResult object does not carry gradient information, so I assume I cannot use it to define my own convergence criterion, and the corresponding LBFGSB.jl option pgtol = 1e-5 appears to be hardcoded (see https://github.com/JuliaQuantumControl/GRAPE.jl/blob/83295ce48621430db739ce2cdee4afba5b9ee41e/src/backend_lbfgsb.jl#L8). Am I missing a way to set the convergence criterion? If not, what's the better way to address that issue: support bounds for Optim.jl optimizers, pass gradient information to the result object, or admit kwargs to set options for the default optimizer?
Thanks!
Flemming