Slimmed-down Levenberg-Marquardt for nonlinear least squares #500
Conversation
…copy-paste of custom_leastsq to simplish_lstsq)
…ty" in the CustomLSOptimizer behavior. Remove damping_clip as well, since that was only used for damping_mode != "identity".
…tomLMOptimizer when damping_basis=="diagonal_basis". I believe that since I already restricted damping_mode=identity in the previous commit, damping_basis has no special meaning anyway.
…ored out into their own functions
…s in simplerlm.py instead of customlm.py, so we can mark the whole customlm.py file as deprecated.
…d custom_leastsq to simplish_leastsq
Ping @enielse for review when you get a chance. Now is a good time for us to document why options are worth keeping in the SimplerLMOptimizer class and the simplish_leastsq function. I'd be all for removing even more options if you think some are unnecessary.
…f boolean flag computed elsewhere) then (break or continue)" and moved them to where the value of the flag was originally determined.
… directly accesses elements of JTJ.
Looks great! Thanks for the hard work, @rileyjmurray!
Ping @sserita -- can you take a look at this? Once you approve we can merge this in.
Thanks for this @rileyjmurray!
WOOO!
pyGSTi's nonlinear least squares solver is a custom implementation of Levenberg-Marquardt with a dizzying array of options. The core algorithm is over 800 lines long and reaches 10 levels of indentation at its deepest point. This complexity makes the existing implementation effectively impossible to extend.
This PR introduces slimmed-down infrastructure for nonlinear least squares. I started by copying customlm.py into a new file called simplerlm.py, then I removed one feature at a time in a series of commits. "Removal" of a feature meant making the default behavior for a given option the only behavior.
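For context, what survives all the option-stripping is the standard damped normal-equations update. The sketch below is a minimal, self-contained illustration of that loop in plain numpy, with identity damping only (matching the damping_mode restriction in this PR); it is not the actual simplish_leastsq code.

```python
import numpy as np

def lm_sketch(f, jac, x0, mu=1e-3, max_iter=100, tol=1e-8):
    """Minimal Levenberg-Marquardt sketch with identity damping.
    Illustrative only -- not pyGSTi's implementation."""
    x = x0.copy()
    for _ in range(max_iter):
        r = f(x)                       # residual vector
        J = jac(x)                     # Jacobian of the residuals
        JTJ = J.T @ J
        JTr = J.T @ r
        if np.linalg.norm(JTr) < tol:  # gradient small -> converged
            break
        # Damped normal equations: (J^T J + mu*I) dx = -J^T r
        dx = np.linalg.solve(JTJ + mu * np.eye(len(x)), -JTr)
        if np.sum(f(x + dx)**2) < np.sum(r**2):
            x = x + dx                 # accept step, relax damping
            mu *= 0.3
        else:
            mu *= 2.0                  # reject step, increase damping
    return x

# Toy usage: solve x**2 = [1, 4] starting from [3, 3]
x = lm_sketch(lambda x: x**2 - np.array([1.0, 4.0]),
              lambda x: np.diag(2 * x),
              np.array([3.0, 3.0]))
```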
Notes
I tried to use ChatGPT to help split the monolithic custom_leastsq function in customlm.py into simpler functions. This turned out not to work, for reasons that I'll discuss off-GitHub if people are interested.
For the curious, I'm doing this because I want to extend our nonlinear least squares solver in two ways. First, I want it to support pytorch Tensors; right now we only support numpy arrays and whatever home-cooked thing we have for distributed-memory computations. Second, I want to experiment with optimization algorithms that rarely (if ever!) require evaluating the full Jacobian of circuit outcome probabilities.
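To give a sense of what pytorch support could look like (purely hypothetical, not part of this PR), the damped solve at the heart of the iteration might dispatch on array type roughly like this:

```python
import numpy as np

def solve_damped(JTJ, JTr, mu):
    """Hypothetical sketch (not pyGSTi code): solve the damped normal
    equations (J^T J + mu*I) dx = -J^T r for numpy arrays or torch Tensors."""
    try:
        import torch
        if isinstance(JTJ, torch.Tensor):
            I = torch.eye(JTJ.shape[0], dtype=JTJ.dtype, device=JTJ.device)
            return torch.linalg.solve(JTJ + mu * I, -JTr)
    except ImportError:
        pass  # torch not installed; fall through to numpy
    return np.linalg.solve(JTJ + mu * np.eye(JTJ.shape[0]), -JTr)
```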
I also edited a scaling test for MPI because it raised errors when I ran pytest.
Finally, I made slight extensions to the ArraysInterface classes. The new functions are there so that the simpler LM implementation never needs to directly index into JTJ.
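As a sketch of the kind of helper I mean (the method names below are made up for illustration and are not the actual additions), an ArraysInterface-style class can expose whole-matrix operations so the LM loop never touches individual entries of JTJ:

```python
import numpy as np

class DenseArraysInterfaceSketch:
    """Illustrative sketch: helpers that encapsulate access to JTJ so the
    LM loop never indexes into it. Method names are hypothetical."""

    def jtj_diag(self, jtj):
        # Diagonal of J^T J, e.g. for building the damping term.
        return np.diag(jtj).copy()

    def jtj_update_diag(self, jtj, new_diag):
        # Overwrite the diagonal in place (e.g. to apply damping).
        np.fill_diagonal(jtj, new_diag)

    def jtj_max_abs_element(self, jtj):
        # Largest-magnitude entry, handy for scaling heuristics.
        return np.max(np.abs(jtj))
```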