Add NLopt as an optimizer #30
Also just found out about https://pymoo.org/
@brosaplanella I can look into this if it is wanted?
Yes, that would be great, thanks!
It looks like
Okay: steps for making nlopt build (on M-series Macs at least), following this SO post.
Clearly this isn't a workable implementation for a deployed package, so we won't be merging this branch until it is fixed. Linking the bug DanielBok/nlopt-python#13 to track whether this gets fixed.
The issue above implements the nlopt fix in a handy script.
That's great! It's nice to have this script.
The latest commit on this issue branch has an implementation, but it isn't working yet.
I am unsure if this will work, as it seems not to like the structure we have around the objective_function. This goes for NLopt; I will also have a look into pymoo.
I managed to ascertain what the issue was. NLopt expects a grad argument in the Python objective function; strictly this is bad, as it is unused here, but I assume it comes from some pointer handling deeper in NLopt. It is also very picky about the return type of the optimisation function.

In the NLopt wrapper class I have made a function decorator (called wrapper) in the run optimiser. This makes NLopt play nicely with our other functions: the decorated function can take as many arguments as needed, and it simply strips off the last one, so the optimisation problem gets the expected number of arguments while the NLopt library can still do whatever pointer magic it does with grad (the final argument). We also cast the result to np.float64, which is the return type NLopt expects; NumPy will usually optimise to use smaller types where possible, but then NLopt will hiccup.

I think we should shelve this branch as an example of how it's done, and of why we don't do it: between the Mac install and the fragility of NLopt, I think it's a time bomb. This is of course unless there is a benefit to NLopt that outweighs these concerns. In the notebooks here it performs no better than SciPy's minimize.
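The adapter described above can be sketched as follows. This is an illustrative reconstruction, not the actual code from the branch: the decorator name `nlopt_objective` and the Rosenbrock objective are assumptions made for the example, but the two key points (stripping the trailing grad argument and casting the result to np.float64) are exactly the fixes discussed.

```python
import numpy as np

def nlopt_objective(func):
    """Adapt a SciPy-style objective f(x) to NLopt's f(x, grad) convention.

    NLopt calls the objective with a trailing ``grad`` argument that is
    used for in-place gradient storage by gradient-based algorithms; it
    must be accepted even when unused. NLopt is also picky about the
    return type, so the result is forced to np.float64.
    """
    def wrapper(*args):
        # Drop the trailing ``grad`` argument and cast the result.
        return np.float64(func(*args[:-1]))
    return wrapper

@nlopt_objective
def rosenbrock(x):
    # Hypothetical objective used only to demonstrate the adapter.
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

# NLopt would call the decorated function like this
# (grad is None/ignored for derivative-free algorithms):
value = rosenbrock(np.array([1.0, 1.0]), None)
```

The minimum of the Rosenbrock function is at (1, 1), so the call above returns 0.0 as an np.float64, which is the shape NLopt requires.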
Sounds good! Let's keep this branch as an example of how to do it, and move on to PyMOO to see if it is better.
NLopt is an optimization library with lots of different optimizers. It has a Python package on pip: https://nlopt.readthedocs.io/en/latest/NLopt_Tutorial/#example-in-python
The following code works well and quickly: