Add optimization problems to "real-world" tests #69

Closed
gdalle opened this issue May 17, 2024 · 12 comments · Fixed by #83

gdalle (Collaborator) commented May 17, 2024

@amontoison pointed out that we could test sparsity detection on suites of optimization problems, such as the ones collected in OptimizationProblems.jl.

adrhill (Owner) commented May 17, 2024

Sounds great. Are there any specific ones we should start with?

adrhill changed the title from 'Useful test sets' to 'Add optimization problems to "real-world" tests' (May 17, 2024)
adrhill added the "testing (Improve package tests)" label (May 17, 2024)
amontoison commented

I tested the problems with my branch of ADNLPModels.jl:
JuliaSmoothOptimizers/ADNLPModels.jl#230

using ADNLPModels
using OptimizationProblems, OptimizationProblems.ADNLPProblems

problems = OptimizationProblems.meta[!, :name]
npb = length(problems)

for (i, problem) in enumerate(problems)
    print("$i/$npb -- $problem ")
    prob = Symbol(problem)
    try
        # Instantiating the ADNLPModel triggers sparsity detection
        nlp = OptimizationProblems.ADNLPProblems.eval(prob)()
        println("✓")
    catch
        println("✗")
    end
end

I got an error on the problems AMPGO07, AMPGO13, AMPGO18, hs243, hs68, hs69, hs87.

For some problems it's probably because we need local tracing, but for hs68 and hs69 it's because we need support for the erf function here.
For hs243, the problem is x' * B, because it relies on conj.
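
A quick way to reproduce the erf failure directly (a minimal sketch, assuming SCT's exported hessian_pattern takes a function and an input vector, as mentioned further down in this thread):

using SparseConnectivityTracer, SpecialFunctions

f(x) = erf(x[1]) + x[2]^2
# Expected to error until erf is supported for the tracer types:
hessian_pattern(f, rand(2))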

For a given model, we can easily verify the sparsity pattern with:

using ADNLPModels, OptimizationProblems, NLPModels
using SparseArrays  # provides sparse

nlp = OptimizationProblems.ADNLPProblems.eval(:woods)()

jrows, jcols = jac_structure(nlp)
nnzj = length(jrows)
jvals = ones(Bool, nnzj)
J = sparse(jrows, jcols, jvals, nlp.meta.ncon, nlp.meta.nvar)

hrows, hcols = hess_structure(nlp)
nnzh = length(hrows)
hvals = ones(Bool, nnzh)
H = sparse(hrows, hcols, hvals, nlp.meta.nvar, nlp.meta.nvar)

We can check that we get the same results with a JuMP model:

using OptimizationProblems, NLPModels, NLPModelsJuMP
using SparseArrays  # provides sparse

jump_model = OptimizationProblems.PureJuMP.eval(:woods)()
nlp2 = MathOptNLPModel(jump_model)

jrows2, jcols2 = jac_structure(nlp2)
nnzj2 = length(jrows2)
jvals2 = ones(Bool, nnzj2)
J2 = sparse(jrows2, jcols2, jvals2, nlp2.meta.ncon, nlp2.meta.nvar)

hrows2, hcols2 = hess_structure(nlp2)
nnzh2 = length(hrows2)
hvals2 = ones(Bool, nnzh2)
H2 = sparse(hrows2, hcols2, hvals2, nlp2.meta.nvar, nlp2.meta.nvar)

It should be easy to check that the sparsity patterns returned by SparseConnectivityTracer.jl and JuMP.jl are the same.
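
For example, a minimal comparison could look like this (a sketch, assuming the J/H and J2/H2 matrices built in the two snippets above are in scope):

using Test

@test J == J2   # Jacobian pattern: ADNLPModels vs. JuMP/MOI
@test H == H2   # Hessian pattern (both store only the lower triangle)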

gdalle (Collaborator, Author) commented May 19, 2024

Just to be clear, how are jac_structure and hess_structure implemented? Are they independent of ADNLPModels.jl? We don't want to end up with a suite of tests that checks nothing at all once you depend on us.

For conj it will be fixed by #75

For erf we'll need to add a package extension on SpecialFunctions.jl; I'll get around to it this weekend or next week.
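
For context, such a package extension would roughly take the following shape (a sketch only: the extension name is illustrative, and the actual overloads have to go through SCT's internal operator machinery, which is not shown here):

# In SCT's Project.toml:
#   [weakdeps]
#   SpecialFunctions = "<SpecialFunctions UUID>"
#   [extensions]
#   SparseConnectivityTracerSpecialFunctionsExt = "SpecialFunctions"

# ext/SparseConnectivityTracerSpecialFunctionsExt.jl
module SparseConnectivityTracerSpecialFunctionsExt

using SparseConnectivityTracer
using SpecialFunctions

# Overload erf (and related functions) for the tracer types here, so that
# sparsity information propagates through calls to erf.

end # module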

amontoison commented May 19, 2024

jac_structure and hess_structure return the sparsity structure based on your functions jacobian_pattern and hessian_pattern here.

We just store the rows and columns of the sparsity pattern in COO format.
For the Hessian, only the lower triangle is stored, just like JuMP.
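
If a full symmetric Hessian pattern is needed for a comparison, the Boolean lower triangle can simply be mirrored (a sketch, reusing the H built in the earlier snippet):

H_full = H .| transpose(H)   # OR the lower triangle with its transpose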

gdalle (Collaborator, Author) commented May 19, 2024

You say "based on our functions". If we add such tests to our test suite, and ADNLPModels then uses SCT internally, do we end up testing nothing at all (i.e. that our sparsity pattern is equal to itself)?

gdalle (Collaborator, Author) commented May 19, 2024

Forget the typo ^^ Let's put it this way: where are those functions jac_structure and hess_structure implemented? And will they remain independent of SCT even if you end up using it in ADNLPModels?

amontoison commented May 19, 2024

I suggested a way to compare the sparsity pattern returned by SCT (through ADNLPModels) with that of JuMP/MOI.
We have the same API for both NLPModels (ADNLPModel and MathOptNLPModel).

MathOptNLPModel just wraps the JuMP model so that we can call the functions of NLPModels on it, but everything is computed by MOI, so it can be used as a reference.

gdalle (Collaborator, Author) commented May 19, 2024

Ok, so in the message above, the second code snippet currently uses Symbolics for jac_structure/hess_structure and would probably switch to SCT. Meanwhile the third snippet uses some JuMP internals for jac_structure/hess_structure, which means we can use it as a reliable reference. Do I get that right?

amontoison commented May 19, 2024

Yes!
For the second snippet, I tested it on my branch where I replaced Symbolics.jl with SCT.jl.
I will merge it as soon as we have a release that supports dot here.

gdalle (Collaborator, Author) commented May 19, 2024

I'll also take a shot at erf support later today, but @adrhill is on a well-deserved weekend off, so don't expect a merge/release before Tuesday or so.

amontoison commented

No rush, I was just curious to see whether SCT.jl is robust :)

adrhill (Owner) commented May 21, 2024

Sorry for being AWOL! Let's put these optimization problems into our benchmark suite as well once we get them running.

adrhill self-assigned this (May 21, 2024)