Add optimization problems to "real-world" tests #69
Sounds great. Are there any specific ones we should start with? |
I tested the problems with my branch of ADNLPModels.jl:

```julia
using ADNLPModels
using OptimizationProblems, OptimizationProblems.ADNLPProblems

problems = OptimizationProblems.meta[!, :name]
npb = length(problems)
for (i, problem) in enumerate(problems)
    print("$i/$npb -- $problem ")
    prob = Symbol(problem)
    try
        nlp = OptimizationProblems.ADNLPProblems.eval(prob)()
        println("✓")
    catch
        println("✗")
    end
end
```

I got an error on the problems AMPGO07, AMPGO13, AMPGO18, hs243, hs68, hs69, and hs87. For some of them, it's probably because we need local tracing.

For a given model, we can easily verify the sparsity pattern with:

```julia
using OptimizationProblems, NLPModels, SparseArrays
nlp = OptimizationProblems.ADNLPProblems.eval(:woods)()

jrows, jcols = jac_structure(nlp)
nnzj = length(jrows)
jvals = ones(Bool, nnzj)
J = sparse(jrows, jcols, jvals, nlp.meta.ncon, nlp.meta.nvar)

hrows, hcols = hess_structure(nlp)
nnzh = length(hrows)
hvals = ones(Bool, nnzh)
H = sparse(hrows, hcols, hvals, nlp.meta.nvar, nlp.meta.nvar)
```

We can check that we get the same results with a JuMP model:

```julia
using OptimizationProblems, NLPModels, NLPModelsJuMP, SparseArrays
jump_model = OptimizationProblems.PureJuMP.eval(:woods)()
nlp2 = MathOptNLPModel(jump_model)

jrows2, jcols2 = jac_structure(nlp2)
nnzj2 = length(jrows2)
jvals2 = ones(Bool, nnzj2)
J2 = sparse(jrows2, jcols2, jvals2, nlp2.meta.ncon, nlp2.meta.nvar)

hrows2, hcols2 = hess_structure(nlp2)
nnzh2 = length(hrows2)
hvals2 = ones(Bool, nnzh2)
H2 = sparse(hrows2, hcols2, hvals2, nlp2.meta.nvar, nlp2.meta.nvar)
```

It should be easy to check that the sparsity patterns returned by the two models coincide.
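For example, a minimal sketch of that check, assuming the two snippets above were run in the same session: `sparse` sorts the COO coordinates, so comparing the materialized Boolean matrices compares the patterns directly.

```julia
# The patterns agree iff the materialized Boolean matrices are equal.
@assert J == J2 "Jacobian sparsity patterns differ"
@assert H == H2 "Hessian sparsity patterns differ"
```
|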
Just to be clear, how are the sparsity patterns stored? |
We just store the rows and columns of the sparsity pattern in COO format.
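For instance, a toy illustration (not code from the package): a pattern with nonzeros at (1,1), (2,3), and (3,1) is just two parallel index vectors, which `sparse` can materialize for inspection:

```julia
using SparseArrays

# COO format: parallel vectors of the row and column indices of the nonzeros.
rows = [1, 2, 3]
cols = [1, 3, 1]

# Materialize the pattern as a sparse Boolean matrix.
A = sparse(rows, cols, ones(Bool, length(rows)), 3, 3)
```
|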
You say "based on our functions". If we add such tests to our test suite, and then ADNLPModels uses SCT internally, do we end up testing nothing at all (i.e. that our sparsity pattern is equal to itself)? |
Forget the typo ^^ Let's put it this way: where are those functions defined? |
I suggested a way above to compare the sparsity pattern returned by SCT (through ADNLPModels) with that of JuMP/MOI. |
Ok, so in the message above, the second code snippet currently uses Symbolics for sparsity detection? |
Yes! |
I'll also take a shot at |
No rush, I was just curious to see whether SCT.jl is robust :)
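For anyone wanting to poke at robustness directly, SCT.jl can also be called on a standalone function through its `TracerSparsityDetector` (a sketch; the chained toy objective below is illustrative, not one of the problems above):

```julia
using SparseConnectivityTracer

# Toy objective coupling x[i] and x[i+1], so the Hessian is tridiagonal.
f(x) = sum((x[i+1] - x[i]^2)^2 for i in 1:length(x)-1)

detector = TracerSparsityDetector()
H = hessian_sparsity(f, rand(5), detector)  # sparse Bool pattern
```
|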
Sorry for being AWOL! Let's put these optimization problems into our benchmark suite as well once we get them running. |
@amontoison pointed out that we could test sparsity detection on suites of optimization problems with OptimizationProblems.jl.
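Putting the pieces from this thread together, a suite-wide check could look like the sketch below (my assumptions: problems that fail to instantiate are skipped, and patterns are compared as sets of coordinates so that ordering and duplicate entries do not matter):

```julia
using ADNLPModels, NLPModels, NLPModelsJuMP, OptimizationProblems

# For every problem, compare the Hessian sparsity pattern detected through
# ADNLPModels with the one reported by the corresponding JuMP/MOI model.
coords(rows, cols) = Set(zip(rows, cols))

for name in OptimizationProblems.meta[!, :name]
    prob = Symbol(name)
    try
        nlp  = OptimizationProblems.ADNLPProblems.eval(prob)()
        nlp2 = MathOptNLPModel(OptimizationProblems.PureJuMP.eval(prob)())
        same = coords(hess_structure(nlp)...) == coords(hess_structure(nlp2)...)
        println(name, ": Hessian patterns ", same ? "match" : "differ")
    catch
        println(name, ": skipped (failed to instantiate)")
    end
end
```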