diff --git a/previews/PR314/.documenter-siteinfo.json b/previews/PR314/.documenter-siteinfo.json
index ed582b31..817a7168 100644
--- a/previews/PR314/.documenter-siteinfo.json
+++ b/previews/PR314/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.7","generation_timestamp":"2024-12-21T20:07:42","documenter_version":"1.8.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2024-12-21T20:09:39","documenter_version":"1.8.0"}}
\ No newline at end of file
diff --git a/previews/PR314/backend/index.html b/previews/PR314/backend/index.html
index bfabceed..64b0e6a9 100644
--- a/previews/PR314/backend/index.html
+++ b/previews/PR314/backend/index.html
@@ -57,9 +57,9 @@
  return g
end

Finally, we use the homemade backend to compute the gradient.

nlp = ADNLPModel(sum, ones(3), gradient_backend = NewADGradient)
 grad(nlp, nlp.meta.x0)  # returns the gradient at x0 using `NewADGradient`
3-element Vector{Float64}:
- 0.28670930527875793
- 0.07017176511439704
- 0.5556853370806676

+ 0.14741584220291293
+ 0.5016581544176959
+ 0.9463651629874223

Change backend

Once an instance of an ADNLPModel has been created, it is possible to change the backends without re-instantiating the model.

using ADNLPModels, NLPModels
 f(x) = 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2
 x0 = 3 * ones(2)
 nlp = ADNLPModel(f, x0)
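
A minimal sketch of the idea, assuming the predefined ReverseDiffADGradient backend shipped with ADNLPModels: a single backend can be swapped in place with set_adbackend!, whose usage is detailed below.

set_adbackend!(nlp, gradient_backend = ADNLPModels.ReverseDiffADGradient)  # swap only the gradient backend
grad(nlp, x0)  # evaluated with the new backend; all other backends are untouched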
@@ -128,10 +128,10 @@
            jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0               jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
 

Then, the gradient will return a vector of Float64.

x64 = rand(2)
 grad(nlp, x64)
2-element Vector{Float64}:
- 104.42640703309738
- -73.86987557112867

+  58.99263057208681
+ -40.916174541610964

It is now possible to move to a different type, for instance Float32, while keeping the instance nlp.

x0_32 = ones(Float32, 2)
 set_adbackend!(nlp, gradient_backend = ADNLPModels.ForwardDiffADGradient, x0 = x0_32)
 x32 = rand(Float32, 2)
 grad(nlp, x32)
2-element Vector{Float64}:
- -67.99763488769531
-  50.18924331665039
+ -17.310041427612305
+  194.59701538085938
diff --git a/previews/PR314/generic/index.html b/previews/PR314/generic/index.html
index 82a7ec43..47af0fca 100644
--- a/previews/PR314/generic/index.html
+++ b/previews/PR314/generic/index.html
@@ -1,2 +1,2 @@
-Support multiple precision · ADNLPModels.jl
+Support multiple precision · ADNLPModels.jl
diff --git a/previews/PR314/index.html b/previews/PR314/index.html
index e18159fa..f2d4039f 100644
--- a/previews/PR314/index.html
+++ b/previews/PR314/index.html
@@ -127,4 +127,4 @@
   output[2] = x[2]
end
nvar, ncon = 3, 2
-nls = ADNLSModel!(F!, x0, nequ, c!, zeros(ncon), zeros(ncon))source

+nls = ADNLSModel!(F!, x0, nequ, c!, zeros(ncon), zeros(ncon))source

Check the Tutorial for more details on the usage.

License

This content is released under the MPL2.0 License.

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Please start an issue or a discussion on the topic before opening a pull request.

If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.

Contents

diff --git a/previews/PR314/mixed/index.html b/previews/PR314/mixed/index.html
index ce82d4c2..86a94093 100644
--- a/previews/PR314/mixed/index.html
+++ b/previews/PR314/mixed/index.html
@@ -101,4 +101,4 @@
}

Note that the backends used for the gradient and the Jacobian are now based on an NLPModel. So, a call to grad on nlp

grad(nlp, x0)
2-element Vector{Float64}:
  -12.847999999999999
   -3.5199999999999996

would call grad on model

neval_grad(model)
1
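
In other words, every differentiation request on nlp is forwarded to model and recorded in its counters. A short sketch of this bookkeeping, reusing nlp, model, and x0 from above:

grad(nlp, x0)      # forwarded to the underlying model
neval_grad(model)  # the counter of model is now 2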

Moreover, as expected, the ADNLPModel nlp also implements the missing methods, e.g.

jprod(nlp, x0, v)
1-element Vector{Float64}:
- 2.0
+ 2.0
diff --git a/previews/PR314/performance/cdb8e548.svg b/previews/PR314/performance/13b792b8.svg
similarity index 61%
rename from previews/PR314/performance/cdb8e548.svg
rename to previews/PR314/performance/13b792b8.svg
index b65f0b90..3be5bc63 100644
@@ -1,267 +1,275 @@
[SVG path and coordinate changes omitted: the performance-profile plot was regenerated]
diff --git a/previews/PR314/performance/index.html b/previews/PR314/performance/index.html
index 9095e19c..9868c375 100644
--- a/previews/PR314/performance/index.html
+++ b/previews/PR314/performance/index.html
@@ -267,33 +267,33 @@
    stats[back][stats[back].name .== name, :time] = [median(b.times)]
    stats[back][stats[back].name .== name, :allocs] = [median(b.allocs)]
  end
-end
-[ Info:  camshape with 1000 vars and 2003 cons
-[ Info:  catenary with 999 vars and 332 cons
-┌ Warning: catenary: number of variables adjusted to be a multiple of 3
-@ OptimizationProblems.PureJuMP ~/.julia/packages/OptimizationProblems/9qr9C/src/PureJuMP/catenary.jl:20
-┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
-@ OptimizationProblems.PureJuMP ~/.julia/packages/OptimizationProblems/9qr9C/src/PureJuMP/catenary.jl:22
-┌ Warning: catenary: number of variables adjusted to be a multiple of 3
-@ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:4
-┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
-@ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:6
-┌ Warning: catenary: number of variables adjusted to be a multiple of 3
-@ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:4
-┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
-@ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:6
-[ Info:  chain with 1000 vars and 752 cons
-[ Info:  channel with 1000 vars and 1000 cons
-[ Info:  clnlbeam with 999 vars and 664 cons
-[ Info:  controlinvestment with 1000 vars and 500 cons
-[ Info:  elec with 999 vars and 333 cons
-[ Info:  hovercraft1d with 998 vars and 668 cons
-[ Info:  marine with 1007 vars and 488 cons
-[ Info:  polygon with 1000 vars and 125251 cons
-[ Info:  polygon1 with 1000 vars and 500 cons
-[ Info:  polygon2 with 1000 vars and 1 cons
-[ Info:  polygon3 with 1000 vars and 1000 cons
-[ Info:  robotarm with 1009 vars and 1002 cons
-[ Info:  structural with 3540 vars and 3652 cons
+end
+[ Info:  camshape with 1000 vars and 2003 cons
+[ Info:  catenary with 999 vars and 332 cons
+┌ Warning: catenary: number of variables adjusted to be a multiple of 3
+└ @ OptimizationProblems.PureJuMP ~/.julia/packages/OptimizationProblems/9qr9C/src/PureJuMP/catenary.jl:20
+┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
+└ @ OptimizationProblems.PureJuMP ~/.julia/packages/OptimizationProblems/9qr9C/src/PureJuMP/catenary.jl:22
+┌ Warning: catenary: number of variables adjusted to be a multiple of 3
+└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:4
+┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
+└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:6
+┌ Warning: catenary: number of variables adjusted to be a multiple of 3
+└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:4
+┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
+└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:6
+[ Info:  chain with 1000 vars and 752 cons
+[ Info:  channel with 1000 vars and 1000 cons
+[ Info:  clnlbeam with 999 vars and 664 cons
+[ Info:  controlinvestment with 1000 vars and 500 cons
+[ Info:  elec with 999 vars and 333 cons
+[ Info:  hovercraft1d with 998 vars and 668 cons
+[ Info:  marine with 1007 vars and 488 cons
+[ Info:  polygon with 1000 vars and 125251 cons
+[ Info:  polygon1 with 1000 vars and 500 cons
+[ Info:  polygon2 with 1000 vars and 1 cons
+[ Info:  polygon3 with 1000 vars and 1000 cons
+[ Info:  robotarm with 1009 vars and 1002 cons
+[ Info:  structural with 3540 vars and 3652 cons
using Plots, SolverBenchmark
 costnames = ["median time (in ns)", "median allocs"]
 costs = [
   df -> df.time,
@@ -302,4 +302,4 @@
 
 gr()
 
-profile_solvers(stats, costs, costnames)
-Example block output
+profile_solvers(stats, costs, costnames)
+Example block output
diff --git a/previews/PR314/predefined/index.html b/previews/PR314/predefined/index.html
index ffd05c82..3661370b 100644
--- a/previews/PR314/predefined/index.html
+++ b/previews/PR314/predefined/index.html
@@ -57,4 +57,4 @@
   SparseADJacobian,
   SparseReverseADHessian,
   ForwardDiffADGHjvprod,
-}
+}
diff --git a/previews/PR314/reference/index.html b/previews/PR314/reference/index.html
index e0fffc9a..26005584 100644
--- a/previews/PR314/reference/index.html
+++ b/previews/PR314/reference/index.html
@@ -115,4 +115,4 @@
get_nln_nnzj(nlp::AbstractNLPModel, nvar, ncon)

For a given ADBackend of a problem with nvar variables and ncon constraints, return the number of nonzeros in the Jacobian of the nonlinear constraints. If b is an ADModelBackend, then b.jacobian_backend is used.
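
For illustration, a usage sketch on an arbitrary ADNLPModel nlp with nonlinear constraints:

nnzj = ADNLPModels.get_nln_nnzj(nlp, nlp.meta.nvar, nlp.meta.ncon)  # nonzeros in the nonlinear Jacobian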

source
ADNLPModels.get_residual_nnzh - Method
get_residual_nnzh(b::ADModelBackend, nvar)
 get_residual_nnzh(nls::AbstractNLSModel, nvar)

Return the number of nonzero elements in the residual Hessians.

source
ADNLPModels.get_residual_nnzj - Method
get_residual_nnzj(b::ADModelBackend, nvar, nequ)
 get_residual_nnzj(nls::AbstractNLSModel, nvar, nequ)

Return the number of nonzero elements in the residual Jacobians.
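
For illustration, a sketch covering both residual counters on an arbitrary ADNLSModel nls:

nnzh = ADNLPModels.get_residual_nnzh(nls, nls.meta.nvar)                     # residual Hessians
nnzj = ADNLPModels.get_residual_nnzj(nls, nls.meta.nvar, nls.nls_meta.nequ)  # residual Jacobians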

source
ADNLPModels.get_sparsity_pattern - Method
S = get_sparsity_pattern(model::ADModel, derivative::Symbol)

Retrieve the sparsity pattern of a Jacobian or Hessian from an ADModel. For the Hessian, only the lower triangular part of its sparsity pattern is returned. The user can reconstruct the upper triangular part by exploiting symmetry.

To compute the sparsity pattern, the model must use a sparse backend. Supported backends include SparseADJacobian, SparseADHessian, and SparseReverseADHessian.

Input arguments

  • model: An automatic differentiation model (either AbstractADNLPModel or AbstractADNLSModel).
  • derivative: The type of derivative for which the sparsity pattern is needed. The supported values are :jacobian, :hessian, :jacobian_residual and :hessian_residual.

Output argument

  • S: A sparse matrix of type SparseMatrixCSC{Bool,Int} indicating the sparsity pattern of the requested derivative.
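
A short usage sketch; the model below is illustrative, and we rely on the fact that the default Jacobian and Hessian backends of ADNLPModel are sparse:

using ADNLPModels
f(x) = (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2
c!(cx, x) = (cx[1] = x[1] * x[2]; cx)
nlp = ADNLPModel!(f, ones(2), c!, zeros(1), zeros(1))
J = ADNLPModels.get_sparsity_pattern(nlp, :jacobian)  # SparseMatrixCSC{Bool, Int}
H = ADNLPModels.get_sparsity_pattern(nlp, :hessian)   # lower triangular part only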
source
ADNLPModels.set_adbackend! - Method
set_adbackend!(nlp, new_adbackend)
-set_adbackend!(nlp; kwargs...)

+set_adbackend!(nlp; kwargs...)

Replace the current adbackend value of nlp with new_adbackend, or instantiate a new one from kwargs; see ADModelBackend. By default, the setter with kwargs reuses the existing backends.
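
For instance, a sketch with the predefined ForwardDiffADHessian backend; the keyword form re-instantiates only the named backends:

set_adbackend!(nlp, hessian_backend = ADNLPModels.ForwardDiffADHessian)
ADNLPModels.get_adbackend(nlp)  # all other backends are reused as-is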

source
diff --git a/previews/PR314/sparse/index.html b/previews/PR314/sparse/index.html
index 81ee9969..bf76f647 100644
--- a/previews/PR314/sparse/index.html
+++ b/previews/PR314/sparse/index.html
@@ -187,4 +187,4 @@
 jprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   jtprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   jtprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   jtprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
 hess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   hprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
-

+

The section "providing the sparsity pattern for sparse derivatives" illustrates this feature with a more advanced application.

Acknowledgements

The package SparseConnectivityTracer.jl is used to compute the sparsity pattern of Jacobians and Hessians. The evaluation of the number of directional derivatives and the seeds required to compute compressed Jacobians and Hessians is performed using SparseMatrixColorings.jl. As of release v0.8.1, it has replaced ColPack.jl. We acknowledge Guillaume Dalle (@gdalle), Adrian Hill (@adrhill), Alexis Montoison (@amontoison), and Michel Schanen (@michel2323) for the development of these packages.

diff --git a/previews/PR314/sparsity_pattern/index.html b/previews/PR314/sparsity_pattern/index.html
index 8219e344..a13191b5 100644
--- a/previews/PR314/sparsity_pattern/index.html
+++ b/previews/PR314/sparsity_pattern/index.html
@@ -29,7 +29,7 @@
 @elapsed begin
   nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon; hessian_backend = ADNLPModels.EmptyADbackend)
-end
-2.478523631

+end
+2.764569469

ADNLPModel will automatically prepare an AD backend for computing sparse Jacobians and Hessians. We disabled the Hessian computation here to focus the measurement on the Jacobian. The keyword argument show_time = true can also be passed to the problem's constructor to get a more detailed breakdown of the time spent preparing the AD backend.
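
For instance, a sketch reusing the objects defined above:

nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon;
                  hessian_backend = ADNLPModels.EmptyADbackend, show_time = true)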

using NLPModels
 x = sqrt(2) * ones(n)
 jac_nln(nlp, x)
49999×100000 SparseArrays.SparseMatrixCSC{Float64, Int64} with 199996 stored entries:
 ⎡⠙⢦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎤
@@ -78,7 +78,7 @@
 
   jac_back = ADNLPModels.SparseADJacobian(n, f, N - 1, c!, J)
   nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon; hessian_backend = ADNLPModels.EmptyADbackend, jacobian_backend = jac_back)
-end
-1.626533376

+end
+1.655361655

We recover the same Jacobian.

using NLPModels
 x = sqrt(2) * ones(n)
 jac_nln(nlp, x)
49999×100000 SparseArrays.SparseMatrixCSC{Float64, Int64} with 199996 stored entries:
 ⎡⠙⢦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎤
@@ -90,4 +90,4 @@
 ⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠲⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢦⡀⠀⠀⠀⠀⠀⎥
 ⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢦⡀⠀⠀⠀⎥
 ⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢦⡀⠀⎥
-⎣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠳⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠦⎦

+⎣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠳⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠦⎦

The same can be done for the Hessian of the Lagrangian.

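A minimal sketch of that Hessian variant, under the assumption that SparseADHessian accepts a precomputed lower-triangular pattern H in the same way SparseADJacobian accepts J above; the diagonal pattern below is purely illustrative:

using SparseArrays
H = spdiagm(0 => ones(Bool, n))  # illustrative lower-triangular sparsity pattern
hess_back = ADNLPModels.SparseADHessian(n, f, N - 1, c!, H)
nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon; hessian_backend = hess_back)
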
diff --git a/previews/PR314/tutorial/index.html b/previews/PR314/tutorial/index.html
index 64d78f67..0c5c4ebb 100644
--- a/previews/PR314/tutorial/index.html
+++ b/previews/PR314/tutorial/index.html
@@ -1,2 +1,2 @@
-Tutorial · ADNLPModels.jl
+Tutorial · ADNLPModels.jl