From 0854af737b50cc3d05aa5c86166dd3778d36e651 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Fri, 20 Dec 2024 16:34:08 +0000 Subject: [PATCH] build based on af764f3 --- previews/PR314/.documenter-siteinfo.json | 2 +- previews/PR314/backend/index.html | 14 +- previews/PR314/generic/index.html | 2 +- previews/PR314/index.html | 2 +- previews/PR314/mixed/index.html | 2 +- .../{08905d36.svg => 5d292bd0.svg} | 456 +++++++++--------- previews/PR314/performance/index.html | 56 +-- previews/PR314/predefined/index.html | 2 +- previews/PR314/reference/index.html | 2 +- previews/PR314/sparse/index.html | 2 +- previews/PR314/sparsity_pattern/index.html | 6 +- previews/PR314/tutorial/index.html | 2 +- 12 files changed, 278 insertions(+), 270 deletions(-) rename previews/PR314/performance/{08905d36.svg => 5d292bd0.svg} (62%) diff --git a/previews/PR314/.documenter-siteinfo.json b/previews/PR314/.documenter-siteinfo.json index 1bcf43b8..332c5853 100644 --- a/previews/PR314/.documenter-siteinfo.json +++ b/previews/PR314/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.7","generation_timestamp":"2024-12-20T16:32:04","documenter_version":"1.8.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2024-12-20T16:33:59","documenter_version":"1.8.0"}} \ No newline at end of file diff --git a/previews/PR314/backend/index.html b/previews/PR314/backend/index.html index 779867e2..a9d09352 100644 --- a/previews/PR314/backend/index.html +++ b/previews/PR314/backend/index.html @@ -57,9 +57,9 @@ return g end

Finally, we use the homemade backend to compute the gradient.

nlp = ADNLPModel(sum, ones(3), gradient_backend = NewADGradient)
 grad(nlp, nlp.meta.x0)  # returns the gradient at x0 using `NewADGradient`
3-element Vector{Float64}:
- 0.6071726394174236
- 0.14717406903623043
- 0.8576568096739905

Change backend

Once an instance of an ADNLPModel has been created, it is possible to change the backends without re-instantiating the model.

using ADNLPModels, NLPModels
+ 0.030188194174360583
+ 0.42187939956508747
+ 0.930689998188221

Change backend

Once an instance of an ADNLPModel has been created, it is possible to change the backends without re-instantiating the model.

using ADNLPModels, NLPModels
 f(x) = 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2
 x0 = 3 * ones(2)
 nlp = ADNLPModel(f, x0)
@@ -128,10 +128,10 @@
            jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0               jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
 

Then, the gradient will return a vector of Float64.

x64 = rand(2)
 grad(nlp, x64)
2-element Vector{Float64}:
- -7.704737284893528
- 66.80970051291837

It is now possible to move to a different type, for instance Float32, while keeping the instance nlp.

x0_32 = ones(Float32, 2)
+ -11.752369025605399
+  11.622097426192173

It is now possible to move to a different type, for instance Float32, while keeping the instance nlp.

x0_32 = ones(Float32, 2)
 set_adbackend!(nlp, gradient_backend = ADNLPModels.ForwardDiffADGradient, x0 = x0_32)
 x32 = rand(Float32, 2)
 grad(nlp, x32)
2-element Vector{Float64}:
-  175.5325927734375
- -115.24372100830078
+ -33.94695281982422 + 44.12239074707031 diff --git a/previews/PR314/generic/index.html b/previews/PR314/generic/index.html index 3c281a89..30228304 100644 --- a/previews/PR314/generic/index.html +++ b/previews/PR314/generic/index.html @@ -1,2 +1,2 @@ -Support multiple precision · ADNLPModels.jl
+Support multiple precision · ADNLPModels.jl
diff --git a/previews/PR314/index.html b/previews/PR314/index.html index 3bfefc3a..963cbc53 100644 --- a/previews/PR314/index.html +++ b/previews/PR314/index.html @@ -127,4 +127,4 @@ output[2] = x[2] end nvar, ncon = 3, 2 -nls = ADNLSModel!(F!, x0, nequ, c!, zeros(ncon), zeros(ncon))source

Check the Tutorial for more details on the usage.

License

This content is released under the MPL2.0 License.

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Before opening a pull request, start an issue or a discussion on the topic, please.

If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers, so questions about any of our packages are welcome.

Contents

+nls = ADNLSModel!(F!, x0, nequ, c!, zeros(ncon), zeros(ncon))source

Check the Tutorial for more details on the usage.

License

This content is released under the MPL2.0 License.

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Please start an issue or a discussion on the topic before opening a pull request.

If you want to ask a question that is not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.

Contents

diff --git a/previews/PR314/mixed/index.html b/previews/PR314/mixed/index.html index 7ef3a959..c59878b7 100644 --- a/previews/PR314/mixed/index.html +++ b/previews/PR314/mixed/index.html @@ -101,4 +101,4 @@ }

Note that the backends used for the gradient and Jacobian are now NLPModel. So, a call to grad on nlp

grad(nlp, x0)
2-element Vector{Float64}:
  -12.847999999999999
   -3.5199999999999996

would call grad on model

neval_grad(model)
1

Moreover, as expected, the ADNLPModel nlp also implements the missing methods, e.g.

jprod(nlp, x0, v)
1-element Vector{Float64}:
- 2.0
+ 2.0 diff --git a/previews/PR314/performance/08905d36.svg b/previews/PR314/performance/5d292bd0.svg similarity index 62% rename from previews/PR314/performance/08905d36.svg rename to previews/PR314/performance/5d292bd0.svg index 7b3e1d42..002de99c 100644 --- a/previews/PR314/performance/08905d36.svg +++ b/previews/PR314/performance/5d292bd0.svg @@ -1,267 +1,275 @@ - + - + - + - + - + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - + - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + - + - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + - + - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + - + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - + - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + - + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - + - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/previews/PR314/performance/index.html b/previews/PR314/performance/index.html index 749f90cd..f65ab872 100644 --- a/previews/PR314/performance/index.html +++ b/previews/PR314/performance/index.html @@ -267,33 +267,33 @@ stats[back][stats[back].name .== name, :time] = [median(b.times)] stats[back][stats[back].name .== name, :allocs] = [median(b.allocs)] end -end
[ Info:  camshape with 1000 vars and 2003 cons
-[ Info:  catenary with 999 vars and 332 cons
-┌ Warning: catenary: number of variables adjusted to be a multiple of 3
-@ OptimizationProblems.PureJuMP ~/.julia/packages/OptimizationProblems/9qr9C/src/PureJuMP/catenary.jl:20
-┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
-@ OptimizationProblems.PureJuMP ~/.julia/packages/OptimizationProblems/9qr9C/src/PureJuMP/catenary.jl:22
-┌ Warning: catenary: number of variables adjusted to be a multiple of 3
-@ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:4
-┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
-@ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:6
-┌ Warning: catenary: number of variables adjusted to be a multiple of 3
-@ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:4
-┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
-@ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:6
-[ Info:  chain with 1000 vars and 752 cons
-[ Info:  channel with 1000 vars and 1000 cons
-[ Info:  clnlbeam with 999 vars and 664 cons
-[ Info:  controlinvestment with 1000 vars and 500 cons
-[ Info:  elec with 999 vars and 333 cons
-[ Info:  hovercraft1d with 998 vars and 668 cons
-[ Info:  marine with 1007 vars and 488 cons
-[ Info:  polygon with 1000 vars and 125251 cons
-[ Info:  polygon1 with 1000 vars and 500 cons
-[ Info:  polygon2 with 1000 vars and 1 cons
-[ Info:  polygon3 with 1000 vars and 1000 cons
-[ Info:  robotarm with 1009 vars and 1002 cons
-[ Info:  structural with 3540 vars and 3652 cons
using Plots, SolverBenchmark
+end
[ Info:  camshape with 1000 vars and 2003 cons
+[ Info:  catenary with 999 vars and 332 cons
+┌ Warning: catenary: number of variables adjusted to be a multiple of 3
+└ @ OptimizationProblems.PureJuMP ~/.julia/packages/OptimizationProblems/9qr9C/src/PureJuMP/catenary.jl:20
+┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
+└ @ OptimizationProblems.PureJuMP ~/.julia/packages/OptimizationProblems/9qr9C/src/PureJuMP/catenary.jl:22
+┌ Warning: catenary: number of variables adjusted to be a multiple of 3
+└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:4
+┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
+└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:6
+┌ Warning: catenary: number of variables adjusted to be a multiple of 3
+└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:4
+┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
+└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:6
+[ Info:  chain with 1000 vars and 752 cons
+[ Info:  channel with 1000 vars and 1000 cons
+[ Info:  clnlbeam with 999 vars and 664 cons
+[ Info:  controlinvestment with 1000 vars and 500 cons
+[ Info:  elec with 999 vars and 333 cons
+[ Info:  hovercraft1d with 998 vars and 668 cons
+[ Info:  marine with 1007 vars and 488 cons
+[ Info:  polygon with 1000 vars and 125251 cons
+[ Info:  polygon1 with 1000 vars and 500 cons
+[ Info:  polygon2 with 1000 vars and 1 cons
+[ Info:  polygon3 with 1000 vars and 1000 cons
+[ Info:  robotarm with 1009 vars and 1002 cons
+[ Info:  structural with 3540 vars and 3652 cons
using Plots, SolverBenchmark
 costnames = ["median time (in ns)", "median allocs"]
 costs = [
   df -> df.time,
@@ -302,4 +302,4 @@
 
 gr()
 
-profile_solvers(stats, costs, costnames)
Example block output +profile_solvers(stats, costs, costnames)Example block output diff --git a/previews/PR314/predefined/index.html b/previews/PR314/predefined/index.html index a1e68204..0af5b858 100644 --- a/previews/PR314/predefined/index.html +++ b/previews/PR314/predefined/index.html @@ -55,4 +55,4 @@ SparseADJacobian, SparseReverseADHessian, ForwardDiffADGHjvprod, -} +} diff --git a/previews/PR314/reference/index.html b/previews/PR314/reference/index.html index a1ff7f67..dc9e54e0 100644 --- a/previews/PR314/reference/index.html +++ b/previews/PR314/reference/index.html @@ -115,4 +115,4 @@ get_nln_nnzj(nlp::AbstractNLPModel, nvar, ncon)

For a given ADBackend of a problem with nvar variables and ncon constraints, return the number of nonzeros in the Jacobian of nonlinear constraints. If b is the ADModelBackend then b.jacobian_backend is used.

source
ADNLPModels.get_residual_nnzhMethod
get_residual_nnzh(b::ADModelBackend, nvar)
 get_residual_nnzh(nls::AbstractNLSModel, nvar)

Return the number of nonzero elements in the residual Hessians.

source
ADNLPModels.get_residual_nnzjMethod
get_residual_nnzj(b::ADModelBackend, nvar, nequ)
 get_residual_nnzj(nls::AbstractNLSModel, nvar, nequ)

Return the number of nonzero elements in the residual Jacobians.

source
ADNLPModels.get_sparsity_patternMethod
S = get_sparsity_pattern(model::ADModel, derivative::Symbol)

Retrieve the sparsity pattern of a Jacobian or Hessian from an ADModel. For the Hessian, only the lower triangular part of its sparsity pattern is returned. The user can reconstruct the upper triangular part by exploiting symmetry.

To compute the sparsity pattern, the model must use a sparse backend. Supported backends include SparseADJacobian, SparseADHessian, and SparseReverseADHessian.

Input arguments

  • model: An automatic differentiation model (either AbstractADNLPModel or AbstractADNLSModel).
  • derivative: The type of derivative for which the sparsity pattern is needed. The supported values are :jacobian, :hessian, :jacobian_residual and :hessian_residual.

Output argument

  • S: A sparse matrix of type SparseMatrixCSC{Bool,Int} indicating the sparsity pattern of the requested derivative.
source
ADNLPModels.set_adbackend!Method
set_adbackend!(nlp, new_adbackend)
-set_adbackend!(nlp; kwargs...)

Replace the current adbackend value of nlp by new_adbackend or instantiate a new one with kwargs, see ADModelBackend. By default, the setter with kwargs will reuse existing backends.

source
+set_adbackend!(nlp; kwargs...)

Replace the current adbackend value of nlp with new_adbackend, or instantiate a new one with kwargs, see ADModelBackend. By default, the setter with kwargs will reuse existing backends.

source diff --git a/previews/PR314/sparse/index.html b/previews/PR314/sparse/index.html index 9571780f..ad828873 100644 --- a/previews/PR314/sparse/index.html +++ b/previews/PR314/sparse/index.html @@ -187,4 +187,4 @@ jprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jtprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jtprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jtprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 hess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 hprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 -

The section "providing the sparsity pattern for sparse derivatives" illustrates this feature with a more advanced application.

Acknowledgements

The package SparseConnectivityTracer.jl is used to compute the sparsity pattern of Jacobians and Hessians. The evaluation of the number of directional derivatives and the seeds required to compute compressed Jacobians and Hessians is performed using SparseMatrixColorings.jl. As of release v0.8.1, it has replaced ColPack.jl. We acknowledge Guillaume Dalle (@gdalle), Adrian Hill (@adrhill), Alexis Montoison (@amontoison), and Michel Schanen (@michel2323) for the development of these packages.

+

The section "providing the sparsity pattern for sparse derivatives" illustrates this feature with a more advanced application.

Acknowledgements

The package SparseConnectivityTracer.jl is used to compute the sparsity pattern of Jacobians and Hessians. The evaluation of the number of directional derivatives and the seeds required to compute compressed Jacobians and Hessians is performed using SparseMatrixColorings.jl. As of release v0.8.1, it has replaced ColPack.jl. We acknowledge Guillaume Dalle (@gdalle), Adrian Hill (@adrhill), Alexis Montoison (@amontoison), and Michel Schanen (@michel2323) for the development of these packages.

diff --git a/previews/PR314/sparsity_pattern/index.html b/previews/PR314/sparsity_pattern/index.html index 65eb2852..8be4e137 100644 --- a/previews/PR314/sparsity_pattern/index.html +++ b/previews/PR314/sparsity_pattern/index.html @@ -29,7 +29,7 @@ @elapsed begin nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon; hessian_backend = ADNLPModels.EmptyADbackend) -end
2.42917405

ADNLPModel will automatically prepare an AD backend for computing sparse Jacobian and Hessian. We disabled the Hessian computation here to focus the measurement on the Jacobian computation. The keyword argument show_time = true can also be passed to the problem's constructor to get more detailed information about the time used to prepare the AD backend.

using NLPModels
+end
2.728213387

ADNLPModel will automatically prepare an AD backend for computing sparse Jacobian and Hessian. We disabled the Hessian computation here to focus the measurement on the Jacobian computation. The keyword argument show_time = true can also be passed to the problem's constructor to get more detailed information about the time used to prepare the AD backend.

using NLPModels
 x = sqrt(2) * ones(n)
 jac_nln(nlp, x)
49999×100000 SparseArrays.SparseMatrixCSC{Float64, Int64} with 199996 stored entries:
 ⎡⠙⢦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎤
@@ -78,7 +78,7 @@
 
   jac_back = ADNLPModels.SparseADJacobian(n, f, N - 1, c!, J)
   nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon; hessian_backend = ADNLPModels.EmptyADbackend, jacobian_backend = jac_back)
-end
1.623450373

We recover the same Jacobian.

using NLPModels
+end
1.696861106

We recover the same Jacobian.

using NLPModels
 x = sqrt(2) * ones(n)
 jac_nln(nlp, x)
49999×100000 SparseArrays.SparseMatrixCSC{Float64, Int64} with 199996 stored entries:
 ⎡⠙⢦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎤
@@ -90,4 +90,4 @@
 ⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠲⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢦⡀⠀⠀⠀⠀⠀⎥
 ⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢦⡀⠀⠀⠀⎥
 ⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢦⡀⠀⎥
-⎣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠳⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠦⎦

The same can be done for the Hessian of the Lagrangian.

+⎣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠳⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠦⎦

The same can be done for the Hessian of the Lagrangian.

diff --git a/previews/PR314/tutorial/index.html b/previews/PR314/tutorial/index.html index 8a7e9955..2c0cc3c9 100644 --- a/previews/PR314/tutorial/index.html +++ b/previews/PR314/tutorial/index.html @@ -1,2 +1,2 @@ -Tutorial · ADNLPModels.jl
+Tutorial · ADNLPModels.jl