diff --git a/previews/PR302/.documenter-siteinfo.json b/previews/PR302/.documenter-siteinfo.json
index 8fb146b4..eb7956d6 100644
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-09-26T00:29:47","documenter_version":"1.7.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-09-26T00:33:45","documenter_version":"1.7.0"}}
\ No newline at end of file

diff --git a/previews/PR302/backend/index.html b/previews/PR302/backend/index.html
index 35b6b3cc..c8d87fdb 100644
@@ -57,9 +57,9 @@
   return g
 end

Finally, we use the homemade backend to compute the gradient.

nlp = ADNLPModel(sum, ones(3), gradient_backend = NewADGradient)
 grad(nlp, nlp.meta.x0)  # returns the gradient at x0 using `NewADGradient`
3-element Vector{Float64}:
- 0.25134352937048154
- 0.014934949634762829
- 0.5167014826880357
+ 0.33325422486022316
+ 0.7952833318059003
+ 0.5700332943662471
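
For reference, here is the shape of the homemade backend used above, reconstructed around the `return g` fragment visible in the hunk header. Treat it as a sketch of this example, not as a required API:

using ADNLPModels, Random

struct NewADGradient <: ADNLPModels.ADBackend end

# The backend constructor receives the problem data (nvar, f, ncon, c) and ignores it here.
NewADGradient(nvar::Integer, f, ncon::Integer = 0, c = (args...) -> []; kwargs...) = NewADGradient()

# Out-of-place gradient delegates to the in-place version.
function ADNLPModels.gradient(b::NewADGradient, f, x)
  g = similar(x)
  ADNLPModels.gradient!(b, g, f, x)
  return g
end

# A "gradient" made of random numbers, which is why the values above change at every docs build.
function ADNLPModels.gradient!(::NewADGradient, g, f, x)
  g .= Random.rand(eltype(x), size(x))
  return g
end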

Change backend

Once an instance of an ADNLPModel has been created, it is possible to change the backends without re-instantiating the model.

using ADNLPModels, NLPModels
 f(x) = 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2
 x0 = 3 * ones(2)
 nlp = ADNLPModel(f, x0)
@@ -128,10 +128,10 @@
            jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0               jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
 

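The hunk above elides the call that actually swaps the backends. A representative call, with backend names taken from the Reference section below (illustrative, not the exact line of the page), would be:

set_adbackend!(nlp, gradient_backend = ADNLPModels.ReverseDiffADGradient)
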
Then, the gradient will return a vector of Float64.

x64 = rand(2)
 grad(nlp, x64)
2-element Vector{Float64}:
- -47.401657140104184
-  41.130394589412724
+ 143.4807672671233
+ -87.85730552084232

It is now possible to move to a different type, for instance Float32, while keeping the instance nlp.

x0_32 = ones(Float32, 2)
 set_adbackend!(nlp, gradient_backend = ADNLPModels.ForwardDiffADGradient, x0 = x0_32)
 x32 = rand(Float32, 2)
 grad(nlp, x32)
2-element Vector{Float64}:
- -120.70941925048828
-  115.58370971679688
+ 57.292362213134766
+ -43.66872787475586
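
As a quick sanity check of the multiple-precision support (an illustrative snippet, not part of the page; ADNLPModels and NLPModels loaded as above), a model built directly from a Float32 starting point yields a Float32 gradient:

nlp32 = ADNLPModel(x -> sum(x .^ 2), ones(Float32, 2))
g32 = grad(nlp32, nlp32.meta.x0)
eltype(g32)  # Float32
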
diff --git a/previews/PR302/generic/index.html b/previews/PR302/generic/index.html
index 5b53e568..0a202743 100644
@@ -1,2 +1,2 @@
Support multiple precision · ADNLPModels.jl

diff --git a/previews/PR302/index.html b/previews/PR302/index.html
index 172d98f7..1f61d1ce 100644
@@ -75,7 +75,7 @@
   output[2] = x[2]
 end
 nvar, ncon = 3, 2
nlp = ADNLPModel!(f, x0, c!, zeros(ncon), zeros(ncon)) # uses the default ForwardDiffAD backend.
source
ADNLPModels.ADNLSModelType
ADNLSModel(F, x0, nequ)
 ADNLSModel(F, x0, nequ, lvar, uvar)
 ADNLSModel(F, x0, nequ, clinrows, clincols, clinvals, lcon, ucon)
 ADNLSModel(F, x0, nequ, A, lcon, ucon)
@@ -133,4 +133,4 @@
   output[2] = x[2]
 end
 nvar, ncon = 3, 2
nls = ADNLSModel!(F!, x0, nequ, c!, zeros(ncon), zeros(ncon))
source

Check the Tutorial for more details on the usage.

License

This content is released under the MPL2.0 License.

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Please start an issue or a discussion on the topic before opening a pull request.

If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.

Contents

diff --git a/previews/PR302/mixed/index.html b/previews/PR302/mixed/index.html
index 21c6da3d..69210775 100644
@@ -101,4 +101,4 @@
 }

Note that the backends used for the gradient and the Jacobian are now the NLPModel model itself. So, a call to grad on nlp

grad(nlp, x0)
2-element Vector{Float64}:
  -12.847999999999999
   -3.5199999999999996

would call grad on model

neval_grad(model)
1

Moreover, as expected, the ADNLPModel nlp also implements the missing methods, e.g.

jprod(nlp, x0, v)
1-element Vector{Float64}:
 2.0
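
The transposed-Jacobian product is available in the same way (an illustrative call, not from the page; v must now have length ncon = 1):

jtprod(nlp, x0, ones(1))  # 2-element Vector{Float64}
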
diff --git a/previews/PR302/performance/96cc5078.svg b/previews/PR302/performance/a4064bba.svg
similarity index 77%
rename from previews/PR302/performance/96cc5078.svg
rename to previews/PR302/performance/a4064bba.svg
index d7f936e4..86610ff0 100644
(regenerated performance-profile SVG; only plot paths and clip-path ids changed)

diff --git a/previews/PR302/performance/index.html b/previews/PR302/performance/index.html
index 5a210370..cf70f4fa 100644
@@ -302,4 +302,4 @@
 gr()
profile_solvers(stats, costs, costnames)
Example block output

diff --git a/previews/PR302/predefined/index.html b/previews/PR302/predefined/index.html
index 7189809e..08be9474 100644
@@ -55,4 +55,4 @@
   SparseADJacobian,
   SparseReverseADHessian,
   ForwardDiffADGHjvprod,
}

diff --git a/previews/PR302/reference/index.html b/previews/PR302/reference/index.html
index 8b8eee15..213cb9f2 100644
@@ -1,5 +1,5 @@
Reference · ADNLPModels.jl

Reference

Contents

Index

ADNLPModels.ADModelBackendType
ADModelBackend(gradient_backend, hprod_backend, jprod_backend, jtprod_backend, jacobian_backend, hessian_backend, ghjvprod_backend, hprod_residual_backend, jprod_residual_backend, jtprod_residual_backend, jacobian_residual_backend, hessian_residual_backend)

Structure that defines the different backends used to compute the automatic differentiation of an ADNLPModel/ADNLSModel model. The different backends are all subtypes of ADBackend and are respectively used for:

  • gradient computation;
  • hessian-vector products;
  • jacobian-vector products;
  • transpose jacobian-vector products;
  • jacobian computation;
  • hessian computation;
  • directional second derivative computation, i.e. gᵀ ∇²cᵢ(x) v.

The default constructors are
ADModelBackend(nvar, f, ncon = 0, c = (args...) -> []; show_time::Bool = false, kwargs...)
ADModelNLSBackend(nvar, F!, nequ, ncon = 0, c = (args...) -> []; show_time::Bool = false, kwargs...)

If show_time is set to true, it prints the time used to generate each backend.

The remaining kwargs are either the different backends as listed below or arguments passed to the backend's constructors:

  • gradient_backend = ForwardDiffADGradient;
  • hprod_backend = ForwardDiffADHvprod;
  • jprod_backend = ForwardDiffADJprod;
  • jtprod_backend = ForwardDiffADJtprod;
  • jacobian_backend = SparseADJacobian;
  • hessian_backend = ForwardDiffADHessian;
  • ghjvprod_backend = ForwardDiffADGHjvprod;
  • hprod_residual_backend = ForwardDiffADHvprod for ADNLSModel and EmptyADbackend otherwise;
  • jprod_residual_backend = ForwardDiffADJprod for ADNLSModel and EmptyADbackend otherwise;
  • jtprod_residual_backend = ForwardDiffADJtprod for ADNLSModel and EmptyADbackend otherwise;
  • jacobian_residual_backend = SparseADJacobian for ADNLSModel and EmptyADbackend otherwise;
  • hessian_residual_backend = ForwardDiffADHessian for ADNLSModel and EmptyADbackend otherwise.
source
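
In practice, these keyword arguments are passed through the model constructors. An illustrative combination, using backend names from the list above:

using ADNLPModels
f(x) = (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2
nlp = ADNLPModel(f, [-1.2; 1.0], gradient_backend = ADNLPModels.ForwardDiffADGradient, hessian_backend = ADNLPModels.ForwardDiffADHessian)
nlp.adbackend  # an ADModelBackend storing the selected backends
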
ADNLPModels.ADNLPModelMethod
ADNLPModel(f, x0)
 ADNLPModel(f, x0, lvar, uvar)
 ADNLPModel(f, x0, clinrows, clincols, clinvals, lcon, ucon)
 ADNLPModel(f, x0, A, lcon, ucon)
@@ -45,7 +45,7 @@
   output[2] = x[2]
 end
 nvar, ncon = 3, 2
nlp = ADNLPModel!(f, x0, c!, zeros(ncon), zeros(ncon)) # uses the default ForwardDiffAD backend.
source
ADNLPModels.ADNLSModelMethod
ADNLSModel(F, x0, nequ)
 ADNLSModel(F, x0, nequ, lvar, uvar)
 ADNLSModel(F, x0, nequ, clinrows, clincols, clinvals, lcon, ucon)
 ADNLSModel(F, x0, nequ, A, lcon, ucon)
@@ -103,16 +103,16 @@
   output[2] = x[2]
 end
 nvar, ncon = 3, 2
nls = ADNLSModel!(F!, x0, nequ, c!, zeros(ncon), zeros(ncon))
source
ADNLPModels.compute_jacobian_sparsityFunction
compute_jacobian_sparsity(c, x0; detector)
compute_jacobian_sparsity(c!, cx, x0; detector)

Return a sparse boolean matrix that represents the adjacency matrix of the Jacobian of c(x).

source
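
An illustrative call (assuming the detector keyword has a default sparsity detector):

c(x) = [x[1] * x[2]; x[3]^2]
S = ADNLPModels.compute_jacobian_sparsity(c, ones(3))  # 2×3 sparse Bool matrix with entries (1,1), (1,2), (2,3)
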
ADNLPModels.get_default_backendMethod
get_default_backend(meth::Symbol, backend::Symbol; kwargs...)
get_default_backend(::Val{::Symbol}, backend; kwargs...)

Return a type <:ADBackend that corresponds to the default backend used for the method meth. See keys(ADNLPModels.predefined_backend) for a list of possible backends.

The following keyword arguments are accepted:

  • matrix_free::Bool: If true, this returns an EmptyADbackend for methods that handle matrices, e.g. :hessian_backend.
source
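
An illustrative call (assuming :default is one of the keys of ADNLPModels.predefined_backend):

ADNLPModels.get_default_backend(:gradient_backend, :default)  # expected to return ForwardDiffADGradient, per the defaults listed above
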
ADNLPModels.get_lagMethod
get_lag(nlp, b::ADBackend, obj_weight)
get_lag(nlp, b::ADBackend, obj_weight, y)

Return the Lagrangian function ℓ(x) = obj_weight * f(x) + c(x)ᵀy.

source
ADNLPModels.get_nln_nnzhMethod
get_nln_nnzh(::ADBackend, nvar)
 get_nln_nnzh(b::ADModelBackend, nvar)
get_nln_nnzh(nlp::AbstractNLPModel, nvar)

For a given ADBackend of a problem with nvar variables, return the number of nonzeros in the lower triangle of the Hessian. If b is the ADModelBackend then b.hessian_backend is used.

source
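
An illustrative call (the adbackend field is the one manipulated by set_adbackend! below):

nlp = ADNLPModel(x -> sum(x .^ 2), ones(2))
ADNLPModels.get_nln_nnzh(nlp.adbackend, 2)  # at most nvar * (nvar + 1) / 2 = 3 for a dense Hessian backend
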
ADNLPModels.get_nln_nnzjMethod
get_nln_nnzj(::ADBackend, nvar, ncon)
 get_nln_nnzj(b::ADModelBackend, nvar, ncon)
get_nln_nnzj(nlp::AbstractNLPModel, nvar, ncon)

For a given ADBackend of a problem with nvar variables and ncon constraints, return the number of nonzeros in the Jacobian of nonlinear constraints. If b is the ADModelBackend then b.jacobian_backend is used.

source
ADNLPModels.get_residual_nnzhMethod
get_residual_nnzh(b::ADModelBackend, nvar)
get_residual_nnzh(nls::AbstractNLSModel, nvar)

Return the number of nonzero elements in the residual Hessians.

source
ADNLPModels.get_residual_nnzjMethod
get_residual_nnzj(b::ADModelBackend, nvar, nequ)
get_residual_nnzj(nls::AbstractNLSModel, nvar, nequ)

Return the number of nonzero elements in the residual Jacobians.

source
ADNLPModels.set_adbackend!Method
set_adbackend!(nlp, new_adbackend)
set_adbackend!(nlp; kwargs...)

Replace the current adbackend value of nlp by new_adbackend or instantiate a new one with kwargs, see ADModelBackend. By default, the setter with kwargs will reuse existing backends.

source
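
An illustrative call, following the same pattern as in the backend tutorial above:

set_adbackend!(nlp, gradient_backend = ADNLPModels.ReverseDiffADGradient)  # swaps one backend and reuses the others
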
diff --git a/previews/PR302/sparse/index.html b/previews/PR302/sparse/index.html
index a73edf78..4e0e533d 100644
@@ -120,4 +120,4 @@
  jprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                jtprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
 jtprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0            jtprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
       hess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                 hprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
      jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     

The package SparseConnectivityTracer.jl is used to compute the sparsity pattern of Jacobians and Hessians. The evaluation of the number of directional derivatives and the seeds required to compute compressed Jacobians and Hessians is performed using SparseMatrixColorings.jl. As of release v0.8.1, it has replaced ColPack.jl. We acknowledge Guillaume Dalle (@gdalle), Adrian Hill (@adrhill), Alexis Montoison (@amontoison), and Michel Schanen (@michel2323) for the development of these packages.

diff --git a/previews/PR302/sparsity_pattern/index.html b/previews/PR302/sparsity_pattern/index.html
index 3301b62b..5d3405cb 100644
@@ -29,7 +29,7 @@
 @elapsed begin
   nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon; hessian_backend = ADNLPModels.EmptyADbackend)
 end
- 2.309108635
+ 2.295093793

ADNLPModel will automatically prepare an AD backend for computing sparse Jacobians and Hessians. We disabled the Hessian computation here to focus the measurement on the Jacobian computation. The keyword argument show_time = true can also be passed to the problem's constructor to get more detailed information about the time used to prepare the AD backend.
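
With the timing option enabled, the constructor call above would read (illustrative):

@elapsed begin
  nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon; hessian_backend = ADNLPModels.EmptyADbackend, show_time = true)
end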

using NLPModels
 x = sqrt(2) * ones(n)
 jac_nln(nlp, x)
49999×100000 SparseArrays.SparseMatrixCSC{Float64, Int64} with 199996 stored entries:
 ⎡⠙⢦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎤
@@ -78,7 +78,7 @@
 
   jac_back = ADNLPModels.SparseADJacobian(n, f, N - 1, c!, J)
   nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon; hessian_backend = ADNLPModels.EmptyADbackend, jacobian_backend = jac_back)
 end
- 1.598714238
+ 1.56674004

We recover the same Jacobian.

using NLPModels
 x = sqrt(2) * ones(n)
 jac_nln(nlp, x)
49999×100000 SparseArrays.SparseMatrixCSC{Float64, Int64} with 199996 stored entries:
 ⎡⠙⢦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎤
@@ -90,4 +90,4 @@
 ⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠲⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢦⡀⠀⠀⠀⠀⠀⎥
 ⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢦⡀⠀⠀⠀⎥
 ⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢦⡀⠀⎥
 ⎣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠳⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠦⎦

The same can be done for the Hessian of the Lagrangian.
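
A sketch of the Hessian analogue, assuming the sparsity-detection and backend constructors mirror the Jacobian ones used above (compute_hessian_sparsity and this SparseADHessian signature are the assumed counterparts):

H = ADNLPModels.compute_hessian_sparsity(f, n, c!, N - 1)
hess_back = ADNLPModels.SparseADHessian(n, f, N - 1, c!, H)
nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon; jacobian_backend = jac_back, hessian_backend = hess_back)
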
diff --git a/previews/PR302/tutorial/index.html b/previews/PR302/tutorial/index.html
index f39b8f56..1e0d05d1 100644
@@ -1,2 +1,2 @@
Tutorial · ADNLPModels.jl