diff --git a/previews/PR302/.documenter-siteinfo.json b/previews/PR302/.documenter-siteinfo.json
index fa1ed821..8fb146b4 100644
--- a/previews/PR302/.documenter-siteinfo.json
+++ b/previews/PR302/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-09-26T00:29:38","documenter_version":"1.7.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-09-26T00:29:47","documenter_version":"1.7.0"}}
\ No newline at end of file
diff --git a/previews/PR302/backend/index.html b/previews/PR302/backend/index.html
index 6331f2e9..35b6b3cc 100644
--- a/previews/PR302/backend/index.html
+++ b/previews/PR302/backend/index.html
@@ -57,9 +57,9 @@
   return g
end

Finally, we use the homemade backend to compute the gradient.

nlp = ADNLPModel(sum, ones(3), gradient_backend = NewADGradient)
 grad(nlp, nlp.meta.x0)  # returns the gradient at x0 using `NewADGradient`
3-element Vector{Float64}:
- 0.437855944376617
- 0.9275335801536463
- 0.3879876758647781

+ 0.25134352937048154
+ 0.014934949634762829
+ 0.5167014826880357
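The definition of `NewADGradient` is truncated by the diff above (only its closing `return g` and `end` survive). As a rough, hypothetical reconstruction of what such a homemade backend looks like under the ADNLPModels backend interface (the constructor signature and the `rand` placeholder gradient are assumptions, consistent with the random values printed above):

```julia
using ADNLPModels, NLPModels

# Hypothetical custom backend: subtypes ADNLPModels.ADBackend.
struct NewADGradient <: ADNLPModels.ADBackend end

# Backends are built from the problem dimensions (assumed signature).
NewADGradient(nvar::Integer, f, ncon::Integer = 0; kwargs...) = NewADGradient()

# Placeholder "gradient": random values, matching the output shown above.
function ADNLPModels.gradient!(::NewADGradient, g, f, x)
  g .= rand(eltype(x), length(x))
  return g
end
```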

Change backend

Once an instance of an ADNLPModel has been created, it is possible to change the backends without re-instantiating the model.

using ADNLPModels, NLPModels
 f(x) = 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2
 x0 = 3 * ones(2)
 nlp = ADNLPModel(f, x0)
@@ -128,10 +128,10 @@
            jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0               jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
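The in-place backend change can be sketched end to end; this is a minimal illustration using only names that appear on this page (`set_adbackend!`, `ADNLPModels.ForwardDiffADGradient`):

```julia
using ADNLPModels, NLPModels

f(x) = 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2
nlp = ADNLPModel(f, 3 * ones(2))

# Swap only the gradient backend; the model, its meta, and its counters survive.
set_adbackend!(nlp, gradient_backend = ADNLPModels.ForwardDiffADGradient)

grad(nlp, nlp.meta.x0)  # same model instance, new backend
```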
 

Then, the gradient will return a vector of Float64.

x64 = rand(2)
 grad(nlp, x64)
2-element Vector{Float64}:
- -22.997214205244017
- 195.12883522056904

+ -47.401657140104184
+  41.130394589412724
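A quick way to verify the claim about element types (a sketch, assuming the same Rosenbrock model as above):

```julia
using ADNLPModels, NLPModels

f(x) = 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2
nlp = ADNLPModel(f, 3 * ones(2))

x64 = rand(2)       # Float64 input
g = grad(nlp, x64)
eltype(g)           # Float64, matching the element type of x64
```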

It is now possible to move to a different type, for instance Float32, while keeping the instance nlp.

x0_32 = ones(Float32, 2)
 set_adbackend!(nlp, gradient_backend = ADNLPModels.ForwardDiffADGradient, x0 = x0_32)
 x32 = rand(Float32, 2)
 grad(nlp, x32)
2-element Vector{Float64}:
- -118.4972152709961
-  147.8224334716797
+ -120.70941925048828
+  115.58370971679688
diff --git a/previews/PR302/performance/29dec148.svg b/previews/PR302/performance/96cc5078.svg
similarity index 72%
rename from previews/PR302/performance/29dec148.svg
rename to previews/PR302/performance/96cc5078.svg
index b7dbbfd5..d7f936e4 100644
--- a/previews/PR302/performance/29dec148.svg
+++ b/previews/PR302/performance/96cc5078.svg
@@ -1,279 +1,279 @@
[279 lines of SVG path data omitted: the performance-profile plot was regenerated, with only coordinate values changing]
diff --git a/previews/PR302/performance/index.html b/previews/PR302/performance/index.html
index 71d08f5b..5a210370 100644
--- a/previews/PR302/performance/index.html
+++ b/previews/PR302/performance/index.html
@@ -267,33 +267,33 @@
     stats[back][stats[back].name .== name, :time] = [median(b.times)]
     stats[back][stats[back].name .== name, :allocs] = [median(b.allocs)]
   end
 end
-end
[ Info:  camshape with 1000 vars and 2003 cons
-[ Info:  catenary with 999 vars and 332 cons
-┌ Warning: catenary: number of variables adjusted to be a multiple of 3
-└ @ OptimizationProblems.PureJuMP ~/.julia/packages/OptimizationProblems/G0vFO/src/PureJuMP/catenary.jl:20
-┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
-└ @ OptimizationProblems.PureJuMP ~/.julia/packages/OptimizationProblems/G0vFO/src/PureJuMP/catenary.jl:22
-┌ Warning: catenary: number of variables adjusted to be a multiple of 3
-└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/G0vFO/src/ADNLPProblems/catenary.jl:11
-┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
-└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/G0vFO/src/ADNLPProblems/catenary.jl:13
-┌ Warning: catenary: number of variables adjusted to be a multiple of 3
-└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/G0vFO/src/ADNLPProblems/catenary.jl:11
-┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
-└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/G0vFO/src/ADNLPProblems/catenary.jl:13
-[ Info:  chain with 1000 vars and 752 cons
-[ Info:  channel with 1000 vars and 1000 cons
-[ Info:  clnlbeam with 999 vars and 664 cons
-[ Info:  controlinvestment with 1000 vars and 500 cons
-[ Info:  elec with 999 vars and 333 cons
-[ Info:  hovercraft1d with 998 vars and 668 cons
-[ Info:  marine with 1007 vars and 488 cons
-[ Info:  polygon with 1000 vars and 125251 cons
-[ Info:  polygon1 with 1000 vars and 500 cons
-[ Info:  polygon2 with 1000 vars and 1 cons
-[ Info:  polygon3 with 1000 vars and 1000 cons
-[ Info:  robotarm with 1009 vars and 1002 cons
-[ Info:  structural with 3540 vars and 3652 cons
+end
[ Info:  camshape with 1000 vars and 2003 cons
+[ Info:  catenary with 999 vars and 332 cons
+┌ Warning: catenary: number of variables adjusted to be a multiple of 3
+└ @ OptimizationProblems.PureJuMP ~/.julia/packages/OptimizationProblems/G0vFO/src/PureJuMP/catenary.jl:20
+┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
+└ @ OptimizationProblems.PureJuMP ~/.julia/packages/OptimizationProblems/G0vFO/src/PureJuMP/catenary.jl:22
+┌ Warning: catenary: number of variables adjusted to be a multiple of 3
+└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/G0vFO/src/ADNLPProblems/catenary.jl:11
+┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
+└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/G0vFO/src/ADNLPProblems/catenary.jl:13
+┌ Warning: catenary: number of variables adjusted to be a multiple of 3
+└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/G0vFO/src/ADNLPProblems/catenary.jl:11
+┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
+└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/G0vFO/src/ADNLPProblems/catenary.jl:13
+[ Info:  chain with 1000 vars and 752 cons
+[ Info:  channel with 1000 vars and 1000 cons
+[ Info:  clnlbeam with 999 vars and 664 cons
+[ Info:  controlinvestment with 1000 vars and 500 cons
+[ Info:  elec with 999 vars and 333 cons
+[ Info:  hovercraft1d with 998 vars and 668 cons
+[ Info:  marine with 1007 vars and 488 cons
+[ Info:  polygon with 1000 vars and 125251 cons
+[ Info:  polygon1 with 1000 vars and 500 cons
+[ Info:  polygon2 with 1000 vars and 1 cons
+[ Info:  polygon3 with 1000 vars and 1000 cons
+[ Info:  robotarm with 1009 vars and 1002 cons
+[ Info:  structural with 3540 vars and 3652 cons
using Plots, SolverBenchmark
 costnames = ["median time (in ns)", "median allocs"]
 costs = [
   df -> df.time,
@@ -302,4 +302,4 @@
 
 gr()
 
-profile_solvers(stats, costs, costnames)
Example block output
+profile_solvers(stats, costs, costnames)
Example block output
diff --git a/previews/PR302/sparsity_pattern/index.html b/previews/PR302/sparsity_pattern/index.html
index 993eed5b..3301b62b 100644
--- a/previews/PR302/sparsity_pattern/index.html
+++ b/previews/PR302/sparsity_pattern/index.html
@@ -29,7 +29,7 @@
 @elapsed begin
   nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon;
     hessian_backend = ADNLPModels.EmptyADbackend)
-end
2.287233987

+end
2.309108635

ADNLPModel will automatically prepare an AD backend for computing sparse Jacobian and Hessian. We disabled the Hessian computation here to focus the measurement on the Jacobian computation. The keyword argument show_time = true can also be passed to the problem's constructor to get more detailed information about the time used to prepare the AD backend.
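For illustration, here is a small self-contained constrained model (a hypothetical `f` and `c!`, not the ones from this page) built with `show_time = true` and the Hessian backend disabled:

```julia
using ADNLPModels

f(x) = sum(x .^ 2)                      # toy objective (illustration only)
c!(cx, x) = (cx[1] = x[1] + x[2]; cx)   # one toy constraint
x0 = ones(2)
lcon = zeros(1)
ucon = zeros(1)

# show_time = true prints how long each AD backend takes to prepare;
# EmptyADbackend skips the (sparse) Hessian preparation entirely.
nlp = ADNLPModel!(f, x0, c!, lcon, ucon;
                  hessian_backend = ADNLPModels.EmptyADbackend,
                  show_time = true)
```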

using NLPModels
 x = sqrt(2) * ones(n)
 jac_nln(nlp, x)
49999×100000 SparseArrays.SparseMatrixCSC{Float64, Int64} with 199996 stored entries:
 ⎡⠙⢦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎤
@@ -78,7 +78,7 @@
 
   jac_back = ADNLPModels.SparseADJacobian(n, f, N - 1, c!, J)
   nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon; hessian_backend = ADNLPModels.EmptyADbackend, jacobian_backend = jac_back)
-end
1.566555775

+end
1.598714238

We recover the same Jacobian.

using NLPModels
 x = sqrt(2) * ones(n)
 jac_nln(nlp, x)
49999×100000 SparseArrays.SparseMatrixCSC{Float64, Int64} with 199996 stored entries:
 ⎡⠙⢦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎤
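To check numerically that a hand-given pattern recovers the same Jacobian, one can build the same model twice, once with automatic sparsity detection and once with the pattern supplied, and compare. A sketch on a small hypothetical bidiagonal constraint (not the problem above):

```julia
using ADNLPModels, NLPModels, SparseArrays

n = 4
f(x) = sum(x)
function c!(cx, x)
  for i in 1:(length(x) - 1)
    cx[i] = x[i + 1] - x[i]^2   # bidiagonal Jacobian structure
  end
  return cx
end
ncon = n - 1
lcon = zeros(ncon)
ucon = zeros(ncon)

# Hand-given sparsity pattern: only the structure of J matters, not its values.
J = spdiagm(ncon, n, 0 => ones(ncon), 1 => ones(ncon))
jac_back = ADNLPModels.SparseADJacobian(n, f, ncon, c!, J)

nlp_auto = ADNLPModel!(f, ones(n), c!, lcon, ucon)
nlp_hand = ADNLPModel!(f, ones(n), c!, lcon, ucon; jacobian_backend = jac_back)

x = ones(n)
jac(nlp_auto, x) == jac(nlp_hand, x)   # both backends give the same sparse Jacobian
```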