diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 141676e..e92c169 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.8.5","generation_timestamp":"2023-11-18T11:44:32","documenter_version":"1.1.2"}} \ No newline at end of file +{"documenter":{"julia_version":"1.8.5","generation_timestamp":"2023-11-19T01:35:28","documenter_version":"1.1.2"}} \ No newline at end of file diff --git a/dev/index.html b/dev/index.html index 8481dc6..26e559e 100644 --- a/dev/index.html +++ b/dev/index.html @@ -5,4 +5,4 @@ author={Pacaud, Fran{\c{c}}ois and Shin, Sungho and Schanen, Michel and Maldonado, Daniel Adrian and Anitescu, Mihai}, journal={arXiv preprint arXiv:2203.11875}, year={2022} -}

Funding

This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation’s exascale computing imperative.

Table of contents

Quickstart

OPF Model

OPF Solution

Wrappers

Library

+}

Funding

This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation’s exascale computing imperative.

Table of contents

Quickstart

OPF Model

OPF Solution

Wrappers

Library

diff --git a/dev/lib/api/index.html b/dev/lib/api/index.html index 052dce0..4552d8d 100644 --- a/dev/lib/api/index.html +++ b/dev/lib/api/index.html @@ -2,4 +2,4 @@ Evaluators API · Argos.jl

Evaluator API

Description

Argos.AbstractNLPEvaluatorType
AbstractNLPEvaluator

AbstractNLPEvaluator implements the bridge between the problem formulation (see ExaPF.AbstractFormulation) and the optimization solver. Once the problem formulation is bridged, the evaluator can evaluate:

  • the objective;
  • the gradient of the objective;
  • the constraints;
  • the Jacobian of the constraints;
  • the Jacobian-vector and transpose-Jacobian vector products of the constraints;
  • the Hessian of the objective;
  • the Hessian of the Lagrangian.
source
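
A typical workflow first refreshes the evaluator's cache with update! and then queries the callbacks. Below is a minimal sketch, assuming a MATPOWER case file case9.m is available locally (the file name is only illustrative):

nlp = Argos.FullSpaceEvaluator("case9.m")   # build the evaluator from the case file
u = Argos.initial(nlp)                      # default starting point
Argos.update!(nlp, u)                       # refresh the internal cache first
obj = Argos.objective(nlp, u)               # evaluate the objective
g = zeros(Argos.n_variables(nlp))
Argos.gradient!(nlp, g, u)                  # gradient, stored inplace in g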

API Reference

Optimization

Argos.optimize!Function
optimize!(optimizer, nlp::AbstractNLPEvaluator, x0)

Use the optimization routine implemented in optimizer to solve the optimal power flow problem specified in the evaluator nlp. The initial point is specified by x0.

Return the solution as a named tuple, with fields

  • status::MOI.TerminationStatus: Solver's termination status, as specified by MOI
  • minimum::Float64: final objective
  • minimizer::AbstractVector: final solution vector, with the same ordering as the Variables specified in nlp.
optimize!(optimizer, nlp::AbstractNLPEvaluator)

Wrap the previous optimize! function, passing as initial guess x0 the initial value returned by initial(nlp).

Examples

nlp = ExaPF.ReducedSpaceEvaluator(datafile)
 optimizer = Ipopt.Optimizer()
 solution = ExaPF.optimize!(optimizer, nlp)
-

Notes

By default, the optimization routine solves a minimization problem.

source

Attributes

Argos.n_variablesFunction
n_variables(nlp::AbstractNLPEvaluator)

Get the number of variables in the problem.

source
Argos.constraints_typeFunction
constraints_type(nlp::AbstractNLPEvaluator)

Return the type of the non-linear constraints of the evaluator nlp, as a Symbol. The result is :inequality if the problem has only inequality constraints, :equality if it has only equality constraints, or :mixed if it has both types of constraints.

source

Callbacks

Argos.update!Function
update!(nlp::AbstractNLPEvaluator, u::AbstractVector)

Update the internal structure inside nlp with the new entry u. This method has to be called before calling any other callbacks.

source
Argos.objectiveFunction
objective(nlp::AbstractNLPEvaluator, u)::Float64

Evaluate the objective at the given variable u.

source
Argos.gradient!Function
gradient!(nlp::AbstractNLPEvaluator, g, u)

Evaluate the gradient of the objective at the given variable u. Store the result inplace in the vector g.

Note

The vector g should have the same dimension as u.

source
Argos.constraint!Function
constraint!(nlp::AbstractNLPEvaluator, cons, u)

Evaluate the constraints of the problem at the given variable u. Store the result inplace in the vector cons.

Note

The vector cons should have the same dimension as the result returned by n_constraints(nlp).

source
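
For instance, a buffer of the right size can be allocated with n_constraints before evaluating the constraints; a short sketch, reusing the evaluator nlp and the point u from the example above:

m = Argos.n_constraints(nlp)      # number of constraints
cons = zeros(m)                   # output buffer
Argos.update!(nlp, u)             # refresh the cache before any callback
Argos.constraint!(nlp, cons, u)   # constraints, stored inplace in cons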
Argos.jacobian!Function
jacobian!(nlp::AbstractNLPEvaluator, jac::AbstractMatrix, u)

Evaluate the Jacobian of the constraints at variable u. Store the result inplace in the m x n dense matrix jac.

source
Argos.jacobian_coo!Function
jacobian_coo!(nlp::AbstractNLPEvaluator, jac::AbstractVector, u)

Evaluate the (sparse) Jacobian of the constraints at variable u in COO format. Store the result inplace in the vector jac of length nnzj.

source
Argos.jprod!Function
jprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the Jacobian-vector product $J v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension n
  • jv is a vector with dimension m
source
Argos.jtprod!Function
jtprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the transpose Jacobian-vector product $J^{T} v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension m
  • jv is a vector with dimension n
source
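
As a sanity check, jprod! and jtprod! should satisfy the adjoint identity ⟨Jv, w⟩ = ⟨v, Jᵀw⟩. A sketch, reusing nlp and u from the examples above:

using LinearAlgebra
n, m = Argos.n_variables(nlp), Argos.n_constraints(nlp)
v, w = rand(n), rand(m)
jv, jtw = zeros(m), zeros(n)
Argos.update!(nlp, u)
Argos.jprod!(nlp, jv, u, v)        # jv  = J * v
Argos.jtprod!(nlp, jtw, u, w)      # jtw = J' * w
@assert dot(jv, w) ≈ dot(v, jtw)   # adjoint identity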
Argos.ojtprod!Function
ojtprod!(nlp::AbstractNLPEvaluator, jv, u, σ, v)

Evaluate the transpose Jacobian-vector product J' * [σ ; v], where J is the Jacobian of the vector [f(x); h(x)], with f(x) the current objective and h(x) the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • jv is a vector with dimension n
  • u is a vector with dimension n
  • σ is a scalar
  • v is a vector with dimension m
source
Argos.hessian!Function
hessian!(nlp::AbstractNLPEvaluator, H, u)

Evaluate the Hessian ∇²f(u) of the objective function f(u). Store the result inplace, in the n x n dense matrix H.

source
Argos.hessian_coo!Function
hessian_coo!(nlp::AbstractNLPEvaluator, hess::AbstractVector, u)

Evaluate the (sparse) Hessian of the constraints at variable u in COO format. Store the result inplace in the vector hess of length nnzh.

source
Argos.hessprod!Function
hessprod!(nlp::AbstractNLPEvaluator, hessvec, u, v)

Evaluate the Hessian-vector product ∇²f(u) * v of the objective evaluated at variable u. Store the result inplace, in the vector hessvec.

Note

The vector hessvec should have the same length as u.

source
Argos.hessian_lagrangian_prod!Function
hessian_lagrangian_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, v)

Evaluate the Hessian-vector product of the Lagrangian function $L(u, y) = f(u) + \sum_i y_i c_i(u)$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i y_i ∇²c_i(u) ⋅ v\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar, encoding the objective's scaling.
  • v is a vector with dimension n.
source
Argos.hessian_lagrangian_penalty_prod!Function
hessian_lagrangian_penalty_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, d, v)

Evaluate the Hessian-vector product of the Augmented Lagrangian function $L(u, y) = f(u) + \sum_i y_i c_i(u) + \frac{1}{2} \sum_i d_i c_i(u)^2$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i (y_i + d_i) ∇²c_i(u) ⋅ v + \sum_i d_i ∇c_i(u)^T ∇c_i(u) ⋅ v\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar.
  • v is a vector with dimension n.
  • d is a vector with dimension m.
source
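
To illustrate the relation between the two Hessian callbacks, setting the multipliers y to zero and σ = 1 makes the Hessian-vector product of the Lagrangian coincide with the Hessian-vector product of the objective. A sketch, under the same assumptions as the previous examples:

n, m = Argos.n_variables(nlp), Argos.n_constraints(nlp)
v = rand(n)
hv1, hv2 = zeros(n), zeros(n)
Argos.update!(nlp, u)
Argos.hessprod!(nlp, hv1, u, v)                               # ∇²f(u) ⋅ v
Argos.hessian_lagrangian_prod!(nlp, hv2, u, zeros(m), 1.0, v) # same product with y = 0, σ = 1
@assert hv1 ≈ hv2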

Utilities

Argos.reset!Function
reset!(nlp::AbstractNLPEvaluator)

Reset the evaluator nlp to its default configuration.

source
+

Notes

By default, the optimization routine solves a minimization problem.

source

Attributes

Argos.VariablesType
Variables <: AbstractNLPAttribute end

Attribute corresponding to the optimization variables attached to a given AbstractNLPEvaluator.

source
Argos.ConstraintsType
Constraints <: AbstractNLPAttribute end

Attribute corresponding to the constraints attached to a given AbstractNLPEvaluator.

source
Argos.n_variablesFunction
n_variables(nlp::AbstractNLPEvaluator)

Get the number of variables in the problem.

source
Argos.n_constraintsFunction
n_constraints(nlp::AbstractNLPEvaluator)

Get the number of constraints in the problem.

source
Argos.constraints_typeFunction
constraints_type(nlp::AbstractNLPEvaluator)

Return the type of the non-linear constraints of the evaluator nlp, as a Symbol. The result is :inequality if the problem has only inequality constraints, :equality if it has only equality constraints, or :mixed if it has both types of constraints.

source

Callbacks

Argos.update!Function
update!(nlp::AbstractNLPEvaluator, u::AbstractVector)

Update the internal structure inside nlp with the new entry u. This method has to be called before calling any other callbacks.

source
Argos.objectiveFunction
objective(nlp::AbstractNLPEvaluator, u)::Float64

Evaluate the objective at the given variable u.

source
Argos.gradient!Function
gradient!(nlp::AbstractNLPEvaluator, g, u)

Evaluate the gradient of the objective at the given variable u. Store the result inplace in the vector g.

Note

The vector g should have the same dimension as u.

source
Argos.constraint!Function
constraint!(nlp::AbstractNLPEvaluator, cons, u)

Evaluate the constraints of the problem at the given variable u. Store the result inplace in the vector cons.

Note

The vector cons should have the same dimension as the result returned by n_constraints(nlp).

source
Argos.jacobian!Function
jacobian!(nlp::AbstractNLPEvaluator, jac::AbstractMatrix, u)

Evaluate the Jacobian of the constraints at variable u. Store the result inplace in the m x n dense matrix jac.

source
Argos.jacobian_coo!Function
jacobian_coo!(nlp::AbstractNLPEvaluator, jac::AbstractVector, u)

Evaluate the (sparse) Jacobian of the constraints at variable u in COO format. Store the result inplace in the vector jac of length nnzj.

source
Argos.jprod!Function
jprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the Jacobian-vector product $J v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension n
  • jv is a vector with dimension m
source
Argos.jtprod!Function
jtprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the transpose Jacobian-vector product $J^{T} v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension m
  • jv is a vector with dimension n
source
Argos.ojtprod!Function
ojtprod!(nlp::AbstractNLPEvaluator, jv, u, σ, v)

Evaluate the transpose Jacobian-vector product J' * [σ ; v], where J is the Jacobian of the vector [f(x); h(x)], with f(x) the current objective and h(x) the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • jv is a vector with dimension n
  • u is a vector with dimension n
  • σ is a scalar
  • v is a vector with dimension m
source
Argos.hessian!Function
hessian!(nlp::AbstractNLPEvaluator, H, u)

Evaluate the Hessian ∇²f(u) of the objective function f(u). Store the result inplace, in the n x n dense matrix H.

source
Argos.hessian_coo!Function
hessian_coo!(nlp::AbstractNLPEvaluator, hess::AbstractVector, u)

Evaluate the (sparse) Hessian of the constraints at variable u in COO format. Store the result inplace in the vector hess of length nnzh.

source
Argos.hessprod!Function
hessprod!(nlp::AbstractNLPEvaluator, hessvec, u, v)

Evaluate the Hessian-vector product ∇²f(u) * v of the objective evaluated at variable u. Store the result inplace, in the vector hessvec.

Note

The vector hessvec should have the same length as u.

source
Argos.hessian_lagrangian_prod!Function
hessian_lagrangian_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, v)

Evaluate the Hessian-vector product of the Lagrangian function $L(u, y) = f(u) + \sum_i y_i c_i(u)$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i y_i ∇²c_i(u) ⋅ v\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar, encoding the objective's scaling.
  • v is a vector with dimension n.
source
Argos.hessian_lagrangian_penalty_prod!Function
hessian_lagrangian_penalty_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, d, v)

Evaluate the Hessian-vector product of the Augmented Lagrangian function $L(u, y) = f(u) + \sum_i y_i c_i(u) + \frac{1}{2} \sum_i d_i c_i(u)^2$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i (y_i + d_i) ∇²c_i(u) ⋅ v + \sum_i d_i ∇c_i(u)^T ∇c_i(u) ⋅ v\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar.
  • v is a vector with dimension n.
  • d is a vector with dimension m.
source

Utilities

Argos.reset!Function
reset!(nlp::AbstractNLPEvaluator)

Reset the evaluator nlp to its default configuration.

source
diff --git a/dev/lib/evaluators/index.html b/dev/lib/evaluators/index.html index 559226d..671fa43 100644 --- a/dev/lib/evaluators/index.html +++ b/dev/lib/evaluators/index.html @@ -90,4 +90,4 @@ julia> @assert isa(x, Array) # x is defined on the host memory julia> Argos.objective(bdg, x) # evaluate the objective on the device -source +source diff --git a/dev/lib/kkt/index.html b/dev/lib/kkt/index.html index 608d89d..df23af1 100644 --- a/dev/lib/kkt/index.html +++ b/dev/lib/kkt/index.html @@ -23,4 +23,4 @@ julia> kkt = Argos.MixedAuglagKKTSystem{T, VT, MT}(opf) julia> MadNLP.get_kkt(kkt) # return the matrix to factorize -

Notes

MixedAuglagKKTSystem can be instantiated either on the host memory (CPU) or on an NVIDIA GPU using CUDA.

Supports only bound-constrained optimization problems (hence no Jacobian).

References

[PMSSA2022] Pacaud, François, Daniel Adrian Maldonado, Sungho Shin, Michel Schanen, and Mihai Anitescu. "A feasible reduced space method for real-time optimal power flow." Electric Power Systems Research 212 (2022): 108268.

source +

Notes

MixedAuglagKKTSystem can be instantiated either on the host memory (CPU) or on an NVIDIA GPU using CUDA.

Supports only bound-constrained optimization problems (hence no Jacobian).
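
For instance, instantiating the KKT system on a CUDA GPU amounts to picking CUDA array types for VT and MT; a sketch, assuming CUDA.jl is installed and opf is the OPFModel from the example above:

using CUDA
T = Float64
VT, MT = CuVector{T}, CuMatrix{T}   # device array types
kkt = Argos.MixedAuglagKKTSystem{T, VT, MT}(opf)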

References

[PMSSA2022] Pacaud, François, Daniel Adrian Maldonado, Sungho Shin, Michel Schanen, and Mihai Anitescu. "A feasible reduced space method for real-time optimal power flow." Electric Power Systems Research 212 (2022): 108268.

source diff --git a/dev/lib/wrappers/index.html b/dev/lib/wrappers/index.html index 5201a03..62d319e 100644 --- a/dev/lib/wrappers/index.html +++ b/dev/lib/wrappers/index.html @@ -9,4 +9,4 @@ julia> nlp = Argos.ReducedSpaceEvaluator(datafile); julia> ev = Argos.MOIEvaluator(nlp) -

Attributes

source +

Attributes

source diff --git a/dev/man/fullspace/index.html b/dev/man/fullspace/index.html index 263ddb7..39154c4 100644 --- a/dev/man/fullspace/index.html +++ b/dev/man/fullspace/index.html @@ -83,7 +83,7 @@ #lines: 9 giving a mathematical formulation with: #controls: 5 - #states : 14, ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.MultiExpressions}(PolarBasis (AbstractExpression), ExaPF.MultiExpressions(ExaPF.AutoDiff.AbstractExpression[CostFunction (AbstractExpression), PowerFlowBalance (AbstractExpression), PowerGenerationBounds (AbstractExpression), LineFlows (AbstractExpression)])), [11, 12, 13, 14, 15, 16, 17, 18, 4, 5, 6, 7, 8, 9, 1, 2, 3, 20, 21], 21-elements NetworkStack{Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}, 21-elements NetworkStack{Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}, [1, 1, 1, 2, 3, 4, 2, 3, 4, 5, 6, 7, 5, 6, 7, 8, 8, 1, 1], 8, ForwardDiff.Dual{Nothing, Float64, 8}[Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0) … Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)], ForwardDiff.Dual{Nothing, Float64, 8}[Dual{Nothing}(2.121995791e-314,9.8657305e-316,5.0e-324,4.864849e-316,4.243991582e-314,0.0,0.0,1.645546518e-314,9.8657305e-316), Dual{Nothing}(7.4112835e-316,6.0e-323,6.3659873734e-314,5.0e-324,2.1219957905e-314,0.0,0.0,6.9269972706384e-310,1.243166e-316), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,4.86485335e-316,0.0,NaN), Dual{Nothing}(0.0,1.641231127e-314,4.8648585e-316,4.8648585e-316,4.86486007e-316,2.121995791e-314,9.73e-322,0.0,4.86486165e-316), Dual{Nothing}(4.243991582e-314,5.0e-324,1.136e-321,5.72938863566e-313,9.73e-322,9.92343103e-316,7.4e-323,6.3659873734e-314,5.0e-324), Dual{Nothing}(2.1219957905e-314,0.0,0.0,6.9269972706384e-310,1.24317154e-316,0.0,0.0,4.35e-322,0.0), Dual{Nothing}(0.0,0.0,4.864866e-316,0.0,NaN,0.0,7.4e-323,4.86487114e-316,4.86487114e-316), Dual{Nothing}(4.8648727e-316,2.121995791e-314,1.1097438087816836e188,1.7069689862412613e-13,4.8648743e-316,4.243991582e-314,2.9130886197887413e-289,-1.628828696661932e-307,1.39069238152491e-309), Dual{Nothing}(-5.957407874855143e-264,1.055076633e-315,7.4e-323,6.3659873734e-314,5.0e-324,2.1219957905e-314,0.0,0.0,6.9269972706384e-310), Dual{Nothing}(1.24317786e-316,0.0,8.489487147e-314,2.0e-323,4.158584937444445e69,0.0,0.0,4.86487865e-316,0.0) … Dual{Nothing}(4.86494505e-316,1.5e-323,2.641757783209559e180,0.0,0.0,4.8649419e-316,0.0,NaN,0.0), Dual{Nothing}(4.14452377e-316,4.864947e-316,4.864947e-316,4.8649486e-316,2.121995791e-314,0.0,3.0e-323,4.8649502e-316,4.243991582e-314), 
[… uninitialized ForwardDiff.Dual work buffers and the 19×19 sparsity pattern truncated …], 19, 19))

generating the matrix

flp.hess.H
19×19 SparseArrays.SparseMatrixCSC{Float64, Int64} with 103 stored entries:
+    #states  :   14, ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.MultiExpressions}(PolarBasis (AbstractExpression), ExaPF.MultiExpressions(ExaPF.AutoDiff.AbstractExpression[CostFunction (AbstractExpression), PowerFlowBalance (AbstractExpression), PowerGenerationBounds (AbstractExpression), LineFlows (AbstractExpression)])), [11, 12, 13, 14, 15, 16, 17, 18, 4, 5, 6, 7, 8, 9, 1, 2, 3, 20, 21], 21-elements NetworkStack{Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}, 21-elements NetworkStack{Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}, [1, 1, 1, 2, 3, 4, 2, 3, 4, 5, 6, 7, 5, 6, 7, 8, 8, 1, 1], 8, [… uninitialized ForwardDiff.Dual work buffers and the 19×19 sparsity pattern truncated …], 19, 19))

generating the matrix

flp.hess.H
19×19 SparseArrays.SparseMatrixCSC{Float64, Int64} with 103 stored entries:
 ⠑⣤⡂⠡⣤⡂⠡⠌⠂⠀
 ⠌⡈⠻⣦⡈⠻⣦⠠⠁⠀
 ⠠⠻⣦⡈⠻⣦⡈⠁⠄⠀
@@ -124,4 +124,4 @@
 ⠠⠻⣦⡈⠳⣄⠀⠀⠀⠀
 ⡁⠆⠈⡛⠆⠈⡓⢄⠀⠀
 ⠈⠀⠁⠀⠀⠁⠀⠀⠑⠄
Info

For the Hessian, only the lower-triangular part is returned.

Deport on CUDA GPU

Deporting all the operations on a CUDA GPU simply amounts to instantiating a FullSpaceEvaluator on the GPU, with

using CUDAKernels # assuming CUDAKernels is installed
-flp = Argos.FullSpaceEvaluator(datafile; device=CUDADevice())

Then, the API remains exactly the same as on the CPU.

When using device=CUDADevice(), the model is instantiated entirely on the device, with no data left on the host (thus minimizing communication costs). The computation of the derivatives is streamlined by propagating the tangents in parallel, leading to faster evaluations of the callbacks. As expected, the larger the model, the more significant the performance gain.

+flp = Argos.FullSpaceEvaluator(datafile; device=CUDADevice())

Then, the API remains exactly the same as on the CPU.

When using device=CUDADevice(), the model is instantiated entirely on the device, with no data left on the host (thus minimizing communication costs). The computation of the derivatives is streamlined by propagating the tangents in parallel, leading to faster evaluations of the callbacks. As expected, the larger the model, the more significant the performance gain.
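
For instance, the callbacks can be queried on the device exactly as on the CPU; a minimal sketch, reusing the evaluator flp instantiated above (here Argos.initial is assumed to return a vector allocated on the device):

u = Argos.initial(flp)          # u is allocated on the GPU
Argos.update!(flp, u)           # the cache is refreshed on the device
obj = Argos.objective(flp, u)   # same callbacks as on the CPU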

diff --git a/dev/man/moi_wrapper/index.html b/dev/man/moi_wrapper/index.html index ad7e5b9..7f11c38 100644 --- a/dev/man/moi_wrapper/index.html +++ b/dev/man/moi_wrapper/index.html @@ -72,6 +72,6 @@ Number of equality constraint Jacobian evaluations = 16 Number of inequality constraint Jacobian evaluations = 16 Number of Lagrangian Hessian evaluations = 15 -Total seconds in IPOPT = 5.417 +Total seconds in IPOPT = 5.276 -EXIT: Optimal Solution Found. +EXIT: Optimal Solution Found. diff --git a/dev/man/nlpmodel_wrapper/index.html b/dev/man/nlpmodel_wrapper/index.html index 4f64d32..3552581 100644 --- a/dev/man/nlpmodel_wrapper/index.html +++ b/dev/man/nlpmodel_wrapper/index.html @@ -72,4 +72,4 @@ flp = Argos.FullSpaceEvaluator(datafile; device=CUDADevice())

The OPFModel structure works exclusively on the host memory, so we have to bridge the evaluator flp to the host before creating a new instance of OPFModel:

bridge = Argos.bridge(flp)
 model = Argos.OPFModel(bridge)
-
Note

Bridging an evaluator between the host and the device induces significant data movements, as every input and output has to be transferred back and forth between the host and the device. However, we have noticed that in practice the data-transfer time is negligible compared to the other operations (linear algebra, KKT system solution) performed inside the optimization algorithm.

+
Note

Bridging an evaluator between the host and the device induces significant data movements, as every input and output has to be transferred back and forth between the host and the device. However, we have noticed that in practice the data-transfer time is negligible compared to the other operations (linear algebra, KKT system solution) performed inside the optimization algorithm.
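
Once bridged, the model can be passed to any solver accepting an NLPModels instance; a short sketch, assuming MadNLP is installed:

using MadNLP
solver = MadNLP.MadNLPSolver(model)   # model wraps the bridged evaluator
stats = MadNLP.solve!(solver)         # data transfers happen inside the callbacks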

diff --git a/dev/man/overview/index.html b/dev/man/overview/index.html index 9740656..92c686e 100644 --- a/dev/man/overview/index.html +++ b/dev/man/overview/index.html @@ -27,79 +27,79 @@ Argos.update!(flp, x) # The values in the cache are modified accordingly [stack.vmag stack.vang]
9×2 Matrix{Float64}:
- 0.226624  0.0
- 0.501395  0.637496
- 0.917216  0.79987
- 0.334287  0.177238
- 0.733512  0.0578569
- 0.929176  0.848572
- 0.198684  0.929921
- 0.157821  0.640291
- 0.442635  0.383217
Note

Every time we have a new variable x, it is important to refresh the cache by explicitly calling Argos.update!(flp, x) before calling the other callbacks.

Callbacks

Now that the cache has been refreshed by calling update!, one can query the different callbacks to evaluate the objective, the constraints, and the derivatives:

Objective:

obj = Argos.objective(flp, x)
2157.256644208572

Gradient:

g = zeros(n)
+ 0.15041   0.0
+ 0.50092   0.924992
+ 0.473759  0.358728
+ 0.518277  0.876356
+ 0.92416   0.555134
+ 0.76888   0.725515
+ 0.740655  0.45902
+ 0.626103  0.432592
+ 0.368029  0.830113
Note

Every time we have a new variable x, it is important to refresh the cache by explicitly calling Argos.update!(flp, x) before calling the other callbacks.

Callbacks

Now that the cache has been refreshed by calling update!, one can query the different callbacks to evaluate the objective, the constraints, and the derivatives:

Objective:

obj = Argos.objective(flp, x)
2884.1561671399973

Gradient:

g = zeros(n)
 Argos.gradient!(flp, g, x)
 g
19-element Vector{Float64}:
-    0.0
-    0.0
-   13.155149511667696
-    0.0
-    0.0
-    0.0
-    0.0
-    0.0
-    7.048804671727464
-    0.0
-    0.0
-    0.0
-    0.0
-    0.0
-   10.397499704490965
-    0.0
-    0.0
- 1752.4422673312572
- 1066.3835998361485

Constraints:

cons = zeros(m)
+     0.0
+     0.0
+  1548.469090944695
+     0.0
+     0.0
+     0.0
+     0.0
+     0.0
+  3587.4389917479543
+     0.0
+     0.0
+     0.0
+     0.0
+     0.0
+ 12361.478790090398
+     0.0
+     0.0
+   605.2804108981928
+  2244.392745054761

Constraints:

cons = zeros(m)
 Argos.constraint!(flp, cons, x)
 cons
36-element Vector{Float64}:
- -0.9637979656689323
- -1.1024645185405844
- -0.11332057208831653
- -1.4670548501856002
-  4.545720654327502
-  1.117749192843455
- -0.0719796011860856
-  1.7172380706369474
- -1.1569930090230782
-  4.263147291989427
+  2.0867816596061397
+ -3.1044758937922508
+  2.4773279884069503
+ -0.38948567462106976
+  4.278286386938623
+ -0.16297745240170824
+ -2.975594320160333
+  1.5140267745435063
+  2.3991513077462567
+  5.587304029125447
   ⋮
-  0.47036113152405074
-  9.745290639744656
- 12.115906056363478
-  0.5440920744620615
-  2.0611829618917836
-  0.020501149688071187
-  7.5969722098727335
-  0.5768774775286676
-  0.2813629166294773
Note

All the callbacks are written to modify the data (constraints, gradient) inplace, to avoid unneeded allocations. In addition, Argos.jl provides versions that automatically allocate the return values:

g = Argos.gradient(flp, x)
+ 15.500905744331796
+ 20.253654774364026
+  1.0413827764711137
+ 23.3366245200085
+  2.1675692664401685
+  1.0597346199923012
+  5.793307883082476
+  0.5393196329874921
+  0.801071250769947
Note

All the callbacks are written to modify the data (constraints, gradient) inplace, to avoid unneeded allocations. In addition, Argos.jl provides versions that automatically allocate the return values:

g = Argos.gradient(flp, x)
 c = Argos.constraint(flp, x)
36-element Vector{Float64}:
- -0.9637979656689323
- -1.1024645185405844
- -0.11332057208831653
- -1.4670548501856002
-  4.545720654327502
-  1.117749192843455
- -0.0719796011860856
-  1.7172380706369474
- -1.1569930090230782
-  4.263147291989427
+  2.0867816596061397
+ -3.1044758937922508
+  2.4773279884069503
+ -0.38948567462106976
+  4.278286386938623
+ -0.16297745240170824
+ -2.975594320160333
+  1.5140267745435063
+  2.3991513077462567
+  5.587304029125447
   ⋮
-  0.47036113152405074
-  9.745290639744656
- 12.115906056363478
-  0.5440920744620615
-  2.0611829618917836
-  0.020501149688071187
-  7.5969722098727335
-  0.5768774775286676
-  0.2813629166294773

Finally, one can reset the evaluator to its original state using reset!:

Argos.reset!(flp)
+ 15.500905744331796
+ 20.253654774364026
+  1.0413827764711137
+ 23.3366245200085
+  2.1675692664401685
+  1.0597346199923012
+  5.793307883082476
+  0.5393196329874921
+  0.801071250769947

Finally, one can reset the evaluator to its original state using reset!:

Argos.reset!(flp)
 [stack.vmag stack.vang]
9×2 Matrix{Float64}:
  1.0  0.0
  1.0  0.0
@@ -109,4 +109,4 @@
  1.0  0.0
  1.0  0.0
  1.0  0.0
- 1.0  0.0
+ 1.0 0.0 diff --git a/dev/man/reducedspace/index.html b/dev/man/reducedspace/index.html index 3da654a..5e4abc2 100644 --- a/dev/man/reducedspace/index.html +++ b/dev/man/reducedspace/index.html @@ -94,7 +94,7 @@ * #iterations: 4 * Time Jacobian (s) ........: 0.0001 * Time linear solver (s) ...: 0.0001 - * Time total (s) ...........: 0.5340

with a slightly different solution (as we have loosened the tolerance):

stack = red.stack
+  * Time total (s) ...........: 0.5276

with a slightly different solution (as we have loosened the tolerance):

stack = red.stack
 [stack.vmag stack.vang]
9×2 Matrix{Float64}:
  1.1       0.0
  1.1       0.0478953
@@ -141,4 +141,4 @@
  -1573.41    -760.654    2476.81     -21.0085   -94.5838
    100.337    -60.9243    -21.0085  3922.1     2181.62
    105.971    -11.7018    -94.5838  2181.62    4668.9

As we will explain later, the computation of the reduced Jacobian and reduced Hessian can be streamlined on the GPU.

Deport on CUDA GPU

Instantiating a ReducedSpaceEvaluator on an NVIDIA GPU translates to:

using CUDAKernels # assuming CUDAKernels is installed
-red = Argos.ReducedSpaceEvaluator(datafile; device=CUDADevice(), nbatch_hessian=256)

The number of batches nbatch_hessian is the number of right-hand sides used to streamline the solution of the linear systems.

+red = Argos.ReducedSpaceEvaluator(datafile; device=CUDADevice(), nbatch_hessian=256)

The number of batches nbatch_hessian is the number of right-hand sides used to streamline the solution of the linear systems.
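
As an illustration, the dense reduced Hessian can be assembled with hessian!, whose inner linear solves are then processed nbatch_hessian right-hand sides at a time. A sketch, reusing the evaluator red (and assuming, for simplicity, a CPU instantiation as earlier on this page):

n = Argos.n_variables(red)
u = Argos.initial(red)      # default starting point
H = zeros(n, n)
Argos.update!(red, u)
Argos.hessian!(red, H, u)   # inner linear systems solved in batches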

diff --git a/dev/optim/biegler/index.html b/dev/optim/biegler/index.html index 2d8a56f..5502e2b 100644 --- a/dev/optim/biegler/index.html +++ b/dev/optim/biegler/index.html @@ -91,10 +91,10 @@ Number of constraint evaluations = 17 Number of constraint Jacobian evaluations = 15 Number of Lagrangian Hessian evaluations = 14 -Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 13.029 +Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 12.736 Total wall-clock secs in linear solver = 0.022 Total wall-clock secs in NLP function evaluations = 0.002 -Total wall-clock secs = 13.053 +Total wall-clock secs = 12.760 EXIT: Optimal Solution Found. -"Execution stats: Optimal Solution Found."
Info

Note that we get the exact same convergence as in the full-space.

+"Execution stats: Optimal Solution Found."
Info

Note that we get the exact same convergence as in the full-space.

diff --git a/dev/optim/fullspace/index.html b/dev/optim/fullspace/index.html index b708550..d490c39 100644 --- a/dev/optim/fullspace/index.html +++ b/dev/optim/fullspace/index.html @@ -64,49 +64,49 @@ 11 5.2966860e+03 7.29e-06 2.10e-04 -5.7 2.78e-03 - 1.00e+00 1.00e+00h 1 12 5.2966867e+03 2.58e-07 7.50e-06 -5.7 5.23e-04 - 1.00e+00 1.00e+00h 1 13 5.2966862e+03 1.20e-08 5.67e-07 -8.6 1.14e-04 - 1.00e+00 1.00e+00h 1 - 14 5.2966862e+03 1.18e-12 3.38e-11 -8.6 1.12e-06 - 1.00e+00 1.00e+00h 1 + 14 5.2966862e+03 1.18e-12 3.33e-11 -8.6 1.12e-06 - 1.00e+00 1.00e+00h 1 Number of Iterations....: 14 (scaled) (unscaled) Objective...............: 6.1017825057066950e+01 5.2966862028703945e+03 -Dual infeasibility......: 3.3764990803319961e-11 2.9309887850104133e-09 -Constraint violation....: 1.1765033391952784e-12 1.1765033391952784e-12 -Complementarity.........: 2.8885453188276795e-11 2.5074178114823606e-09 -Overall NLP error.......: 2.5074178114823606e-09 2.5074178114823606e-09 +Dual infeasibility......: 3.3310243452433497e-11 2.8915141885792965e-09 +Constraint violation....: 1.1763923168928159e-12 1.1763923168928159e-12 +Complementarity.........: 2.8885453188273518e-11 2.5074178114820760e-09 +Overall NLP error.......: 2.5074178114820760e-09 2.5074178114820760e-09 Number of objective function evaluations = 16 Number of objective gradient evaluations = 15 Number of constraint evaluations = 17 Number of constraint Jacobian evaluations = 15 Number of Lagrangian Hessian evaluations = 14 -Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 2.566 -Total wall-clock secs in linear solver = 0.324 +Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 2.498 +Total wall-clock secs in linear solver = 0.316 Total wall-clock secs in NLP function evaluations = 0.001 -Total wall-clock secs = 2.891 +Total wall-clock secs = 2.815 EXIT: Optimal Solution Found. "Execution stats: Optimal Solution Found."

Querying the solution

MadNLP returns a MadNLPExecutionStats object storing the solution. One can query the optimal objective as:

stats.objective
5296.6862028703945

and the optimal solution:

stats.solution
41-element Vector{Float64}:
-  0.08541019351901065
-  0.05671519595851584
- -0.04298613531681524
- -0.06949870510848581
-  0.01052239631297763
- -0.020878890700930543
-  0.01580627665931866
- -0.08055111183511196
+  0.08541019351901052
+  0.05671519595851574
+ -0.04298613531681527
+ -0.06949870510848584
+  0.010522396312977545
+ -0.02087889070093063
+  0.015806276659318563
+ -0.08055111183511203
   1.0942215071535502
   1.0844484919148973
   ⋮
-  0.8145655559162711
-  0.1420569724798873
-  0.3624966590934654
-  0.9616074529857073
-  0.17983405590901225
-  0.3870702737199752
-  1.8042024795633307
-  0.5359062432430305
-  0.314606817749897

Also, recall that each time the callback update! is called, the values are updated internally in the stack stored inside flp. Hence, an alternative way to query the solution is to look directly at the values in the stack. For instance, one can query the optimal values of the voltage

stack = flp.stack
+  0.8145655559162723
+  0.14205697247988733
+  0.36249665909346507
+  0.9616074529857067
+  0.17983405590901222
+  0.387070273719975
+  1.8042024795633298
+  0.53590624324303
+  0.31460681774989746

Also, recall that each time the callback update! is called, the values are updated internally in the stack stored inside flp. Hence, an alternative way to query the solution is to look directly at the values in the stack. For instance, one can query the optimal values of the voltage

stack = flp.stack
 [stack.vmag stack.vang]
9×2 Matrix{Float64}:
  1.1       0.0
  1.09735   0.0854102
@@ -117,9 +117,9 @@
  1.08949  -0.0208789
  1.1       0.0158063
  1.07176  -0.0805511

and of the power generation:

stack.pgen
3-element Vector{Float64}:
- 0.8979870769892052
- 1.3432060073263454
- 0.9418738041880945
Info

The values inside stack are used to compute the initial point in the optimization routine. Hence, if one calls solve! again, the optimization starts from the optimal solution found in the previous call to solve!, leading to a different convergence pattern. If one wants to launch a new optimization from scratch without reinitializing all the data structures, we recommend using the reset! function:

Argos.reset!(flp)

Playing with different parameters

MadNLP has different options we may want to tune when solving the OPF. For instance, we can loosen the tolerance to 1e-5 and set the maximum number of iterations to 5 with:

julia> solver = MadNLP.MadNLPSolver(model; tol=1e-5, max_iter=5)Interior point solver
+ 0.8979870769892058
+ 1.3432060073263452
+ 0.9418738041880942
Info

The values inside stack are used to compute the initial point in the optimization routine. Hence, if one calls solve! again, the optimization starts from the optimal solution found in the previous call to solve!, leading to a different convergence pattern. If one wants to launch a new optimization from scratch without reinitializing all the data structures, we recommend using the reset! function:

Argos.reset!(flp)

Playing with different parameters

MadNLP has different options we may want to tune when solving the OPF. For instance, we can loosen the tolerance to 1e-5 and set the maximum number of iterations to 5 with:

julia> solver = MadNLP.MadNLPSolver(model; tol=1e-5, max_iter=5)Interior point solver
 
 number of variables......................: 19
 number of constraints....................: 36
@@ -162,7 +162,7 @@
 Number of constraint evaluations                     = 8
 Number of constraint Jacobian evaluations            = 6
 Number of Lagrangian Hessian evaluations             = 5
-Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  0.007
+Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  0.006
 Total wall-clock secs in linear solver                      =  0.001
 Total wall-clock secs in NLP function evaluations           =  0.001
 Total wall-clock secs                                       =  0.008
@@ -170,4 +170,4 @@
 EXIT: Maximum Number of Iterations Exceeded.
 "Execution stats: Maximum Number of Iterations Exceeded."

Most importantly, one may want to use a sparse linear solver other than UMFPACK, the default in MadNLP. We recommend the HSL solvers (the installation procedure is detailed here). Once HSL is installed, one can solve the OPF with:

using MadNLPHSL
 solver = MadNLP.MadNLPSolver(model; linear_solver=Ma27Solver)
-MadNLP.solve!(solver)
+MadNLP.solve!(solver) diff --git a/dev/optim/reducedspace/index.html b/dev/optim/reducedspace/index.html index 9ecd860..35dea94 100644 --- a/dev/optim/reducedspace/index.html +++ b/dev/optim/reducedspace/index.html @@ -141,10 +141,10 @@ Number of constraint evaluations = 25 Number of constraint Jacobian evaluations = 23 Number of Lagrangian Hessian evaluations = 22 -Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 6.201 +Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 6.070 Total wall-clock secs in linear solver = 0.007 -Total wall-clock secs in NLP function evaluations = 0.321 -Total wall-clock secs = 6.529 +Total wall-clock secs in NLP function evaluations = 0.312 +Total wall-clock secs = 6.389 EXIT: Optimal Solution Found. "Execution stats: Optimal Solution Found."
Info

We recommend changing the default tolerance so that it sits above the tolerance of the Newton-Raphson algorithm used inside ReducedSpaceEvaluator. Indeed, the power flow is solved only approximately, leading to slightly inaccurate evaluations and derivatives that impact the convergence of the interior-point algorithm. In general, we recommend setting tol=1e-5.
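
For instance, a sketch, with model wrapping a ReducedSpaceEvaluator as above:

solver = MadNLP.MadNLPSolver(model; tol=1e-5)
MadNLP.solve!(solver)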

Info

Here, we are using Lapack on the CPU to solve the condensed KKT system at each iteration of the interior-point algorithm. However, if an NVIDIA GPU is available, we recommend using a CUDA-accelerated Lapack version, which is more efficient than the default Lapack. If MadNLPGPU is installed, this amounts to

using MadNLPGPU
@@ -184,4 +184,4 @@
  1.1       0.0105224
  1.08949  -0.0208788
  1.1       0.0158063
- 1.07176  -0.0805509
+ 1.07176 -0.0805509 diff --git a/dev/quickstart/cpu/index.html b/dev/quickstart/cpu/index.html index 7e369c3..5940818 100644 --- a/dev/quickstart/cpu/index.html +++ b/dev/quickstart/cpu/index.html @@ -52,10 +52,10 @@ Number of constraint evaluations = 21 Number of constraint Jacobian evaluations = 20 Number of Lagrangian Hessian evaluations = 19 -Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 2.376 -Total wall-clock secs in linear solver = 0.050 -Total wall-clock secs in NLP function evaluations = 3.704 -Total wall-clock secs = 6.130 +Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 2.247 +Total wall-clock secs in linear solver = 0.049 +Total wall-clock secs in NLP function evaluations = 3.708 +Total wall-clock secs = 6.005 EXIT: Optimal Solution Found.

Biegler's method (linearize-then-reduce)

Tip
julia> Argos.run_opf(datafile, Argos.BieglerReduction(); lapack_algorithm=MadNLP.CHOLESKY);This is MadNLP version v0.7.0, running with Lapack-CPU (CHOLESKY)
 
@@ -276,9 +276,9 @@
 Number of constraint evaluations                     = 19
 Number of constraint Jacobian evaluations            = 18
 Number of Lagrangian Hessian evaluations             = 17
-Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  4.167
+Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  4.141
 Total wall-clock secs in linear solver                      =  0.007
-Total wall-clock secs in NLP function evaluations           =  1.140
-Total wall-clock secs                                       =  5.315
+Total wall-clock secs in NLP function evaluations           =  1.132
+Total wall-clock secs                                       =  5.280
 
-EXIT: Optimal Solution Found.
+EXIT: Optimal Solution Found. diff --git a/dev/quickstart/cuda/index.html b/dev/quickstart/cuda/index.html index d41167f..d0b6d8b 100644 --- a/dev/quickstart/cuda/index.html +++ b/dev/quickstart/cuda/index.html @@ -6,4 +6,4 @@

Full-space method

ArgosCUDA.run_opf_gpu(datafile, Argos.FullSpace())
 

Biegler's method (linearize-then-reduce)

ArgosCUDA.run_opf_gpu(datafile, Argos.BieglerReduction(); linear_solver=LapackGPUSolver)
 

Dommel & Tinney's method (reduce-then-linearize)

ArgosCUDA.run_opf_gpu(datafile, Argos.DommelTinney(); linear_solver=LapackGPUSolver)
-
+ diff --git a/dev/references/index.html b/dev/references/index.html index 4153b7f..6efb315 100644 --- a/dev/references/index.html +++ b/dev/references/index.html @@ -1,2 +1,2 @@ -References · Argos.jl

References

Argos has led to several publications in peer-reviewed journals.

PP2022 details how Argos evaluates the second-order reduced derivatives in parallel on the GPU. All results were generated with this artifact.

PSCC2022 uses the augmented Lagrangian algorithm implemented in Argos to solve static and real-time OPF. All results were generated with this artifact.

ARXIV2022 demonstrates the full capabilities of Argos to solve large-scale OPF on CUDA GPU, both in the full-space and in the reduced-space.

+References · Argos.jl

References

Argos has led to several publications in peer-reviewed journals.

PP2022 details how Argos evaluates the second-order reduced derivatives in parallel on the GPU. All results were generated with this artifact.

PSCC2022 uses the augmented Lagrangian algorithm implemented in Argos to solve static and real-time OPF. All results were generated with this artifact.

ARXIV2022 demonstrates the full capabilities of Argos to solve large-scale OPF on CUDA GPU, both in the full-space and in the reduced-space.