A nonlinear programming solver based on the filter line-search interior point method (as in Ipopt) that can handle and exploit diverse classes of data structures, residing either in host or device memory.
```julia
pkg> add MadNLP
```
Optionally, various extension packages can be installed together:
```julia
pkg> add MadNLPHSL, MadNLPPardiso, MadNLPMumps, MadNLPGPU
```
These packages are stored in the `lib` subdirectory of the main MadNLP repository. Some extension packages require additional dependencies or specific hardware; for build instructions, see the documentation of each extension package.
MadNLP is interfaced with modeling packages such as JuMP and NLPModels (the CUTEst example below uses the latter).
Users can also pass MadNLP options through the modeling packages. The interface-specific syntax is shown below; for the full list of MadNLP solver options, check the documentation.
```julia
using MadNLP, JuMP

model = Model(()->MadNLP.Optimizer(print_level=MadNLP.INFO, max_iter=100))
@variable(model, x, start = 0.0)
@variable(model, y, start = 0.0)
@NLobjective(model, Min, (1 - x)^2 + 100 * (y - x^2)^2)
optimize!(model)
```
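After `optimize!`, the results can be queried with JuMP's standard solution API (these functions belong to JuMP itself, not MadNLP):

```julia
# Continuing from the example above, after optimize!(model):
termination_status(model)   # solver termination status, e.g. LOCALLY_SOLVED
objective_value(model)      # objective at the solution; the Rosenbrock minimum is 0 at (1, 1)
value(x), value(y)          # optimal variable values
```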
```julia
using MadNLP, CUTEst

model = CUTEstModel("PRIMALC1")
madnlp(model, print_level=MadNLP.WARN, max_wall_time=3600)
```
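The `madnlp` function returns a solver-statistics object; a minimal sketch of inspecting it is below (the field names `status`, `objective`, and `solution` are assumed from MadNLP's execution statistics):

```julia
using MadNLP, CUTEst

nlp = CUTEstModel("PRIMALC1")
stats = madnlp(nlp, print_level=MadNLP.WARN)
stats.status     # termination status
stats.objective  # objective value at the final iterate (assumed field name)
stats.solution   # primal solution vector (assumed field name)
finalize(nlp)    # CUTEst problems should be finalized after use
```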
MadNLP is interfaced with non-Julia sparse/dense linear solvers:
- Umfpack
- Lapack
- HSL solvers (requires extension)
- Pardiso (requires extension)
- Pardiso-MKL (requires extension)
- Mumps (requires extension)
- cuSOLVER (requires extension)
- cuDSS (requires extension)
Each linear solver in MadNLP is a Julia type, and the `linear_solver` option should be specified with the actual type. Note that the linear solver types are always exported to `Main`.
```julia
using MadNLP, JuMP
# ...
model = Model(()->MadNLP.Optimizer(linear_solver=UmfpackSolver))   # default
model = Model(()->MadNLP.Optimizer(linear_solver=LDLSolver))       # works only for convex problems
model = Model(()->MadNLP.Optimizer(linear_solver=CHOLMODSolver))   # works only for convex problems
model = Model(()->MadNLP.Optimizer(linear_solver=LapackCPUSolver))
```
```julia
using MadNLPHSL, JuMP
# ...
model = Model(()->MadNLP.Optimizer(linear_solver=Ma27Solver))
model = Model(()->MadNLP.Optimizer(linear_solver=Ma57Solver))
model = Model(()->MadNLP.Optimizer(linear_solver=Ma77Solver))
model = Model(()->MadNLP.Optimizer(linear_solver=Ma86Solver))
model = Model(()->MadNLP.Optimizer(linear_solver=Ma97Solver))
```
```julia
using MadNLPMumps, JuMP
# ...
model = Model(()->MadNLP.Optimizer(linear_solver=MumpsSolver))
```
```julia
using MadNLPPardiso, JuMP
# ...
model = Model(()->MadNLP.Optimizer(linear_solver=PardisoSolver))
model = Model(()->MadNLP.Optimizer(linear_solver=PardisoMKLSolver))
```
```julia
using MadNLPGPU, JuMP
# ...
model = Model(()->MadNLP.Optimizer(linear_solver=LapackGPUSolver))  # for dense problems
model = Model(()->MadNLP.Optimizer(linear_solver=CUDSSSolver))      # for sparse problems
model = Model(()->MadNLP.Optimizer(linear_solver=CuCholeskySolver)) # for sparse problems
model = Model(()->MadNLP.Optimizer(linear_solver=GLUSolver))        # for sparse problems
model = Model(()->MadNLP.Optimizer(linear_solver=RFSolver))         # for sparse problems
```
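The `linear_solver` option can be combined with any other MadNLP options in the same `Optimizer` call; a sketch is below (`print_level` and `max_iter` appear in the examples above, while `tol` is an assumed option name for the convergence tolerance):

```julia
using MadNLP, JuMP
# ...
model = Model(()->MadNLP.Optimizer(
    linear_solver = UmfpackSolver,  # any of the solver types listed above
    print_level = MadNLP.WARN,
    tol = 1e-8,                     # convergence tolerance (assumed option name)
    max_iter = 200,
))
```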
If you use MadNLP.jl in your research, we would greatly appreciate your citing the following papers:
```bibtex
@article{shin2023accelerating,
  title={Accelerating optimal power flow with {GPU}s: {SIMD} abstraction of nonlinear programs and condensed-space interior-point methods},
  author={Shin, Sungho and Pacaud, Fran{\c{c}}ois and Anitescu, Mihai},
  journal={arXiv preprint arXiv:2307.16830},
  year={2023}
}

@article{shin2020graph,
  title={Graph-Based Modeling and Decomposition of Energy Infrastructures},
  author={Shin, Sungho and Coffrin, Carleton and Sundar, Kaarthik and Zavala, Victor M},
  journal={arXiv preprint arXiv:2010.02404},
  year={2020}
}
```
- Please report issues and feature requests via the GitHub issue tracker.
- Questions are welcome on the GitHub Discussions forum.