Reference
Contents
Index
RegularizedProblems.FirstOrderModel
RegularizedProblems.FirstOrderNLSModel
RegularizedProblems.RegularizedNLPModel
RegularizedProblems.MIT_matrix_completion_model
RegularizedProblems.bpdn_model
RegularizedProblems.fh_model
RegularizedProblems.group_lasso_model
RegularizedProblems.nnmf_model
RegularizedProblems.qp_rand_model
RegularizedProblems.random_matrix_completion_model
RegularizedProblems.FirstOrderModel
— Type
model = FirstOrderModel(f, ∇f!; name = "first-order model")
A simple subtype of AbstractNLPModel
to represent a smooth objective.
Arguments
f :: F <: Function: a function such that f(x) returns the objective value at x;
∇f! :: G <: Function: a function such that ∇f!(g, x) stores the gradient of the objective at x in g;
x :: AbstractVector: an initial guess.
All keyword arguments are passed through to the NLPModelMeta
constructor.
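For illustration, a smooth quadratic can be wrapped as follows. This is a minimal sketch: the initial guess x is assumed to be passed as a third positional argument, matching the argument list above, and the exact constructor signature may differ.

```julia
using LinearAlgebra
using RegularizedProblems

f(x) = dot(x, x) / 2       # objective value at x
∇f!(g, x) = (g .= x; g)    # store the gradient of f at x in g
x0 = ones(4)               # initial guess

model = FirstOrderModel(f, ∇f!, x0; name = "simple quadratic")
```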
RegularizedProblems.FirstOrderNLSModel
— Type
model = FirstOrderNLSModel(r!, jv!, jtv!; name = "first-order NLS model")
A simple subtype of AbstractNLSModel
to represent a nonlinear least-squares problem with a smooth residual.
Arguments
r! :: R <: Function: a function such that r!(y, x) stores the residual at x in y;
jv! :: J <: Function: a function such that jv!(u, x, v) stores the product between the residual Jacobian at x and the vector v in u;
jtv! :: Jt <: Function: a function such that jtv!(u, x, v) stores the product between the transpose of the residual Jacobian at x and the vector v in u;
x :: AbstractVector: an initial guess.
All keyword arguments are passed through to the NLPModelMeta
constructor.
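As an illustration, a linear residual r(x) = Ax - b can be wrapped as follows. This is a sketch only: the number of equations and the initial guess are assumed to be extra positional arguments, in line with the argument list above, and the exact signature may differ.

```julia
using LinearAlgebra
using RegularizedProblems

A = [1.0 2.0; 3.0 4.0; 5.0 6.0]
b = [1.0, 0.0, -1.0]

r!(y, x) = (mul!(y, A, x); y .-= b; y)  # residual A*x - b stored in y
jv!(u, x, v) = mul!(u, A, v)            # Jacobian-vector product J(x) * v
jtv!(u, x, v) = mul!(u, A', v)          # transposed product J(x)' * v

nls_model = FirstOrderNLSModel(r!, jv!, jtv!, length(b), zeros(2))
```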
RegularizedProblems.RegularizedNLPModel
— Type
rmodel = RegularizedNLPModel(model, regularizer)
rmodel = RegularizedNLSModel(model, regularizer)
An aggregate type to represent a regularized optimization model, i.e., of the form
minimize f(x) + h(x),
where f is smooth (and is usually assumed to have Lipschitz-continuous gradient), and h is lower semi-continuous (and may have to be prox-bounded).
The regularized model is made of
model <: AbstractNLPModel: the smooth part of the model, for example a FirstOrderModel;
h: the nonsmooth part of the model, typically a regularizer defined in ProximalOperators.jl;
selected: the subset of variables to which the regularizer h should be applied (default: all).
This aggregate type can be used to call solvers with a single object representing the model, but is especially useful for use with SolverBenchmark.jl, which expects problems to be defined by a single object.
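For example, a smooth model can be paired with an ℓ₁ regularizer from ProximalOperators.jl. A sketch, assuming both packages are installed:

```julia
using ProximalOperators
using RegularizedProblems

model, _, _ = bpdn_model()  # smooth part f(x) = ½ ‖Ax - b‖₂²
h = NormL1(1.0)             # nonsmooth part h(x) = 1.0 * ‖x‖₁
rmodel = RegularizedNLPModel(model, h)
```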
RegularizedProblems.MIT_matrix_completion_model
— Method
model, nls_model, sol = MIT_matrix_completion_model()
A special case of matrix completion problem in which the exact image is a noisy MIT logo.
See the documentation of random_matrix_completion_model()
for more information.
RegularizedProblems.bpdn_model
— Method
model, nls_model, sol = bpdn_model(args...; kwargs...)
model, nls_model, sol = bpdn_model(compound = 1, args...; kwargs...)
Return an instance of an NLPModel
and an instance of an NLSModel
representing the same basis-pursuit denoise problem, i.e., the under-determined linear least-squares objective
½ ‖Ax - b‖₂²,
where A has orthonormal rows and b = A * x̄ + ϵ, x̄ is sparse and ϵ is a noise vector following a normal distribution with mean zero and standard deviation σ.
Arguments
m :: Int: the number of rows of A;
n :: Int: the number of columns of A (with n ≥ m);
k :: Int: the number of nonzero elements in x̄;
noise :: Float64: noise standard deviation σ (default: 0.01).
The second form calls the first form with arguments
m = 200 * compound
n = 512 * compound
k = 10 * compound
Keyword arguments
bounds :: Bool: whether or not to include nonnegativity bounds in the model (default: false).
Return Value
An instance of a FirstOrderModel
and of a FirstOrderNLSModel
that represent the same basis-pursuit denoise problem, and the exact solution x̄.
If bounds == true, the positive part of x̄ is returned.
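A typical call using the compound form can be sketched as follows (with the default compound = 1, the dimensions are m = 200, n = 512, k = 10):

```julia
using RegularizedProblems

# compound = 1: m = 200, n = 512, k = 10
model, nls_model, sol = bpdn_model()
# sol is the sparse ground truth x̄ with k = 10 nonzero entries
```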
RegularizedProblems.fh_model
— Method
fh_model(; kwargs...)
Return an instance of an NLPModel
and an instance of an NLSModel
representing the same Fitzhugh-Nagumo problem, i.e., the over-determined nonlinear least-squares objective
½ ‖F(x)‖₂²,
where F: ℝ⁵ → ℝ²⁰² represents the fitting error between a simulation of the Fitzhugh-Nagumo model with parameters x and a simulation of the Van der Pol oscillator with fixed, but unknown, parameters.
Keyword Arguments
All keyword arguments are passed directly to the ADNLPModel (or ADNLSModel) constructor, e.g., to set the automatic differentiation backend.
Return Value
An instance of an ADNLPModel
that represents the Fitzhugh-Nagumo problem, an instance of an ADNLSModel
that represents the same problem, and the exact solution.
RegularizedProblems.group_lasso_model
— Method
model, nls_model, sol = group_lasso_model(; kwargs...)
Return an instance of an NLPModel
and NLSModel
representing the group-lasso problem, i.e., the under-determined linear least-squares objective
½ ‖Ax - b‖₂²,
where A has orthonormal rows and b = A * x̄ + ϵ, x̄ is sparse and ϵ is a noise vector following a normal distribution with mean zero and standard deviation σ. Note that with this format, all groups have the same number of elements and the number of groups divides evenly into the total number of elements.
Keyword Arguments
m :: Int: the number of rows of A (default: 200);
n :: Int: the number of columns of A, with n ≥ m (default: 512);
g :: Int: the number of groups (default: 16);
ag :: Int: the number of active groups (default: 5);
noise :: Float64: noise amount (default: 0.01);
compound :: Int: multiplier for m, n, g, and ag (default: 1).
Return Value
An instance of a FirstOrderModel and an instance of a FirstOrderNLSModel that represent the group-lasso problem, together with the exact solution x, the number of groups g, the indices of the active groups, and a matrix whose rows contain the indices of x belonging to each group.
RegularizedProblems.nnmf_model
— Function
model, Av, selected = nnmf_model(m = 100, n = 50, k = 10, T = Float64)
Return an instance of an NLPModel
representing the non-negative matrix factorization objective
f(W, H) = ½ ‖A - WH‖₂²,
where A ∈ ℝᵐˣⁿ has non-negative entries and can be separated into k clusters, and Av = A[:]. The vector of indices selected = k*m+1 : k*(m+n) is used to indicate the components of W ∈ ℝᵐˣᵏ and H ∈ ℝᵏˣⁿ to which the regularizer is applied (so that the regularizer only applies to entries of H).
Arguments
m :: Int: the number of rows of A;
n :: Int: the number of columns of A (with n ≥ m);
k :: Int: the number of clusters.
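The role of selected can be sketched as follows, assuming the positional defaults shown in the signature:

```julia
using RegularizedProblems

m, n, k = 100, 50, 10
model, Av, selected = nnmf_model(m, n, k)
# the variables are stacked as x = [vec(W); vec(H)]:
# W ∈ ℝ^{m×k} occupies indices 1:k*m, H ∈ ℝ^{k×n} occupies k*m+1:k*(m+n),
# so `selected` restricts a regularizer to the entries of H
```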
RegularizedProblems.qp_rand_model
— Method
model = qp_rand_model(n = 100_000; dens = 1.0e-4, convex = false)
Return an instance of a QuadraticModel
representing
min cᵀx + ½ xᵀHx s.t. l ≤ x ≤ u,
with H = A + A' or H = A * A' (see the convex keyword argument), where A is a random square matrix with density dens, l = -e - tₗ and u = e + tᵤ, where e is the vector of ones, and tₗ and tᵤ are sampled from a uniform distribution between 0 and 1.
Arguments
n :: Int: size of the problem (default: 100_000).
Keyword arguments
dens :: Real: density of A with 0 < dens ≤ 1 used to generate the quadratic model (default: 1.0e-4);
convex :: Bool: true to generate positive definite H (default: false).
Return Value
An instance of a QuadraticModel.
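A smaller convex instance can be generated as follows (a sketch; with convex = true, the Hessian is built as H = A * A'):

```julia
using RegularizedProblems

# 10_000 variables, slightly denser A, convex variant H = A * A'
qp = qp_rand_model(10_000; dens = 1.0e-3, convex = true)
```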
RegularizedProblems.random_matrix_completion_model
— Method
model, nls_model, sol = random_matrix_completion_model(; kwargs...)
Return an instance of an NLPModel
and an instance of an NLSModel
representing the same matrix completion problem, i.e., the square linear least-squares objective
½ ‖P(X - A)‖²
in the Frobenius norm, where X is the unknown image represented as an m x n matrix, A is a fixed image, and the operator P only retains a certain subset of pixels of X and A.
Keyword Arguments
m :: Int: the number of rows of X and A (default: 100);
n :: Int: the number of columns of X and A (default: 100);
r :: Int: the desired rank of A (default: 5);
sr :: AbstractFloat: a threshold between 0 and 1 used to determine the set of pixels retained by the operator P (default: 0.8);
va :: AbstractFloat: the variance of a first Gaussian perturbation to be applied to A (default: 1.0e-4);
vb :: AbstractFloat: the variance of a second Gaussian perturbation to be applied to A (default: 1.0e-2);
c :: AbstractFloat: the coefficient of the convex combination of the two Gaussian perturbations (default: 0.2).
Return Value
An instance of a FirstOrderModel
and of a FirstOrderNLSModel
that represent the same matrix completion problem, and the exact solution.
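A typical call spelling out the documented defaults (sketch):

```julia
using RegularizedProblems

model, nls_model, sol = random_matrix_completion_model(
  m = 100, n = 100,  # image dimensions
  r = 5,             # desired rank of A
  sr = 0.8,          # threshold determining the pixels retained by P
)
# sol is the exact solution
```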
This document was generated with Documenter.jl on Sunday 15 September 2024. Using Julia version 1.10.5.