Home · RegularizedProblems.jl

RegularizedProblems

Synopsis

This package provides sample problems suitable for developing and testing first- and second-order methods for regularized optimization, i.e., problems of the general form

\[\min_{x \in \mathbb{R}^n} \ f(x) + h(x),\]

where $f: \mathbb{R}^n \to \mathbb{R}$ has Lipschitz-continuous gradient and $h: \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ is lower semi-continuous and proper. The smooth term $f$ describes the objective to minimize, while the regularizer $h$ selects a solution with desirable properties: minimum norm, sparsity below a certain level, maximum sparsity, etc.

Models for f are instances of NLPModels and often represent nonlinear least-squares residuals, i.e., $f(x) = \tfrac{1}{2} \|F(x)\|_2^2$ where $F: \mathbb{R}^n \to \mathbb{R}^m$.

The regularizer $h$ should be obtained from ProximalOperators.jl.

The final regularized problem is intended to be solved by way of a solver for nonsmooth regularized optimization, such as those in RegularizedOptimization.jl.

Problems implemented

Basis-pursuit denoise

Calling model = bpdn_model() returns a model representing the smooth underdetermined linear least-squares residual

\[f(x) = \tfrac{1}{2} \|Ax - b\|_2^2,\]

where $A$ has orthonormal rows. The right-hand side is generated as $b = A x_{\star} + \varepsilon$ where $x_{\star}$ is a sparse vector, $\varepsilon \sim \mathcal{N}(0, \sigma)$ and $\sigma \in (0, 1)$ is a fixed noise level.

When solving the basis-pursuit denoise problem, the goal is to recover $x \approx x_{\star}$. In particular, $x$ should have the same sparsity pattern as $x_{\star}$. That is typically accomplished by choosing a regularizer of the form

  • $h(x) = \lambda \|x\|_1$ for a well-chosen $\lambda > 0$;
  • $h(x) = \|x\|_0$;
  • $h(x) = \chi(x; k \mathbb{B}_0)$ for $k \approx \|x_{\star}\|_0$;

where $\chi(x; k \mathbb{B}_0)$ is the indicator of the $\ell_0$-pseudonorm ball of radius $k$.

Calling model = bpdn_nls_model() returns the same problem modeled explicitly as a least-squares problem.
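By way of illustration, a minimal sketch pairing this problem with an ℓ₁ regularizer from ProximalOperators.jl (the value λ = 0.1 is an arbitrary choice for illustration, not a recommendation):

```julia
using RegularizedProblems, ProximalOperators

model, nls_model, sol = bpdn_model()  # smooth part f(x) = ½‖Ax − b‖² and the exact x⋆
λ = 0.1                               # illustrative value; tune to the noise level
h = NormL1(λ)                         # regularizer h(x) = λ‖x‖₁
```

The pair (model, h) is then ready to be passed to a solver from RegularizedOptimization.jl.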

Fitzhugh-Nagumo data-fitting problem

If ADNLPModels and DifferentialEquations have been imported, model = fh_model() returns a model representing the over-determined nonlinear least-squares residual

\[f(x) = \tfrac{1}{2} \|F(x)\|_2^2,\]

where $F: \mathbb{R}^5 \to \mathbb{R}^{202}$ represents the residual between a simulation of the Fitzhugh-Nagumo system with parameters $x$ and a simulation of the Van der Pol oscillator with preset, but unknown, parameters $x_{\star}$.

A feature of the Fitzhugh-Nagumo model is that it reduces to the Van der Pol oscillator when certain parameters are set to zero. Thus here again, the objective is to recover a sparse solution to the data-fitting problem. Hence, typical regularizers are the same as those used for the basis-pursuit denoise problem.
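A hedged sketch of setting this problem up; fh_model requires ADNLPModels and DifferentialEquations to be loaded first, and the regularizer choice mirrors the basis-pursuit denoise case (λ = 0.1 is an arbitrary illustrative value):

```julia
using ADNLPModels, DifferentialEquations   # must be imported before fh_model is available
using RegularizedProblems, ProximalOperators

model = fh_model()
h = NormL0(0.1)   # h(x) = λ‖x‖₀, one of the typical sparsity-inducing choices
```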

Reference · RegularizedProblems.jl

Reference

Contents

Index

RegularizedProblems.FirstOrderModelType
model = FirstOrderModel(f, ∇f!; name = "first-order model")

A simple subtype of AbstractNLPModel to represent a smooth objective.

Arguments

  • f :: F <: Function: a function such that f(x) returns the objective value at x;
  • ∇f! :: G <: Function: a function such that ∇f!(g, x) stores the gradient of the objective at x in g;
  • x :: AbstractVector: an initial guess.

All keyword arguments are passed through to the NLPModelMeta constructor.

source
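As a sketch of the constructor in use: although the one-line signature above lists only f and ∇f!, the argument list also mentions an initial guess x, so the call below assumes it is passed positionally; adjust if the actual method signature differs.

```julia
using RegularizedProblems
using NLPModels: obj, grad

f(x) = 0.5 * sum(abs2, x)       # f(x) = ½‖x‖²
∇f!(g, x) = (g .= x; g)         # gradient of f, stored in place in g
model = FirstOrderModel(f, ∇f!, ones(3); name = "half-sq-norm")

obj(model, ones(3))             # ½ · 3 = 1.5
```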
RegularizedProblems.FirstOrderNLSModelType
model = FirstOrderNLSModel(r!, jv!, jtv!; name = "first-order NLS model")

A simple subtype of AbstractNLSModel to represent a nonlinear least-squares problem with a smooth residual.

Arguments

  • r! :: R <: Function: a function such that r!(y, x) stores the residual at x in y;
  • jv! :: J <: Function: a function such that jv!(u, x, v) stores the product between the residual Jacobian at x and the vector v in u;
  • jtv! :: Jt <: Function: a function such that jtv!(u, x, v) stores the product between the transpose of the residual Jacobian at x and the vector v in u;
  • x :: AbstractVector: an initial guess.

All keyword arguments are passed through to the NLPModelMeta constructor.

source
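A sketch with a linear residual, for which the Jacobian is constant. The positional order follows the argument list above (r!, jv!, jtv!, then the initial guess x); whether the residual dimension must also be supplied should be checked against the actual constructor.

```julia
using RegularizedProblems

A = [1.0 0.0; 0.0 2.0]
b = [1.0, 1.0]
r!(y, x)      = (y .= A * x .- b; y)   # residual F(x) = Ax − b
jv!(u, x, v)  = (u .= A * v; u)        # J(x)v, with J(x) ≡ A here
jtv!(u, x, v) = (u .= A' * v; u)       # J(x)ᵀv
nls = FirstOrderNLSModel(r!, jv!, jtv!, zeros(2))
```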
RegularizedProblems.RegularizedNLPModelType
rmodel = RegularizedNLPModel(model, regularizer)
rmodel = RegularizedNLSModel(model, regularizer)

An aggregate type to represent a regularized optimization model, i.e., of the form

minimize f(x) + h(x),

where f is smooth (and is usually assumed to have Lipschitz-continuous gradient), and h is lower semi-continuous (and may have to be prox-bounded).

The regularized model is made of

  • model <: AbstractNLPModel: the smooth part of the model, for example a FirstOrderModel
  • h: the nonsmooth part of the model; typically a regularizer defined in ProximalOperators.jl
  • selected: the subset of variables to which the regularizer h should be applied (default: all).

This aggregate type can be used to call solvers with a single object representing the model, but is especially useful for use with SolverBenchmark.jl, which expects problems to be defined by a single object.

source
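For example, wrapping the basis-pursuit denoise models from this package with an ℓ₁ regularizer (λ = 0.1 is illustrative):

```julia
using RegularizedProblems, ProximalOperators

model, nls_model, sol = bpdn_model()
h = NormL1(0.1)
rmodel = RegularizedNLPModel(model, h)
rnls   = RegularizedNLSModel(nls_model, h)
```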
RegularizedProblems.MIT_matrix_completion_modelMethod
model, nls_model, sol = MIT_matrix_completion_model()

A special case of matrix completion problem in which the exact image is a noisy MIT logo.

See the documentation of random_matrix_completion_model() for more information.

source
RegularizedProblems.bpdn_modelMethod
model, nls_model, sol = bpdn_model(args...; kwargs...)
 model, nls_model, sol = bpdn_model(compound = 1, args...; kwargs...)

Return an instance of an NLPModel and an instance of an NLSModel representing the same basis-pursuit denoise problem, i.e., the under-determined linear least-squares objective

½ ‖Ax - b‖₂²,

where A has orthonormal rows and b = A * x̄ + ϵ, x̄ is sparse and ϵ is a noise vector following a normal distribution with mean zero and standard deviation σ.

Arguments

  • m :: Int: the number of rows of A
  • n :: Int: the number of columns of A (with n ≥ m)
  • k :: Int: the number of nonzero elements in x̄
  • noise :: Float64: noise standard deviation σ (default: 0.01).

The second form calls the first form with arguments

m = 200 * compound
n = 512 * compound
k = 10 * compound

Keyword arguments

  • bounds :: Bool: whether or not to include nonnegativity bounds in the model (default: false).

Return Value

An instance of a FirstOrderModel and of a FirstOrderNLSModel that represent the same basis-pursuit denoise problem, and the exact solution x̄.

If bounds == true, the positive part of x̄ is returned.

source
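For instance, the second form with compound = 2 scales every dimension accordingly (the positional call below follows the signature shown above):

```julia
using RegularizedProblems

# compound = 2 ⇒ m = 400, n = 1024, k = 20
model, nls_model, sol = bpdn_model(2)
```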
RegularizedProblems.fh_modelMethod
fh_model(; kwargs...)

Return an instance of an NLPModel and an instance of an NLSModel representing the same Fitzhugh-Nagumo problem, i.e., the over-determined nonlinear least-squares objective

½ ‖F(x)‖₂²,

where F: ℝ⁵ → ℝ²⁰² represents the fitting error between a simulation of the Fitzhugh-Nagumo model with parameters x and a simulation of the Van der Pol oscillator with fixed, but unknown, parameters.

Keyword Arguments

All keyword arguments are passed directly to the ADNLPModel (or ADNLSModel) constructor, e.g., to set the automatic differentiation backend.

Return Value

An instance of an ADNLPModel that represents the Fitzhugh-Nagumo problem, an instance of an ADNLSModel that represents the same problem, and the exact solution.

source
RegularizedProblems.group_lasso_modelMethod
model, nls_model, sol = group_lasso_model(; kwargs...)

Return an instance of an NLPModel and NLSModel representing the group-lasso problem, i.e., the under-determined linear least-squares objective

½ ‖Ax - b‖₂²,

where A has orthonormal rows and b = A * x̄ + ϵ, x̄ is sparse and ϵ is a noise vector following a normal distribution with mean zero and standard deviation σ. Note that with this format, all groups have the same number of elements and the number of groups divides evenly into the total number of elements.

Keyword Arguments

  • m :: Int: the number of rows of A (default: 200)
  • n :: Int: the number of columns of A, with n ≥ m (default: 512)
  • g :: Int: the number of groups (default: 16)
  • ag :: Int: the number of active groups (default: 5)
  • noise :: Float64: noise amount (default: 0.01)
  • compound :: Int: multiplier for m, n, g, and ag (default: 1).

Return Value

An instance of a FirstOrderModel and an instance of a FirstOrderNLSModel that represent the group-lasso problem, together with the true solution x, the number of groups g, the group indices denoting which groups are active, and a matrix whose rows are the group indices of x.

source
RegularizedProblems.nnmf_modelFunction
model, Av, selected = nnmf_model(m = 100, n = 50, k = 10, T = Float64)

Return an instance of an NLPModel representing the non-negative matrix factorization objective

f(W, H) = ½ ‖A - WH‖₂²,

where A ∈ Rᵐˣⁿ has non-negative entries and can be separated into k clusters, and Av = A[:]. The vector of indices selected = k*m+1 : k*(m+n) indicates the components of W ∈ Rᵐˣᵏ and H ∈ Rᵏˣⁿ to which the regularizer applies (so that the regularizer only applies to entries of H).

Arguments

  • m :: Int: the number of rows of A
  • n :: Int: the number of columns of A (with n ≤ m)
  • k :: Int: the number of clusters
source
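A sketch of the call with the default dimensions, spelling out the arithmetic behind selected:

```julia
using RegularizedProblems

m, n, k = 100, 50, 10
model, Av, selected = nnmf_model(m, n, k)
# selected = k*m + 1 : k*(m + n) = 1001:1500, i.e., the entries of H:
# W contributes the first k*m = 1000 variables, H the next k*n = 500.
```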
RegularizedProblems.qp_rand_modelMethod
model = qp_rand_model(n = 100_000; dens = 1.0e-4, convex = false)

Return an instance of a QuadraticModel representing

min cᵀx + ½ xᵀHx s.t. l ≤ x ≤ u,

with H = A + A' or H = A * A' (see the convex keyword argument) where A is a random square matrix with density dens, l = -e - tₗ and u = e + tᵤ where e is the vector of ones, and tₗ and tᵤ are sampled from a uniform distribution between 0 and 1.

Arguments

  • n :: Int: size of the problem (default: 100_000).

Keyword arguments

  • dens :: Real: density of A with 0 < dens ≤ 1 used to generate the quadratic model (default: 1.0e-4).
  • convex :: Bool: true to generate positive definite H (default: false).

Return Value

An instance of a QuadraticModel.

source
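For instance, a smaller instance than the default n = 100_000, with a convex quadratic (the parameter values are illustrative):

```julia
using RegularizedProblems

model = qp_rand_model(10_000; dens = 1.0e-3, convex = true)  # H = A * A'
```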
RegularizedProblems.random_matrix_completion_modelMethod
model, nls_model, sol = random_matrix_completion_model(; kwargs...)

Return an instance of an NLPModel and an instance of an NLSModel representing the same matrix completion problem, i.e., the square linear least-squares objective

½ ‖P(X - A)‖²

in the Frobenius norm, where X is the unknown image represented as an m x n matrix, A is a fixed image, and the operator P only retains a certain subset of pixels of X and A.

Keyword Arguments

  • m :: Int: the number of rows of X and A (default: 100)
  • n :: Int: the number of columns of X and A (default: 100)
  • r :: Int: the desired rank of A (default: 5)
  • sr :: AbstractFloat: a threshold between 0 and 1 used to determine the set of pixels retained by the operator P (default: 0.8)
  • va :: AbstractFloat: the variance of a first Gaussian perturbation to be applied to A (default: 1.0e-4)
  • vb :: AbstractFloat: the variance of a second Gaussian perturbation to be applied to A (default: 1.0e-2)
  • c :: AbstractFloat: the coefficient of the convex combination of the two Gaussian perturbations (default: 0.2).

Return Value

An instance of a FirstOrderModel and of a FirstOrderNLSModel that represent the same matrix completion problem, and the exact solution.

source
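For instance, a smaller instance than the defaults, leaving the perturbation parameters sr, va, vb, and c at their default values:

```julia
using RegularizedProblems

model, nls_model, sol = random_matrix_completion_model(m = 50, n = 50, r = 3)
```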