From e3ed0970ecdee895c2d9e4602766bf7c3403f355 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Mon, 27 May 2024 15:46:05 +0000 Subject: [PATCH] build based on bc6a282 --- dev/index.html | 2 +- dev/reference/index.html | 10 ++++------ dev/search/index.html | 2 +- dev/search_index.js | 2 +- 4 files changed, 7 insertions(+), 9 deletions(-) diff --git a/dev/index.html b/dev/index.html index daaf0f9..506e645 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,2 +1,2 @@ -Home · RegularizedProblems.jl

RegularizedProblems

Synopsis

This package provides sample problems suitable for developing and testing first- and second-order methods for regularized optimization, i.e., problems of the general form

\[\min_{x \in \mathbb{R}^n} \ f(x) + h(x),\]

where $f: \mathbb{R}^n \to \mathbb{R}$ has Lipschitz-continuous gradient and $h: \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ is lower semi-continuous and proper. The smooth term f describes the objective to minimize, while the regularizer h selects a solution with desirable properties: minimum norm, sparsity below a certain level, maximum sparsity, etc.

Models for f are instances of NLPModels and often represent nonlinear least-squares residuals, i.e., $f(x) = \tfrac{1}{2} \|F(x)\|_2^2$ where $F: \mathbb{R}^n \to \mathbb{R}^m$.

The regularizer $h$ should be obtained from ProximalOperators.jl.

The final regularized problem is intended to be solved with solvers for nonsmooth regularized optimization such as those in RegularizedOptimization.jl.

Problems implemented

Basis-pursuit denoise

Calling model = bpdn_model() returns a model representing the smooth underdetermined linear least-squares residual

\[f(x) = \tfrac{1}{2} \|Ax - b\|_2^2,\]

where $A$ has orthonormal rows. The right-hand side is generated as $b = A x_{\star} + \varepsilon$ where $x_{\star}$ is a sparse vector, $\varepsilon \sim \mathcal{N}(0, \sigma)$ and $\sigma \in (0, 1)$ is a fixed noise level.

When solving the basis-pursuit denoise problem, the goal is to recover $x \approx x_{\star}$. In particular, $x$ should have the same sparsity pattern as $x_{\star}$. That is typically accomplished by choosing a regularizer of the form

  • $h(x) = \lambda \|x\|_1$ for a well-chosen $\lambda > 0$;
  • $h(x) = \|x\|_0$;
  • $h(x) = \chi(x; k \mathbb{B}_0)$ for $k \approx \|x_{\star}\|_0$;

where $\chi(x; k \mathbb{B}_0)$ is the indicator of the $\ell_0$-pseudonorm ball of radius $k$.

Calling model = bpdn_nls_model() returns the same problem modeled explicitly as a least-squares problem.

Fitzhugh-Nagumo data-fitting problem

If ADNLPModels and DifferentialEquations have been imported, model = fh_model() returns a model representing the over-determined nonlinear least-squares residual

\[f(x) = \tfrac{1}{2} \|F(x)\|_2^2,\]

where $F: \mathbb{R}^5 \to \mathbb{R}^{202}$ represents the residual between a simulation of the Fitzhugh-Nagumo system with parameters $x$ and a simulation of the Van der Pol oscillator with preset, but unknown, parameters $x_{\star}$.

A feature of the Fitzhugh-Nagumo model is that it reduces to the Van der Pol oscillator when certain parameters are set to zero. Thus here again, the objective is to recover a sparse solution to the data-fitting problem. Hence, typical regularizers are the same as those used for the basis-pursuit denoise problem.

+Home · RegularizedProblems.jl

RegularizedProblems

Synopsis

This package provides sample problems suitable for developing and testing first- and second-order methods for regularized optimization, i.e., problems of the general form

\[\min_{x \in \mathbb{R}^n} \ f(x) + h(x),\]

where $f: \mathbb{R}^n \to \mathbb{R}$ has Lipschitz-continuous gradient and $h: \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ is lower semi-continuous and proper. The smooth term f describes the objective to minimize, while the regularizer h selects a solution with desirable properties: minimum norm, sparsity below a certain level, maximum sparsity, etc.

Models for f are instances of NLPModels and often represent nonlinear least-squares residuals, i.e., $f(x) = \tfrac{1}{2} \|F(x)\|_2^2$ where $F: \mathbb{R}^n \to \mathbb{R}^m$.

The regularizer $h$ should be obtained from ProximalOperators.jl.

The final regularized problem is intended to be solved with solvers for nonsmooth regularized optimization such as those in RegularizedOptimization.jl.
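As a quick illustration, here is a minimal sketch of how the pieces fit together, assuming RegularizedProblems and ProximalOperators are installed; λ = 0.1 is an arbitrary value chosen for illustration:

using RegularizedProblems, ProximalOperators

model, nls_model, sol = bpdn_model()    # smooth part f (see below)
h = NormL1(0.1)                         # nonsmooth part h(x) = λ‖x‖₁ with λ = 0.1
rmodel = RegularizedNLPModel(model, h)  # aggregate f + h for a RegularizedOptimization.jl solver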

Problems implemented

Basis-pursuit denoise

Calling model = bpdn_model() returns a model representing the smooth underdetermined linear least-squares residual

\[f(x) = \tfrac{1}{2} \|Ax - b\|_2^2,\]

where $A$ has orthonormal rows. The right-hand side is generated as $b = A x_{\star} + \varepsilon$ where $x_{\star}$ is a sparse vector, $\varepsilon \sim \mathcal{N}(0, \sigma)$ and $\sigma \in (0, 1)$ is a fixed noise level.

When solving the basis-pursuit denoise problem, the goal is to recover $x \approx x_{\star}$. In particular, $x$ should have the same sparsity pattern as $x_{\star}$. That is typically accomplished by choosing a regularizer of the form

  • $h(x) = \lambda \|x\|_1$ for a well-chosen $\lambda > 0$;
  • $h(x) = \|x\|_0$;
  • $h(x) = \chi(x; k \mathbb{B}_0)$ for $k \approx \|x_{\star}\|_0$;

where $\chi(x; k \mathbb{B}_0)$ is the indicator of the $\ell_0$-pseudonorm ball of radius $k$.
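In ProximalOperators.jl, these three regularizers can be instantiated as follows (a sketch; λ = 0.1 and k = 10 are arbitrary illustration values):

using ProximalOperators

h1 = NormL1(0.1)    # h(x) = λ‖x‖₁ with λ = 0.1
h0 = NormL0(1.0)    # h(x) = ‖x‖₀
hk = IndBallL0(10)  # indicator of the ℓ₀-pseudonorm ball {x : ‖x‖₀ ≤ 10}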

Calling model = bpdn_nls_model() returns the same problem modeled explicitly as a least-squares problem.

Fitzhugh-Nagumo data-fitting problem

If ADNLPModels and DifferentialEquations have been imported, model = fh_model() returns a model representing the over-determined nonlinear least-squares residual

\[f(x) = \tfrac{1}{2} \|F(x)\|_2^2,\]

where $F: \mathbb{R}^5 \to \mathbb{R}^{202}$ represents the residual between a simulation of the Fitzhugh-Nagumo system with parameters $x$ and a simulation of the Van der Pol oscillator with preset, but unknown, parameters $x_{\star}$.

A feature of the Fitzhugh-Nagumo model is that it reduces to the Van der Pol oscillator when certain parameters are set to zero. Thus here again, the objective is to recover a sparse solution to the data-fitting problem. Hence, typical regularizers are the same as those used for the basis-pursuit denoise problem.
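A minimal usage sketch (the destructured names are illustrative; see the reference documentation for the exact return values):

using ADNLPModels, DifferentialEquations  # both must be loaded before fh_model is available
using RegularizedProblems

model, nls_model, sol = fh_model()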

diff --git a/dev/reference/index.html b/dev/reference/index.html index e1faad5..e217c1f 100644 --- a/dev/reference/index.html +++ b/dev/reference/index.html @@ -1,8 +1,6 @@ -Reference · RegularizedProblems.jl

Reference

Contents

Index

RegularizedProblems.FirstOrderModelType
model = FirstOrderModel(f, ∇f!; name = "first-order model")

A simple subtype of AbstractNLPModel to represent a smooth objective.

Arguments

  • f :: F <: Function: a function such that f(x) returns the objective value at x;
  • ∇f! :: G <: Function: a function such that ∇f!(g, x) stores the gradient of the objective at x in g;
  • x :: AbstractVector: an initial guess.

All keyword arguments are passed through to the NLPModelMeta constructor.

source
RegularizedProblems.FirstOrderNLSModelType
model = FirstOrderNLSModel(r!, jv!, jtv!; name = "first-order NLS model")

A simple subtype of AbstractNLSModel to represent a nonlinear least-squares problem with a smooth residual.

Arguments

  • r! :: R <: Function: a function such that r!(y, x) stores the residual at x in y;
  • jv! :: J <: Function: a function such that jv!(u, x, v) stores the product between the residual Jacobian at x and the vector v in u;
  • jtv! :: Jt <: Function: a function such that jtv!(u, x, v) stores the product between the transpose of the residual Jacobian at x and the vector v in u;
  • x :: AbstractVector: an initial guess.

All keyword arguments are passed through to the NLPModelMeta constructor.

source
RegularizedProblems.MIT_matrix_completion_modelMethod
model, nls_model, sol = MIT_matrix_completion_model(args...)

A special case of the matrix completion problem in which the exact image is a noisy MIT logo.

See the documentation of random_matrix_completion_model() for more information.

source
RegularizedProblems.bpdn_modelMethod
model, nls_model, sol = bpdn_model(args...)
-model, nls_model, sol = bpdn_model(compound = 1, args...)

Return an instance of an NLPModel and an instance of an NLSModel representing the same basis-pursuit denoise problem, i.e., the under-determined linear least-squares objective

½ ‖Ax - b‖₂²,

where A has orthonormal rows and b = A * x̄ + ϵ, x̄ is sparse and ϵ is a noise vector following a normal distribution with mean zero and standard deviation σ.

Arguments

  • m :: Int: the number of rows of A
  • n :: Int: the number of columns of A (with n ≥ m)
  • k :: Int: the number of nonzero elements in x̄
  • noise :: Float64: noise standard deviation σ (default: 0.01).

The second form calls the first form with arguments

m = 200 * compound
+Reference · RegularizedProblems.jl

Reference

Contents

Index

RegularizedProblems.FirstOrderModelType
model = FirstOrderModel(f, ∇f!; name = "first-order model")

A simple subtype of AbstractNLPModel to represent a smooth objective.

Arguments

  • f :: F <: Function: a function such that f(x) returns the objective value at x;
  • ∇f! :: G <: Function: a function such that ∇f!(g, x) stores the gradient of the objective at x in g;
  • x :: AbstractVector: an initial guess.

All keyword arguments are passed through to the NLPModelMeta constructor.

source
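For concreteness, here is a sketch of a hand-rolled smooth model; passing the initial guess x as the third positional argument is an assumption based on the argument list above:

using RegularizedProblems

f(x) = sum(abs2, x) / 2      # objective ½‖x‖₂²
∇f!(g, x) = (g .= x; g)      # in-place gradient ∇f(x) = x
model = FirstOrderModel(f, ∇f!, zeros(4); name = "quadratic")  # positional x₀: an assumption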
RegularizedProblems.FirstOrderNLSModelType
model = FirstOrderNLSModel(r!, jv!, jtv!; name = "first-order NLS model")

A simple subtype of AbstractNLSModel to represent a nonlinear least-squares problem with a smooth residual.

Arguments

  • r! :: R <: Function: a function such that r!(y, x) stores the residual at x in y;
  • jv! :: J <: Function: a function such that jv!(u, x, v) stores the product between the residual Jacobian at x and the vector v in u;
  • jtv! :: Jt <: Function: a function such that jtv!(u, x, v) stores the product between the transpose of the residual Jacobian at x and the vector v in u;
  • x :: AbstractVector: an initial guess.

All keyword arguments are passed through to the NLPModelMeta constructor.

source
RegularizedProblems.RegularizedNLPModelType
rmodel = RegularizedNLPModel(model, regularizer)
+rmodel = RegularizedNLSModel(model, regularizer)

An aggregate type to represent a regularized optimization model, i.e., of the form

minimize f(x) + h(x),

where f is smooth (and is usually assumed to have Lipschitz-continuous gradient), and h is lower semi-continuous (and may have to be prox-bounded).

The regularized model is made of

  • model <: AbstractNLPModel: the smooth part of the model, for example a FirstOrderModel
  • h: the nonsmooth part of the model; typically a regularizer defined in ProximalOperators.jl
  • selected: the subset of variables to which the regularizer h should be applied (default: all).

This aggregate type can be used to call solvers with a single object representing the model, but is especially useful for use with SolverBenchmark.jl, which expects problems to be defined by a single object.

source
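A sketch pairing both documented forms with a regularizer from ProximalOperators.jl; restricting h to a subset of variables by passing selected as a third positional argument is an assumption based on the field list above:

using RegularizedProblems, ProximalOperators

model, nls_model, sol = bpdn_model()
rmodel = RegularizedNLPModel(model, NormL1(0.1))    # h applied to all variables
rnls = RegularizedNLSModel(nls_model, NormL1(0.1))  # least-squares variant
# assumed form: RegularizedNLPModel(model, h, selected) restricts h to the indices in selected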
RegularizedProblems.MIT_matrix_completion_modelMethod
model, nls_model, sol = MIT_matrix_completion_model()

A special case of the matrix completion problem in which the exact image is a noisy MIT logo.

See the documentation of random_matrix_completion_model() for more information.

source
RegularizedProblems.bpdn_modelMethod
model, nls_model, sol = bpdn_model(args...; kwargs...)
+model, nls_model, sol = bpdn_model(compound = 1, args...; kwargs...)

Return an instance of an NLPModel and an instance of an NLSModel representing the same basis-pursuit denoise problem, i.e., the under-determined linear least-squares objective

½ ‖Ax - b‖₂²,

where A has orthonormal rows and b = A * x̄ + ϵ, x̄ is sparse and ϵ is a noise vector following a normal distribution with mean zero and standard deviation σ.

Arguments

  • m :: Int: the number of rows of A
  • n :: Int: the number of columns of A (with n ≥ m)
  • k :: Int: the number of nonzero elements in x̄
  • noise :: Float64: noise standard deviation σ (default: 0.01).

The second form calls the first form with arguments

m = 200 * compound
 n = 512 * compound
-k =  10 * compound

Keyword arguments

  • bounds :: Bool: whether or not to include nonnegativity bounds in the model (default: false).

Return Value

An instance of a FirstOrderModel and of a FirstOrderNLSModel that represent the same basis-pursuit denoise problem, and the exact solution x̄.

If bounds == true, the positive part of x̄ is returned.

source
RegularizedProblems.fh_modelMethod
fh_model(; kwargs...)

Return an instance of an NLPModel and an instance of an NLSModel representing the same Fitzhugh-Nagumo problem, i.e., the over-determined nonlinear least-squares objective

½ ‖F(x)‖₂²,

where F: ℝ⁵ → ℝ²⁰² represents the fitting error between a simulation of the Fitzhugh-Nagumo model with parameters x and a simulation of the Van der Pol oscillator with fixed, but unknown, parameters.

Keyword Arguments

All keyword arguments are passed directly to the ADNLPModel (or ADNLSModel) constructor, e.g., to set the automatic differentiation backend.

Return Value

An instance of an ADNLPModel that represents the Fitzhugh-Nagumo problem, an instance of an ADNLSModel that represents the same problem, and the exact solution.

source
RegularizedProblems.group_lasso_modelMethod
model, nls_model, sol = group_lasso_model(args...)
-model, nls_model, sol = group_lasso_model(compound = 1, args...)

Return an instance of an NLPModel and NLSModel representing the group-lasso problem, i.e., the under-determined linear least-squares objective

½ ‖Ax - b‖₂²,

where A has orthonormal rows and b = A * x̄ + ϵ, x̄ is sparse and ϵ is a noise vector following a normal distribution with mean zero and standard deviation σ. Note that with this format, all groups have the same number of elements and the number of groups divides evenly into the total number of elements.

Arguments

  • m :: Int: the number of rows of A
  • n :: Int: the number of columns of A (with n ≥ m)
  • g :: Int: the number of groups
  • ag :: Array{Int}: group indices denoting which groups are active (with max(ag) ≤ g), e.g., [1, 4, 5] when there are 7 groups
  • noise :: Float64: noise standard deviation σ (default: 0.01).

The second form calls the first form with arguments

m = 200 * compound
-n = 512 * compound
-k =  10 * compound

Return Value

An instance of a FirstOrderModel and an instance of a FirstOrderNLSModel that represent the group-lasso problem, together with the true x, the number of groups g, the group indices denoting which groups are active, and a Matrix whose rows are the group indices of x.

source
RegularizedProblems.nnmf_modelFunction
model, Av, selected = nnmf_model(m = 100, n = 50, k = 10, T = Float64)

Return an instance of an NLPModel representing the non-negative matrix factorization objective

f(W, H) = ½ ‖A - WH‖₂²,

where A ∈ Rᵐˣⁿ has non-negative entries and can be separated into k clusters, and Av = A[:] is its vectorization. The vector of indices selected = k*m+1 : k*(m+n) indicates the components of W ∈ Rᵐˣᵏ and H ∈ Rᵏˣⁿ to which the regularizer is applied (so that the regularizer only applies to entries of H).

Arguments

  • m :: Int: the number of rows of A
  • n :: Int: the number of columns of A (with n ≥ m)
  • k :: Int: the number of clusters
source
RegularizedProblems.qp_rand_modelMethod
model = qp_rand_model(n; dens = 1.0e-4, convex = false)

Return an instance of a QuadraticModel representing

min cᵀx + ½ xᵀHx s.t. l ≤ x ≤ u,

with H = A + A' or H = A * A' (see the convex keyword argument) where A is a random square matrix with density dens, l = -e - tₗ and u = e + tᵤ where e is the vector of ones, and tₗ and tᵤ are sampled from a uniform distribution between 0 and 1.

Arguments

  • n :: Int: size of the problem.

Keyword arguments

  • dens :: Real: density of A with 0 < dens ≤ 1 used to generate the quadratic model (default: 1.0e-4).
  • convex :: Bool: true to generate positive definite H (default: false).

Return Value

An instance of a QuadraticModel.

source
RegularizedProblems.random_matrix_completion_modelMethod
model, nls_model, sol = random_matrix_completion_model(args...)

Return an instance of an NLPModel and an instance of an NLSModel representing the same matrix completion problem, i.e., the square linear least-squares objective

½ ‖P(X - A)‖²

in the Frobenius norm, where X is the unknown image represented as an m × n matrix, A is a fixed image, and the operator P only retains a certain subset of pixels of X and A.

Arguments

  • m :: Int: the number of rows of X and A
  • n :: Int: the number of columns of X and A
  • r :: Int: the desired rank of A
  • sr :: AbstractFloat: a threshold between 0 and 1 used to determine the set of pixels retained by the operator P
  • va :: AbstractFloat: the variance of a first Gaussian perturbation to be applied to A
  • vb :: AbstractFloat: the variance of a second Gaussian perturbation to be applied to A
  • c :: AbstractFloat: the coefficient of the convex combination of the two Gaussian perturbations.

Return Value

An instance of a FirstOrderModel and of a FirstOrderNLSModel that represent the same matrix completion problem, and the exact solution.

source
+k = 10 * compound

Keyword arguments

  • bounds :: Bool: whether or not to include nonnegativity bounds in the model (default: false).

Return Value

An instance of a FirstOrderModel and of a FirstOrderNLSModel that represent the same basis-pursuit denoise problem, and the exact solution x̄.

If bounds == true, the positive part of x̄ is returned.

source
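A usage sketch of the second form; since compound is shown in the first positional slot, it is passed positionally here:

using RegularizedProblems

model, nls_model, x̄ = bpdn_model(2)  # compound = 2, i.e., m = 400, n = 1024, k = 20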
RegularizedProblems.fh_modelMethod
fh_model(; kwargs...)

Return an instance of an NLPModel and an instance of an NLSModel representing the same Fitzhugh-Nagumo problem, i.e., the over-determined nonlinear least-squares objective

½ ‖F(x)‖₂²,

where F: ℝ⁵ → ℝ²⁰² represents the fitting error between a simulation of the Fitzhugh-Nagumo model with parameters x and a simulation of the Van der Pol oscillator with fixed, but unknown, parameters.

Keyword Arguments

All keyword arguments are passed directly to the ADNLPModel (or ADNLSModel) constructor, e.g., to set the automatic differentiation backend.

Return Value

An instance of an ADNLPModel that represents the Fitzhugh-Nagumo problem, an instance of an ADNLSModel that represents the same problem, and the exact solution.

source
RegularizedProblems.group_lasso_modelMethod
model, nls_model, sol = group_lasso_model(; kwargs...)

Return an instance of an NLPModel and NLSModel representing the group-lasso problem, i.e., the under-determined linear least-squares objective

½ ‖Ax - b‖₂²,

where A has orthonormal rows and b = A * x̄ + ϵ, x̄ is sparse and ϵ is a noise vector following a normal distribution with mean zero and standard deviation σ. Note that with this format, all groups have the same number of elements and the number of groups divides evenly into the total number of elements.

Keyword Arguments

  • m :: Int: the number of rows of A (default: 200)
  • n :: Int: the number of columns of A, with n ≥ m (default: 512)
  • g :: Int: the number of groups (default: 16)
  • ag :: Int: the number of active groups (default: 5)
  • noise :: Float64: noise amount (default: 0.01)
  • compound :: Int: multiplier for m, n, g, and ag (default: 1).

Return Value

An instance of a FirstOrderModel and an instance of a FirstOrderNLSModel that represent the group-lasso problem, together with the true x, the number of groups g, the group indices denoting which groups are active, and a Matrix whose rows are the group indices of x.

source
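A usage sketch; the destructuring follows the return-value description above, and the exact tuple layout is an assumption:

using RegularizedProblems

model, nls_model, x, g, active_groups, indices = group_lasso_model(compound = 2)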
RegularizedProblems.nnmf_modelFunction
model, Av, selected = nnmf_model(m = 100, n = 50, k = 10, T = Float64)

Return an instance of an NLPModel representing the non-negative matrix factorization objective

f(W, H) = ½ ‖A - WH‖₂²,

where A ∈ Rᵐˣⁿ has non-negative entries and can be separated into k clusters, and Av = A[:] is its vectorization. The vector of indices selected = k*m+1 : k*(m+n) indicates the components of W ∈ Rᵐˣᵏ and H ∈ Rᵏˣⁿ to which the regularizer is applied (so that the regularizer only applies to entries of H).

Arguments

  • m :: Int: the number of rows of A
  • n :: Int: the number of columns of A (with n ≥ m)
  • k :: Int: the number of clusters
source
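A sketch confining the regularizer to the entries of H via selected; the three-argument RegularizedNLPModel form is an assumption based on its field list:

using RegularizedProblems, ProximalOperators

model, Av, selected = nnmf_model()                          # defaults: m = 100, n = 50, k = 10
rmodel = RegularizedNLPModel(model, NormL1(0.1), selected)  # assumed three-argument form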
RegularizedProblems.qp_rand_modelMethod
model = qp_rand_model(n = 100_000; dens = 1.0e-4, convex = false)

Return an instance of a QuadraticModel representing

min cᵀx + ½ xᵀHx s.t. l ≤ x ≤ u,

with H = A + A' or H = A * A' (see the convex keyword argument) where A is a random square matrix with density dens, l = -e - tₗ and u = e + tᵤ where e is the vector of ones, and tₗ and tᵤ are sampled from a uniform distribution between 0 and 1.

Arguments

  • n :: Int: size of the problem (default: 100_000).

Keyword arguments

  • dens :: Real: density of A with 0 < dens ≤ 1 used to generate the quadratic model (default: 1.0e-4).
  • convex :: Bool: true to generate positive definite H (default: false).

Return Value

An instance of a QuadraticModel.

source
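A usage sketch following the documented signature:

using RegularizedProblems

model = qp_rand_model(10_000; dens = 1.0e-3, convex = true)  # smaller instance with positive definite H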
RegularizedProblems.random_matrix_completion_modelMethod
model, nls_model, sol = random_matrix_completion_model(; kwargs...)

Return an instance of an NLPModel and an instance of an NLSModel representing the same matrix completion problem, i.e., the square linear least-squares objective

½ ‖P(X - A)‖²

in the Frobenius norm, where X is the unknown image represented as an m × n matrix, A is a fixed image, and the operator P only retains a certain subset of pixels of X and A.

Keyword Arguments

  • m :: Int: the number of rows of X and A (default: 100)
  • n :: Int: the number of columns of X and A (default: 100)
  • r :: Int: the desired rank of A (default: 5)
  • sr :: AbstractFloat: a threshold between 0 and 1 used to determine the set of pixels retained by the operator P (default: 0.8)

  • va :: AbstractFloat: the variance of a first Gaussian perturbation to be applied to A (default: 1.0e-4)
  • vb :: AbstractFloat: the variance of a second Gaussian perturbation to be applied to A (default: 1.0e-2)
  • c :: AbstractFloat: the coefficient of the convex combination of the two Gaussian perturbations (default: 0.2).

Return Value

An instance of a FirstOrderModel and of a FirstOrderNLSModel that represent the same matrix completion problem, and the exact solution.

source
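A usage sketch with the documented defaults written out explicitly:

using RegularizedProblems

model, nls_model, sol = random_matrix_completion_model(m = 100, n = 100, r = 5, sr = 0.8, va = 1.0e-4, vb = 1.0e-2, c = 0.2)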
diff --git a/dev/search/index.html b/dev/search/index.html index d604bd6..5764aa6 100644 --- a/dev/search/index.html +++ b/dev/search/index.html @@ -1,2 +1,2 @@ -Search · RegularizedProblems.jl

Loading search...

    +Search · RegularizedProblems.jl

    Loading search...

      diff --git a/dev/search_index.js b/dev/search_index.js index de9bc4b..24c7160 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"reference/#Reference","page":"Reference","title":"Reference","text":"","category":"section"},{"location":"reference/","page":"Reference","title":"Reference","text":"​","category":"page"},{"location":"reference/#Contents","page":"Reference","title":"Contents","text":"","category":"section"},{"location":"reference/","page":"Reference","title":"Reference","text":"​","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"Pages = [\"reference.md\"]","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"​","category":"page"},{"location":"reference/#Index","page":"Reference","title":"Index","text":"","category":"section"},{"location":"reference/","page":"Reference","title":"Reference","text":"​","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"Pages = [\"reference.md\"]","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"​","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"Modules = [RegularizedProblems]","category":"page"},{"location":"reference/#RegularizedProblems.FirstOrderModel","page":"Reference","title":"RegularizedProblems.FirstOrderModel","text":"model = FirstOrderModel(f, ∇f!; name = \"first-order model\")\n\nA simple subtype of AbstractNLPModel to represent a smooth objective.\n\nArguments\n\nf :: F <: Function: a function such that f(x) returns the objective value at x;\n∇f! :: G <: Function: a function such that ∇f!(g, x) stores the gradient of the objective at x in g;\nx :: AbstractVector: an initial guess.\n\nAll keyword arguments are passed through to the NLPModelMeta constructor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#RegularizedProblems.FirstOrderNLSModel","page":"Reference","title":"RegularizedProblems.FirstOrderNLSModel","text":"model = FirstOrderNLSModel(r!, jv!, jtv!; name = \"first-order NLS model\")\n\nA simple subtype of AbstractNLSModel to represent a nonlinear least-squares problem with a smooth residual.\n\nArguments\n\nr! :: R <: Function: a function such that r!(y, x) stores the residual at x in y;\njv! :: J <: Function: a function such that jv!(u, x, v) stores the product between the residual Jacobian at x and the vector v in u;\njtv! 
:: Jt <: Function: a function such that jtv!(u, x, v) stores the product between the transpose of the residual Jacobian at x and the vector v in u;\nx :: AbstractVector: an initial guess.\n\nAll keyword arguments are passed through to the NLPModelMeta constructor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#RegularizedProblems.MIT_matrix_completion_model-Tuple{}","page":"Reference","title":"RegularizedProblems.MIT_matrix_completion_model","text":"model, nls_model, sol = MIT_matrix_completion_model(args...)\n\nA special case of matrix completion problem in which the exact image is a noisy MIT logo.\n\nSee the documentation of random_matrix_completion_model() for more information.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedProblems.bpdn_model-Tuple","page":"Reference","title":"RegularizedProblems.bpdn_model","text":"model, nls_model, sol = bpdn_model(args...)\nmodel, nls_model, sol = bpdn_model(compound = 1, args...)\n\nReturn an instance of an NLPModel and an instance of an NLSModel representing the same basis-pursuit denoise problem, i.e., the under-determined linear least-squares objective\n\n½ ‖Ax - b‖₂²,\n\nwhere A has orthonormal rows and b = A * x̄ + ϵ, x̄ is sparse and ϵ is a noise vector following a normal distribution with mean zero and standard deviation σ.\n\nArguments\n\nm :: Int: the number of rows of A\nn :: Int: the number of columns of A (with n ≥ m)\nk :: Int: the number of nonzero elements in x̄\nnoise :: Float64: noise standard deviation σ (default: 0.01).\n\nThe second form calls the first form with arguments\n\nm = 200 * compound\nn = 512 * compound\nk = 10 * compound\n\nKeyword arguments\n\nbounds :: Bool: whether or not to include nonnegativity bounds in the model (default: false).\n\nReturn Value\n\nAn instance of a FirstOrderModel and of a FirstOrderNLSModel that represent the same basis-pursuit denoise problem, and the exact solution x̄.\n\nIf bounds == true, the positive part of x̄ is returned.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedProblems.fh_model-Tuple{}","page":"Reference","title":"RegularizedProblems.fh_model","text":"fh_model(; kwargs...)\n\nReturn an instance of an NLPModel and an instance of an NLSModel representing the same Fitzhugh-Nagumo problem, i.e., the over-determined nonlinear least-squares objective\n\n½ ‖F(x)‖₂²,\n\nwhere F: ℝ⁵ → ℝ²⁰² represents the fitting error between a simulation of the Fitzhugh-Nagumo model with parameters x and a simulation of the Van der Pol oscillator with fixed, but unknown, parameters.\n\nKeyword Arguments\n\nAll keyword arguments are passed directly to the ADNLPModel (or ADNLSModel) constructure, e.g., to set the automatic differentiation backend.\n\nReturn Value\n\nAn instance of an ADNLPModel that represents the Fitzhugh-Nagumo problem, an instance of an ADNLSModel that represents the same problem, and the exact solution.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedProblems.group_lasso_model-Tuple","page":"Reference","title":"RegularizedProblems.group_lasso_model","text":"model, nls_model, sol = group_lasso_model(args...)\nmodel, nls_model, sol = group_lasso_model(compound = 1, args...)\n\nReturn an instance of an NLPModel and NLSModel representing the group-lasso problem, i.e., the under-determined linear least-squares objective\n\n½ ‖Ax - b‖₂²,\n\nwhere A has orthonormal rows and b = A * x̄ + ϵ, x̄ is sparse and ϵ is a noise vector following a normal distribution with mean zero and standard deviation σ. 
Note that with this format, all groups have a the same number of elements and the number of groups divides evenly into the total number of elements.\n\nArguments\n\nm :: Int: the number of rows of A\nn :: Int: the number of columns of A (with n ≥ m)\ng :: Int : the number of groups\nag :: Array{Int}: group-index denoting which groups are active (with max(ag) ≤ g), i.e. [1, 4, 5] when there are 7 groups\nnoise :: Float64: noise amount ϵ (default: 0.01).\n\nThe second form calls the first form with arguments\n\nm = 200 * compound\nn = 512 * compound\nk = 10 * compound\n\nReturn Value\n\nAn instance of a FirstOrderModel that represents the group-lasso problem. An instance of a FirstOrderNLSModel that represents the group-lasso problem. Also returns true x, number of groups g, group-index denoting which groups are active, and a Matrix where rows are group indices of x\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedProblems.nnmf_model","page":"Reference","title":"RegularizedProblems.nnmf_model","text":"model, Av, selected = nnmf_model(m = 100, n = 50, k = 10, T = Float64)\n\nReturn an instance of an NLPModel representing the non-negative matrix factorization objective\n\nf(W, H) = ½ ‖A - WH‖₂²,\n\nwhere A ∈ Rᵐˣⁿ has non-negative entries and can be separeted into k clusters, Av = A[:]. The vector of indices selected = k*m+1: k*(m+n) is used to indicate the components of W ∈ Rᵐˣᵏ and H ∈ Rᵏˣⁿ to apply the regularizer to (so that the regularizer only applies to entries of H).\n\nArguments\n\nm :: Int: the number of rows of A\nn :: Int: the number of columns of A (with n ≥ m)\nk :: Int: the number of clusters\n\n\n\n\n\n","category":"function"},{"location":"reference/#RegularizedProblems.qp_rand_model-Union{Tuple{Int64}, Tuple{R}} where R<:Real","page":"Reference","title":"RegularizedProblems.qp_rand_model","text":"model = qp_rand_model(n; dens = 1.0e-4, convex = false)\n\nReturn an instance of a QuadraticModel representing\n\nmin cᵀx + ½ xᵀHx s.t. l ≤ x ≤ u,\n\nwith H = A + A' or H = A * A' (see the convex keyword argument) where A is a random square matrix with density dens, l = -e - tₗ and u = e + tᵤ where e is the vector of ones, and tₗ and tᵤ are sampled from a uniform distribution between 0 and 1. 
\n\nArguments\n\nn :: Int: size of the problem,\n\nKeyword arguments\n\ndens :: Real: density of A with 0 < dens ≤ 1 used to generate the quadratic model (default: 1.0e-4).\nconvex :: Bool: true to generate positive definite H (default: false).\n\nReturn Value\n\nAn instance of a QuadraticModel.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedProblems.random_matrix_completion_model-Union{Tuple{R}, Tuple{Int64, Int64, Int64, Vararg{R, 4}}} where R<:AbstractFloat","page":"Reference","title":"RegularizedProblems.random_matrix_completion_model","text":"model, nls_model, sol = random_matrix_completion_model(args...)\n\nReturn an instance of an NLPModel and an instance of an NLSModel representing the same matrix completion problem, i.e., the square linear least-squares objective\n\n½ ‖P(X - A)‖²\n\nin the Frobenius norm, where X is the unknown image represented as an m x n matrix, A is a fixed image, and the operator P only retains a certain subset of pixels of X and A.\n\nArguments\n\nm :: Int: the number of rows of X and A\nn :: Int: the number of columns of X and A\nr :: Int: the desired rank of A\nsr :: AbstractFloat: a threshold between 0 and 1 used to determine the set of pixels retained by the operator P\nva :: AbstractFloat: the variance of a first Gaussian perturbation to be applied to A\nvb :: AbstractFloat: the variance of a second Gaussian perturbation to be applied to A\nc :: AbstractFloat: the coefficient of the convex combination of the two Gaussian perturbations.\n\nReturn Value\n\nAn instance of a FirstOrderModel and of a FirstOrderNLSModel that represent the same matrix completion problem, and the exact solution.\n\n\n\n\n\n","category":"method"},{"location":"#RegularizedProblems","page":"Home","title":"RegularizedProblems","text":"","category":"section"},{"location":"#Synopsis","page":"Home","title":"Synopsis","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This package provides sameple problems suitable for developing and testing first and second-order methods for regularized optimization, i.e., they have the general form","category":"page"},{"location":"","page":"Home","title":"Home","text":"min_x in mathbbR^n f(x) + h(x)","category":"page"},{"location":"","page":"Home","title":"Home","text":"where f mathbbR^n to mathbbR has Lipschitz-continuous gradient and h mathbbR^n to mathbbR cup infty is lower semi-continuous and proper. 
The smooth term f describes the objective to minimize while the role of the regularizer h is to select a solution with desirable properties: minimum norm, sparsity below a certain level, maximum sparsity, etc.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Models for f are instances of NLPModels and often represent nonlinear least-squares residuals, i.e., f(x) = tfrac12 F(x)_2^2 where F mathbbR^n to mathbbR^m.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The regularizer h should be obtained from ProximalOperators.jl.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The final regularized problem is intended to be solved by way of solver for nonsmooth regularized optimization such as those in RegularizedOptimization.jl.","category":"page"},{"location":"#Problems-implemented","page":"Home","title":"Problems implemented","text":"","category":"section"},{"location":"#Basis-pursuit-denoise","page":"Home","title":"Basis-pursuit denoise","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Calling model = bpdn_model() returns a model representing the smooth underdetermined linear least-squares residual","category":"page"},{"location":"","page":"Home","title":"Home","text":"f(x) = tfrac12 Ax - b_2^2","category":"page"},{"location":"","page":"Home","title":"Home","text":"where A has orthonormal rows. The right-hand side is generated as b = A x_star + varepsilon where x_star is a sparse vector, varepsilon sim mathcalN(0 sigma) and sigma in (0 1) is a fixed noise level.","category":"page"},{"location":"","page":"Home","title":"Home","text":"When solving the basis-pursuit denoise problem, the goal is to recover x approx x_star. In particular, x should have the same sparsity pattern as x_star. That is typically accomplished by choosing a regularizer of the form","category":"page"},{"location":"","page":"Home","title":"Home","text":"h(x) = lambda x_1 for a well-chosen lambda 0;\nh(x) = x_0;\nh(x) = chi(x k mathbbB_0) for k approx x_star_0;","category":"page"},{"location":"","page":"Home","title":"Home","text":"where chi(x k mathbbB_0) is the indicator of the ell_0-pseudonorm ball of radius k.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Calling model = bpdn_nls_model() returns the same problem modeled explicitly as a least-squares problem.","category":"page"},{"location":"#Fitzhugh-Nagumo-data-fitting-problem","page":"Home","title":"Fitzhugh-Nagumo data-fitting problem","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"If ADNLPModels and DifferentialEquations have been imported, model = fh_model() returns a model representing the over-determined nonlinear least-squares residual","category":"page"},{"location":"","page":"Home","title":"Home","text":"f(x) = tfrac12 F(x)_2^2","category":"page"},{"location":"","page":"Home","title":"Home","text":"where F mathbbR^5 to mathbbR^202 represents the residual between a simulation of the Fitzhugh-Nagumo system with parameters x and a simulation of the Van der Pol oscillator with preset, but unknown, parameters x_star.","category":"page"},{"location":"","page":"Home","title":"Home","text":"A feature of the Fitzhugh-Nagumo model is that it reduces to the Van der Pol oscillator when certain parameters are set to zero. Thus here again, the objective is to recover a sparse solution to the data-fitting problem. 
Hence, typical regularizers are the same as those used for the basis-pursuit denoise problem.","category":"page"}] +[{"location":"reference/#Reference","page":"Reference","title":"Reference","text":"","category":"section"},{"location":"reference/","page":"Reference","title":"Reference","text":"​","category":"page"},{"location":"reference/#Contents","page":"Reference","title":"Contents","text":"","category":"section"},{"location":"reference/","page":"Reference","title":"Reference","text":"​","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"Pages = [\"reference.md\"]","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"​","category":"page"},{"location":"reference/#Index","page":"Reference","title":"Index","text":"","category":"section"},{"location":"reference/","page":"Reference","title":"Reference","text":"​","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"Pages = [\"reference.md\"]","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"​","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"Modules = [RegularizedProblems]","category":"page"},{"location":"reference/#RegularizedProblems.FirstOrderModel","page":"Reference","title":"RegularizedProblems.FirstOrderModel","text":"model = FirstOrderModel(f, ∇f!; name = \"first-order model\")\n\nA simple subtype of AbstractNLPModel to represent a smooth objective.\n\nArguments\n\nf :: F <: Function: a function such that f(x) returns the objective value at x;\n∇f! :: G <: Function: a function such that ∇f!(g, x) stores the gradient of the objective at x in g;\nx :: AbstractVector: an initial guess.\n\nAll keyword arguments are passed through to the NLPModelMeta constructor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#RegularizedProblems.FirstOrderNLSModel","page":"Reference","title":"RegularizedProblems.FirstOrderNLSModel","text":"model = FirstOrderNLSModel(r!, jv!, jtv!; name = \"first-order NLS model\")\n\nA simple subtype of AbstractNLSModel to represent a nonlinear least-squares problem with a smooth residual.\n\nArguments\n\nr! :: R <: Function: a function such that r!(y, x) stores the residual at x in y;\njv! :: J <: Function: a function such that jv!(u, x, v) stores the product between the residual Jacobian at x and the vector v in u;\njtv! 
:: Jt <: Function: a function such that jtv!(u, x, v) stores the product between the transpose of the residual Jacobian at x and the vector v in u;\nx :: AbstractVector: an initial guess.\n\nAll keyword arguments are passed through to the NLPModelMeta constructor.\n\n\n\n\n\n","category":"type"},{"location":"reference/#RegularizedProblems.RegularizedNLPModel","page":"Reference","title":"RegularizedProblems.RegularizedNLPModel","text":"rmodel = RegularizedNLPModel(model, regularizer)\nrmodel = RegularizedNLSModel(model, regularizer)\n\nAn aggregate type to represent a regularized optimization model, .i.e., of the form\n\nminimize f(x) + h(x),\n\nwhere f is smooth (and is usually assumed to have Lipschitz-continuous gradient), and h is lower semi-continuous (and may have to be prox-bounded).\n\nThe regularized model is made of\n\nmodel <: AbstractNLPModel: the smooth part of the model, for example a FirstOrderModel\nh: the nonsmooth part of the model; typically a regularizer defined in ProximalOperators.jl\nselected: the subset of variables to which the regularizer h should be applied (default: all).\n\nThis aggregate type can be used to call solvers with a single object representing the model, but is especially useful for use with SolverBenchmark.jl, which expects problems to be defined by a single object.\n\n\n\n\n\n","category":"type"},{"location":"reference/#RegularizedProblems.MIT_matrix_completion_model-Tuple{}","page":"Reference","title":"RegularizedProblems.MIT_matrix_completion_model","text":"model, nls_model, sol = MIT_matrix_completion_model()\n\nA special case of matrix completion problem in which the exact image is a noisy MIT logo.\n\nSee the documentation of random_matrix_completion_model() for more information.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedProblems.bpdn_model-Tuple","page":"Reference","title":"RegularizedProblems.bpdn_model","text":"model, nls_model, sol = bpdn_model(args...; kwargs...)\nmodel, nls_model, sol = bpdn_model(compound = 1, args...; kwargs...)\n\nReturn an instance of an NLPModel and an instance of an NLSModel representing the same basis-pursuit denoise problem, i.e., the under-determined linear least-squares objective\n\n½ ‖Ax - b‖₂²,\n\nwhere A has orthonormal rows and b = A * x̄ + ϵ, x̄ is sparse and ϵ is a noise vector following a normal distribution with mean zero and standard deviation σ.\n\nArguments\n\nm :: Int: the number of rows of A\nn :: Int: the number of columns of A (with n ≥ m)\nk :: Int: the number of nonzero elements in x̄\nnoise :: Float64: noise standard deviation σ (default: 0.01).\n\nThe second form calls the first form with arguments\n\nm = 200 * compound\nn = 512 * compound\nk = 10 * compound\n\nKeyword arguments\n\nbounds :: Bool: whether or not to include nonnegativity bounds in the model (default: false).\n\nReturn Value\n\nAn instance of a FirstOrderModel and of a FirstOrderNLSModel that represent the same basis-pursuit denoise problem, and the exact solution x̄.\n\nIf bounds == true, the positive part of x̄ is returned.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedProblems.fh_model-Tuple{}","page":"Reference","title":"RegularizedProblems.fh_model","text":"fh_model(; kwargs...)\n\nReturn an instance of an NLPModel and an instance of an NLSModel representing the same Fitzhugh-Nagumo problem, i.e., the over-determined nonlinear least-squares objective\n\n½ ‖F(x)‖₂²,\n\nwhere F: ℝ⁵ → ℝ²⁰² represents the fitting error between a simulation of the Fitzhugh-Nagumo model with 
parameters x and a simulation of the Van der Pol oscillator with fixed, but unknown, parameters.\n\nKeyword Arguments\n\nAll keyword arguments are passed directly to the ADNLPModel (or ADNLSModel) constructure, e.g., to set the automatic differentiation backend.\n\nReturn Value\n\nAn instance of an ADNLPModel that represents the Fitzhugh-Nagumo problem, an instance of an ADNLSModel that represents the same problem, and the exact solution.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedProblems.group_lasso_model-Tuple","page":"Reference","title":"RegularizedProblems.group_lasso_model","text":"model, nls_model, sol = group_lasso_model(; kwargs...)\n\nReturn an instance of an NLPModel and NLSModel representing the group-lasso problem, i.e., the under-determined linear least-squares objective\n\n½ ‖Ax - b‖₂²,\n\nwhere A has orthonormal rows and b = A * x̄ + ϵ, x̄ is sparse and ϵ is a noise vector following a normal distribution with mean zero and standard deviation σ. Note that with this format, all groups have a the same number of elements and the number of groups divides evenly into the total number of elements.\n\nKeyword Arguments\n\nm :: Int: the number of rows of A (default: 200)\nn :: Int: the number of columns of A, with n ≥ m (default: 512)\ng :: Int: the number of groups (default: 16)\nag :: Int: the number of active groups (default: 5)\nnoise :: Float64: noise amount (default: 0.01)\ncompound :: Int: multiplier for m, n, g, and ag (default: 1).\n\nReturn Value\n\nAn instance of a FirstOrderModel that represents the group-lasso problem. An instance of a FirstOrderNLSModel that represents the group-lasso problem. Also returns true x, number of groups g, group-index denoting which groups are active, and a Matrix where rows are group indices of x.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedProblems.nnmf_model","page":"Reference","title":"RegularizedProblems.nnmf_model","text":"model, Av, selected = nnmf_model(m = 100, n = 50, k = 10, T = Float64)\n\nReturn an instance of an NLPModel representing the non-negative matrix factorization objective\n\nf(W, H) = ½ ‖A - WH‖₂²,\n\nwhere A ∈ Rᵐˣⁿ has non-negative entries and can be separeted into k clusters, Av = A[:]. The vector of indices selected = k*m+1: k*(m+n) is used to indicate the components of W ∈ Rᵐˣᵏ and H ∈ Rᵏˣⁿ to apply the regularizer to (so that the regularizer only applies to entries of H).\n\nArguments\n\nm :: Int: the number of rows of A\nn :: Int: the number of columns of A (with n ≥ m)\nk :: Int: the number of clusters\n\n\n\n\n\n","category":"function"},{"location":"reference/#RegularizedProblems.qp_rand_model-Union{Tuple{}, Tuple{Int64}, Tuple{R}} where R<:Real","page":"Reference","title":"RegularizedProblems.qp_rand_model","text":"model = qp_rand_model(n = 100_000; dens = 1.0e-4, convex = false)\n\nReturn an instance of a QuadraticModel representing\n\nmin cᵀx + ½ xᵀHx s.t. 
l ≤ x ≤ u,\n\nwith H = A + A' or H = A * A' (see the convex keyword argument) where A is a random square matrix with density dens, l = -e - tₗ and u = e + tᵤ where e is the vector of ones, and tₗ and tᵤ are sampled from a uniform distribution between 0 and 1.\n\nArguments\n\nn :: Int: size of the problem (default: 100_000).\n\nKeyword arguments\n\ndens :: Real: density of A with 0 < dens ≤ 1 used to generate the quadratic model (default: 1.0e-4).\nconvex :: Bool: true to generate positive definite H (default: false).\n\nReturn Value\n\nAn instance of a QuadraticModel.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedProblems.random_matrix_completion_model-Tuple{}","page":"Reference","title":"RegularizedProblems.random_matrix_completion_model","text":"model, nls_model, sol = random_matrix_completion_model(; kwargs...)\n\nReturn an instance of an NLPModel and an instance of an NLSModel representing the same matrix completion problem, i.e., the square linear least-squares objective\n\n½ ‖P(X - A)‖²\n\nin the Frobenius norm, where X is the unknown image represented as an m x n matrix, A is a fixed image, and the operator P only retains a certain subset of pixels of X and A.\n\nKeyword Arguments\n\nm :: Int: the number of rows of X and A (default: 100)\nn :: Int: the number of columns of X and A (default: 100)\nr :: Int: the desired rank of A (default: 5)\nsr :: AbstractFloat: a threshold between 0 and 1 used to determine the set of pixels\n\nretained by the operator P (default: 0.8)\n\nva :: AbstractFloat: the variance of a first Gaussian perturbation to be applied to A (default: 1.0e-4)\nvb :: AbstractFloat: the variance of a second Gaussian perturbation to be applied to A (default: 1.0e-2)\nc :: AbstractFloat: the coefficient of the convex combination of the two Gaussian perturbations (default: 0.2).\n\nReturn Value\n\nAn instance of a FirstOrderModel and of a FirstOrderNLSModel that represent the same matrix completion problem, and the exact solution.\n\n\n\n\n\n","category":"method"},{"location":"#RegularizedProblems","page":"Home","title":"RegularizedProblems","text":"","category":"section"},{"location":"#Synopsis","page":"Home","title":"Synopsis","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This package provides sameple problems suitable for developing and testing first and second-order methods for regularized optimization, i.e., they have the general form","category":"page"},{"location":"","page":"Home","title":"Home","text":"min_x in mathbbR^n f(x) + h(x)","category":"page"},{"location":"","page":"Home","title":"Home","text":"where f mathbbR^n to mathbbR has Lipschitz-continuous gradient and h mathbbR^n to mathbbR cup infty is lower semi-continuous and proper. 
The smooth term f describes the objective to minimize while the role of the regularizer h is to select a solution with desirable properties: minimum norm, sparsity below a certain level, maximum sparsity, etc.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Models for f are instances of NLPModels and often represent nonlinear least-squares residuals, i.e., f(x) = tfrac12 F(x)_2^2 where F mathbbR^n to mathbbR^m.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The regularizer h should be obtained from ProximalOperators.jl.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The final regularized problem is intended to be solved by way of solver for nonsmooth regularized optimization such as those in RegularizedOptimization.jl.","category":"page"},{"location":"#Problems-implemented","page":"Home","title":"Problems implemented","text":"","category":"section"},{"location":"#Basis-pursuit-denoise","page":"Home","title":"Basis-pursuit denoise","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Calling model = bpdn_model() returns a model representing the smooth underdetermined linear least-squares residual","category":"page"},{"location":"","page":"Home","title":"Home","text":"f(x) = tfrac12 Ax - b_2^2","category":"page"},{"location":"","page":"Home","title":"Home","text":"where A has orthonormal rows. The right-hand side is generated as b = A x_star + varepsilon where x_star is a sparse vector, varepsilon sim mathcalN(0 sigma) and sigma in (0 1) is a fixed noise level.","category":"page"},{"location":"","page":"Home","title":"Home","text":"When solving the basis-pursuit denoise problem, the goal is to recover x approx x_star. In particular, x should have the same sparsity pattern as x_star. That is typically accomplished by choosing a regularizer of the form","category":"page"},{"location":"","page":"Home","title":"Home","text":"h(x) = lambda x_1 for a well-chosen lambda 0;\nh(x) = x_0;\nh(x) = chi(x k mathbbB_0) for k approx x_star_0;","category":"page"},{"location":"","page":"Home","title":"Home","text":"where chi(x k mathbbB_0) is the indicator of the ell_0-pseudonorm ball of radius k.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Calling model = bpdn_nls_model() returns the same problem modeled explicitly as a least-squares problem.","category":"page"},{"location":"#Fitzhugh-Nagumo-data-fitting-problem","page":"Home","title":"Fitzhugh-Nagumo data-fitting problem","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"If ADNLPModels and DifferentialEquations have been imported, model = fh_model() returns a model representing the over-determined nonlinear least-squares residual","category":"page"},{"location":"","page":"Home","title":"Home","text":"f(x) = tfrac12 F(x)_2^2","category":"page"},{"location":"","page":"Home","title":"Home","text":"where F mathbbR^5 to mathbbR^202 represents the residual between a simulation of the Fitzhugh-Nagumo system with parameters x and a simulation of the Van der Pol oscillator with preset, but unknown, parameters x_star.","category":"page"},{"location":"","page":"Home","title":"Home","text":"A feature of the Fitzhugh-Nagumo model is that it reduces to the Van der Pol oscillator when certain parameters are set to zero. Thus here again, the objective is to recover a sparse solution to the data-fitting problem. 
Hence, typical regularizers are the same as those used for the basis-pursuit denoise problem.","category":"page"}] }