From d1427daf1a211f0903f34923fed1e64efff441fd Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Thu, 20 Jul 2023 05:31:17 +0000
Subject: [PATCH] build based on c62515e
---
 dev/about/CONTRIBUTING/index.html         | 2 +-
 dev/about/license/index.html              | 2 +-
 dev/about/release_notes/index.html        | 2 +-
 dev/eigenproblems/lobpcg/index.html       | 2 +-
 dev/eigenproblems/power_method/index.html | 6 +++---
 dev/getting_started/index.html            | 4 ++--
 dev/index.html                            | 2 +-
 dev/iterators/index.html                  | 2 +-
 dev/linear_systems/bicgstabl/index.html   | 2 +-
 dev/linear_systems/cg/index.html          | 4 ++--
 dev/linear_systems/chebyshev/index.html   | 2 +-
 dev/linear_systems/gmres/index.html       | 2 +-
 dev/linear_systems/idrs/index.html        | 2 +-
 dev/linear_systems/lsmr/index.html        | 2 +-
 dev/linear_systems/lsqr/index.html        | 2 +-
 dev/linear_systems/minres/index.html      | 2 +-
 dev/linear_systems/qmr/index.html         | 2 +-
 dev/linear_systems/stationary/index.html  | 2 +-
 dev/preconditioning/index.html            | 2 +-
 dev/search/index.html                     | 2 +-
 dev/svd/svdl/index.html                   | 2 +-
 21 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/dev/about/CONTRIBUTING/index.html b/dev/about/CONTRIBUTING/index.html

reserve!(log, :conv, maxiter, T=BitArray)  # Vector of length maxiter
reserve!(log, :ritz, maxiter, k)           # Matrix of size (maxiter, k)

To store information at each iteration use push!.

push!(log, :conv, conv)
push!(log, :ritz, F[:S][1:k])
push!(log, :betas, L.β)

To advance the log index to the next iteration use nextiter!.

nextiter!(log)

A more detailed explanation of all the functions is in both the public and internal documentation of ConvergenceHistory.

The richest example of ConvergenceHistory usage is in svdl.
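
As a rough, hedged sketch (not taken from the package), these pieces might fit together inside a solver loop as follows. The loop body and maxiter are placeholders, and reserve! and nextiter! are assumed to be importable from IterativeSolvers since they are not exported:

using IterativeSolvers: ConvergenceHistory, reserve!, nextiter!

maxiter = 100
log = ConvergenceHistory()
log[:tol] = 1e-8                    # store the stopping tolerance
reserve!(log, :resnorm, maxiter)    # one residual norm per iteration
for iter in 1:maxiter
    nextiter!(log)                  # advance the log to this iteration
    resnorm = 1.0 / iter^2          # placeholder for the method's actual residual norm
    push!(log, :resnorm, resnorm)   # record it
    resnorm ≤ log[:tol] && break
end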

diff --git a/dev/about/license/index.html b/dev/about/license/index.html

FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

diff --git a/dev/about/release_notes/index.html b/dev/about/release_notes/index.html
diff --git a/dev/eigenproblems/lobpcg/index.html b/dev/eigenproblems/lobpcg/index.html

LOBPCG · IterativeSolvers.jl

Locally optimal block preconditioned conjugate gradient (LOBPCG)

Solves the generalized eigenproblem $Ax = λBx$ approximately where $A$ and $B$ are Hermitian linear maps, and $B$ is positive definite. $B$ is taken to be the identity by default. It can find the smallest (or largest) k eigenvalues and their corresponding eigenvectors which are B-orthonormal. It also admits a preconditioner and a "constraints" matrix C, such that the algorithm returns the smallest (or largest) eigenvalues associated with the eigenvectors in the nullspace of C'B.

Usage

IterativeSolvers.lobpcgFunction

The Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG)

Finds the nev extremal eigenvalues and their corresponding eigenvectors satisfying AX = λBX.

A and B may be generic types but Base.mul!(C, AorB, X) must be defined for vectors and strided matrices X and C. size(A, i::Int) and eltype(A) must also be defined for A.

lobpcg(A, [B,] largest, nev; kwargs...) -> results

Arguments

  • A: linear operator;
  • B: linear operator;
  • largest: true if largest eigenvalues are desired and false if smallest;
  • nev: number of eigenvalues desired.

Keywords

  • log::Bool: default is false; if true, results.trace will store the iteration states; if false, results.trace will be empty;

  • P: preconditioner of residual vectors, must overload ldiv!;

  • C: constraint to deflate the residual and solution vectors orthogonal to a subspace; must overload mul!;

  • maxiter: maximum number of iterations; default is 200;

  • tol::Real: tolerance below which the residual vector norms must fall.

Output

  • results: a LOBPCGResults struct. r.λ and r.X store the eigenvalues and eigenvectors.
source
lobpcg(A, [B,] largest, X0; kwargs...) -> results

Arguments

  • A: linear operator;
  • B: linear operator;
  • largest: true if largest eigenvalues are desired and false if smallest;
  • X0: Initial guess, will not be modified. The number of columns is the number of eigenvectors desired.

Keywords

  • not_zeros: default is false. If true, X0 will be assumed to not have any all-zeros column.

  • log::Bool: default is false; if true, results.trace will store the iteration states; if false, results.trace will be empty;

  • P: preconditioner of residual vectors, must overload ldiv!;

  • C: constraint to deflate the residual and solution vectors orthogonal to a subspace; must overload mul!;

  • maxiter: maximum number of iterations; default is 200;

  • tol::Real: tolerance below which the residual vector norms must fall.

Output

  • results: a LOBPCGResults struct. r.λ and r.X store the eigenvalues and eigenvectors.
source

lobpcg(A, [B,] largest, X0, nev; kwargs...) -> results

Arguments

  • A: linear operator;
  • B: linear operator;
  • largest: true if largest eigenvalues are desired and false if smallest;
  • X0: block vectors such that the eigenvalues will be found size(X0, 2) at a time; the columns are also used to initialize the first batch of Ritz vectors;
  • nev: number of eigenvalues desired.

Keywords

  • log::Bool: default is false; if true, results.trace will store the iteration states; if false, results.trace will be empty;

  • P: preconditioner of residual vectors, must overload ldiv!;

  • C: constraint to deflate the residual and solution vectors orthogonal to a subspace; must overload mul!;

  • maxiter: maximum number of iterations; default is 200;

  • tol::Real: tolerance below which the residual vector norms must fall.

Output

  • results: a LOBPCGResults struct. r.λ and r.X store the eigenvalues and eigenvectors.
source
IterativeSolvers.lobpcg!Function
lobpcg!(iterator::LOBPCGIterator; kwargs...) -> results

Arguments

  • iterator::LOBPCGIterator: a struct having all the variables required for the LOBPCG algorithm.

Keywords

  • not_zeros: default is false. If true, the initial Ritz vectors will be assumed to not have any all-zeros column.

  • log::Bool: default is false; if true, results.trace will store the iteration states; if false, results.trace will be empty;

  • maxiter: maximum number of iterations; default is 200;

  • tol::Real: tolerance below which the residual vector norms must fall.

Output

  • results: a LOBPCGResults struct. r.λ and r.X store the eigenvalues and eigenvectors.
source
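
A hedged, minimal usage sketch (not from the package docs): the dense Hermitian test matrix, the choice largest = false, and the number of requested eigenvalues below are arbitrary illustration values.

using LinearAlgebra, IterativeSolvers

n = 100
A = Symmetric(rand(n, n) + n * I)   # arbitrary Hermitian test matrix
r = lobpcg(A, false, 2)             # the two smallest eigenvalues
r.λ                                 # eigenvalue approximations
r.X                                 # corresponding (B-)orthonormal eigenvectors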

Implementation Details

A LOBPCGIterator is created to pre-allocate all the memory required by the method using the constructor LOBPCGIterator(A, B, largest, X, P, C), where A and B are the matrices of the generalized eigenvalue problem, largest indicates whether a maximum or minimum eigenvalue problem is solved, and X is the initial eigenbasis, randomly sampled if not given as input, with size(X, 2) being the block size bs. P is the preconditioner, nothing by default, and C is the constraints matrix. The desired k eigenvalues are found bs at a time.

References

Implementation is based on [Knyazev1993] and [Scipy].

  • [Knyazev1993] Andrew V. Knyazev. "Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method." SIAM Journal on Scientific Computing, 23(2):517–541, 2001.
  • [Scipy] See the Scipy LOBPCG implementation.
diff --git a/dev/eigenproblems/power_method/index.html b/dev/eigenproblems/power_method/index.html

Power method · IterativeSolvers.jl

(Inverse) power method

Solves the eigenproblem $Ax = λx$ approximately where $A$ is a general linear map. By default converges towards the dominant eigenpair $(λ, x)$ such that $|λ|$ is largest. Shift-and-invert can be applied to target a specific eigenvalue near shift in the complex plane.

Usage

IterativeSolvers.powmFunction
powm(B; kwargs...) -> λ, x, [history]

See powm!. Calls powm!(B, x0; kwargs...) with x0 initialized as a random, complex unit vector.

source
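
For the unshifted case, a minimal hedged sketch might look like the following; the random matrix, tolerance, and iteration limit are arbitrary illustration values.

using IterativeSolvers

A = rand(ComplexF64, 50, 50)
λ, x = powm(A; tol = 1e-6, maxiter = 1000)   # dominant eigenpair of A
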
IterativeSolvers.powm!Function
powm!(B, x; shift = zero(eltype(B)), inverse::Bool = false, kwargs...) -> λ, x, [history]

By default finds the approximate eigenpair (λ, x) of B where |λ| is largest.

Arguments

  • B: linear map, see the note below.
  • x: normalized initial guess. Don't forget to use complex arithmetic when necessary.

Keywords

  • tol::Real = eps(real(eltype(B))) * size(B, 2) ^ 3: stopping tolerance for the residual norm;
  • maxiter::Integer = size(B,2): maximum number of iterations;
  • log::Bool: keep track of the residual norm in each iteration;
  • verbose::Bool: print convergence information during the iterations.
Shift-and-invert

When applying shift-and-invert to $Ax = λx$ with inverse = true and shift = ..., note that the role of B * b becomes computing inv(A - shift * I) * b. So rather than passing the linear map $A$ itself, pass a linear map B that has the action of shift-and-invert. The eigenvalue is transformed back to an eigenvalue of the actual matrix $A$.

Return values

if log is false

  • λ::Number approximate eigenvalue computed as the Rayleigh quotient;
  • x::Vector approximate eigenvector.

if log is true

  • λ::Number: approximate eigenvalue computed as the Rayleigh quotient;
  • x::Vector: approximate eigenvector;
  • history: convergence history.

ConvergenceHistory keys

  • :tol => ::Real: stopping tolerance;
  • :resnorm => ::Vector: residual norm at each iteration.

Examples

using LinearAlgebra, LinearMaps, IterativeSolvers
 σ = 1.0 + 1.3im
 A = rand(ComplexF64, 50, 50)
 F = lu(A - σ * I)
 Fmap = LinearMap{ComplexF64}((y, x) -> ldiv!(y, F, x), 50, ismutating = true)
λ, x = powm(Fmap, inverse = true, shift = σ, tol = 1e-4, maxiter = 200)
source
IterativeSolvers.invpowmFunction
invpowm(B; shift = σ, kwargs...) -> λ, x, [history]

Find the approximate eigenpair (λ, x) of $A$ near shift, where B is a linear map that has the effect B * v = inv(A - σI) * v.

The method calls powm!(B, x0; inverse = true, shift = σ) with x0 a random, complex unit vector. See powm!

Examples

using LinearAlgebra, LinearMaps, IterativeSolvers
 σ = 1.0 + 1.3im
 A = rand(ComplexF64, 50, 50)
 F = lu(A - σ * I)
 Fmap = LinearMap{ComplexF64}((y, x) -> ldiv!(y, F, x), 50, ismutating = true)
λ, x = invpowm(Fmap, shift = σ, tol = 1e-4, maxiter = 200)
source
IterativeSolvers.invpowm!Function
invpowm!(B, x0; shift = σ, kwargs...) -> λ, x, [history]

Find the approximate eigenpair (λ, x) of $A$ near shift, where B is a linear map that has the effect B * v = inv(A - σI) * v.

The method calls powm!(B, x0; inverse = true, shift = σ). See powm!.

source

Implementation details

Storage requirements are 3 vectors: the approximate eigenvector x, the residual vector r, and a temporary. The residual norm lags one iteration behind, since it is computed when $Ax$ is performed. Therefore the final residual norm is even smaller.

diff --git a/dev/getting_started/index.html b/dev/getting_started/index.html

svd, L, ch = svdl(Master, rand(100, 100), log=true)

The function will now return one more parameter of type ConvergenceHistory.

ConvergenceHistory

A ConvergenceHistory instance stores information about a solver run.

Number of iterations.

ch.iters

Convergence status.

ch.isconverged

Stopping tolerances. (A Symbol key is needed to access)

ch[:tol]

Maximum number of iterations per restart. (Only on restarted methods)

nrests(ch)

Number of matrix-vector and transposed-matrix-vector products.

nprods(ch)

Data stored at each iteration; the accessed information can be either a vector or a matrix. This data can be many things, most commonly the residual. (A Symbol key is needed to access.)

ch[:resnorm] #Vector or Matrix
 ch[:resnorm, x] #Vector or Matrix element
 ch[:resnorm, x, y] #Matrix element
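
As a hedged end-to-end sketch (the diagonally dominant test matrix below is arbitrary), any solver called with log = true returns such a history, which can then be queried as described above:

using LinearAlgebra, IterativeSolvers

A = rand(10, 10) + 10I
b = rand(10)
x, ch = gmres(A, b; log = true)
ch.iters          # number of iterations taken
ch.isconverged    # convergence status
ch[:resnorm]      # residual norm at each iteration
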
IterativeSolvers.ConvergenceHistoryType

Store general and in-depth information about an iterative method.

Fields

mvps::Int: number of matrix vector products.

mtvps::Int: number of transposed matrix-vector products

iters::Int: iterations taken by the method.

restart::T: restart relevant information.

  • T == Int: iterations per restart.
  • T == Nothing: methods without restarts.

isconverged::Bool: convergence of the method.

data::Dict{Symbol,Any}: stores all the information recorded during the method's execution. It stores tolerances, residuals and other information, e.g. Ritz values in svdl.

Constructors

ConvergenceHistory()
ConvergenceHistory(restart)

Create ConvergenceHistory with empty fields.

Arguments

restart: number of iterations per restart.

Plots

Supports plots using the Plots.jl package via a type recipe. Vectors are plotted as series and matrices as scatterplots.

Implements

Base: getindex, setindex!, push!

source

Plotting

ConvergenceHistory provides a recipe for use with the Plots.jl package, which makes it easy to plot with different plotting backends. There are two recipes provided:

One for the whole ConvergenceHistory.

plot(ch)

The other plots the data bound to a key.

_, ch = gmres(rand(10,10), rand(10), maxiter = 100, log=true)
plot(ch, :resnorm, sep = :blue)

Plot additional keywords

sep::Symbol = :white: color of the line separator in restarted methods.

diff --git a/dev/index.html b/dev/index.html

Home · IterativeSolvers.jl

IterativeSolvers.jl

IterativeSolvers.jl is a Julia package that provides efficient iterative algorithms for solving large linear systems, eigenproblems, and singular value problems. Most of the methods can be used matrix-free.

For bug reports, feature requests and questions please submit an issue. If you're interested in contributing, please see the Contributing guide.

For more information on future methods have a look at the package roadmap.

What method should I use for linear systems?

When solving linear systems $Ax = b$ for a square matrix $A$ there are quite a few options. The typical choices are listed below:

  • Conjugate Gradients: best choice for symmetric, positive-definite matrices;
  • MINRES: for symmetric, indefinite matrices;
  • GMRES: for nonsymmetric matrices when a good preconditioner is available;
  • IDR(s): for nonsymmetric, strongly indefinite problems without a good preconditioner;
  • BiCGStab(l): otherwise, for nonsymmetric problems.

We also offer Chebyshev iteration as an alternative to Conjugate Gradients when bounds on the spectrum are known.

Stationary methods like Jacobi, Gauss-Seidel, SOR and SSOR can be used as smoothers to reduce high-frequency components in the error in just a few iterations.

When solving least-squares problems we currently offer just LSMR and LSQR.

Eigenproblems and SVD

For the Singular Value Decomposition we offer SVDL, which is the Golub-Kahan-Lanczos procedure.

For eigenvalue problems we have at this point just the Power Method and some convenience wrappers to do shift-and-invert.

diff --git a/dev/iterators/index.html b/dev/iterators/index.html

Iteration for rhs 2

julia> norm(b2 - A * x) / norm(b2)
1.610815496107484

Other use cases

Other use cases include:

diff --git a/dev/linear_systems/bicgstabl/index.html b/dev/linear_systems/bicgstabl/index.html

BiCGStab(l) · IterativeSolvers.jl

BiCGStab(l)

BiCGStab(l) solves the problem $Ax = b$ approximately for $x$, where $A$ is a general linear operator and $b$ the right-hand side vector. The method combines BiCG with $l$ GMRES iterations, resulting in a short-recurrence iteration. As a result, both the memory requirements and the computational cost per iteration are fixed.

Usage

IterativeSolvers.bicgstabl!Function
bicgstabl!(x, A, b, l; kwargs...) -> x, [history]

Arguments

  • A: linear operator;
  • b: right hand side (vector);
  • l::Int = 2: Number of GMRES steps.

Keywords

  • max_mv_products::Int = size(A, 2): maximum number of matrix-vector products. For BiCGStab(l) this is a less dubious term than "number of iterations";

  • Pl = Identity(): left preconditioner of the method;
  • abstol::Real = zero(real(eltype(b))), reltol::Real = sqrt(eps(real(eltype(b)))): absolute and relative tolerance for the stopping condition |r_k| ≤ max(reltol * |r_0|, abstol), where r_k ≈ A * x_k - b is the approximate residual in the kth iteration;
    Note
    1. The true residual norm is never computed during the iterations, only an approximation;
    2. If a left preconditioner is given, the stopping condition is based on the preconditioned residual.

Return values

if log is false

  • x: approximate solution.

if log is true

  • x: approximate solution;
  • history: convergence history.
source
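
A minimal hedged usage sketch (the nonsymmetric, diagonally dominant test problem and the tolerance are arbitrary illustration values):

using LinearAlgebra, IterativeSolvers

n = 100
A = rand(n, n) + 2n * I     # nonsymmetric but well-conditioned
b = rand(n)
x, history = bicgstabl(A, b, 2; reltol = 1e-8, log = true)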

Implementation details

The method is based on the original article [Sleijpen1993], but does not implement later improvements. The normal equations arising from the GMRES steps are solved without orthogonalization. Hence the method should only be reliable for relatively small values of $l$.

The r and u factors are pre-allocated as matrices of size $n \times (l + 1)$, so that BLAS2 methods can be used. Also the random shadow residual is pre-allocated as a vector. Hence the storage costs are approximately $2l + 3$ vectors.

Tip

BiCGStab(l) can be used as an iterator.

  • [Sleijpen1993] Sleijpen, Gerard L. G., and Diederik R. Fokkema. "BiCGstab(l) for linear equations involving unsymmetric matrices with complex spectrum." Electronic Transactions on Numerical Analysis 1.11 (1993): 2000.
diff --git a/dev/linear_systems/cg/index.html b/dev/linear_systems/cg/index.html

Conjugate Gradients · IterativeSolvers.jl

Conjugate Gradients (CG)

Conjugate Gradients solves $Ax = b$ approximately for $x$ where $A$ is a symmetric, positive-definite linear operator and $b$ the right-hand side vector. The method uses short recurrences and therefore has fixed memory costs and fixed computational costs per iteration.

Usage

IterativeSolvers.cgFunction
cg(A, b; kwargs...) -> x, [history]

Same as cg!, but allocates a solution vector x initialized with zeros.

source
IterativeSolvers.cg!Function
cg!(x, A, b; kwargs...) -> x, [history]

Arguments

  • x: Initial guess, will be updated in-place;
  • A: linear operator;
  • b: right-hand side.

Keywords

  • statevars::CGStateVariables: Has 3 arrays similar to x to hold intermediate results;
  • initially_zero::Bool: If true assumes that iszero(x) so that one matrix-vector product can be saved when computing the initial residual vector;
  • Pl = Identity(): left preconditioner of the method. Should be symmetric, positive-definite like A;
  • abstol::Real = zero(real(eltype(b))), reltol::Real = sqrt(eps(real(eltype(b)))): absolute and relative tolerance for the stopping condition |r_k| ≤ max(reltol * |r_0|, abstol), where r_k ≈ A * x_k - b is approximately the residual in the kth iteration.
    Note

    The true residual norm is never explicitly computed during the iterations for performance reasons; it may accumulate rounding errors.

  • maxiter::Int = size(A,2): maximum number of iterations;
  • verbose::Bool = false: print method information;
  • log::Bool = false: keep track of the residual norm in each iteration.

Output

if log is false

  • x: approximated solution.

if log is true

  • x: approximated solution.
  • ch: convergence history.

ConvergenceHistory keys

  • :tol => ::Real: stopping tolerance.
  • :resnorm => ::Vector: residual norm at each iteration.
source
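
On the CPU, a minimal hedged example could use the standard 1-D Poisson matrix (symmetric positive definite); the problem size and tolerance are arbitrary illustration values.

using LinearAlgebra, SparseArrays, IterativeSolvers

n = 100
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(2.0, n), 1 => fill(-1.0, n - 1))  # 1-D Poisson matrix
b = ones(n)
x, ch = cg(A, b; reltol = 1e-8, log = true)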

On the GPU

The method should work fine on the GPU. As a minimal working example, consider:

using LinearAlgebra, CuArrays, IterativeSolvers
 
 n = 100
 A = cu(rand(n, n))
 A = A + A' + 2*n*I
 b = cu(rand(n))
x = cg(A, b)
Note

Make sure that all state vectors are stored on the GPU. For instance when calling cg!(x, A, b), one might have an issue when x is stored on the GPU, while b is stored on the CPU – IterativeSolvers.jl does not copy the vectors to the same device.

Implementation details

The current implementation follows a rather standard approach. Note that preconditioned CG (or PCG) is slightly different from ordinary CG, because the former must compute the residual explicitly, while it is available as a byproduct in the latter. Our implementation of CG ensures the minimal number of vector operations.

Tip

CG can be used as an iterator.

diff --git a/dev/linear_systems/chebyshev/index.html b/dev/linear_systems/chebyshev/index.html

Chebyshev iteration · IterativeSolvers.jl

Chebyshev iteration

Chebyshev iteration solves the problem $Ax=b$ approximately for $x$, where $A$ is a symmetric, definite linear operator and $b$ the right-hand side vector. The method assumes that the interval $[\lambda_{min}, \lambda_{max}]$ containing all eigenvalues of $A$ is known, so that $x$ can be iteratively constructed via a Chebyshev polynomial with zeros in this interval. This polynomial ultimately acts as a filter that removes components in the direction of the eigenvectors from the initial residual.

The main advantage with respect to Conjugate Gradients is that BLAS1 operations such as inner products are avoided.

Usage

IterativeSolvers.chebyshev!Function
chebyshev!(x, A, b, λmin::Real, λmax::Real; kwargs...) -> x, [history]

Solve Ax = b for symmetric, definite matrices A using Chebyshev iteration.

Arguments

  • x: initial guess, will be updated in-place;
  • A: linear operator;
  • b: right-hand side;
  • λmin::Real: lower bound for the real eigenvalues
  • λmax::Real: upper bound for the real eigenvalues

Keywords

  • initially_zero::Bool = false: if true assumes that iszero(x) so that one matrix-vector product can be saved when computing the initial residual vector;
  • abstol::Real = zero(real(eltype(b))), reltol::Real = sqrt(eps(real(eltype(b)))): absolute and relative tolerance for the stopping condition |r_k| ≤ max(reltol * |r_0|, abstol), where r_k = A * x_k - b is the residual in the kth iteration;
  • maxiter::Int = size(A, 2): maximum number of iterations;
  • Pl = Identity(): left preconditioner;
  • log::Bool = false: keep track of the residual norm in each iteration;
  • verbose::Bool = false: print convergence information during the iterations.

Return values

if log is false

  • x: approximate solution.

if log is true

  • x: approximate solution;
  • history: convergence history.
source
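
As a hedged sketch, the 1-D Poisson matrix is convenient because its extreme eigenvalues are known analytically, which supplies the required bounds; the problem size and iteration limit are arbitrary illustration values.

using LinearAlgebra, SparseArrays, IterativeSolvers

n = 100
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(2.0, n), 1 => fill(-1.0, n - 1))  # 1-D Poisson matrix
b = rand(n)
λmin = 2 - 2cos(π / (n + 1))        # analytic smallest eigenvalue
λmax = 2 - 2cos(n * π / (n + 1))    # analytic largest eigenvalue
x = chebyshev(A, b, λmin, λmax; maxiter = 10n)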

Implementation details

BLAS1 operations

Although the method is often used to avoid computation of inner products, the stopping criterion is still based on the residual norm. Hence the current implementation is not free of BLAS1 operations.

Tip

Chebyshev iteration can be used as an iterator.

diff --git a/dev/linear_systems/gmres/index.html b/dev/linear_systems/gmres/index.html

Restarted GMRES · IterativeSolvers.jl

Restarted GMRES

GMRES solves the problem $Ax = b$ approximately for $x$, where $A$ is a general linear operator and $b$ the right-hand side vector. The method is optimal in the sense that it selects the solution with minimal residual from a Krylov subspace, but the price of optimality is increasing storage and computational effort per iteration. Restarts are necessary to keep these costs bounded.

Usage

IterativeSolvers.gmres!Function
gmres!(x, A, b; kwargs...) -> x, [history]

Solves the problem $Ax = b$ with restarted GMRES.

Arguments

  • x: Initial guess, will be updated in-place;
  • A: linear operator;
  • b: right-hand side.

Keywords

  • initially_zero::Bool: If true assumes that iszero(x) so that one matrix-vector product can be saved when computing the initial residual vector;
  • abstol::Real = zero(real(eltype(b))), reltol::Real = sqrt(eps(real(eltype(b)))): absolute and relative tolerance for the stopping condition |r_k| ≤ max(reltol * |r_0|, abstol), where r_k = A * x_k - b
  • restart::Int = min(20, size(A, 2)): restarts GMRES after specified number of iterations;
  • maxiter::Int = size(A, 2): maximum number of inner iterations of GMRES;
  • Pl: left preconditioner;
  • Pr: right preconditioner;
  • log::Bool: keep track of the residual norm in each iteration;
  • verbose::Bool: print convergence information during the iterations.
  • orth_meth::OrthogonalizationMethod = ModifiedGramSchmidt(): orthogonalization method (ModifiedGramSchmidt(), ClassicalGramSchmidt(), DGKS())

Return values

if log is false

  • x: approximate solution.

if log is true

  • x: approximate solution;
  • history: convergence history.
source
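
A minimal hedged sketch (the well-conditioned nonsymmetric test matrix, restart length, and tolerance are arbitrary illustration values):

using LinearAlgebra, IterativeSolvers

n = 100
A = rand(n, n) + 2n * I
b = rand(n)
x, ch = gmres(A, b; restart = 20, reltol = 1e-8, log = true)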

Implementation details

The implementation pre-allocates a matrix $V$ of size n by restart whose columns form an orthonormal basis for the Krylov subspace. This allows BLAS2 operations when updating the solution vector $x$. The Hessenberg matrix is also pre-allocated.

By default, modified Gram-Schmidt is used to orthogonalize the columns of $V$, since it is numerically more stable than classical Gram-Schmidt. Modified Gram-Schmidt is however inherently sequential, and if stability is not a concern, classical Gram-Schmidt can be used, which is implemented using BLAS2 operations. As a compromise the "DGKS criterion" can be used, which conditionally applies classical Gram-Schmidt repeatedly to stabilize it, and is typically one to two times slower than classical Gram-Schmidt.

The computation of the residual norm is implemented in a non-standard way, namely keeping track of a vector $\gamma$ in the null-space of $H_k^*$, which is the adjoint of the $(k + 1) \times k$ Hessenberg matrix $H_k$ at the $k$th iteration. Only when $x$ needs to be updated is the Hessenberg matrix mutated with Givens rotations.

Tip

GMRES can be used as an iterator. This makes it possible to access the Hessenberg matrix and Krylov basis vectors during the iterations.

diff --git a/dev/linear_systems/idrs/index.html b/dev/linear_systems/idrs/index.html

IDR(s) · IterativeSolvers.jl

IDR(s)

The Induced Dimension Reduction method is a family of simple and fast Krylov subspace algorithms for solving large nonsymmetric linear systems. The idea behind the IDR(s) variant is to generate residuals that lie in a sequence of nested subspaces of shrinking dimension.

Usage

IterativeSolvers.idrs!Function
idrs!(x, A, b; s = 8, kwargs...) -> x, [history]

Solve the problem $Ax = b$ approximately with IDR(s), where s is the dimension of the shadow space.

Arguments

  • x: Initial guess, will be updated in-place;
  • A: linear operator;
  • b: right-hand side.

Keywords

  • s::Integer = 8: dimension of the shadow space;
  • Pl::precT: left preconditioner,
  • abstol::Real = zero(real(eltype(b))), reltol::Real = sqrt(eps(real(eltype(b)))): absolute and relative tolerance for the stopping condition |r_k| ≤ max(reltol * |r_0|, abstol), where r_k = A * x_k - b is the residual in the kth iteration;
  • maxiter::Int = size(A, 2): maximum number of iterations;
  • log::Bool: keep track of the residual norm in each iteration;
  • verbose::Bool: print convergence information during the iterations.

Return values

if log is false

  • x: approximate solution.

if log is true

  • x: approximate solution;
  • history: convergence history.
source
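
A minimal hedged sketch (the nonsymmetric test matrix, shadow-space dimension s, and tolerance are arbitrary illustration values):

using LinearAlgebra, IterativeSolvers

n = 100
A = rand(n, n) + 2n * I
b = rand(n)
x, ch = idrs(A, b; s = 4, reltol = 1e-8, log = true)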

Implementation details

The current implementation is based on the MATLAB version by Van Gijzen and Sonneveld. For background see [Sonneveld2008], [VanGijzen2011], the IDR(s) webpage and the IDR chapter in [Meurant2020].

  • [Sonneveld2008] IDR(s): a family of simple and fast algorithms for solving large nonsymmetric linear systems. P. Sonneveld and M. B. van Gijzen. SIAM J. Sci. Comput., Vol. 31, No. 2, pp. 1035–1062, 2008.
  • [VanGijzen2011] Algorithm 913: An Elegant IDR(s) Variant that Efficiently Exploits Bi-orthogonality Properties. M. B. van Gijzen and P. Sonneveld. ACM Trans. Math. Software, Vol. 38, No. 1, pp. 5:1–5:19, 2011.
  • [Meurant2020] The IDR family. G. Meurant and J. Duintjer Tebbens. In: Krylov Methods for Nonsymmetric Linear Systems. Springer Series in Computational Mathematics, vol 57. Springer, 2020. doi:10.1007/978-3-030-55251-0_10
diff --git a/dev/linear_systems/lsmr/index.html b/dev/linear_systems/lsmr/index.html

LSMR · IterativeSolvers.jl

LSMR

Least-squares minimal residual

Usage

IterativeSolvers.lsmr!Function
lsmr!(x, A, b; kwargs...) -> x, [history]

Minimizes $\|Ax - b\|^2 + \|λx\|^2$ in the Euclidean norm. If multiple solutions exist, the minimum-norm solution is returned.

The method is based on the Golub-Kahan bidiagonalization process. It is algebraically equivalent to applying MINRES to the normal equations $(A^*A + λ^2I)x = A^*b$, but has better numerical properties, especially if $A$ is ill-conditioned.

Arguments

  • x: Initial guess, will be updated in-place;
  • A: linear operator;
  • b: right-hand side.

Keywords

  • λ::Number = 0: lambda.
  • atol::Number = 1e-6, btol::Number = 1e-6: stopping tolerances. If both are 1.0e-9 (say), the final residual norm should be accurate to about 9 digits. (The final x will usually have fewer correct digits, depending on cond(A) and the size of damp).
  • conlim::Number = 1e8: stopping tolerance. lsmr terminates if an estimate of cond(A) exceeds conlim. For compatible systems Ax = b, conlim could be as large as 1.0e+12 (say). For least-squares problems, conlim should be less than 1.0e+8. Maximum precision can be obtained by setting atol = btol = conlim = zero, but the number of iterations may then be excessive.
  • maxiter::Int = maximum(size(A)): maximum number of iterations.
  • log::Bool: keep track of the residual norm in each iteration;
  • verbose::Bool: print convergence information during the iterations.

Return values

if log is false

  • x: approximated solution.

if log is true

  • x: approximated solution.
  • ch: convergence history.

ConvergenceHistory keys

  • :atol => ::Real: atol stopping tolerance.
  • :btol => ::Real: btol stopping tolerance.
  • :ctol => ::Real: ctol stopping tolerance.
  • :anorm => ::Real: anorm.
  • :rnorm => ::Real: rnorm.
  • :cnorm => ::Real: cnorm.
  • :resnorm => ::Vector: residual norm at each iteration.
source
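
A minimal hedged sketch of an overdetermined least-squares solve (random data and tolerances chosen only for illustration):

using IterativeSolvers

A = rand(100, 10)
b = rand(100)
x = lsmr(A, b; atol = 1e-9, btol = 1e-9)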

Implementation details

Adapted from: http://web.stanford.edu/group/SOL/software/lsmr/

diff --git a/dev/linear_systems/lsqr/index.html b/dev/linear_systems/lsqr/index.html

LSQR · IterativeSolvers.jl

LSQR

Usage

IterativeSolvers.lsqr!Function
lsqr!(x, A, b; kwargs...) -> x, [history]

Minimizes $\|Ax - b\|^2 + \|damp*x\|^2$ in the Euclidean norm. If multiple solutions exist, the minimum-norm solution is returned.

The method is based on the Golub-Kahan bidiagonalization process. It is algebraically equivalent to applying CG to the normal equations $(A^*A + λ^2I)x = A^*b$ but has better numerical properties, especially if A is ill-conditioned.

Arguments

  • x: Initial guess, will be updated in-place;
  • A: linear operator;
  • b: right-hand side.

Keywords

  • damp::Number = 0: damping parameter.
  • atol::Number = 1e-6, btol::Number = 1e-6: stopping tolerances. If both are 1.0e-9 (say), the final residual norm should be accurate to about 9 digits. (The final x will usually have fewer correct digits, depending on cond(A) and the size of damp).
  • conlim::Number = 1e8: stopping tolerance. lsqr terminates if an estimate of cond(A) exceeds conlim. For compatible systems Ax = b, conlim could be as large as 1.0e+12 (say). For least-squares problems, conlim should be less than 1.0e+8. Maximum precision can be obtained by setting atol = btol = conlim = zero, but the number of iterations may then be excessive.
  • maxiter::Int = maximum(size(A)): maximum number of iterations.
  • verbose::Bool = false: print method information.
  • log::Bool = false: output an extra element of type ConvergenceHistory containing extra information of the method execution.

Return values

if log is false

  • x: approximate solution.

if log is true

  • x: approximate solution.
  • ch: convergence history.

ConvergenceHistory keys

  • :atol => ::Real: atol stopping tolerance.
  • :btol => ::Real: btol stopping tolerance.
  • :ctol => ::Real: ctol stopping tolerance.
  • :anorm => ::Real: anorm.
  • :rnorm => ::Real: rnorm.
  • :cnorm => ::Real: cnorm.
  • :resnorm => ::Vector: residual norm at each iteration.
source

Implementation details

Adapted from: http://web.stanford.edu/group/SOL/software/lsqr/.
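A sketch of the damped formulation (the sizes and damp value below are arbitrary, and the dense reference solve is only feasible because this made-up problem is tiny):

using IterativeSolvers, LinearAlgebra, Random

Random.seed!(0)
A = randn(60, 20)
b = randn(60)
damp = 0.5

x = lsqr(A, b; damp = damp, atol = 1e-10, btol = 1e-10, maxiter = 200)

# the damped problem is equivalent to an augmented ordinary least-squares problem
x_ref = [A; Matrix(damp * I, 20, 20)] \ [b; zeros(20)]
norm(x - x_ref)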

diff --git a/dev/linear_systems/minres/index.html b/dev/linear_systems/minres/index.html index 1dbf1955..d5ef8fe2 100644 --- a/dev/linear_systems/minres/index.html +++ b/dev/linear_systems/minres/index.html @@ -1,2 +1,2 @@ -MINRES · IterativeSolvers.jl

MINRES

MINRES is a short-recurrence version of GMRES for solving $Ax = b$ approximately for $x$ where $A$ is a symmetric, Hermitian, skew-symmetric or skew-Hermitian linear operator and $b$ the right-hand side vector.

Usage

IterativeSolvers.minres!Function
minres!(x, A, b; kwargs...) -> x, [history]

Solve Ax = b for (skew-)Hermitian matrices A using MINRES.

Arguments

  • x: initial guess, will be updated in-place;
  • A: linear operator;
  • b: right-hand side.

Keywords

  • initially_zero::Bool = false: if true assumes that iszero(x) so that one matrix-vector product can be saved when computing the initial residual vector;
  • skew_hermitian::Bool = false: if true assumes that A is skew-symmetric or skew-Hermitian;
  • abstol::Real = zero(real(eltype(b))), reltol::Real = sqrt(eps(real(eltype(b)))): absolute and relative tolerance for the stopping condition |r_k| ≤ max(reltol * |r_0|, abstol), where r_k = A * x_k - b is the residual in the kth iteration
    Note

    The residual is computed only approximately.

  • maxiter::Int = size(A, 2): maximum number of iterations;
  • log::Bool = false: keep track of the residual norm in each iteration;
  • verbose::Bool = false: print convergence information during the iterations.

Return values

if log is false

  • x: approximate solution.

if log is true

  • x: approximate solution;
  • history: convergence history.
source

Implementation details

MINRES exploits the tridiagonal structure of the Hessenberg matrix. Although MINRES is mathematically equivalent to GMRES, it might not be equivalent in finite precision. MINRES updates the solution as

\[x := x_0 + (V R^{-1}) (Q^*\|r_0\|e_1)\]

where $V$ is the orthonormal basis for the Krylov subspace and $QR$ is the QR-decomposition of the Hessenberg matrix. Note that the brackets are placed slightly differently from how GMRES would update the residual.

MINRES computes $V$ and $W = VR^{-1}$ via a three-term recurrence, using only the last column of $R$. Therefore we pre-allocate only six vectors and save only the last two entries of $Q^*\|r_0\|e_1$ and part of the last column of the Hessenberg matrix.

Real and complex arithmetic

If $A$ is Hermitian, then the Hessenberg matrix will be real. This is exploited in the current implementation.

If $A$ is skew-Hermitian, the diagonal of the Hessenberg matrix will be imaginary, and hence we use complex arithmetic in that case.

Tip

MINRES can be used as an iterator.
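A minimal sketch on a symmetric indefinite system (matrix, sizes and tolerance are made up for illustration):

using IterativeSolvers, LinearAlgebra, SparseArrays, Random

Random.seed!(0)
n = 200
B = sprandn(n, n, 0.02)
A = Symmetric(B + B')                   # symmetric, generally indefinite
b = randn(n)

x, history = minres(A, b; reltol = 1e-8, log = true)
norm(A * x - b) / norm(b)               # relative residual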

diff --git a/dev/linear_systems/qmr/index.html b/dev/linear_systems/qmr/index.html index 011a3826..09b8fb5e 100644 --- a/dev/linear_systems/qmr/index.html +++ b/dev/linear_systems/qmr/index.html @@ -1,2 +1,2 @@ -QMR · IterativeSolvers.jl

QMR

QMR is a short-recurrence version of GMRES for solving $Ax = b$ approximately for $x$ where $A$ is a linear operator and $b$ the right-hand side vector. $A$ may be non-symmetric.

Usage

IterativeSolvers.qmr!Function
qmr!(x, A, b; kwargs...) -> x, [history]

Solves the problem $Ax = b$ with the Quasi-Minimal Residual (QMR) method.

Arguments

  • x: Initial guess, will be updated in-place;
  • A: linear operator;
  • b: right-hand side.

Keywords

  • initially_zero::Bool: If true assumes that iszero(x) so that one matrix-vector product can be saved when computing the initial residual vector;
  • maxiter::Int = size(A, 2): maximum number of iterations;
  • abstol::Real = zero(real(eltype(b))), reltol::Real = sqrt(eps(real(eltype(b)))): absolute and relative tolerance for the stopping condition |r_k| ≤ max(reltol * |r_0|, abstol), where r_k = A * x_k - b
  • log::Bool: keep track of the residual norm in each iteration;
  • verbose::Bool: print convergence information during the iteration.

Return values

if log is false

  • x: approximate solution.

if log is true

  • x: approximate solution;

  • history: convergence history.

source

Implementation details

QMR exploits the tridiagonal structure of the Hessenberg matrix. It is similar to GMRES, but instead of the Arnoldi process it uses the Lanczos process to construct a pair of biorthogonal vector spaces $V$ and $W$. It requires that the adjoint of $A$, adjoint(A), be available.

QMR enables the computation of $V$ and $W$ via a three-term recurrence. A three-term recurrence for the projection onto the solution vector can also be constructed from these values, using the portion of the last column of the Hessenberg matrix. Therefore we pre-allocate only eight vectors.

For more detail on the implementation see the original paper [Freund1990] or [Saad2003].

Tip

QMR can be used as an iterator via qmr_iterable!. This makes it possible to access the next, current, and previous Krylov basis vectors during the iteration.
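A minimal sketch on a nonsymmetric sparse system (the diagonal shift below is only there to keep this made-up problem well conditioned):

using IterativeSolvers, LinearAlgebra, SparseArrays, Random

Random.seed!(0)
n = 100
A = sprandn(n, n, 0.05) + 10I           # nonsymmetric; adjoint(A) is available for sparse matrices
b = randn(n)

x, history = qmr(A, b; reltol = 1e-8, log = true)
norm(A * x - b) / norm(b)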

  • Saad2003Saad, Y. (2003). Iterative Methods for Sparse Linear Systems. SIAM.
  • Freund1990Freund, R. W., & Nachtigal, N. M. (1990). QMR: a quasi-minimal residual method for non-Hermitian linear systems.
diff --git a/dev/linear_systems/stationary/index.html b/dev/linear_systems/stationary/index.html index fecf24fe..653b76de 100644 --- a/dev/linear_systems/stationary/index.html +++ b/dev/linear_systems/stationary/index.html @@ -1,2 +1,2 @@ -Stationary methods · IterativeSolvers.jl

Stationary methods

Stationary methods are typically used as smoothers in multigrid methods, where only very few iterations are applied to get rid of high-frequency components in the error. The implementations of stationary methods have this goal in mind, which means there is no other stopping criterion besides the maximum number of iterations.

CSC versus CSR

Julia stores matrices column-major. In order to avoid cache misses, the implementations of our stationary methods traverse the matrices column-major. This deviates from classical textbook implementations. Also the SOR and SSOR methods cannot be computed efficiently in-place, but require a temporary vector.

For SparseMatrixCSC, all stationary methods precompute an integer array with the indices of the diagonal entries, to avoid expensive searches in each iteration.
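A rough illustration of this precomputation (a sketch only, not the package's internal code; diagonal_indices is a hypothetical helper name):

using SparseArrays, LinearAlgebra

function diagonal_indices(A::SparseMatrixCSC)
    rows = rowvals(A)
    n = minimum(size(A))
    idx = Vector{Int}(undef, n)
    for col in 1:n
        rng = nzrange(A, col)                          # stored entries of this column
        pos = findfirst(i -> rows[i] == col, rng)      # position of the diagonal entry
        pos === nothing && throw(LinearAlgebra.SingularException(col))
        idx[col] = rng[pos]
    end
    return idx
end

A = sparse([4.0 -1 0; -1 4 -1; 0 -1 4])
diagonal_indices(A)                                    # positions of the diagonal inside nonzeros(A)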

Jacobi

IterativeSolvers.jacobi!Function
jacobi!(x, A::AbstractMatrix, b; maxiter=10) -> x

Performs exactly maxiter Jacobi iterations.

Allocates a single temporary vector and traverses A columnwise.

Throws LinearAlgebra.SingularException when the diagonal has a zero. This check is performed once beforehand.

source
jacobi!(x, A::SparseMatrixCSC, b; maxiter=10) -> x

Performs exactly maxiter Jacobi iterations.

Allocates a temporary vector and precomputes the diagonal indices.

Throws LinearAlgebra.SingularException when the diagonal has a zero. This check is performed once beforehand.

source
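A small smoothing sketch (the tridiagonal, diagonally dominant matrix below is made up):

using IterativeSolvers, LinearAlgebra, SparseArrays

n = 100
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(4.0, n), 1 => fill(-1.0, n - 1))
b = ones(n)

x = zeros(n)
jacobi!(x, A, b; maxiter = 30)          # exactly 30 iterations, no other stopping test
norm(A * x - b)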

Gauss-Seidel

IterativeSolvers.gauss_seidel!Function
gauss_seidel!(x, A::AbstractMatrix, b; maxiter=10) -> x

Performs exactly maxiter Gauss-Seidel iterations.

Works fully in-place and traverses A columnwise.

Throws LinearAlgebra.SingularException when the diagonal has a zero. This check is performed once beforehand.

source
gauss_seidel!(x, A::SparseMatrixCSC, b; maxiter=10) -> x

Performs exactly maxiter Gauss-Seidel iterations.

Works fully in-place, but precomputes the diagonal indices.

Throws LinearAlgebra.SingularException when the diagonal has a zero. This check is performed once beforehand.

source
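Under the same assumptions as the Jacobi sketch above, a Gauss-Seidel run looks like:

using IterativeSolvers, LinearAlgebra, SparseArrays

n = 100
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(4.0, n), 1 => fill(-1.0, n - 1))
b = ones(n)

x = zeros(n)
gauss_seidel!(x, A, b; maxiter = 30)    # exactly 30 sweeps, fully in place
norm(A * x - b)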

Successive over-relaxation (SOR)

IterativeSolvers.sor!Function
sor!(x, A::AbstractMatrix, b, ω::Real; maxiter=10) -> x

Performs exactly maxiter SOR iterations with relaxation parameter ω.

Allocates a single temporary vector and traverses A columnwise.

Throws LinearAlgebra.SingularException when the diagonal has a zero. This check is performed once beforehand.

source
sor!(x, A::SparseMatrixCSC, b, ω::Real; maxiter=10)

Performs exactly maxiter SOR iterations with relaxation parameter ω.

Allocates a temporary vector and precomputes the diagonal indices.

Throws LinearAlgebra.SingularException when the diagonal has a zero. This check is performed once beforehand.

source

Symmetric successive over-relaxation (SSOR)

IterativeSolvers.ssor!Function
ssor!(x, A::AbstractMatrix, b, ω::Real; maxiter=10) -> x

Performs exactly maxiter SSOR iterations with relaxation parameter ω. Each iteration is basically a forward and backward sweep of SOR.

Allocates a single temporary vector and traverses A columnwise.

Throws LinearAlgebra.SingularException when the diagonal has a zero. This check is performed once beforehand.

source
ssor!(x, A::SparseMatrixCSC, b, ω::Real; maxiter=10)

Performs exactly maxiter SSOR iterations with relaxation parameter ω. Each iteration is basically a forward and backward sweep of SOR.

Allocates a temporary vector and precomputes the diagonal indices.

Throws LinearAlgebra.SingularException when the diagonal has a zero. This check is performed once beforehand.

source
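A sketch comparing SOR and SSOR on the same made-up system as above (ω = 1.2 is an arbitrary choice):

using IterativeSolvers, LinearAlgebra, SparseArrays

n = 100
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(4.0, n), 1 => fill(-1.0, n - 1))
b = ones(n)

x = zeros(n)
sor!(x, A, b, 1.2; maxiter = 20)        # ω = 1.2, exactly 20 sweeps
r_sor = norm(A * x - b)

x .= 0
ssor!(x, A, b, 1.2; maxiter = 20)       # each SSOR iteration is a forward plus a backward SOR sweep
r_ssor = norm(A * x - b)

(r_sor, r_ssor)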
Tip

All stationary methods can be used as iterators.

diff --git a/dev/preconditioning/index.html b/dev/preconditioning/index.html index 88c6a213..5f8f4785 100644 --- a/dev/preconditioning/index.html +++ b/dev/preconditioning/index.html @@ -1,2 +1,2 @@ -Preconditioning · IterativeSolvers.jl

Preconditioning

Many iterative solvers have the option to provide left and right preconditioners (Pl and Pr resp.) in order to speed up convergence or prevent stagnation. They transform a problem $Ax = b$ into a better conditioned system $(P_l^{-1}AP_r^{-1})y = P_l^{-1}b$, where $x = P_r^{-1}y$.

These preconditioners should support the operations

  • ldiv!(y, P, x) computes P \ x in-place of y;
  • ldiv!(P, x) computes P \ x in-place of x;
  • and P \ x.

If no preconditioners are passed to the solver, the method will default to

Pl = Pr = IterativeSolvers.Identity()
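As a sketch, a diagonal (Jacobi) preconditioner can be passed as Pl using a plain LinearAlgebra.Diagonal, which already implements ldiv! and \; the matrix below is made up and cg is used only as an example solver:

using IterativeSolvers, LinearAlgebra, SparseArrays

n = 1000
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => 2 .+ rand(n), 1 => fill(-1.0, n - 1))
b = rand(n)

Pl = Diagonal(Vector(diag(A)))          # diagonal of A as a left preconditioner
x, history = cg(A, b; Pl = Pl, reltol = 1e-8, log = true)
norm(A * x - b) / norm(b)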

Available preconditioners

IterativeSolvers.jl itself does not provide any other preconditioners besides Identity(), but recommends the following external packages:

diff --git a/dev/search/index.html b/dev/search/index.html index 8bfa3b0f..77e41952 100644 --- a/dev/search/index.html +++ b/dev/search/index.html @@ -1,2 +1,2 @@ -Search · IterativeSolvers.jl

      diff --git a/dev/svd/svdl/index.html b/dev/svd/svdl/index.html index f70c7b5e..52615ad4 100644 --- a/dev/svd/svdl/index.html +++ b/dev/svd/svdl/index.html @@ -1,2 +1,2 @@ -SVDL · IterativeSolvers.jl

      Golub-Kahan-Lanczos (SVDL)

      The SVDL method computes a partial, approximate SVD decomposition of a general linear operator $A$.

      Usage

      IterativeSolvers.svdlFunction
      svdl(A) -> Σ, L, [history]

      Compute some singular values (and optionally vectors) using Golub-Kahan-Lanczos bidiagonalization [Golub1965] with thick restarting [Wu2000].

      If log is set to true, the method will output a tuple X, L, ch, where ch is a ConvergenceHistory object. Otherwise it will only return X, L.

      Arguments

      • A : The matrix or matrix-like object whose singular values are desired.

      Keywords

      • nsv::Int = 6: number of singular values requested;
      • v0 = random unit vector: starting guess vector in the domain of A. The length of v0 should be the number of columns in A;
      • k::Int = 2nsv: maximum number of Lanczos vectors to compute before restarting;
      • j::Int = nsv: number of vectors to keep at the end of the restart. We don't recommend j < nsv;
      • maxiter::Int = minimum(size(A)): maximum number of iterations to run;
      • verbose::Bool = false: print information at each iteration;
      • tol::Real = √eps(): maximum absolute error in each desired singular value;
      • reltol::Real=√eps(): maximum error in each desired singular value relative to the estimated norm of the input matrix;
      • method::Symbol=:ritz: restarting algorithm to use. Valid choices are:
        1. :ritz: Thick restart with Ritz values [Wu2000].
        2. :harmonic: Restart with harmonic Ritz values [Baglama2005].
      • vecs::Symbol = :none: singular vectors to return.
        1. :both: Both left and right singular vectors are returned.
        2. :left: Only the left singular vectors are returned.
        3. :right: Only the right singular vectors are returned.
        4. :none: No singular vectors are returned.
      • dolock::Bool=false: If true, locks converged Ritz values, removing them from the Krylov subspace being searched in the next macroiteration;
      • log::Bool = false: output an extra element of type ConvergenceHistory containing extra information of the method execution.

      Return values

      if log is false

      • Σ: list of the desired singular values if vecs == :none (the default), otherwise returns an SVD object with the desired singular vectors filled in;
      • L: computed partial factorizations of A.

      if log is true

      • Σ: list of the desired singular values if vecs == :none (the default), otherwise returns an SVD object with the desired singular vectors filled in;

      • L: computed partial factorizations of A;
      • history: convergence history.

      ConvergenceHistory keys

      • :betas => betas: The history of the computed betas.
      • :Bs => Bs: The history of the computed projected matrices.
      • :ritz => ritzvalhist: Ritz values computed at each iteration.
      • :conv => convhist: Convergence data.
      source

      Implementation details

      The implementation of thick restarting follows closely that of SLEPc as described in [Hernandez2008]. Thick restarting can be turned off by setting k = maxiter, but most of the time this is not desirable.

      The singular vectors are computed directly by forming the Ritz vectors from the product of the Lanczos vectors L.P/L.Q and the singular vectors of L.B. Additional accuracy in the singular triples can be obtained using inverse iteration.
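A minimal usage sketch (random made-up matrix; nsv = 5 is arbitrary):

using IterativeSolvers, LinearAlgebra, Random

Random.seed!(0)
A = randn(300, 80)

Σ, L = svdl(A; nsv = 5)
norm(Σ - svdvals(A)[1:5])                  # compare with the five largest dense singular values

F, L, history = svdl(A; nsv = 5, vecs = :both, log = true)
norm(A * F.V[:, 1] - F.S[1] * F.U[:, 1])   # check the first singular triple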

      • Golub1965Golub, Gene, and William Kahan. "Calculating the singular values and pseudo-inverse of a matrix." Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis 2.2 (1965): 205-224.
      • Wu2000Wu, Kesheng, and Horst Simon. "Thick-restart Lanczos method for large symmetric eigenvalue problems." SIAM Journal on Matrix Analysis and Applications 22.2 (2000): 602-616.
      • Hernandez2008Vicente Hernández, José E. Román, and Andrés Tomás. "A robust and efficient parallel SVD solver based on restarted Lanczos bidiagonalization." Electronic Transactions on Numerical Analysis 31 (2008): 68-85.