(Make sure to take a look at the accompanying thesis! https://github.com/pinkgopher/MatFun.jl/blob/master/docs/thesis.pdf )
This package provides methods for computing matrix functions. It currently works with Float64 precision only.
Schur-Parlett can be used to compute f(A) for dense matrices. It uses higher-order automatic differentiation (TaylorSeries.jl, specifically) to obtain the derivatives needed for the Taylor expansions of f on the atomic diagonal blocks. It can be called with:
schurparlett(f, A)
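For example, to compute the matrix exponential of a small dense matrix (a minimal sketch; exp here is the ordinary scalar function, which Schur-Parlett lifts to the matrix function):

```julia
using MatFun

A = [4.0 1.0 0.0;
     0.0 4.0 1.0;
     0.0 0.0 3.0]
X = schurparlett(exp, A)  # the matrix exponential of A
```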
If you want to reuse the Schur decomposition of A, you can also use:
schurparlett(f, T, Q, vals)
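This is convenient when evaluating several functions of the same matrix. A sketch, assuming the standard schur factorization from Julia's linear algebra library:

```julia
T, Q, vals = schur(A)              # computed once
E = schurparlett(exp, T, Q, vals)
S = schurparlett(sin, T, Q, vals)  # reuses the same decomposition
```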
Compared to the algorithm described in the paper, the implementation has a couple of performance improvements: the Parlett recurrence is implemented in a cache-oblivious fashion, and when A is real the algorithm works mostly in real arithmetic (in which case f(conj(x)) == conj(f(x)) is assumed).
Rational Krylov can be used to compute f(A)*b for sparse matrices:
ratkrylov(f, A, b, mmax=100, tol=1e-13, Z=Vector{Complex128}(0))
The poles for the rational Krylov decomposition are given by the AAA rational approximation of f, controlled by the parameters mmax, tol, and Z; mmax effectively bounds the size of the Krylov space. When the sample set Z is not provided, f is sampled on the disk centered at 0 with radius min(norm(A, 1), norm(A, Inf), vecnorm(A)). If you want to specify the poles manually, do:
ratkrylov(f, A, b, p)
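A minimal sketch of both call styles, in the Julia 0.6-era syntax this package targets (the sparse matrix, right-hand side, and poles are purely illustrative):

```julia
using MatFun

n = 1000
A = spdiagm(collect(linspace(-100.0, -1.0, n)))  # example sparse matrix
b = randn(n)
y = ratkrylov(exp, A, b)                         # poles chosen via AAA
p = complex(collect(linspace(-50.0, -5.0, 10)))  # hand-picked poles, purely illustrative
y2 = ratkrylov(exp, A, b, p)
```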
The rational Krylov decomposition can be computed with:
V, K, H = ratkrylov(A, b, p)
Again, when A is real, operations are done in real arithmetic and f(conj(x)) == conj(f(x)) is assumed.
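For example, if the returned matrices satisfy the standard rational Arnoldi relation A*V*K ≈ V*H (an assumption here; check the thesis for the exact convention used):

```julia
V, K, H = ratkrylov(A, b, p)
vecnorm(A*(V*K) - V*H)  # should be near machine precision if the relation above holds
```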
The package also includes the AAA algorithm for rational approximation:
r, pol, res, zer, z, f, w, errvec = aaa(func, Z, tol=1e-13, mmax=100)
The algorithm is as described in the original paper by Nakatsukasa, Sète, and Trefethen.
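For example, approximating exp on [-1, 1] (a sketch; r is assumed to be returned as a callable, mirroring the original MATLAB interface):

```julia
Z = collect(linspace(-1.0, 1.0, 1000))  # sample points
r, pol, res, zer, z, f, w, errvec = aaa(exp, Z)
abs(r(0.3) - exp(0.3))                  # should be below the default tol of 1e-13
```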
To install the package and its TaylorSeries dependency, do:
Pkg.add("TaylorSeries")
Pkg.clone("https://github.com/pinkgopher/MatFun.jl.git")
Before using the package, make sure to test it with:
Pkg.test("MatFun")