diff --git a/dev/DCsOPF/index.html b/dev/DCsOPF/index.html
index affb251..7a31aa2 100644
--- a/dev/DCsOPF/index.html
+++ b/dev/DCsOPF/index.html
@@ -9,11 +9,11 @@
Nl, N = size(A, 1), size(A, 2)
Bbr = diagm(0 => -(2 .+ 10 * rand(Nl))) # line parameters
Ψ = [zeros(Nl) -Bbr * A[:, 2:end] * inv(A[:, 2:end]' * Bbr * A[:, 2:end])] # PTDF matrix
5×4 Matrix{Float64}:
- 0.0  -0.558172  -0.238942  -0.137026
- 0.0  -0.138155  -0.237974  -0.136471
- 0.0  -0.303674  -0.523084  -0.726503
- 0.0  -0.441828   0.238942   0.137026
- 0.0   0.303674   0.523084  -0.273497

Now we can continue with the remaining ingredients that specify our system:

Cp, Cd = [1 0; 0 0; 0 0; 0 1], [0 0; 1 0; 0 1; 0 0] # book-keeping
+ 0.0  -0.780549   -0.291588  -0.143533
+ 0.0  -0.0948995  -0.306346  -0.150798
+ 0.0  -0.124552   -0.402066  -0.705668
+ 0.0  -0.219451    0.291588   0.143533
+ 0.0   0.124552    0.402066  -0.294332

Now we can continue with the remaining ingredients that specify our system:

Cp, Cd = [1 0; 0 0; 0 0; 0 1], [0 0; 1 0; 0 1; 0 0] # book-keeping
 Ng, Nd = size(Cp, 2), size(Cd, 2)
 c = 4 .+ 10 * rand(Ng) # cost function parameters
 λp, λl = 1.6 * ones(Ng), 1.6 * ones(Nl) # lambdas for chance constraint reformulations
@@ -43,4 +43,4 @@
            [1 / λl[i] * (mean(pl[i, :], mop) - plmin[i]); buildSOC(pl[i, :], mop)] in SecondOrderCone())
 @objective(model, Min, sum(mean(p[i, :], mop) * c[i] for i in 1:Ng))
 optimize!(model) # here we go
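For orientation, the SecondOrderCone constraints above are the standard moment-based reformulation of a chance constraint. Assuming that buildSOC (defined earlier in this tutorial, outside this excerpt) returns a vector whose 2-norm equals the standard deviation of the respective PCE, a constraint of the form [1/λ * (mean - min); buildSOC(...)] in SecondOrderCone() states that

\[\mathrm{std}(p_{l,i}) \leq \frac{1}{\lambda_{l,i}} \left( \mathrm{mean}(p_{l,i}) - p_{l,i}^{\min} \right) \quad \Longleftrightarrow \quad \mathrm{mean}(p_{l,i}) - \lambda_{l,i} \, \mathrm{std}(p_{l,i}) \geq p_{l,i}^{\min},\]

i.e. the branch flow stays above its lower bound with a margin of $\lambda_{l,i}$ standard deviations; the upper-bound and generator constraints follow the same pattern.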

Let's extract the numerical values of the optimal solution.

@assert termination_status(model)==MOI.OPTIMAL "Model not solved to optimality."
-psol, plsol, obj = value.(p), value.(pl), objective_value(model)

Great, we've solved the problem. How do we now make sense of the solution? For instance, we can look at the moments of the generated power:

p_moments = [[mean(psol[i, :], mop) var(psol[i, :], mop)] for i in 1:Ng]

Similarly, we can study the moments for the branch flows:

pbr_moments = [[mean(plsol[i, :], mop) var(plsol[i, :], mop)] for i in 1:Nl]
+psol, plsol, obj = value.(p), value.(pl), objective_value(model)

Great, we've solved the problem. How do we now make sense of the solution? For instance, we can look at the moments of the generated power:

p_moments = [[mean(psol[i, :], mop) var(psol[i, :], mop)] for i in 1:Ng]

Similarly, we can study the moments for the branch flows:

pbr_moments = [[mean(plsol[i, :], mop) var(plsol[i, :], mop)] for i in 1:Nl]
diff --git a/dev/assets/Manifest.toml b/dev/assets/Manifest.toml
index 8e877a6..557ffbb 100644
--- a/dev/assets/Manifest.toml
+++ b/dev/assets/Manifest.toml
@@ -115,9 +115,9 @@ uuid = "56f22d72-fd6d-98f1-02f0-08ddc0907c33"
[[deps.BandedMatrices]]
deps = ["ArrayLayouts", "FillArrays", "LinearAlgebra", "PrecompileTools"]
-git-tree-sha1 = "67bcff3f50026b6fa952721525d3a04f0570d432"
+git-tree-sha1 = "06a2a94d5a4979c36cc7a3c28d70800f448ae5bb"
uuid = "aae01518-5342-5314-be14-df237901396f"
-version = "1.2.1"
+version = "1.3.0"
weakdeps = ["SparseArrays"]
[deps.BandedMatrices.extensions]
@@ -128,9 +128,9 @@ uuid = "2a0f44e3-6c83-55bd-87e4-b1978d98bd5f"
[[deps.BenchmarkTools]]
deps = ["JSON", "Logging", "Printf", "Profile", "Statistics", "UUIDs"]
-git-tree-sha1 = "d9a9701b899b30332bbcb3e1679c41cce81fb0e8"
+git-tree-sha1 = "f1f03a9fa24271160ed7e73051fba3c1a759b53f"
uuid = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
-version = "1.3.2"
+version = "1.4.0"
[[deps.BitFlags]]
git-tree-sha1 = "2dc09997850d68179b69dafb58ae806167a32b1b"
@@ -550,9 +550,9 @@ uuid = "7b1f6079-737a-58dc-b8bc-7a2ca5c1b5ee"
[[deps.FillArrays]]
deps = ["LinearAlgebra", "Random"]
-git-tree-sha1 = "28e4e9c4b7b162398ec8004bdabe9a90c78c122d"
+git-tree-sha1 = "fdd015769934644858b4bcc69a03bb06f4e31357"
uuid = "1a297f60-69ca-5386-bcde-b61e274b549b"
-version = "1.8.0"
+version = "1.9.0"
weakdeps = ["PDMats", "SparseArrays", "Statistics"]
[deps.FillArrays.extensions]
@@ -748,10 +748,10 @@ uuid = "18e54dd8-cb9d-406c-a71d-865a43cbb235"
version = "0.1.2"
[[deps.IntelOpenMP_jll]]
-deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
-git-tree-sha1 = "ad37c091f7d7daf900963171600d7c1c5c3ede32"
+deps = ["Artifacts", "JLLWrappers", "Libdl"]
+git-tree-sha1 = "31d6adb719886d4e32e38197aae466e98881320b"
uuid = "1d5cc7b8-4909-519e-a0f8-d0f5ad9712d0"
-version = "2023.2.0+0"
+version = "2024.0.0+0"
[[deps.InteractiveUtils]]
deps = ["Markdown"]
@@ -799,9 +799,9 @@ version = "2.1.91+0"
[[deps.JuMP]]
deps = ["LinearAlgebra", "MacroTools", "MathOptInterface", "MutableArithmetics", "OrderedCollections", "Printf", "SnoopPrecompile", "SparseArrays"]
-git-tree-sha1 = "25b2fcda4d455b6f93ac753730d741340ba4a4fe"
+git-tree-sha1 = "cd161958e8b47f9696a6b03f563afb4e5fe8f703"
uuid = "4076af6c-e467-56ae-b986-b466b2749572"
-version = "1.16.0"
+version = "1.17.0"
[deps.JuMP.extensions]
JuMPDimensionalDataExt = "DimensionalData"
diff --git a/dev/chi_squared_k1/index.html b/dev/chi_squared_k1/index.html
index f41cb3e..0c97d04 100644
--- a/dev/chi_squared_k1/index.html
+++ b/dev/chi_squared_k1/index.html
@@ -58,300 +58,288 @@ plot!(t, ρ.(t), w = 4)
[SVG figure markup omitted: plot of the χ² (k = 1) density ρ(t)]
diff --git a/dev/chi_squared_k_greater1/index.html b/dev/chi_squared_k_greater1/index.html
index f48f9a2..a76d6b0 100644
--- a/dev/chi_squared_k_greater1/index.html
+++ b/dev/chi_squared_k_greater1/index.html
@@ -103,155 +103,175 @@ plot!(t, ρ.(t), w = 4)
[SVG figure markup omitted: plot of the χ² (k > 1) density ρ(t)]
diff --git a/dev/functions/index.html b/dev/functions/index.html
index 1644bb8..e65f78a 100644
--- a/dev/functions/index.html
+++ b/dev/functions/index.html
@@ -3,14 +3,14 @@ function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-90474609-3', {'page_path': location.pathname + location.search + location.hash});
-

Functions

Note

The core interface of all essential functions is not dependent on specialized types such as AbstractOrthoPoly. Having said that, for exactly those essential functions, there exist overloaded functions that accept specialized types such as AbstractOrthoPoly as arguments.

Too abstract? For example, the function evaluate that evaluates a polynomial of degree n at points x has the core interface

    evaluate(n::Int,x::Array{<:Real},a::Vector{<:Real},b::Vector{<:Real})

where a and b are the vectors of recurrence coefficients. For simplicity, there also exists the interface

    evaluate(n::Int64,x::Vector{<:Real},op::AbstractOrthoPoly)

So fret not upon the encounter of multiple-dispatched versions of the same thing. It's there to simplify your life.
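As a minimal sketch of the two interfaces (assuming the canonical Uniform01OrthoPoly constructor used later in this documentation, and the fields op.α, op.β that the overloads dispatch on):

    using PolyChaos
    op = Uniform01OrthoPoly(4)        # canonical basis of degree 4
    x = [0.1, 0.5, 0.9]
    evaluate(2, x, op.α, op.β)        # core interface: recurrence coefficients
    evaluate(2, x, op)                # convenience interface: same result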

The idea of this approach is to make it simpler for others to copy and paste code snippets and use them in their own work.

List of all functions in PolyChaos.

Recurrence Coefficients for Monic Orthogonal Polynomials

The functions below provide analytic expressions for the recurrence coefficients of common orthogonal polynomials. All of these provide monic orthogonal polynomials relative to the weights.

Note

The number N of recurrence coefficients has to be positive for all functions below.

PolyChaos.r_scaleFunction
r_scale(c::Real,β::AbstractVector{<:Real},α::AbstractVector{<:Real})

Given the recursion coefficients (α,β) for a system of orthogonal polynomials that are orthogonal with respect to some positive weight $m(t)$, this function returns the recursion coefficients (α_,β_) for the scaled measure $c m(t)$ for some positive $c$.

source
PolyChaos.rm_computeFunction
rm_compute(weight::Function,lb::Real,ub::Real,Npoly::Int=4,Nquad::Int=10;quadrature::Function=clenshaw_curtis)

Given a positive weight function with domain (lb,ub), i.e. a function $w: [lb, ub ] \rightarrow \mathbb{R}_{\geq 0}$, this function creates Npoly recursion coefficients (α,β).

The keyword quadrature specifies what quadrature rule is being used.

source
PolyChaos.rm_logisticFunction
rm_logistic(N::Int)

Creates N recurrence coefficients for monic polynomials that are orthogonal on $(-\infty,\infty)$ relative to $w(t) = \frac{\mathrm{e}^{-t}}{(1 - \mathrm{e}^{-t})^2}$

source
PolyChaos.rm_hermiteFunction
rm_hermite(N::Int,mu::Real)
-rm_hermite(N::Int)

Creates N recurrence coefficients for monic generalized Hermite polynomials that are orthogonal on $(-\infty,\infty)$ relative to $w(t) = |t|^{2 \mu} \mathrm{e}^{-t^2}$

The call rm_hermite(N) is the same as rm_hermite(N,0).

source
PolyChaos.rm_hermite_probFunction
rm_hermite_prob(N::Int)

Creates N recurrence coefficients for monic probabilists' Hermite polynomials that are orthogonal on $(-\infty,\infty)$ relative to $w(t) = \mathrm{e}^{-0.5t^2}$

source
PolyChaos.rm_laguerreFunction
rm_laguerre(N::Int,a::Real)
-rm_laguerre(N::Int)

Creates N recurrence coefficients for monic generalized Laguerre polynomials that are orthogonal on $(0,\infty)$ relative to $w(t) = t^a \mathrm{e}^{-t}$.

The call rm_laguerre(N) is the same as rm_laguerre(N,0).

source
PolyChaos.rm_legendreFunction
rm_legendre(N::Int)

Creates N recurrence coefficients for monic Legendre polynomials that are orthogonal on $(-1,1)$ relative to $w(t) = 1$.

source
PolyChaos.rm_legendre01Function
rm_legendre01(N::Int)

Creates N recurrence coefficients for monic Legendre polynomials that are orthogonal on $(0,1)$ relative to $w(t) = 1$.

source
PolyChaos.rm_jacobiFunction
rm_jacobi(N::Int,a::Real,b::Real)
+

Functions

Note

The core interface of all essential functions is not dependent on specialized types such as AbstractOrthoPoly. Having said that, for exactly those essential functions, there exist overloaded functions that accept specialized types such as AbstractOrthoPoly as arguments.

Too abstract? For example, the function evaluate that evaluates a polynomial of degree n at points x has the core interface

    evaluate(n::Int,x::Array{<:Real},a::Vector{<:Real},b::Vector{<:Real})

where a and b are the vectors of recurrence coefficients. For simplicity, there also exists the interface

    evaluate(n::Int64,x::Vector{<:Real},op::AbstractOrthoPoly)

So fret not upon the encounter of multiple-dispatched versions of the same thing. It's there to simplify your life.

The idea of this approach is to make it simpler for others to copy and paste code snippets and use them in their own work.

List of all functions in PolyChaos.

Recurrence Coefficients for Monic Orthogonal Polynomials

The functions below provide analytic expressions for the recurrence coefficients of common orthogonal polynomials. All of these provide monic orthogonal polynomials relative to the weights.

Note

The number N of recurrence coefficients has to be positive for all functions below.

PolyChaos.r_scaleFunction
r_scale(c::Real,β::AbstractVector{<:Real},α::AbstractVector{<:Real})

Given the recursion coefficients (α,β) for a system of orthogonal polynomials that are orthogonal with respect to some positive weight $m(t)$, this function returns the recursion coefficients (α_,β_) for the scaled measure $c m(t)$ for some positive $c$.

source
PolyChaos.rm_computeFunction
rm_compute(weight::Function,lb::Real,ub::Real,Npoly::Int=4,Nquad::Int=10;quadrature::Function=clenshaw_curtis)

Given a positive weight function with domain (lb,ub), i.e. a function $w: [lb, ub ] \rightarrow \mathbb{R}_{\geq 0}$, this function creates Npoly recursion coefficients (α,β).

The keyword quadrature specifies what quadrature rule is being used.

source
PolyChaos.rm_logisticFunction
rm_logistic(N::Int)

Creates N recurrence coefficients for monic polynomials that are orthogonal on $(-\infty,\infty)$ relative to $w(t) = \frac{\mathrm{e}^{-t}}{(1 - \mathrm{e}^{-t})^2}$

source
PolyChaos.rm_hermiteFunction
rm_hermite(N::Int,mu::Real)
+rm_hermite(N::Int)

Creates N recurrence coefficients for monic generalized Hermite polynomials that are orthogonal on $(-\infty,\infty)$ relative to $w(t) = |t|^{2 \mu} \mathrm{e}^{-t^2}$

The call rm_hermite(N) is the same as rm_hermite(N,0).

source
PolyChaos.rm_hermite_probFunction
rm_hermite_prob(N::Int)

Creates N recurrence coefficients for monic probabilists' Hermite polynomials that are orthogonal on $(-\infty,\infty)$ relative to $w(t) = \mathrm{e}^{-0.5t^2}$

source
PolyChaos.rm_laguerreFunction
rm_laguerre(N::Int,a::Real)
+rm_laguerre(N::Int)

Creates N recurrence coefficients for monic generalized Laguerre polynomials that are orthogonal on $(0,\infty)$ relative to $w(t) = t^a \mathrm{e}^{-t}$.

The call rm_laguerre(N) is the same as rm_laguerre(N,0).

source
PolyChaos.rm_legendreFunction
rm_legendre(N::Int)

Creates N recurrence coefficients for monic Legendre polynomials that are orthogonal on $(-1,1)$ relative to $w(t) = 1$.

source
PolyChaos.rm_legendre01Function
rm_legendre01(N::Int)

Creates N recurrence coefficients for monic Legendre polynomials that are orthogonal on $(0,1)$ relative to $w(t) = 1$.

source
PolyChaos.rm_jacobiFunction
rm_jacobi(N::Int,a::Real,b::Real)
 rm_jacobi(N::Int,a::Real)
-rm_jacobi(N::Int)

Creates N recurrence coefficients for monic Jacobi polynomials that are orthogonal on $(-1,1)$ relative to $w(t) = (1-t)^a (1+t)^b$.

The call rm_jacobi(N,a) is the same as rm_jacobi(N,a,a) and rm_jacobi(N) the same as rm_jacobi(N,0,0).

source
PolyChaos.rm_jacobi01Function
rm_jacobi01(N::Int,a::Real,b::Real)
+rm_jacobi(N::Int)

Creates N recurrence coefficients for monic Jacobi polynomials that are orthogonal on $(-1,1)$ relative to $w(t) = (1-t)^a (1+t)^b$.

The call rm_jacobi(N,a) is the same as rm_jacobi(N,a,a) and rm_jacobi(N) the same as rm_jacobi(N,0,0).

source
PolyChaos.rm_jacobi01Function
rm_jacobi01(N::Int,a::Real,b::Real)
 rm_jacobi01(N::Int,a::Real)
-rm_jacobi01(N::Int)

Creates N recurrence coefficients for monic Jacobi polynomials that are orthogonal on $(0,1)$ relative to $w(t) = (1-t)^a t^b$.

The call rm_jacobi01(N,a) is the same as rm_jacobi01(N,a,a) and rm_jacobi01(N) the same as rm_jacobi01(N,0,0).

source
PolyChaos.rm_meixner_pollaczekFunction
rm_meixner_pollaczek(N::Int,lambda::Real,phi::Real)
-rm_meixner_pollaczek(N::Int,lambda::Real)

Creates N recurrence coefficients for monic Meixner-Pollaczek polynomials with parameters λ and ϕ. These are orthogonal on $[-\infty,\infty]$ relative to the weight function $w(t)=(2 \pi)^{-1} \exp{(2 \phi-\pi)t} |\Gamma(\lambda+ i t)|^2$.

The call rm_meixner_pollaczek(n,lambda) is the same as rm_meixner_pollaczek(n,lambda,pi/2).

source
PolyChaos.stieltjesFunction
stieltjes(N::Int,nodes_::AbstractVector{<:Real},weights_::AbstractVector{<:Real};removezeroweights::Bool=true)

Stieltjes procedure: given the nodes and weights, the function generates the first N recurrence coefficients of the corresponding discrete orthogonal polynomials.

Set the Boolean removezeroweights to true if zero weights should be removed.

source
PolyChaos.lanczosFunction
lanczos(N::Int,nodes::AbstractVector{<:Real},weights::AbstractVector{<:Real};removezeroweights::Bool=true)

Lanczos procedure: given the nodes and weights, the function generates the first N recurrence coefficients of the corresponding discrete orthogonal polynomials.

Set the Boolean removezeroweights to true if zero weights should be removed.

The script is adapted from the routine RKPW in W.B. Gragg and W.J. Harrod, The numerically stable reconstruction of Jacobi matrices from spectral data, Numer. Math. 44 (1984), 317-335.

source
PolyChaos.mcdiscretizationFunction
mcdiscretization(N::Int,quads::Vector{},discretemeasure::AbstractMatrix{<:Real}=zeros(0,2);discretization::Function=stieltjes,Nmax::Integer=300,ε::Float64=1e-8,gaussquad::Bool=false)

This routine returns $N$ recurrence coefficients of the polynomials that are orthogonal relative to a weight function $w$ that is decomposed as a sum of $m$ weights $w_i$ with domains $[a_i,b_i]$ for $i=1,\dots,m$,

\[w(t) = \sum_{i}^{m} w_i(t) \quad \text{with } \operatorname{dom}(w_i) = [a_i, b_i].\]

For each weight $w_i$ and its domain $[a_i, b_i]$ the function mcdiscretization() expects a quadrature rule of the form nodes::AbstractVector{<:Real}, weights::AbstractVector{<:Real} = myquadi(N::Int), all of which are stacked in the parameter quads: quads = [ myquad1, ..., myquadm ]. If the weight function has a discrete part (specified by discretemeasure), it is added to the discretized continuous weight function.

The function mcdiscretization() performs a sequence of discretizations of the given weight $w(t)$, each discretization being followed by an application of the Stieltjes or Lanczos procedure (keyword discretization in [stieltjes, lanczos]) to produce approximations to the desired recurrence coefficients. The function applies to each subinterval $i$ an N-point quadrature rule (the $i$th entry of quads) to discretize the weight function $w_i$ on that subinterval. The procedure is successful if it converges to within a prescribed accuracy ε before N reaches its maximum allowed value Nmax; if it does not converge, the function throws an error.

The keyword gaussquad should be set to true if Gauss quadrature rules are available for all $m$ weights $w_i(t)$ with $i = 1, \dots, m$.

For further information, please see W. Gautschi, "Orthogonal Polynomials: Computation and Approximation", Section 2.2.4.

source

Show Orthogonal Polynomials

To get a human-readable output of the orthogonal polynomials, there is the function showpoly

PolyChaos.showpolyFunction
showpoly(coeffs::Vector{<:Real};sym::String,digits::Integer)

Show the monic polynomial with coefficients coeffs in a human-readable way. The keyword sym sets the name of the variable, and digits controls the number of shown digits.

julia> using PolyChaos
+rm_jacobi01(N::Int)

Creates N recurrence coefficients for monic Jacobi polynomials that are orthogonal on $(0,1)$ relative to $w(t) = (1-t)^a t^b$.

The call rm_jacobi01(N,a) is the same as rm_jacobi01(N,a,a) and rm_jacobi01(N) the same as rm_jacobi01(N,0,0).

source
PolyChaos.rm_meixner_pollaczekFunction
rm_meixner_pollaczek(N::Int,lambda::Real,phi::Real)
+rm_meixner_pollaczek(N::Int,lambda::Real)

Creates N recurrence coefficients for monic Meixner-Pollaczek polynomials with parameters λ and ϕ. These are orthogonal on $[-\infty,\infty]$ relative to the weight function $w(t)=(2 \pi)^{-1} \exp{(2 \phi-\pi)t} |\Gamma(\lambda+ i t)|^2$.

The call rm_meixner_pollaczek(n,lambda) is the same as rm_meixner_pollaczek(n,lambda,pi/2).

source
PolyChaos.stieltjesFunction
stieltjes(N::Int,nodes_::AbstractVector{<:Real},weights_::AbstractVector{<:Real};removezeroweights::Bool=true)

Stieltjes procedure: given the nodes and weights, the function generates the first N recurrence coefficients of the corresponding discrete orthogonal polynomials.

Set the Boolean removezeroweights to true if zero weights should be removed.

source
PolyChaos.lanczosFunction
lanczos(N::Int,nodes::AbstractVector{<:Real},weights::AbstractVector{<:Real};removezeroweights::Bool=true)

Lanczos procedure: given the nodes and weights, the function generates the first N recurrence coefficients of the corresponding discrete orthogonal polynomials.

Set the Boolean removezeroweights to true if zero weights should be removed.

The script is adapted from the routine RKPW in W.B. Gragg and W.J. Harrod, The numerically stable reconstruction of Jacobi matrices from spectral data, Numer. Math. 44 (1984), 317-335.

source
PolyChaos.mcdiscretizationFunction
mcdiscretization(N::Int,quads::Vector{},discretemeasure::AbstractMatrix{<:Real}=zeros(0,2);discretization::Function=stieltjes,Nmax::Integer=300,ε::Float64=1e-8,gaussquad::Bool=false)

This routine returns $N$ recurrence coefficients of the polynomials that are orthogonal relative to a weight function $w$ that is decomposed as a sum of $m$ weights $w_i$ with domains $[a_i,b_i]$ for $i=1,\dots,m$,

\[w(t) = \sum_{i}^{m} w_i(t) \quad \text{with } \operatorname{dom}(w_i) = [a_i, b_i].\]

For each weight $w_i$ and its domain $[a_i, b_i]$ the function mcdiscretization() expects a quadrature rule of the form nodes::AbstractVector{<:Real}, weights::AbstractVector{<:Real} = myquadi(N::Int), all of which are stacked in the parameter quads: quads = [ myquad1, ..., myquadm ]. If the weight function has a discrete part (specified by discretemeasure), it is added to the discretized continuous weight function.

The function mcdiscretization() performs a sequence of discretizations of the given weight $w(t)$, each discretization being followed by an application of the Stieltjes or Lanczos procedure (keyword discretization in [stieltjes, lanczos]) to produce approximations to the desired recurrence coefficients. The function applies to each subinterval $i$ an N-point quadrature rule (the $i$th entry of quads) to discretize the weight function $w_i$ on that subinterval. The procedure is successful if it converges to within a prescribed accuracy ε before N reaches its maximum allowed value Nmax; if it does not converge, the function throws an error.

The keyword gaussquad should be set to true if Gauss quadrature rules are available for all $m$ weights $w_i(t)$ with $i = 1, \dots, m$.

For further information, please see W. Gautschi "Orthogonal Polynomials: Approximation and Computation", Section 2.2.4.

source

Show Orthogonal Polynomials

To get a human-readable output of the orthogonal polynomials, there is the function showpoly

PolyChaos.showpolyFunction
showpoly(coeffs::Vector{<:Real};sym::String,digits::Integer)

Show the monic polynomial with coefficients coeffs in a human-readable way. The keyword sym sets the name of the variable, and digits controls the number of shown digits.

julia> using PolyChaos
 
 julia> showpoly([1.2, 2.3, 3.4456])
 x^3 + 3.45x^2 + 2.3x + 1.2
@@ -44,7 +44,7 @@
 t^2 - 1.0
 t^3 - 3.0t
 t^4 - 6.0t^2 + 3.0
-t^5 - 10.0t^3 + 15.0t

Thanks @pfitzseb for providing this functionality.

source

In case you want to see the entire basis, just use showbasis

In case you want to see the entire basis, just use showbasis

PolyChaos.showbasisFunction
showbasis(α::Vector{<:Real},β::Vector{<:Real};sym::String,digits::Integer)

Show all basis polynomials given the recurrence coefficients α, β. The keyword sym sets the name of the variable, and digits controls the number of shown digits.

julia> using PolyChaos
 
 julia> α, β = rm_hermite(5);
 
@@ -63,8 +63,8 @@
 x
 x^2 - 0.33
 x^3 - 0.6x
-x^4 - 0.86x^2 + 0.09
source

Both of these functions make extensive use of

PolyChaos.rec2coeffFunction
rec2coeff(deg::Int,a::Vector{<:Real},b::Vector{<:Real})
-rec2coeff(a,b) = rec2coeff(length(a),a,b)

Get the coefficients of the orthogonal polynomial of degree up to deg specified by its recurrence coefficients (a,b). The function returns the values $c_i^{(k)}$ from

\[p_k (t) = t^d + \sum_{i=0}^{k-1} c_i t^i,\]

where $k$ runs from 1 to deg.

The call rec2coeff(a,b) outputs all possible recurrence coefficients given (a,b).

source
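A small sketch, assuming the probabilists' Hermite recurrence coefficients from rm_hermite_prob documented above:

    using PolyChaos
    a, b = rm_hermite_prob(4)     # recurrence coefficients
    rec2coeff(3, a, b)            # coefficients of the monic polynomials of degree 1, 2, 3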

Evaluate Orthogonal Polynomials

PolyChaos.evaluateFunction

Univariate

evaluate(n::Int,x::Array{<:Real},a::AbstractVector{<:Real},b::AbstractVector{<:Real})
+x^4 - 0.86x^2 + 0.09
source

Both of these functions make extensive use of

PolyChaos.rec2coeffFunction
rec2coeff(deg::Int,a::Vector{<:Real},b::Vector{<:Real})
+rec2coeff(a,b) = rec2coeff(length(a),a,b)

Get the coefficients of the orthogonal polynomial of degree up to deg specified by its recurrence coefficients (a,b). The function returns the values $c_i^{(k)}$ from

\[p_k (t) = t^d + \sum_{i=0}^{k-1} c_i t^i,\]

where $k$ runs from 1 to deg.

The call rec2coeff(a,b) outputs all possible recurrence coefficients given (a,b).

source

Evaluate Orthogonal Polynomials

PolyChaos.evaluateFunction

Univariate

evaluate(n::Int,x::Array{<:Real},a::AbstractVector{<:Real},b::AbstractVector{<:Real})
 evaluate(n::Int,x::Real,a::AbstractVector{<:Real},b::AbstractVector{<:Real})
 evaluate(n::Int,x::AbstractVector{<:Real},op::AbstractOrthoPoly)
 evaluate(n::Int,x::Real,op::AbstractOrthoPoly)

Evaluate the n-th univariate basis polynomial at point(s) x. The function is multiple-dispatched to facilitate its use with the composite type AbstractOrthoPoly.

If several basis polynomials (stored in ns) are to be evaluated at points x, then call

evaluate(ns::AbstractVector{<:Int},x::AbstractVector{<:Real},op::AbstractOrthoPoly) = evaluate(ns,x,op.α,op.β)
@@ -73,39 +73,39 @@
 evaluate(n::AbstractVector{<:Int},x::AbstractVector{<:Real},a::Vector{<:AbstractVector{<:Real}},b::Vector{<:AbstractVector{<:Real}})
 evaluate(n::AbstractVector{<:Int},x::AbstractMatrix{<:Real},op::MultiOrthoPoly)
 evaluate(n::AbstractVector{<:Int},x::AbstractVector{<:Real},op::MultiOrthoPoly)

Evaluate the n-th p-variate basis polynomial at point(s) x. The function is multiple-dispatched to facilitate its use with the composite type MultiOrthoPoly.

If several basis polynomials are to be evaluated at points x, then call

evaluate(ind::AbstractMatrix{<:Int},x::AbstractMatrix{<:Real},a::Vector{<:AbstractVector{<:Real}},b::Vector{<:AbstractVector{<:Real}})
-evaluate(ind::AbstractMatrix{<:Int},x::AbstractMatrix{<:Real},op::MultiOrthoPoly)

where ind is a matrix of multi-indices.

If all basis polynomials are to be evaluated at points x, then call

evaluate(x::AbstractMatrix{<:Real},mop::MultiOrthoPoly) = evaluate(mop.ind,x,mop)

which returns an array of dimensions (mop.dim,size(x,1)).

Note
  • n is a multi-index
  • length(n) == p, i.e. a p-variate basis polynomial
  • size(x) = (N,p), where N is the number of points
  • size(a) == size(b) == p.
source
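To make the multivariate conventions concrete, here is a sketch; the MultiOrthoPoly constructor taking a vector of univariate bases and a total degree is an assumption not shown in the excerpt above:

    using PolyChaos
    ops = [GaussOrthoPoly(2), Uniform01OrthoPoly(2)]
    mop = MultiOrthoPoly(ops, 2)      # 2-variate basis of total degree 2
    x = [0.3 0.7; -1.0 0.5]           # N = 2 points, size(x) = (N, p) with p = 2
    evaluate([1, 1], x, mop)          # basis polynomial with multi-index (1, 1) at both points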

Scalar Products

PolyChaos.computeSP2Function
computeSP2(n::Integer,β::AbstractVector{<:Real})
+evaluate(ind::AbstractMatrix{<:Int},x::AbstractMatrix{<:Real},op::MultiOrthoPoly)

where ind is a matrix of multi-indices.

If all basis polynomials are to be evaluated at points x, then call

evaluate(x::AbstractMatrix{<:Real},mop::MultiOrthoPoly) = evaluate(mop.ind,x,mop)

which returns an array of dimensions (mop.dim,size(x,1)).

Note
  • n is a multi-index
  • length(n) == p, i.e. a p-variate basis polynomial
  • size(x) = (N,p), where N is the number of points
  • size(a) == size(b) == p.
source

Scalar Products

PolyChaos.computeSP2Function
computeSP2(n::Integer,β::AbstractVector{<:Real})
 computeSP2(n::Integer,op::AbstractOrthoPoly) = computeSP2(n,op.β)
-computeSP2(op::AbstractOrthoPoly) = computeSP2(op.deg,op.β)

Computes the n regular scalar products aka 2-norms of the orthogonal polynomials, namely

\[\|ϕ_i\|^2 = \langle \phi_i,\phi_i\rangle \quad \forall i \in \{ 0,\dots,n \}.\]

Notice that only the values of β of the recurrence coefficients (α,β) are required. The computation is based on equation (1.3.7) from Gautschi, W. "Orthogonal Polynomials: Computation and Approximation". Whenever there exists an analytical expression for β, this function should be used.

The function is multiple-dispatched to facilitate its use with AbstractOrthoPoly.

source
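For example, with the canonical basis on (0, 1) used elsewhere on this page:

    using PolyChaos
    op = Uniform01OrthoPoly(3)
    computeSP2(op)     # squared norms ⟨ϕ_i, ϕ_i⟩ for i = 0, …, 3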
PolyChaos.computeSPFunction

Univariate

computeSP(a::AbstractVector{<:Integer},α::AbstractVector{<:Real},β::AbstractVector{<:Real},nodes::AbstractVector{<:Real},weights::AbstractVector{<:Real};issymmetric::Bool=false)
+computeSP2(op::AbstractOrthoPoly) = computeSP2(op.deg,op.β)

Computes the n regular scalar products aka 2-norms of the orthogonal polynomials, namely

\[\|ϕ_i\|^2 = \langle \phi_i,\phi_i\rangle \quad \forall i \in \{ 0,\dots,n \}.\]

Notice that only the values of β of the recurrence coefficients (α,β) are required. The computation is based on equation (1.3.7) from Gautschi, W. "Orthogonal Polynomials: Computation and Approximation". Whenever there exists an analytical expression for β, this function should be used.

The function is multiple-dispatched to facilitate its use with AbstractOrthoPoly.

source
PolyChaos.computeSPFunction

Univariate

computeSP(a::AbstractVector{<:Integer},α::AbstractVector{<:Real},β::AbstractVector{<:Real},nodes::AbstractVector{<:Real},weights::AbstractVector{<:Real};issymmetric::Bool=false)
 computeSP(a::AbstractVector{<:Integer},op::AbstractOrthoPoly;issymmetric=issymmetric(op))

Multivariate

computeSP( a::AbstractVector{<:Integer},
            α::AbstractVector{<:AbstractVector{<:Real}},β::AbstractVector{<:AbstractVector{<:Real}},
            nodes::AbstractVector{<:AbstractVector{<:Real}},weights::AbstractVector{<:AbstractVector{<:Real}},
            ind::AbstractMatrix{<:Integer};
            issymmetric::BitArray=falses(length(α)))
 computeSP(a::AbstractVector{<:Integer},op::AbstractVector,ind::AbstractMatrix{<:Integer})
-computeSP(a::AbstractVector{<:Integer},mOP::MultiOrthoPoly)

Computes the scalar product

\[\langle \phi_{a_1},\phi_{a_2},\cdots,\phi_{a_n} \rangle,\]

where n = length(a). This requires providing the recurrence coefficients (α,β) and the quadrature rule (nodes,weights), as well as the multi-index ind. If indicated via the keyword issymmetric, symmetry of the weight function is exploited. All computations of the multivariate scalar products reduce to efficient computations of the univariate scalar products; mathematically, this follows from Fubini's theorem.

The function is dispatched to facilitate its use with AbstractOrthoPoly and its quadrature rule Quad.

Note
  • Zero entries of $a$ are removed automatically to simplify computations, which follows from

\[\langle \phi_i, \phi_j, \phi_0,\cdots,\phi_0 \rangle = \langle \phi_i, \phi_j \rangle,\]

because $\phi_0 = 1$.

  • It is checked whether enough quadrature points are supplied to solve the integral exactly.
source
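For instance, a triple product of basis polynomials can be computed directly from the composite type (a sketch; the quadrature rule attached to Uniform01OrthoPoly by default is assumed to be exact enough for this low-degree integrand):

    using PolyChaos
    op = Uniform01OrthoPoly(3)
    computeSP([1, 1, 2], op)      # scalar product ⟨ϕ_1, ϕ_1, ϕ_2⟩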

Quadrature Rules

PolyChaos.quadgpFunction
quadgp(weight::Function,lb::Real,ub::Real,N::Int=10;quadrature::Function=clenshaw_curtis,bnd::Float64=Inf)

General-purpose quadrature based on Gautschi, "Orthogonal Polynomials: Computation and Approximation", Section 2.2.2, pp. 93-95.

Compute the N-point quadrature rule for weight with support (lb, ub). The quadrature rule can be specified by the keyword quadrature. The keyword bnd sets the numerical value for infinity.

source
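A sketch of its use; the (nodes, weights) return order is an assumption based on how the other quadrature routines on this page are used:

    using PolyChaos
    nodes, weights = quadgp(t -> exp(-t^2), -1.0, 1.0, 12)   # 12-point rule for a truncated Gaussian weight
    integrate(t -> t^2, nodes, weights)                      # ≈ ∫ t^2 exp(-t^2) dt over (-1, 1)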
PolyChaos.gaussFunction
gauss(N::Int,α::AbstractVector{<:Real},β::AbstractVector{<:Real})
+computeSP(a::AbstractVector{<:Integer},mOP::MultiOrthoPoly)

Computes the scalar product

\[\langle \phi_{a_1},\phi_{a_2},\cdots,\phi_{a_n} \rangle,\]

where n = length(a). This requires providing the recurrence coefficients (α,β) and the quadrature rule (nodes,weights), as well as the multi-index ind. If indicated via the keyword issymmetric, symmetry of the weight function is exploited. All computations of the multivariate scalar products reduce to efficient computations of the univariate scalar products; mathematically, this follows from Fubini's theorem.

The function is dispatched to facilitate its use with AbstractOrthoPoly and its quadrature rule Quad.

Note
  • Zero entries of $a$ are removed automatically to simplify computations, which follows from

\[\langle \phi_i, \phi_j, \phi_0,\cdots,\phi_0 \rangle = \langle \phi_i, \phi_j \rangle,\]

because $\phi_0 = 1$.

  • It is checked whether enough quadrature points are supplied to solve the integral exactly.
source

Quadrature Rules

PolyChaos.quadgpFunction
quadgp(weight::Function,lb::Real,ub::Real,N::Int=10;quadrature::Function=clenshaw_curtis,bnd::Float64=Inf)

General-purpose quadrature based on Gautschi, "Orthogonal Polynomials: Computation and Approximation", Section 2.2.2, pp. 93-95.

Compute the N-point quadrature rule for weight with support (lb, ub). The quadrature rule can be specified by the keyword quadrature. The keyword bnd sets the numerical value for infinity.

source
PolyChaos.gaussFunction
gauss(N::Int,α::AbstractVector{<:Real},β::AbstractVector{<:Real})
 gauss(α::AbstractVector{<:Real},β::AbstractVector{<:Real})
 gauss(N::Int,op::Union{OrthoPoly,AbstractCanonicalOrthoPoly})
-gauss(op::Union{OrthoPoly,AbstractCanonicalOrthoPoly})

Gauss quadrature rule, also known as Golub-Welsch algorithm

gauss() generates the N Gauss quadrature nodes and weights for a given weight function. The weight function is represented by the N recurrence coefficients for the monic polynomials orthogonal with respect to the weight function.

Note

The function gauss accepts at most N = length(α) - 1 quadrature points, hence providing at most an (length(α) - 1)-point quadrature rule.

Note

If no N is provided, then N = length(α) - 1.

source
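For example, with the canonical basis on (0, 1) (a sketch):

    using PolyChaos
    op = Uniform01OrthoPoly(6)       # canonical basis of degree 6
    nodes, weights = gauss(4, op)    # 4-point Gauss rule (requires N ≤ length(op.α) - 1)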
PolyChaos.radauFunction
radau(N::Int,α::AbstractVector{<:Real},β::AbstractVector{<:Real},end0::Real)
+gauss(op::Union{OrthoPoly,AbstractCanonicalOrthoPoly})

Gauss quadrature rule, also known as Golub-Welsch algorithm

gauss() generates the N Gauss quadrature nodes and weights for a given weight function. The weight function is represented by the N recurrence coefficients for the monic polynomials orthogonal with respect to the weight function.

Note

The function gauss accepts at most N = length(α) - 1 quadrature points, hence providing at most an (length(α) - 1)-point quadrature rule.

Note

If no N is provided, then N = length(α) - 1.

source
PolyChaos.radauFunction
radau(N::Int,α::AbstractVector{<:Real},β::AbstractVector{<:Real},end0::Real)
 radau(α::AbstractVector{<:Real},β::AbstractVector{<:Real},end0::Real)
 radau(N::Int,op::Union{OrthoPoly,AbstractCanonicalOrthoPoly},end0::Real)
-radau(op::Union{OrthoPoly,AbstractCanonicalOrthoPoly},end0::Real)

Gauss-Radau quadrature rule. Given a weight function encoded by the recurrence coefficients (α,β) for the associated orthogonal polynomials, the function generates the nodes and weights of the (N+1)-point Gauss-Radau quadrature rule for the weight function, which has a prescribed node end0 (typically at one of the end points of the support interval of w, or outside thereof).

Note

The function radau accepts at most N = length(α) - 2 as an input, hence providing at most an (length(α) - 1)-point quadrature rule.

Note

Reference: OPQ: A MATLAB SUITE OF PROGRAMS FOR GENERATING ORTHOGONAL POLYNOMIALS AND RELATED QUADRATURE RULES by Walter Gautschi

source
PolyChaos.lobattoFunction
lobatto(N::Int,α::AbstractVector{<:Real},β::AbstractVector{<:Real},endl::Real,endr::Real)
+radau(op::Union{OrthoPoly,AbstractCanonicalOrthoPoly},end0::Real)

Gauss-Radau quadrature rule. Given a weight function encoded by the recurrence coefficients (α,β) for the associated orthogonal polynomials, the function generates the nodes and weights of the (N+1)-point Gauss-Radau quadrature rule for the weight function, which has a prescribed node end0 (typically at one of the end points of the support interval of w, or outside thereof).

Note

The function radau accepts at most N = length(α) - 2 as an input, hence providing at most an (length(α) - 1)-point quadrature rule.

Note

Reference: OPQ: A MATLAB SUITE OF PROGRAMS FOR GENERATING ORTHOGONAL POLYNOMIALS AND RELATED QUADRATURE RULES by Walter Gautschi

source
PolyChaos.lobattoFunction
lobatto(N::Int,α::AbstractVector{<:Real},β::AbstractVector{<:Real},endl::Real,endr::Real)
 lobatto(α::AbstractVector{<:Real},β::AbstractVector{<:Real},endl::Real,endr::Real)
 lobatto(N::Int,op::Union{OrthoPoly,AbstractCanonicalOrthoPoly},endl::Real,endr::Real)
-lobatto(op::Union{OrthoPoly,AbstractCanonicalOrthoPoly},endl::Real,endr::Real)

Gauss-Lobatto quadrature rule. Given a weight function encoded by the recurrence coefficients for the associated orthogonal polynomials, the function generates the nodes and weights of the (N+2)-point Gauss-Lobatto quadrature rule for the weight function, having two prescribed nodes endl, endr (typically the left and right end points of the support interval, or points to the left resp. to the right thereof).

Note

The function lobatto accepts at most N = length(α) - 3 as an input, hence providing at most an (length(α) - 1)-point quadrature rule.

Note

Reference: OPQ: A MATLAB SUITE OF PROGRAMS FOR GENERATING ORTHOGONAL POLYNOMIALS AND RELATED QUADRATURE RULES by Walter Gautschi

source

Polynomial Chaos

Statistics.meanFunction

Univariate

mean(x::AbstractVector,op::AbstractOrthoPoly)

Multivariate

mean(x::AbstractVector,mop::MultiOrthoPoly)

compute mean of random variable with PCE x

source
Statistics.varFunction

Univariate

var(x::AbstractVector,op::AbstractOrthoPoly)
+lobatto(op::Union{OrthoPoly,AbstractCanonicalOrthoPoly},endl::Real,endr::Real)

Gauss-Lobatto quadrature rule. Given a weight function encoded by the recurrence coefficients for the associated orthogonal polynomials, the function generates the nodes and weights of the (N+2)-point Gauss-Lobatto quadrature rule for the weight function, having two prescribed nodes endl, endr (typically the left and right end points of the support interval, or points to the left resp. to the right thereof).

Note

The function lobatto accepts at most N = length(α) - 3 as an input, hence providing at most an (length(α) - 1)-point quadrature rule.

Note

Reference: OPQ: A MATLAB SUITE OF PROGRAMS FOR GENERATING ORTHOGONAL POLYNOMIALS AND RELATED QUADRATURE RULES by Walter Gautschi

source

Polynomial Chaos

Statistics.meanFunction

Univariate

mean(x::AbstractVector,op::AbstractOrthoPoly)

Multivariate

mean(x::AbstractVector,mop::MultiOrthoPoly)

compute mean of random variable with PCE x

source
Statistics.varFunction

Univariate

var(x::AbstractVector,op::AbstractOrthoPoly)
 var(x::AbstractVector,t2::Tensor)

Multivariate

var(x::AbstractVector,mop::MultiOrthoPoly)
-var(x::AbstractVector,t2::Tensor)

compute variance of random variable with PCE x

source
Statistics.stdFunction

Univariate

std(x::AbstractVector,op::AbstractOrthoPoly)

Multivariate

std(x::AbstractVector,mop::MultiOrthoPoly)

compute standard deviation of random variable with PCE x

source
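As a sketch, the first two moments of an affine PCE can be read off directly; GaussOrthoPoly as the canonical Gaussian basis is an assumption not shown in this excerpt:

    using PolyChaos, Statistics
    op = GaussOrthoPoly(2)
    x = [2.0, 0.2, 0.0]          # PCE coefficients of x ~ N(2, 0.2²) in this basis
    mean(x, op), std(x, op)      # ≈ (2.0, 0.2)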
PolyChaos.sampleMeasureFunction

Univariate

sampleMeasure(n::Int,name::String,w::Function,dom::Tuple{<:Real,<:Real},symm::Bool,d::Dict;method::String="adaptiverejection")
+var(x::AbstractVector,t2::Tensor)

compute variance of random variable with PCE x

source
Statistics.stdFunction

Univariate

std(x::AbstractVector,op::AbstractOrthoPoly)

Multivariate

std(x::AbstractVector,mop::MultiOrthoPoly)

compute standard deviation of random variable with PCE x

source
PolyChaos.sampleMeasureFunction

Univariate

sampleMeasure(n::Int,name::String,w::Function,dom::Tuple{<:Real,<:Real},symm::Bool,d::Dict;method::String="adaptiverejection")
 sampleMeasure(n::Int,m::Measure;method::String="adaptiverejection")
 sampleMeasure(n::Int,op::AbstractOrthoPoly;method::String="adaptiverejection")

Draw n samples from the measure m described by its

  • name
  • weight function w,
  • domain dom,
  • symmetry property symm,
  • and, if applicable, parameters stored in the dictionary d. By default, an adaptive rejection sampling method is used (from AdaptiveRejectionSampling.jl), unless it is a common random variable for which Distributions.jl is used.

The function is dispatched to accept AbstractOrthoPoly.

Multivariate

sampleMeasure(n::Int,m::ProductMeasure;method::Vector{String}=["adaptiverejection" for i=1:length(m.name)])
-sampleMeasure(n::Int,mop::MultiOrthoPoly;method::Vector{String}=["adaptiverejection" for i=1:length(mop.meas.name)])

Multivariate extension, which provides an array of samples with n rows and as many columns as the multimeasure has univariate measures.

source
PolyChaos.evaluatePCEFunction
evaluatePCE(x::AbstractVector{<:Real},ξ::AbstractVector{<:Real},α::AbstractVector{<:Real},β::AbstractVector{<:Real})

Evaluation of polynomial chaos expansion

\[\mathsf{x} = \sum_{i=0}^{L} x_i \phi_i(\xi_j),\]

where L+1 = length(x) and $x_j$ is the $j$th sample where $j=1,\dots,m$ with m = length(ξ).

source
PolyChaos.samplePCEFunction

Univariate

samplePCE(n::Int,x::AbstractVector{<:Real},op::AbstractOrthoPoly;method::String="adaptiverejection")

Combines sampleMeasure and evaluatePCE, i.e. it first draws n samples from the measure, then evaluates the PCE for those samples.

Multivariate

samplePCE(n::Int,x::AbstractVector{<:Real},mop::MultiOrthoPoly;method::Vector{String}=["adaptiverejection" for i=1:length(mop.meas.name)])
source
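Putting the sampling functions together (a sketch; the evaluatePCE(x, ξ, op) dispatch is used exactly as in the PCE tutorial later in this document, while GaussOrthoPoly is an assumed canonical basis):

    using PolyChaos
    op = GaussOrthoPoly(2)
    x = [2.0, 0.2, 0.0]                 # affine PCE of x ~ N(2, 0.2²)
    ξ = sampleMeasure(100, op)          # draw 100 samples of the underlying germ
    samples = evaluatePCE(x, ξ, op)     # push the samples through the PCE
    # samplePCE(100, x, op) combines both steps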
PolyChaos.calculateAffinePCEFunction
calculateAffinePCE(α::AbstractVector{<:Real})

Computes the affine PCE coefficients $x_0$ and $x_1$ from recurrence coefficients $\alpha$.

source
PolyChaos.convert2affinePCEFunction
convert2affinePCE(mu::Real, sigma::Real, op::AbstractCanonicalOrthoPoly; kind::String)

Computes the affine PCE coefficients $x_0$ and $x_1$ from

\[X = a_1 + a_2 \Xi = x_0 + x_1 \phi_1(\Xi),\]

where $\phi_1(t) = t-\alpha_0$ is the first-order monic basis polynomial.

Works for subtypes of AbstractCanonicalOrthoPoly. The keyword kind in ["lbub", "μσ"] specifies whether the two parameters have the meaning of lower/upper bounds or mean/standard deviation.

source

Auxiliary Functions

PolyChaos.nwFunction
nw(q::EmptyQuad)
+sampleMeasure(n::Int,mop::MultiOrthoPoly;method::Vector{String}=["adaptiverejection" for i=1:length(mop.meas.name)])

Multivariate extension, which provides an array of samples with n rows and as many columns as the multimeasure has univariate measures.

source
PolyChaos.evaluatePCEFunction
evaluatePCE(x::AbstractVector{<:Real},ξ::AbstractVector{<:Real},α::AbstractVector{<:Real},β::AbstractVector{<:Real})

Evaluation of polynomial chaos expansion

\[\mathsf{x} = \sum_{i=0}^{L} x_i \phi_i(\xi_j),\]

where L+1 = length(x) and $x_j$ is the $j$th sample where $j=1,\dots,m$ with m = length(ξ).

source
PolyChaos.samplePCEFunction

Univariate

samplePCE(n::Int,x::AbstractVector{<:Real},op::AbstractOrthoPoly;method::String="adaptiverejection")

Combines sampleMeasure and evaluatePCE, i.e. it first draws n samples from the measure, then evaluates the PCE for those samples.

Multivariate

samplePCE(n::Int,x::AbstractVector{<:Real},mop::MultiOrthoPoly;method::Vector{String}=["adaptiverejection" for i=1:length(mop.meas.name)])
source
PolyChaos.calculateAffinePCEFunction
calculateAffinePCE(α::AbstractVector{<:Real})

Computes the affine PCE coefficients $x_0$ and $x_1$ from recurrence coefficients $\alpha$.

source
PolyChaos.convert2affinePCEFunction
convert2affinePCE(mu::Real, sigma::Real, op::AbstractCanonicalOrthoPoly; kind::String)

Computes the affine PCE coefficients $x_0$ and $x_1$ from

\[X = a_1 + a_2 \Xi = x_0 + x_1 \phi_1(\Xi),\]

where $\phi_1(t) = t-\alpha_0$ is the first-order monic basis polynomial.

Works for subtypes of AbstractCanonicalOrthoPoly. The keyword kind in ["lbub", "μσ"] specifies whether the two parameters have the meaning of lower/upper bounds or mean/standard deviation.

source

Auxiliary Functions

PolyChaos.nwFunction
nw(q::EmptyQuad)
 nw(q::AbstractQuad)
 nw(opq::AbstractOrthoPoly)
 nw(opq::AbstractVector)
-nw(mop::MultiOrthoPoly)

returns nodes and weights in matrix form

source
PolyChaos.coeffsFunction
coeffs(op::AbstractOrthoPoly)
+nw(mop::MultiOrthoPoly)

returns nodes and weights in matrix form

source
PolyChaos.coeffsFunction
coeffs(op::AbstractOrthoPoly)
 coeffs(op::AbstractVector)
-coeffs(mop::MultiOrthoPoly)

returns the recurrence coefficients in matrix form

source
PolyChaos.integrateFunction
integrate(f::Function,nodes::AbstractVector{<:Real},weights::AbstractVector{<:Real})
+coeffs(mop::MultiOrthoPoly)

returns the recurrence coefficients in matrix form

source
PolyChaos.integrateFunction
integrate(f::Function,nodes::AbstractVector{<:Real},weights::AbstractVector{<:Real})
 integrate(f::Function,q::AbstractQuad)
 integrate(f::Function,opq::AbstractOrthoPoly)

Integrate the function f using the quadrature rule specified via nodes, weights. For example, $\int_0^1 6x^5 \, \mathrm{d}x = 1$ can be computed as follows:

julia> opq = Uniform01OrthoPoly(3) # a quadrature rule is added by default
 
 julia> integrate(x -> 6x^5, opq)
-0.9999999999999993
Note
  • function $f$ is assumed to return a scalar.
  • interval of integration is "hidden" in nodes.
source
LinearAlgebra.issymmetricFunction
issymmetric(m::AbstractMeasure)
-issymmetric(op::AbstractOrthoPoly)

Is the measure symmetric (around any point in the domain)?

source
+0.9999999999999993
Note
  • function $f$ is assumed to return a scalar.
  • interval of integration is "hidden" in nodes.
source
LinearAlgebra.issymmetricFunction
issymmetric(m::AbstractMeasure)
+issymmetric(op::AbstractOrthoPoly)

Is the measure symmetric (around any point in the domain)?

source
diff --git a/dev/gaussian_mixture_model/index.html b/dev/gaussian_mixture_model/index.html
index a493ca3..8a595c3 100644
--- a/dev/gaussian_mixture_model/index.html
+++ b/dev/gaussian_mixture_model/index.html
@@ -13,47 +13,47 @@ ylabel!("rho(x)");
[SVG figure markup omitted: plot of the Gaussian-mixture density, y-axis label "rho(x)"]

This looks nice!

What, then, are the polynomials that are orthogonal relative to this specific density?

using PolyChaos
 deg = 4
 meas = Measure("my_GaussMixture", ρ, (-Inf, Inf), false, Dict(:μ => μ, :σ => σ)) # build measure
@@ -68,4 +68,4 @@
  0.0  0.1875  0.0     0.0        0.0
  0.0  0.0     0.0385  0.0        0.0
  0.0  0.0     0.0     0.0128086  0.0
- 0.0  0.0     0.0     0.0        0.00485189

Great!

+ 0.0 0.0 0.0 0.0 0.00485189

Great!

diff --git a/dev/index.html b/dev/index.html
index 4eb5a71..6e76961 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -40,7 +40,7 @@
  [0c46a032] DifferentialEquations v7.11.0
⌅ [e30172f5] Documenter v0.27.25
  [28b8d3ca] GR v0.72.10
- [4076af6c] JuMP v1.16.0
+ [4076af6c] JuMP v1.17.0
  [b964fa9f] LaTeXStrings v1.3.1
⌅ [1ec41992] MosekTools v0.14.0
  [91a5bcdd] Plots v1.39.0
@@ -69,8 +69,8 @@
  [ec485272] ArnoldiMethod v0.2.0
  [4fba245c] ArrayInterface v7.6.1
  [4c555306] ArrayLayouts v1.4.3
- [aae01518] BandedMatrices v1.2.1
- [6e4b80f9] BenchmarkTools v1.3.2
+ [aae01518] BandedMatrices v1.3.0
+ [6e4b80f9] BenchmarkTools v1.4.0
  [d1d4a3ce] BitFlags v0.1.8
  [62783981] BitTwiddlingConvenienceFunctions v0.1.5
  [764a87c0] BoundaryValueDiffEq v5.4.0
@@ -120,7 +120,7 @@
  [7034ab61] FastBroadcast v0.2.8
  [9aa1b823] FastClosures v0.3.2
  [29a986be] FastLapackInterface v2.0.0
- [1a297f60] FillArrays v1.8.0
+ [1a297f60] FillArrays v1.9.0
  [6a86dc24] FiniteDiff v2.21.1
  [53c48c17] FixedPointNumbers v0.8.4
  [59287772] Formatting v0.4.2
@@ -147,7 +147,7 @@
  [1019f520] JLFzf v0.1.7
  [692b3bcd] JLLWrappers v1.5.0
  [682c06a0] JSON v0.21.4
- [4076af6c] JuMP v1.16.0
+ [4076af6c] JuMP v1.17.0
  [ccbc3e58] JumpProcesses v9.8.0
  [ef3ab10e] KLU v0.4.1
  [ba0b0d4f] Krylov v0.9.4
@@ -274,7 +274,7 @@
  [7746bdde] Glib_jll v2.76.5+0
  [3b182d85] Graphite2_jll v1.3.14+0
  [2e76f6c2] HarfBuzz_jll v2.8.1+1
- [1d5cc7b8] IntelOpenMP_jll v2023.2.0+0
+ [1d5cc7b8] IntelOpenMP_jll v2024.0.0+0
  [aacddb02] JpegTurbo_jll v2.1.91+0
  [c1c5ebd0] LAME_jll v3.100.1+0
  [88015f11] LERC_jll v3.0.0+1
@@ -288,7 +288,7 @@
  [4b2f31a3] Libmount_jll v2.35.0+0
  [89763e89] Libtiff_jll v4.5.1+1
  [38a345b3] Libuuid_jll v2.36.0+0
-  [856f044c] MKL_jll v2023.2.0+0
+⌅ [856f044c] MKL_jll v2023.2.0+0
  [e7412a2a] Ogg_jll v1.3.5+1
  [458c3c95] OpenSSL_jll v3.0.12+0
  [efe28fd5] OpenSpecFun_jll v0.5.5+0
@@ -392,4 +392,4 @@
  [3f19e933] p7zip_jll v17.4.0+0
Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated -m`
You can also download the manifest file and the
-project file.
+project file.
diff --git a/dev/math/index.html b/dev/math/index.html
index 335eed0..df635d1 100644
--- a/dev/math/index.html
+++ b/dev/math/index.html
@@ -17,4 +17,4 @@ \end{aligned}\]

Note

Within the package, the coefficients (α,β) are the building blocks for representing (monic) orthogonal polynomials.

Notice that $\beta_0$ is arbitrary. Nevertheless, it is convenient to define it as

\[\beta_0(\mathrm{d}\lambda) = \langle \pi_0, \pi_0 \rangle_{\mathrm{d} \lambda} = \int_{\mathcal{W}} \mathrm{d} \lambda (t),\]

because it allows computing the norms of the polynomials from the $\beta_k$ alone

\[\| \pi_n \|_{\mathrm{d} \lambda}^2 = \beta_n(\mathrm{d} \lambda) \beta_{n-1}(\mathrm{d} \lambda) \cdots \beta_0(\mathrm{d} \lambda), \quad n = 0,1, \dots\]

Let the support be $\mathcal{W} = [a,b]$ for $0 < a,b < \infty$, then

\[\begin{aligned} & a < \alpha_k(\mathrm{d} \lambda) < b && k = 0,1,2, \dots \\ & 0 < \beta_k(\mathrm{d} \lambda) < \max(a^2, b^2) && k = 1, 2, \dots -\end{aligned}\]

Quadrature Rules

An $n$-point quadrature rule for the measure $\mathrm{d} \lambda(t)$ is a formula of the form

\[\int_{\mathcal{W}} f(t) \mathrm{d} \lambda(t) = \sum_{\nu = 1}^{n} w_\nu f(\tau_\nu) + R_n(f).\]

The quadrature rule $\{ (\tau_\nu, w_\nu) \}_{\nu=1}^n$ composed of (mutually distinct) nodes $\tau_\nu$ and weights $w_\nu$ provides an approximation to the integral. The respective error is given by $R_n(f)$. If, for polynomials $p \in \mathcal{P}_d$, the error $R_n(p)$ vanishes, the respective quadrature rule is said to have a degree of exactness $d$. Gauss quadrature rules are special quadrature rules that have a degree of exactness $d = 2n - 1$. That means that with an $n = 3$-point quadrature rule, polynomials up to degree 5 can be integrated exactly. The nodes and weights of the Gauss quadrature rules have some remarkable properties:

  • all Gauss nodes are mutually distinct and contained in the interior of the support of $\mathrm{d} \lambda$;
  • the $n$ Gauss nodes are the zeros of $\pi_n$, the monic orthogonal polynomial of degree $n$ relative to the measure $\mathrm{d} \lambda$;
  • all Gauss weights are positive.

The Gauss nodes and weights can be computed using the Golub-Welsch algorithm, which amounts to solving an eigenvalue problem for a symmetric tridiagonal matrix.
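This degree of exactness is easy to verify numerically (a sketch using the canonical basis on (0, 1) together with the gauss and integrate helpers from the functions page):

    using PolyChaos
    op = Uniform01OrthoPoly(5)
    nodes, weights = gauss(3, op)          # 3-point Gauss rule for w(t) = 1 on (0, 1)
    integrate(t -> t^5, nodes, weights)    # ≈ 1/6, exact since deg(t^5) = 5 = 2n - 1 with n = 3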

+\end{aligned}\]

Quadrature Rules

An $n$-point quadrature rule for the measure $\mathrm{d} \lambda(t)$ is a formula of the form

\[\int_{\mathcal{W}} f(t) \mathrm{d} \lambda(t) = \sum_{\nu = 1}^{n} w_\nu f(\tau_\nu) + R_n(f).\]

The quadrature rule $\{ (\tau_\nu, w_\nu) \}_{\nu=1}^n$ composed of (mutually distinct) nodes $\tau_\nu$ and weights $w_\nu$ provides an approximation to the integral. The respective error is given by $R_n(f)$. If, for polynomials $p \in \mathcal{P}_d$, the error $R_n(p)$ vanishes, the respective quadrature rule is said to have a degree of exactness $d$. Gauss quadrature rules are special quadrature rules that have a degree of exactness $d = 2n - 1$. That means that with an $n = 3$-point quadrature rule, polynomials up to degree 5 can be integrated exactly. The nodes and weights of the Gauss quadrature rules have some remarkable properties:

  • all Gauss nodes are mutually distinct and contained in the interior of the support of $\mathrm{d} \lambda$;
  • the $n$ Gauss nodes are the zeros of $\pi_n$, the monic orthogonal polynomial of degree $n$ relative to the measure $\mathrm{d} \lambda$;
  • all Gauss weights are positive.

The Gauss nodes and weights can be computed using the Golub-Welsch algorithm, which amounts to solving an eigenvalue problem for a symmetric tridiagonal matrix.

diff --git a/dev/multiple_discretization/index.html b/dev/multiple_discretization/index.html
index 2e78072..94bd6a9 100644
--- a/dev/multiple_discretization/index.html
+++ b/dev/multiple_discretization/index.html
@@ -75,89 +75,89 @@ ylabel!("Beta")
[SVG figure markup omitted: plot of the β recursion coefficients, y-axis label "Beta"]

The crosses denote the values of the β recursion coefficients for Chebyshev polynomials; the circles the β recursion coefficients for Legendre polynomials. The interpolating line in between stands for the β recursion coefficients of $w(t; \gamma)$.

+

The crosses denote the values of the β recursion coefficients for Chebyshev polynomials; the circles the β recursion coefficients for Legendre polynomials. The interpolating line in between stands for the β recursion coefficients of $w(t; \gamma)$.

diff --git a/dev/numerical_integration/index.html b/dev/numerical_integration/index.html
index 24bdae0..ba063a7 100644
--- a/dev/numerical_integration/index.html
+++ b/dev/numerical_integration/index.html
@@ -56,4 +56,4 @@
julia> print("Numerical error: $(abs(1 - cos(1) - variant0_revisited))")
Numerical error: 1.8818280267396403e-13

Comparison

We see that the different variants provide slightly different results:

julia> 1 - cos(1) .- [variant0 variant1 variant0_revisited]
 1×3 Array{Float64,2}:
- -1.88183e-13  -9.85725e-8  -1.88183e-13

with variant0 and variant0_revisited being the same and more accurate than variant1. The increased accuracy is based on the fact that for variant0 and variant0_revisited the quadrature rules are based on the recursion coefficients of the underlying orthogonal polynomials. The quadrature for variant1 is based on a general-purpose method that can be significantly less accurate, see also the next tutorial.

+ -1.88183e-13 -9.85725e-8 -1.88183e-13

with variant0 and variant0_revisited being the same and more accurate than variant1. The increased accuracy is based on the fact that for variant0 and variant0_revisited the quadrature rules are based on the recursion coefficients of the underlying orthogonal polynomials. The quadrature for variant1 is based on a general-purpose method that can be significantly less accurate, see also the next tutorial.

diff --git a/dev/orthogonal_polynomials_canonical/index.html b/dev/orthogonal_polynomials_canonical/index.html
index e991730..ba8e0e7 100644
--- a/dev/orthogonal_polynomials_canonical/index.html
+++ b/dev/orthogonal_polynomials_canonical/index.html
@@ -131,4 +131,4 @@
 0
 1
 0
- 1

translates mathematically to

\[\psi_{11}(t) = \pi_0^{(1)}(t_1) \pi_1^{(2)}(t_2) \pi_0^{(3)}(t_3) \pi_1^{(4)}(t_4).\]

Notice that there is an offset by one, because the basis counting starts at 0 while Julia is 1-indexed. The underlying measure of mop is now of type ProductMeasure and is stored in the field measure. The weight $w$ can be evaluated as one would expect.

+ 1

translates mathematically to

\[\psi_{11}(t) = \pi_0^{(1)}(t_1) \pi_1^{(2)}(t_2) \pi_0^{(3)}(t_3) \pi_1^{(4)}(t_4).\]

Notice that there is an offset by one, because the basis counting starts at 0 while Julia is 1-indexed. The underlying measure of mop is now of type ProductMeasure and is stored in the field measure. The weight $w$ can be evaluated as one would expect.

diff --git a/dev/pce_tutorial/index.html b/dev/pce_tutorial/index.html
index 7aee982..e8b731a 100644
--- a/dev/pce_tutorial/index.html
+++ b/dev/pce_tutorial/index.html
@@ -33,9 +33,9 @@
[evaluate(degree, points, gaussian) for degree in degrees]
4-element Vector{Vector{Float64}}:
  [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
- [-0.952379914633003, 0.11432344180772014, -0.6501886935354688, 2.1344484730659423, 1.4985895201890798, 0.9740528376603825, 0.3430571355413754, -0.9800111524468413, -0.980219888183326, -0.9993704383096359]
- [2.7165471603283784, -2.444223917884117, 1.0235001113432352, -5.981923608090236, -5.748587530735783, -4.947432420087287, -3.254540343919648, 2.8804664687075516, 2.8817105819234357, 2.996223026205737]
- [-12.891003259681058, 17.902477950539033, -1.0512047685479207, 9.473899476645265, 19.390852974167043, 23.07176472085471, 20.80314383697176, -14.106487183880887, -14.115756021577273, -14.971675668902918]

+ [0.4619211285565463, -0.6947755856726114, 0.45821951625550894, -0.981205897616165, -0.9850798396201481, -0.7130552702664067, -0.9372443124111377, -0.9712027525574681, -0.03757856958862438, -0.1343087957211876]
+ [-3.6343133852192313, 1.2618154571371658, -3.6229129399446034, 2.8875886039814036, 2.910701648906649, 1.3606688995203249, 2.6274041507915773, 2.828045796805075, -1.8482735727531754, -1.4447259645071737]
+ [21.852687398646665, -2.633895202269218, 21.82457990971348, -14.15955702857303, -14.331928629298476, -3.29892008285628, -12.234821054133786, -13.71653920596665, 15.306586894161416, 13.363552669151732]

Finding PCE Coefficients

Having constructed the orthogonal bases, the question remains how to find the PCE coefficients for the common random variables. Every random variable can be characterized exactly by two PCE coefficients. For a Gaussian random variable, this is familiar: the mean and the variance suffice to describe a Gaussian random variable entirely. The same is true for any random variable of finite variance given the right basis. The function convert2affinePCE provides the first two PCE coefficients (hence the name affine) for the common random variables.
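Written out (with $\phi_0$ and $\phi_1$ denoting the first two basis polynomials of the matching family, a notation assumed here), an affine PCE is the two-term expansion

\[\mathsf{x} = x_0 \phi_0(\xi) + x_1 \phi_1(\xi),\]

so the pair $(x_0, x_1)$ pins down the random variable completely.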

Gaussian

Given the Gaussian random variable $\mathsf{x} \sim \mathcal{N}(\mu, \sigma^2)$ with $\sigma > 0$, the affine PCE coefficients are $x_0 = \mu$ and $x_1 = \sigma$, since $\mathsf{x} = \mu + \sigma \xi$ for $\xi \sim \mathcal{N}(0, 1)$:

# Gaussian
 μ, σ = 2.0, 0.2
 pce_gaussian = convert2affinePCE(μ, σ, gaussian)
2-element Vector{Float64}:
  2.0
@@ -57,89 +57,89 @@
 ξ_gaussian = sampleMeasure(N, myops["gaussian"])
 samples_gaussian = evaluatePCE(pce_gaussian, ξ_gaussian, myops["gaussian"])
 # samplePCE(N,pce_gaussian,myops["gaussian"])
1000-element Vector{Float64}:
- 1.9962995622479847
- 2.406004429229373
- 1.9943402517826734
- 1.9371916661891109
- 2.167390279845841
- 1.9926175976046259
- 1.8903062118096048
- 2.1036343482249458
- 2.0537390048784014
- 1.7573864638793806
+ 1.9176120694585086
+ 2.1214466242236805
+ 2.2438741775500954
+ 1.9478678317051576
+ 2.2983502806020097
+ 1.8030803350302609
+ 2.171115875673182
+ 1.9475410612607373
+ 2.1792318141306772
+ 2.3276621031927336
  ⋮
- 2.2269659830119943
- 2.208210410342416
- 1.6992582441025044
- 2.015317908225613
- 1.9034050566635383
- 2.065213016372206
- 1.8423671628761216
- 2.274869964143386
- 1.8000278065259703

+ 1.7343441852091426
+ 1.7677077196505682
+ 2.0595553695111857
+ 1.9324271625652207
+ 1.87976589907614
+ 2.2245947604466894
+ 1.922757387365369
+ 1.8356235524785545
+ 1.8567061995226366

Uniform

ξ_uniform = sampleMeasure(N, myops["uniform01"])
 samples_uniform = evaluatePCE(pce_uniform, ξ_uniform, myops["uniform01"])
 # samples_uniform = samplePCE(N,pce_uniform,myops["uniform01"])
1000-element Vector{Float64}:
- 2.2552201757166475
- 2.1362658579593043
- 1.961941134504637
- 2.1161138411170435
- 2.308922004793409
- 2.119517492052187
- 1.7951432763113573
- 1.9217967768425326
- 1.753826842665235
- 1.9633739902088427
+ 2.288563573274644
+ 1.7508310737029233
+ 2.203430297517599
+ 2.297638324198091
+ 1.765579152089958
+ 2.2419898394958437
+ 2.0576954570692076
+ 2.2226995233016984
+ 1.929699577220418
+ 2.251681021808135
  ⋮
- 2.0521331861129277
- 2.0679140833450846
- 1.6681851010602262
- 1.75902295855054
- 1.9311786783690035
- 1.9584748714259022
- 2.2221394005813737
- 2.269511274010926
- 2.154598018486945

+ 2.021198438964351
+ 1.6783125987336107
+ 2.123222413014787
+ 2.1005647376782046
+ 2.081735091067188
+ 1.9380486181433878
+ 1.699298035565878
+ 2.0389549493577497
+ 2.1630818564132563

Beta

ξ_beta = sampleMeasure(N, myops["beta01"])
 samples_beta = evaluatePCE(pce_beta, ξ_beta, myops["beta01"])
 # samples_beta = samplePCE(N,pce_beta,myops["beta01"])
1000-element Vector{Float64}:
- 1.6847434234070877
- 1.6937146057579968
- 2.1808846749137953
- 1.776326184187671
- 1.8503090602268635
- 1.7413558597938188
- 1.9326831378326284
- 1.8577282860022457
- 1.784895088318576
- 2.0497870908892644
+ 1.812195432114616
+ 2.3888870978642527
+ 1.7777164992648542
+ 1.9544227210245193
+ 1.7990986634815866
+ 1.689930256566495
+ 1.8256615407852412
+ 1.925544126071108
+ 2.1077486655079807
+ 1.8790675123067246
  ⋮
- 2.3411103846142307
- 2.1307295186147743
- 1.7609030946574054
- 2.1768584821134085
- 2.1463955798303664
- 2.039401132718851
- 1.834579539981725
- 2.0853053613569825
- 2.0818386437275267

+ 1.9722590360209096
+ 1.7006658713219267
+ 1.8587406131028745
+ 1.6891110315324531
+ 1.7329788762810558
+ 1.9432438495539706
+ 2.005908265230642
+ 2.000499938314576
+ 1.915877478986899

Logistic

ξ_logistic = sampleMeasure(N, myops["logistic"])
 samples_logistic = evaluatePCE(pce_logistic, ξ_logistic, myops["logistic"])
 # samples_logistic = samplePCE(N,pce_logistic,myops["logistic"])
1000-element Vector{Float64}:
- 1.9647553752644638
- 2.2945919389128875
- 1.92780610167665
- 1.9272446207929246
- 2.167827002858176
- 2.3743275643851764
- 1.8737005811512943
- 1.789972615176257
- 2.0114120520333634
- 1.7893316359572533
+ 2.1858193983119336
+ 2.0132354056332757
+ 1.8040576346557975
+ 1.6311667893884776
+ 1.9076109733898639
+ 2.2252355179550607
+ 1.7030016021004588
+ 2.19704579152721
+ 1.9857584438978138
+ 1.8252821180124001
  ⋮
- 1.726712078107279
- 2.261438733628451
- 1.9628683783132423
- 1.9539506190171896
- 2.1906307535313885
- 2.1344650603312996
- 2.176912737827542
- 2.090598300812237
- 1.8411528562239314
+ 2.1080466216346063
+ 1.9441876911030351
+ 1.4815570009487398
+ 2.1356879615783786
+ 2.0390011502435965
+ 1.950795286277371
+ 2.1526354519951085
+ 2.0205142908472955
+ 1.981161515395095
diff --git a/dev/quadrature_rules/index.html b/dev/quadrature_rules/index.html
index cbf87ca..55ba318 100644
--- a/dev/quadrature_rules/index.html
+++ b/dev/quadrature_rules/index.html
@@ -43,4 +43,4 @@ print("end point:\t\t $(n_cc[end])\n")
 print("error Clenshaw-Curtis:\t $(int_cc - int_exact)")
first point:		 1.0
 end point:		 -1.0
-error Clenshaw-Curtis:	 0.026213510850746302

+error Clenshaw-Curtis: 0.026213510850746302

As we can see, for the same number of nodes $N$, the quadrature rules based on the recurrence coefficients can greatly outperform the all-purpose quadratures. So, whenever possible, use quadrature rules based on recurrence coefficients of the orthogonal polynomials relative to the underlying measure. Make sure to check out this tutorial too.
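As a quick illustration (a sketch under the assumption that, as documented for the type hierarchy, every OrthoPoly carries its recurrence coefficients and a matching Gauss rule in op.quad), the recurrence-based rule for the uniform measure on $[0,1]$ reproduces $\int_0^1 t^3 \, \mathrm{d}t = 1/4$ up to round-off:

using PolyChaos, LinearAlgebra

op = Uniform01OrthoPoly(5)        # uniform measure on [0, 1]
op.α, op.β                        # recurrence coefficients behind the rule
nodes, weights = op.quad.nodes, op.quad.weights
dot(weights, nodes .^ 3) - 1/4    # ≈ 0, the rule is exact for cubics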

diff --git a/dev/random_ode/index.html b/dev/random_ode/index.html index 2b3e266..d3b0e05 100644 --- a/dev/random_ode/index.html +++ b/dev/random_ode/index.html @@ -54,63 +54,63 @@ for aa in asmpl] xmc = hcat(xmc...);
301×5000 Matrix{Float64}:
  2.0       2.0       2.0       2.0       …  2.0       2.0       2.0
- 1.98986   1.98897   1.99001   1.99047      1.99023   1.99071   1.99188
- 1.97977   1.97799   1.98006   1.98098      1.9805    1.98146   1.98379
- 1.96973   1.96708   1.97017   1.97154      1.97083   1.97226   1.97573
- 1.95975   1.95623   1.96032   1.96214      1.9612    1.9631    1.96771
- 1.94981   1.94544   1.95053   1.95279   …  1.95161   1.95398   1.95972
- 1.93992   1.93471   1.94078   1.94348      1.94208   1.94491   1.95176
- 1.93009   1.92403   1.93108   1.93422      1.93259   1.93587   1.94383
- 1.9203    1.91342   1.92143   1.925        1.92315   1.92688   1.93594
- 1.91057   1.90286   1.91183   1.91583      1.91375   1.91793   1.92807
+ 1.99042   1.99074   1.99198   1.9898       1.99008   1.99093   1.98985
+ 1.98089   1.98152   1.98399   1.97965      1.98021   1.98189   1.97975
+ 1.9714    1.97235   1.97604   1.96955      1.97039   1.9729    1.96971
+ 1.96196   1.96322   1.96811   1.9595       1.96061   1.96395   1.95971
+ 1.95257   1.95413   1.96022   1.94951   …  1.95089   1.95504   1.94976
+ 1.94321   1.94508   1.95236   1.93957      1.94121   1.94617   1.93987
+ 1.93391   1.93607   1.94453   1.92967      1.93158   1.93734   1.93003
+ 1.92465   1.92711   1.93673   1.91983      1.922     1.92855   1.92023
+ 1.91543   1.91819   1.92897   1.91004      1.91247   1.9198    1.91049
  ⋮                                       ⋱                      
- 0.453346  0.397648  0.463165  0.495639     0.478509  0.513616  0.609484
- 0.451048  0.395454  0.46085   0.493277     0.476171  0.51123   0.607009
- 0.448761  0.393273  0.458547  0.490926     0.473845  0.508856  0.604543
- 0.446486  0.391103  0.456256  0.488586  …  0.471529  0.506492  0.602088
- 0.444222  0.388946  0.453976  0.486257     0.469225  0.50414   0.599643
- 0.44197   0.3868    0.451707  0.48394      0.466933  0.501798  0.597208
- 0.439729  0.384666  0.44945   0.481633     0.464651  0.499467  0.594782
- 0.437499  0.382544  0.447204  0.479338     0.462381  0.497148  0.592367
- 0.435281  0.380434  0.444969  0.477053  …  0.460122  0.494838  0.589961

+ 0.492321  0.515859  0.618709  0.449282     0.468213  0.530074  0.452724
+ 0.489963  0.51347   0.616228  0.446991     0.465891  0.527669  0.450427
+ 0.487616  0.511093  0.613757  0.444711     0.46358   0.525275  0.448141
+ 0.485281  0.508727  0.611296  0.442442  …  0.461281  0.522892  0.445867
+ 0.482957  0.506371  0.608845  0.440185     0.458993  0.520519  0.443604
+ 0.480644  0.504027  0.606403  0.43794      0.456716  0.518158  0.441353
+ 0.478342  0.501693  0.603971  0.435706     0.454451  0.515807  0.439113
+ 0.476052  0.499371  0.60155   0.433484     0.452196  0.513466  0.436885
+ 0.473772  0.497059  0.599137  0.431273  …  0.449953  0.511137  0.434668

Now we can compare the Monte Carlo mean and standard deviation to the expression from PCE for every time instant.

[mean(xmc, dims = 2) - mean_pce std(xmc, dims = 2) - std_pce]
301×2 Matrix{Float64}:
  0.0           0.0
- 1.8973e-5    -7.29817e-7
- 3.77561e-5   -1.44728e-6
- 5.63507e-5   -2.15252e-6
- 7.47584e-5   -2.8457e-6
- 9.29804e-5   -3.52691e-6
- 0.000111018  -4.19631e-6
- 0.000128873  -4.85401e-6
- 0.000146547  -5.50013e-6
- 0.00016404   -6.13481e-6
+ 2.06303e-5   -1.13157e-5
+ 4.10437e-5   -2.25074e-5
+ 6.1242e-5    -3.35763e-5
+ 8.12269e-5   -4.45233e-5
+ 0.000101     -5.53496e-5
+ 0.000120563  -6.60561e-5
+ 0.000139918  -7.66438e-5
+ 0.000159066  -8.71138e-5
+ 0.000178009  -9.74672e-5
  ⋮            
- 0.00129409   -6.17468e-6
- 0.00129207   -6.05405e-6
- 0.00129004   -5.93395e-6
- 0.001288     -5.81435e-6
- 0.00128594   -5.69525e-6
- 0.00128388   -5.57665e-6
- 0.0012818    -5.45852e-6
- 0.00127971   -5.34088e-6
- 0.00127761   -5.22374e-6

+ 0.0013049    -0.000710069
+ 0.00130252   -0.000708914
+ 0.00130013   -0.000707754
+ 0.00129772   -0.000706589
+ 0.00129531   -0.000705419
+ 0.00129288   -0.000704244
+ 0.00129044   -0.000703064
+ 0.00128799   -0.000701879
+ 0.00128554   -0.000700689

Clearly, the accuracy of PCE deteriorates over time. Possible remedies are to increase the dimension of PCE, and to tweak the tolerances of the integrator.
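For example (only a sketch: the Nrec keyword of GaussOrthoPoly and the keyword-style solver tolerances are assumptions, and everything built from opq in this tutorial would have to be regenerated):

using PolyChaos

# Pick a PCE degree larger than the one used above; Nrec (assumed keyword)
# controls how many recurrence coefficients are computed.
deg = 6
opq = GaussOrthoPoly(deg; Nrec = 5 * deg)

# Then rebuild the Galerkin-projected system with this opq and re-solve the
# ODE with tighter integrator tolerances, e.g.
#   solve(prob, alg; abstol = 1e-10, reltol = 1e-10)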

Finally, we check whether the samples follow a log-normal distribution and compare the result to the analytic mean and standard deviation.

logx_pce = [log.(evaluatePCE(x_, ξ, opq)) for x_ in x]
 [mean.(logx_pce) - (log(x0) .+ μ * t) std.(logx_pce) - σ * t]
301×2 Matrix{Float64}:
  1.66533e-15   1.6655e-15
- 9.53423e-6   -3.68029e-7
- 1.90685e-5   -7.36053e-7
- 2.86027e-5   -1.10407e-6
- 3.81369e-5   -1.4721e-6
- 4.76711e-5   -1.84013e-6
- 5.72054e-5   -2.20816e-6
- 6.67396e-5   -2.5762e-6
- 7.62739e-5   -2.94424e-6
- 8.58081e-5   -3.31226e-6
+ 1.03697e-5   -5.68902e-6
+ 2.07393e-5   -1.1378e-5
+ 3.1109e-5    -1.70671e-5
+ 4.14786e-5   -2.27561e-5
+ 5.18483e-5   -2.84451e-5
+ 6.2218e-5    -3.41341e-5
+ 7.25877e-5   -3.98232e-5
+ 8.29573e-5   -4.55122e-5
+ 9.3327e-5    -5.12012e-5
  ⋮            
- 0.00278427   -0.000107642
- 0.00279381   -0.000108013
- 0.00280335   -0.000108383
- 0.00281288   -0.000108753
- 0.00282242   -0.000109122
- 0.00283195   -0.000109491
- 0.00284149   -0.000109859
- 0.00285102   -0.000110227
- 0.00286056   -0.000110595
+ 0.00302821  -0.00166137
+ 0.00303859  -0.00166706
+ 0.00304896  -0.00167275
+ 0.00305933  -0.00167844
+ 0.0030697   -0.00168413
+ 0.00308007  -0.00168982
+ 0.00309044  -0.00169551
+ 0.00310081  -0.0017012
+ 0.00311118  -0.00170689
diff --git a/dev/scalar_products/index.html b/dev/scalar_products/index.html
index 43f2c8f..c16980f 100644
--- a/dev/scalar_products/index.html
+++ b/dev/scalar_products/index.html
@@ -180,4 +180,4 @@ 0.0
 0.0
 0.0
- 0.0
+ 0.0
diff --git a/dev/search/index.html b/dev/search/index.html
index 050b250..f5d80de 100644
--- a/dev/search/index.html
+++ b/dev/search/index.html
@@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-90474609-3', {'page_path': location.pathname + location.search + location.hash});
- 
+ 
diff --git a/dev/type_hierarchy/index.html b/dev/type_hierarchy/index.html
index 290af36..eb40c4a 100644
--- a/dev/type_hierarchy/index.html
+++ b/dev/type_hierarchy/index.html
@@ -59,4 +59,4 @@ ├─ Uniform01OrthoPoly
 ├─ Uniform_11OrthoPoly
 ├─ genHermiteOrthoPoly
-└─ genLaguerreOrthoPoly

+└─ genLaguerreOrthoPoly

Their fields follow

Name | Meaning
deg::Int | Maximum degree
α::Vector{<:Real} | Vector of recurrence coefficients
β::Vector{<:Real} | Vector of recurrence coefficients
measure::CanonicalMeasure | Underlying canonical measure
quad::AbstractQuad | Quadrature rule
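For instance (a minimal sketch; the constructor call is the standard one for canonical bases, and the field accesses simply mirror the table above):

using PolyChaos

op = Uniform_11OrthoPoly(4)   # canonical basis w.r.t. the uniform measure on [-1, 1]
op.deg                        # 4, the maximum degree
op.α, op.β                    # recurrence coefficients
op.measure                    # the underlying canonical measure
op.quad                       # the attached quadrature rule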

Quad

Quadrature rules are intricately related to orthogonal polynomials. An $n$-point quadrature rule is a pair of so-called nodes $t_k$ and weights $w_k$ for $k=1,\dots,n$ that allows one to approximate integrals relative to the measure

\[\int_\Omega f(t) w(t) \mathrm{d} t \approx \sum_{k=1}^n w_k f(t_k).\]

Gauss quadrature rules possess the remarkable property that an $n$-point rule integrates polynomial integrands $f$ of degree at most $2n-1$ exactly; no approximation error is made.

The fields of Quad are

Name | Meaning
name::String | Name
Nquad::Int | Number $n$ of quadrature points
nodes::Vector{<:Real} | Nodes
weights::Vector{<:Real} | Weights

with obvious meanings.
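A small sketch tying the two together: grab the Quad stored inside a basis and verify the $2n-1$ exactness claim on an even-degree monomial (the reference values are Gaussian moments, which assumes GaussOrthoPoly's measure is the standard normal):

using PolyChaos, LinearAlgebra

op = GaussOrthoPoly(5)
q  = op.quad                      # fields: name, Nquad, nodes, weights
n  = q.Nquad
m  = 2n - 2                       # even degree, still ≤ 2n - 1
quad_moment  = dot(q.weights, q.nodes .^ m)
exact_moment = prod(1:2:m-1)      # E[ξ^m] = (m - 1)!! for ξ ~ N(0, 1)
quad_moment ≈ exact_moment        # true, up to round-off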

PolyChaos provides the type EmptyQuad, which is stored in place of a quadrature rule whenever none is desired.

This tutorial shows the above in action.

Tensor

The last type we need to address is Tensor. It is used to store the results of scalar products. Its fields are

Name | Meaning
dim | Dimension $m$ of tensor $\langle \phi_{i_1} \phi_{i_2} \cdots \phi_{i_{m-1}}, \phi_{i_m} \rangle$
T::SparseVector{Float64,Int} | Entries of tensor
get::Function | Function to get entries from T
op::AbstractOrthoPoly | Underlying univariate orthogonal polynomials

The dimension $m$ of the tensor is the number of terms that appear in the scalar product. For example, setting $m = 3$ gives $\langle \phi_{i} \phi_{j}, \phi_{k} \rangle$, and the corresponding entry is obtained as Tensor.get([i,j,k]).
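In code this looks roughly as follows (a sketch: the constructor signature Tensor(dim, op) is assumed, while the get call matches the description above):

using PolyChaos

op = GaussOrthoPoly(3)
t3 = Tensor(3, op)        # scalar products ⟨ϕ_i ϕ_j, ϕ_k⟩ for this basis
t3.get([0, 1, 1])         # ⟨ϕ_0 ϕ_1, ϕ_1⟩ = ⟨ϕ_1, ϕ_1⟩, since ϕ_0 ≡ 1 for the monic basis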

This tutorial shows the above in action.