From a9afb022b3846d4a1746b49401415f50c4682ab1 Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Mon, 13 Nov 2023 11:40:49 +0000
Subject: [PATCH] build based on 9d72644

 previews/PR121/.documenter-siteinfo.json |  2 +-
 previews/PR121/alternatives.html         |  2 +-
 previews/PR121/contraction.html          |  2 +-
 previews/PR121/index.html                |  2 +-
 previews/PR121/references.html           |  2 +-
 previews/PR121/tensor-network.html       | 10 +++++-----
 previews/PR121/tensors.html              | 12 ++++++------
 previews/PR121/transformations.html      |  4 ++--
 previews/PR121/visualization.html        |  2 +-
 9 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/previews/PR121/.documenter-siteinfo.json b/previews/PR121/.documenter-siteinfo.json
-{"documenter":{"julia_version":"1.9.3","generation_timestamp":"2023-11-13T09:43:00","documenter_version":"1.1.2"}}
+{"documenter":{"julia_version":"1.9.3","generation_timestamp":"2023-11-13T11:40:43","documenter_version":"1.1.2"}}

diff --git a/previews/PR121/alternatives.html b/previews/PR121/alternatives.html

Alternatives · Tenet.jl

Alternatives

Tenet is strongly opinionated. We acknowledge that it may not suit all cases (although we try 🙂). If your case doesn't fit Tenet's design, you can try the libraries cited in the References page, such as ITensor, quimb, TeNPy or TensorKrowch.

diff --git a/previews/PR121/contraction.html b/previews/PR121/contraction.html

Contraction · Tenet.jl

Contraction

Contraction path optimization and execution are delegated to the EinExprs library. An EinExpr is a lower-level form of a Tensor Network in which the contraction path has been laid out as a tree. It is similar to a symbolic expression (i.e. Expr), but one in which every node represents an Einstein summation expression (aka einsum).

EinExprs.einexprMethod
einexpr(tn::AbstractTensorNetwork; optimizer = EinExprs.Greedy, output = inds(tn, :open), kwargs...)

Search a contraction path for the given TensorNetwork and return it as an EinExpr.

Keyword Arguments

  • optimizer Contraction path optimizer. Check EinExprs documentation for more info.
  • output Indices that won't be contracted. Defaults to open indices.
  • kwargs Options to be passed to the optimizer.

See also: contract.

source
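For instance, a minimal sketch of searching a path (a random network stands in for a real one here; Greedy is the documented default optimizer):

using Tenet, EinExprs

tn = rand(TensorNetwork, 10, 3)                   # random network: 10 tensors, regularity 3
path = einexpr(tn; optimizer = EinExprs.Greedy)   # contraction path, laid out as an EinExpr tree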
Tenet.contract!Function
contract!(tn::AbstractTensorNetwork, index)

In-place contraction of tensors connected to index.

See also: contract.

source
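For example, a sketch of contracting away a single index (assuming tn contains an index named :i):

contract!(tn, :i)   # contract, in place, the tensors connected to index :i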
diff --git a/previews/PR121/index.html b/previews/PR121/index.html

Home · Tenet.jl

Tenet.jl

BSC-Quantic's Registry

Tenet and some of its dependencies are located in our own Julia registry. To install Tenet, first add our registry to your Julia installation using the Pkg mode in a REPL session:

using Pkg
 pkg"registry add https://github.com/bsc-quantic/Registry"

A Julia library for Tensor Networks. Tenet can be executed both in local environments and on large supercomputers. Its goals are:

  • Expressiveness Simple to use 👶
  • Flexibility Extend it to your needs 🔧
  • Performance Goes brr... fast 🏎️

A video of its presentation at JuliaCon 2023 can be seen here:

[Embedded video: Tenet.jl presentation at JuliaCon 2023]

Features

  • Optimized Tensor Network contraction, powered by EinExprs
  • Tensor Network slicing/cuttings
  • Automatic Differentiation of TN contraction, powered by EinExprs and ChainRules
  • 3D visualization of large networks, powered by Makie
diff --git a/previews/PR121/references.html b/previews/PR121/references.html

References · Tenet.jl

References

  • Fishman, M.; White, S. R. and Stoudenmire, E. M. (2022). The ITensor Software Library for Tensor Network Calculations. SciPost Phys. Codebases, 4.
  • Gray, J. (2018). quimb: A python package for quantum information and many-body calculations. Journal of Open Source Software 3, 819.
  • Gray, J. and Kourtis, S. (2021). Hyper-optimized tensor network contraction. Quantum 5, 410.
  • Hauschild, J.; Pollmann, F. and Zaletel, M. (2021). The Tensor Network Python (TeNPy) Library. In: APS March Meeting Abstracts, Vol. 2021; p. R21–006.
  • Ramón Pareja Monturiol, J.; Pérez-García, D. and Pozas-Kerstjens, A. (2023). TensorKrowch: Smooth integration of tensor networks in machine learning. arXiv e-prints, arXiv:2306.
diff --git a/previews/PR121/tensor-network.html b/previews/PR121/tensor-network.html

Tensor Networks · Tenet.jl

Tensor Networks

Tensor Networks (TN) are a graphical notation for representing complex multilinear functions. For example, the following equation

\[\sum_{ijklmnop} A_{im} B_{ijp} C_{njk} D_{pkl} E_{mno} F_{ol}\]

can be represented visually as

Sketch of a Tensor Network

The graph's nodes represent tensors and edges represent tensor indices.

In Tenet, these objects are represented by the TensorNetwork type.

Tenet.TensorNetworkType
TensorNetwork

Graph of interconnected tensors, representing a multilinear equation. Graph vertices represent tensors and graph edges, tensor indices.

source
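For example, a network of two tensors sharing the index :j can be built from a list of Tensors (a minimal sketch; the Tensor type is described in the Tensors page):

using Tenet

A = Tensor(rand(2, 3), (:i, :j))
B = Tensor(rand(3, 4), (:j, :k))
tn = TensorNetwork([A, B])   # :j is an inner index; :i and :k remain open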

Information about a TensorNetwork can be queried with the following functions.

Query information

EinExprs.indsMethod
inds(tn::AbstractTensorNetwork, set = :all)

Return the names of the indices in the TensorNetwork.

Arguments

  • set

    • :all (default) All indices.
    • :open Indices only mentioned in one tensor.
    • :inner Indices mentioned at least twice.
    • :hyper Indices mentioned in at least three tensors.
source
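Continuing the two-tensor network sketched above (the container and ordering of the returned names are unspecified):

inds(tn)           # all indices: :i, :j and :k
inds(tn, :open)    # open indices: :i and :k
inds(tn, :inner)   # inner index: :j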
Base.sizeMethod
size(tn::AbstractTensorNetwork)
size(tn::AbstractTensorNetwork, index)

Return a mapping from indices to their dimensionalities.

If index is set, return the dimensionality of index. This is equivalent to size(tn)[index].

source
Tenet.tensorsMethod
tensors(tn::AbstractTensorNetwork)

Return a list of the Tensors in the TensorNetwork.

Implementation details

  • As the tensors of a TensorNetwork are stored as keys of the .tensormap dictionary, which hashes by objectid, their order is not stable; the list is therefore sorted so that repeated evaluations are consistent.
source

Modification

Add/Remove tensors

Base.push!Method
push!(tn::AbstractTensorNetwork, tensor::Tensor)

Add a new tensor to the Tensor Network.

See also: append!, pop!.

source
Base.append!Method
append!(tn::AbstractTensorNetwork, tensors::AbstractVecOrTuple{<:Tensor})

Add a list of tensors to a TensorNetwork.

See also: push!, merge!.

source
Base.merge!Method
merge!(self::AbstractTensorNetwork, others::AbstractTensorNetwork...)
merge(self::AbstractTensorNetwork, others::AbstractTensorNetwork...)

Fuse various TensorNetworks into one.

See also: append!.

source
Base.pop!Method
pop!(tn::AbstractTensorNetwork, tensor::Tensor)
pop!(tn::AbstractTensorNetwork, i::Union{Symbol,AbstractVecOrTuple{Symbol}})

Remove a tensor from the Tensor Network and return it. If a Tensor is passed, the first tensor that satisfies egality (i.e. ≡ or ===) is removed. If a Symbol or a list of Symbols is passed, remove and return the tensors that contain all the given indices.

See also: push!, delete!.

source
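A sketch of adding and removing tensors (reusing the two-tensor network from above; C is a tensor introduced here for illustration):

C = Tensor(rand(4, 2), (:k, :l))
push!(tn, C)   # add C to the network
pop!(tn, C)    # remove C again (matched by egality, i.e. ≡) and return it
# pop!(tn, (:k, :l)) would instead remove the tensors containing both :k and :l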
Base.delete!Method
delete!(tn::AbstractTensorNetwork, x)

Like pop! but return the TensorNetwork instead.

source

Replace existing elements

Base.replace!Function
replace!(tn::AbstractTensorNetwork, old => new...)
replace(tn::AbstractTensorNetwork, old => new...)

Replace the element in old with the one in new. Depending on the types of old and new, the following behaviour is expected:

  • If Symbols, it corresponds to an index renaming.
  • If Tensors, the first element that satisfies egality (≡ or ===) is replaced.
source
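For example, renaming indices amounts to replacing Symbols (several pairs can be passed at once):

tn2 = replace(tn, :i => :m)        # copy with :i renamed to :m
replace!(tn, :j => :n, :k => :o)   # in-place renaming; several pairs in one call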

Selection

Tenet.selectFunction
select(tn::AbstractTensorNetwork, i)

Return the tensors whose indices match the list of indices i.

source
Base.selectdimFunction
selectdim(tn::AbstractTensorNetwork, index::Symbol, i)

Return a copy of the TensorNetwork where index has been projected to dimension i.

See also: view, slice!.

source
Tenet.slice!Function
slice!(tn::AbstractTensorNetwork, index::Symbol, i)

In-place projection of index on dimension i.

See also: selectdim, view.

source
Base.viewMethod
view(tn::AbstractTensorNetwork, index => i...)

Return a copy of the TensorNetwork where each index has been projected to dimension i. It is equivalent to a recursive call of selectdim.

See also: selectdim, slice!.

source
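A sketch of the three projection variants (assuming tn has indices :i and :j):

tn2 = selectdim(tn, :i, 1)         # copy with :i projected to dimension 1
tn3 = view(tn, :i => 1, :j => 2)   # copy with several indices projected at once
slice!(tn, :i, 1)                  # the same projection, in place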

Miscellaneous

Base.copyMethod
copy(tn::TensorNetwork)

Return a shallow copy of a TensorNetwork.

source
Base.randMethod
rand(TensorNetwork, n::Integer, regularity::Integer; out = 0, dim = 2:9, seed = nothing, globalind = false)

Generate a random tensor network.

Arguments

  • n Number of tensors.
  • regularity Average number of indices per tensor.
  • out Number of open indices.
  • dim Range of dimension sizes.
  • seed If not nothing, seed random generator with this value.
  • globalind Add a global 'broadcast' dimension to every tensor.
source
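For instance (a sketch; pass seed for reproducible results):

tn = rand(TensorNetwork, 10, 3; out = 2, seed = 42)   # 10 tensors, regularity 3, 2 open indices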
diff --git a/previews/PR121/tensors.html b/previews/PR121/tensors.html

Tensors · Tenet.jl

Tensors

There are many jokes[1] about how to define a tensor. The definition we give here might not be the most rigorous one, but it is good enough for our use case (don't kill me, mathematicians). A tensor $T$ of order[2] $n$ is a multilinear[3] map on $n$ vector spaces over a field $\mathcal{F}$.

\[T : \mathcal{F}^{\dim(1)} \times \dots \times \mathcal{F}^{\dim(n)} \to \mathcal{F}\]

In layman's terms, it is a function whose inputs are vectors and whose output is a scalar, and which is linear in each input separately.

\[T(\mathbf{v}^{(1)}, \dots, \mathbf{v}^{(n)}) = c \in \mathcal{F} \qquad\qquad \forall i, \mathbf{v}^{(i)} \in \mathcal{F}^{\dim(i)}\]

Tensor algebra is a higher-order generalization of linear algebra, where scalar numbers can be viewed as order-0 tensors, vectors as order-1 tensors, matrices as order-2 tensors, ...

Letters are used to identify each of the vector spaces the tensor relates to. In computer science, you would intuitively think of tensors as "n-dimensional arrays with named dimensions".

\[T_{ijk} \iff \mathtt{T[i,j,k]}\]

The Tensor type

In Tenet, a tensor is represented by the Tensor type, which wraps an array and a list of symbols. As it subtypes AbstractArray, many array operations can be dispatched to it.

You can create a Tensor by passing an array and a list of Symbols that name indices.

julia> Tᵢⱼₖ = Tensor(rand(3,5,2), (:i,:j,:k))
3×5×2 Tensor{Float64, 3, Array{Float64, 3}}:
 [:, :, 1] =
 0.335888  0.449721   0.929963  0.170126   0.96664
 0.742137  0.0458273  0.154299  0.0905466  0.0230296
 0.536864  0.42156    0.920493  0.187298   0.288305
 
 [:, :, 2] =
 0.1918     0.55884   0.127903  0.420863  0.240143
 0.923385   0.116579  0.767985  0.770805  0.232229
 0.0325161  0.616896  0.941829  0.341823  0.0957707

The dimensionality or size of each index can be queried with the size function.

Base.sizeMethod
Base.size(::Tensor[, i])

Return the size of the underlying array or the dimension i (specified by Symbol or Integer).

source
julia> size(Tᵢⱼₖ)
(3, 5, 2)

julia> size(Tᵢⱼₖ, :j)
5

julia> length(Tᵢⱼₖ)
30

Operations

Contraction

Tenet.contractMethod
contract(a::Tensor[, b::Tensor]; dims=nonunique([inds(a)..., inds(b)...]))

Perform tensor contraction operation.

source
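For example, contracting two tensors over their shared index behaves like a generalized matrix multiplication (a minimal sketch):

using Tenet

A = Tensor(rand(2, 3), (:i, :j))
B = Tensor(rand(3, 4), (:j, :k))
C = contract(A, B)   # sums over the repeated index :j, leaving :i and :k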

Factorizations

LinearAlgebra.svdMethod
LinearAlgebra.svd(tensor::Tensor; left_inds, right_inds, virtualind, kwargs...)

Perform SVD factorization on a tensor.

Keyword arguments

  • left_inds: left indices to be used in the SVD factorization. Defaults to all indices of tensor except right_inds.
  • right_inds: right indices to be used in the SVD factorization. Defaults to all indices of tensor except left_inds.
  • virtualind: name of the virtual bond. Defaults to a random Symbol.
source
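A sketch of splitting a tensor in two groups of indices (assuming the factors are returned in the order U, S, V, in the spirit of LinearAlgebra.svd; :bond is a name chosen here for the virtual index):

using LinearAlgebra, Tenet

Tᵢⱼₖ = Tensor(rand(3, 5, 2), (:i, :j, :k))
U, S, V = svd(Tᵢⱼₖ; left_inds = (:i, :j), virtualind = :bond)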
LinearAlgebra.qrMethod
LinearAlgebra.qr(tensor::Tensor; left_inds, right_inds, virtualind, kwargs...)

Perform QR factorization on a tensor.

Keyword arguments

  • left_inds: left indices to be used in the QR factorization. Defaults to all indices of tensor except right_inds.
  • right_inds: right indices to be used in the QR factorization. Defaults to all indices of tensor except left_inds.
  • virtualind: name of the virtual bond. Defaults to a random Symbol.
source
LinearAlgebra.luMethod
LinearAlgebra.lu(tensor::Tensor; left_inds, right_inds, virtualind, kwargs...)

Perform LU factorization on a tensor.

Keyword arguments

  • left_inds: left indices to be used in the LU factorization. Defaults to all indices of tensor except right_inds.
  • right_inds: right indices to be used in the LU factorization. Defaults to all indices of tensor except left_inds.
  • virtualind: name of the virtual bond. Defaults to a random Symbol.
source
  • 1For example, recursive definitions like "a tensor is whatever that transforms as a tensor".
  • 2The order of a tensor may also be known as rank or dimensionality in other fields. However, these terms can be misleading, since order has nothing to do with the rank of linear algebra nor with the dimensionality of a vector space. We prefer the word order.
  • 3Meaning that the output depends linearly on each of the inputs separately.
diff --git a/previews/PR121/transformations.html b/previews/PR121/transformations.html

Transformations · Tenet.jl

Transformations

In tensor network computations, it is good practice to apply various transformations to simplify the network structure, reduce computational cost, or prepare the network for further operations. These transformations modify the network's structure locally by permuting, contracting, factoring or truncating tensors.

A crucial reason why these methods are indispensable is their ability to drastically reduce the problem size of both the contraction path search and the contraction itself. This doesn't necessarily involve reducing the maximum rank of the Tensor Network, but, more importantly, it reduces the size (or rank) of the involved tensors.

Our approach is based on (Gray and Kourtis, 2021), and can also be found in quimb.

In Tenet, we provide a set of predefined transformations which you can apply to your TensorNetwork using the transform/transform! functions.

Tenet.transformFunction
transform(tn::AbstractTensorNetwork, config::Transformation)
transform(tn::AbstractTensorNetwork, configs)

Return a new TensorNetwork with some Transformation applied to it.

See also: transform!.

source
Tenet.transform!Function
transform!(tn::AbstractTensorNetwork, config::Transformation)
transform!(tn::AbstractTensorNetwork, configs)

In-place version of transform.

source
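For example, a sketch of applying the transformations listed below (assuming their default constructors):

using Tenet

tn = rand(TensorNetwork, 10, 3)
tn = transform(tn, Tenet.DiagonalReduction())                             # a single transformation
transform!(tn, [Tenet.AntiDiagonalGauging(), Tenet.ColumnReduction()])   # a sequence, in place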

Available transformations

Hyperindex converter

Tenet.HyperindConverterType
HyperindConverter <: Transformation

Convert hyperindices to COPY-tensors, represented by DeltaArrays. This transformation is always used by default when visualizing a TensorNetwork with plot.

source

Diagonal reduction

Tenet.DiagonalReductionType
DiagonalReduction <: Transformation

Reduce the dimension of a Tensor in a TensorNetwork when it has a pair of indices that fulfill a diagonal structure.

Keyword Arguments

  • atol Absolute tolerance. Defaults to 1e-12.
source
Example block output

Anti-diagonal reduction

Tenet.AntiDiagonalGaugingType
AntiDiagonalGauging <: Transformation

Reverse the order of tensor indices that fulfill the anti-diagonal condition. While this transformation doesn't directly enhance computational efficiency, it sets up the TensorNetwork for other operations that do.

Keyword Arguments

  • atol Absolute tolerance. Defaults to 1e-12.
  • skip List of indices to skip. Defaults to [].
source

Rank simplification

Tenet.RankSimplificationType
RankSimplification <: Transformation

Preemptively contract tensors whenever the resulting tensor does not grow in size.

source
Example block output

Column reduction

Tenet.ColumnReductionType
ColumnReduction <: Transformation

Truncate the dimension of a Tensor in a TensorNetwork when it contains columns with all elements smaller than atol.

Keyword Arguments

  • atol Absolute tolerance. Defaults to 1e-12.
  • skip List of indices to skip. Defaults to [].
source
Example block output

Split simplification

Tenet.SplitSimplificationType
SplitSimplification <: Transformation

Reduce the rank of tensors in the TensorNetwork by decomposing them using the Singular Value Decomposition (SVD). Tensors whose factorization does not increase the maximum rank of the network are left decomposed.

Keyword Arguments

  • atol Absolute tolerance. Defaults to 1e-10.
source
Example block output

diff --git a/previews/PR121/visualization.html b/previews/PR121/visualization.html

Visualization · Tenet.jl

Visualization

Tenet provides a Package Extension for Makie support. You can just import a Makie backend and call Makie.plot on a TensorNetwork.
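A minimal sketch (assuming the GLMakie backend; any Makie backend should work):

using GLMakie, Tenet

tn = rand(TensorNetwork, 10, 3)
plot(tn; labels = true)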

MakieCore.plotMethod
plot(tn::TensorNetwork; kwargs...)
 plot!(f::Union{Figure,GridPosition}, tn::TensorNetwork; kwargs...)
plot!(ax::Union{Axis,Axis3}, tn::TensorNetwork; kwargs...)

Plot a TensorNetwork as a graph.

Keyword Arguments

  • labels If true, show the labels of the tensor indices. Defaults to false.
  • The rest of kwargs are passed to GraphMakie.graphplot.
source
plot(tn, labels=true)
Example block output