Implement keyword selector #130

Open
pbrehmer wants to merge 57 commits into master from pb-kwarg-selector
Changes from 5 commits
Commits (57)
cb4cc22
Rename fwd_alg and rrule_alg in Defaults
pbrehmer Feb 12, 2025
90e1647
Update optimizer Defaults
pbrehmer Feb 12, 2025
99604b9
Update CTMRG Defaults
pbrehmer Feb 12, 2025
d20d04e
Update gradient algorithm defaults
pbrehmer Feb 12, 2025
80961cf
Add `fixedpoint_selector`
pbrehmer Feb 12, 2025
aab50ff
Add leading_boundary selector (for CTMRG)
pbrehmer Feb 13, 2025
3b21f40
Bundle kwargs in fixedpoint selector
pbrehmer Feb 13, 2025
a5606ed
Make kwarg-based leading_boundary and fixedpoint runnable
pbrehmer Feb 13, 2025
ef99d4c
Add docstrings
pbrehmer Feb 13, 2025
46baa3c
Add more docstrings
pbrehmer Feb 14, 2025
e9c35d6
Use kwarg-based methods in tests
pbrehmer Feb 14, 2025
44d9945
Merge branch 'master' into pb-kwarg-selector
pbrehmer Feb 17, 2025
7fb7cfe
Format leading_boundary and fixedpoint docstrings
pbrehmer Feb 17, 2025
4a61855
Implement select_algorithm scheme
pbrehmer Feb 17, 2025
f6e00f7
Fix leading_boundary
pbrehmer Feb 17, 2025
1a7cd36
Fix Heisenberg SU teset
pbrehmer Feb 17, 2025
c638018
Make selector compatible with svd_rrule_alg=nothing and improve alg w…
pbrehmer Feb 17, 2025
974bbc7
Merge branch 'master' into pb-kwarg-selector
leburgel Feb 21, 2025
4fd41cb
Properly merge...
leburgel Feb 21, 2025
0df39ee
Merge branch 'master' into pb-kwarg-selector
leburgel Feb 25, 2025
202e70a
Update src/algorithms/optimization/peps_optimization.jl
pbrehmer Feb 26, 2025
347dc10
Update src/PEPSKit.jl
pbrehmer Feb 26, 2025
bc3fe02
Apply most suggestions
pbrehmer Feb 26, 2025
d4a47d5
Merge branch 'pb-kwarg-selector' of github.com:quantumghent/PEPSKit.j…
pbrehmer Feb 26, 2025
ec2c675
Apply more suggestions
pbrehmer Feb 26, 2025
e7e9a50
Set eager=false in svd_rrule_alg again
pbrehmer Feb 26, 2025
2fd2333
example `select_algorithm` docstring
lkdvos Feb 27, 2025
46a6724
Interpolate docstrings and update `fixedpoint` and `leading_boundary`…
pbrehmer Mar 3, 2025
9d56f11
Move out Defaults and make types concrete
pbrehmer Mar 3, 2025
8aa6ac6
Only use concrete types in Defaults, use select_algorithm to map symb…
pbrehmer Mar 3, 2025
09dfa59
Rename gradient algorithm defaults
pbrehmer Mar 4, 2025
5634a34
Add remaining select_algorithm methods and algorithm Symbols
pbrehmer Mar 4, 2025
c4326ec
Rearrange files and update kwarg constructors
pbrehmer Mar 4, 2025
105a939
Remove Symbol Dicts and make kwarg constructors use select_algorithm
pbrehmer Mar 5, 2025
8f86cb7
Add CTMRGAlgorithm, PEPSOptimize and OptimizationAlgorithm select_alg…
pbrehmer Mar 5, 2025
c5277a6
Fix fixedpoint select_algorithm
pbrehmer Mar 5, 2025
e74f77f
Fix IterSVD docstring
pbrehmer Mar 5, 2025
5e94397
Make runnable
pbrehmer Mar 5, 2025
48c92c0
Fix formatting
pbrehmer Mar 5, 2025
3baea08
Merge branch 'master' into pb-kwarg-selector
pbrehmer Mar 5, 2025
75d1fab
Update fixedpoint docstring
pbrehmer Mar 5, 2025
adf66e9
Rename solver to solver_alg in LinSolver and EigSolver
pbrehmer Mar 6, 2025
c492f13
Fix gradients test
pbrehmer Mar 6, 2025
1bb64e9
Fix more tests
pbrehmer Mar 6, 2025
e1cf081
Update docstrings
pbrehmer Mar 6, 2025
52af682
Fix typo in TruncationScheme select_algorithm
pbrehmer Mar 6, 2025
cd0b781
Hopefully stabilize SU Heisenberg test on Windows
pbrehmer Mar 6, 2025
feef4a6
Rename optimizer to optimizer_alg
pbrehmer Mar 7, 2025
efe4319
Update more docstrings
pbrehmer Mar 7, 2025
1becbb4
Defaults markdown formatting updates
lkdvos Mar 7, 2025
05d8713
Consistently import `leading_boundary`
lkdvos Mar 7, 2025
c7b7d58
Replace if-else with `IdDict` for improved extensibility
lkdvos Mar 7, 2025
59ea7e4
Remove rrule `@reset` in select_algorithm(fixedpoint; ...)
pbrehmer Mar 7, 2025
73922b6
Remove ::Type{Algorithm} syntax from select_algorithm
pbrehmer Mar 7, 2025
6348656
Adapt docstrings
pbrehmer Mar 7, 2025
78f35e7
Adapt tests
pbrehmer Mar 7, 2025
c06c20b
Fix flavors.jl test
pbrehmer Mar 7, 2025
70 changes: 46 additions & 24 deletions src/PEPSKit.jl
@@ -68,15 +68,21 @@ Module containing default algorithm parameter values and arguments.
- `ctmrg_tol=1e-8`: Tolerance checking singular value and norm convergence
- `ctmrg_maxiter=100`: Maximal number of CTMRG iterations per run
- `ctmrg_miniter=4`: Minimal number of CTMRG iterations carried out
- `ctmrg_alg_type=SimultaneousCTMRG`: Default CTMRG algorithm variant
- `trscheme=FixedSpaceTruncation()`: Truncation scheme for SVDs and other decompositions
- `fwd_alg=TensorKit.SDD()`: SVD algorithm that is used in the forward pass
- `rrule_alg`: Reverse-rule for differentiating that SVD
- `svd_fwd_alg=TensorKit.SDD()`: SVD algorithm that is used in the forward pass
- `svd_rrule_alg`: Reverse-rule for differentiating that SVD

```
rrule_alg = Arnoldi(; tol=ctmrg_tol, krylovdim=48, verbosity=-1)
svd_rrule_alg = Arnoldi(; tol=ctmrg_tol, krylovdim=48, verbosity=-1)
```

- `svd_alg`: Combination of forward and reverse SVD algorithms

```
svd_alg=SVDAdjoint(; fwd_alg=svd_fwd_alg, rrule_alg=svd_rrule_alg)
```

- `svd_alg=SVDAdjoint(; fwd_alg, rrule_alg)`: Combination of `fwd_alg` and `rrule_alg`
- `projector_alg_type=HalfInfiniteProjector`: Default type of projector algorithm
- `projector_alg`: Algorithm to compute CTMRG projectors

@@ -87,35 +93,43 @@ Module containing default algorithm parameter values and arguments.
- `ctmrg_alg`: Algorithm for performing CTMRG runs

```
ctmrg_alg = SimultaneousCTMRG(
ctmrg_alg = ctmrg_alg_type(
ctmrg_tol, ctmrg_maxiter, ctmrg_miniter, 2, projector_alg
)
```

# Optimization
- `fpgrad_maxiter=30`: Maximal number of iterations for computing the CTMRG fixed-point gradient
- `fpgrad_tol=1e-6`: Convergence tolerance for the fixed-point gradient iteration
- `iterscheme=:fixed`: Scheme for differentiating one CTMRG iteration
- `gradient_alg_tol=1e-6`: Convergence tolerance for the fixed-point gradient iteration
- `gradient_alg_maxiter=30`: Maximal number of iterations for computing the CTMRG fixed-point gradient
- `gradient_alg_iterscheme=:fixed`: Scheme for differentiating one CTMRG iteration
- `gradient_linsolver`: Default linear solver for the `LinSolver` gradient algorithm

```
gradient_linsolver=KrylovKit.BiCGStab(; maxiter=fpgrad_maxiter, tol=fpgrad_tol)
gradient_linsolver=KrylovKit.BiCGStab(; maxiter=gradient_alg_maxiter, tol=gradient_alg_tol)
```

- `gradient_eigsolver`: Default eigsolver for the `EigSolver` gradient algorithm

```
gradient_eigsolver = KrylovKit.Arnoldi(; maxiter=fpgrad_maxiter, tol=fpgrad_tol, eager=true)
gradient_eigsolver = KrylovKit.Arnoldi(; maxiter=gradient_alg_maxiter, tol=gradient_alg_tol, eager=true)
```

- `gradient_alg`: Algorithm to compute the fixed-point gradient

```
gradient_alg = LinSolver(; solver=gradient_linsolver, iterscheme)
gradient_alg = LinSolver(; solver=gradient_linsolver, iterscheme=gradient_alg_iterscheme)
```

- `reuse_env=true`: If `true`, the current optimization step is initialized on the previous environment
- `optimizer=LBFGS(32; maxiter=100, gradtol=1e-4, verbosity=3)`: Default `OptimKit.OptimizerAlgorithm` for PEPS optimization

- `optimizer_tol`: Gradient norm tolerance of the optimizer
- `optimizer_maxiter`: Maximal number of optimization steps
- `lbfgs_memory`: Size of limited memory representation of BFGS Hessian matrix
- `optimizer`: Default `OptimKit.OptimizerAlgorithm` for PEPS optimization

```
optimizer=LBFGS(lbfgs_memory; maxiter=optimizer_maxiter, gradtol=optimizer_tol, verbosity=3)
```

# OhMyThreads scheduler
- `scheduler=Ref{Scheduler}(...)`: Multi-threading scheduler which can be accessed via `set_scheduler!`
@@ -133,28 +147,36 @@ module Defaults
const ctmrg_tol = 1e-8
const ctmrg_maxiter = 100
const ctmrg_miniter = 4
const ctmrg_alg_type = SimultaneousCTMRG
const sparse = false
const trscheme = FixedSpaceTruncation()
const fwd_alg = TensorKit.SDD()
const rrule_alg = Arnoldi(; tol=ctmrg_tol, krylovdim=48, verbosity=-1)
const svd_alg = SVDAdjoint(; fwd_alg, rrule_alg)
const svd_fwd_alg = TensorKit.SDD()
const svd_rrule_alg = Arnoldi(; tol=ctmrg_tol, krylovdim=48, verbosity=-1)
const svd_alg = SVDAdjoint(; fwd_alg=svd_fwd_alg, rrule_alg=svd_rrule_alg)
const projector_alg_type = HalfInfiniteProjector
const projector_alg = projector_alg_type(; svd_alg, trscheme, verbosity=0)
const ctmrg_alg = SimultaneousCTMRG(
const ctmrg_alg = ctmrg_alg_type(
ctmrg_tol, ctmrg_maxiter, ctmrg_miniter, 2, projector_alg
)

# Optimization
const fpgrad_maxiter = 30
const fpgrad_tol = 1e-6
const gradient_linsolver = KrylovKit.BiCGStab(; maxiter=fpgrad_maxiter, tol=fpgrad_tol)
const gradient_eigsolver = KrylovKit.Arnoldi(;
maxiter=fpgrad_maxiter, tol=fpgrad_tol, eager=true
const gradient_alg_tol = 1e-6
const gradient_alg_maxiter = 30
const gradient_linsolver = BiCGStab(;
maxiter=gradient_alg_maxiter, tol=gradient_alg_tol
)
const iterscheme = :fixed
const gradient_alg = LinSolver(; solver=gradient_linsolver, iterscheme)
const gradient_eigsolver = Arnoldi(;
maxiter=gradient_alg_maxiter, tol=gradient_alg_tol, eager=true
)
const gradient_alg_iterscheme = :fixed
const gradient_alg = LinSolver(; solver=gradient_linsolver, iterscheme=gradient_alg_iterscheme)
const reuse_env = true
const optimizer = LBFGS(32; maxiter=100, gradtol=1e-4, verbosity=3)
const optimizer_tol = 1e-4
const optimizer_maxiter = 100
const lbfgs_memory = 20
const optimizer = LBFGS(
lbfgs_memory; maxiter=optimizer_maxiter, gradtol=optimizer_tol, verbosity=3
)

# OhMyThreads scheduler defaults
const scheduler = Ref{Scheduler}()
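For orientation, here is a minimal sketch of how these renamed defaults compose into a full CTMRG algorithm. It only uses constructors that appear in this diff; the literal tolerance and iteration values mirror the `Defaults` constants above and are otherwise arbitrary.

```julia
using PEPSKit, TensorKit, KrylovKit

# Forward/reverse SVD pair, following the new svd_fwd_alg/svd_rrule_alg naming.
svd_alg = SVDAdjoint(;
    fwd_alg=TensorKit.SDD(),
    rrule_alg=KrylovKit.Arnoldi(; tol=1e-8, krylovdim=48, verbosity=-1),
)

# Projector and CTMRG algorithm, mirroring the Defaults constants; the
# positional signature is copied verbatim from the diff above.
projector_alg = HalfInfiniteProjector(;
    svd_alg, trscheme=FixedSpaceTruncation(), verbosity=0
)
ctmrg_alg = SimultaneousCTMRG(1e-8, 100, 4, 2, projector_alg)
```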
36 changes: 18 additions & 18 deletions src/algorithms/optimization/fixed_point_differentiation.jl
@@ -3,8 +3,8 @@ abstract type GradMode{F} end
iterscheme(::GradMode{F}) where {F} = F

"""
struct GeomSum(; maxiter=Defaults.fpgrad_maxiter, tol=Defaults.fpgrad_tol,
verbosity=0, iterscheme=Defaults.iterscheme) <: GradMode{iterscheme}
struct GeomSum(; tol=Defaults.gradient_alg_tol, maxiter=Defaults.gradient_alg_maxiter,
verbosity=0, iterscheme=Defaults.gradient_alg_iterscheme) <: GradMode{iterscheme}

Gradient mode for CTMRG using explicit evaluation of the geometric sum.

@@ -15,22 +15,22 @@ the differentiated iteration consists of a CTMRG iteration and a subsequent gauge fixing step,
such that `gauge_fix` will also be differentiated every time a CTMRG derivative is computed.
"""
struct GeomSum{F} <: GradMode{F}
maxiter::Int
tol::Real
maxiter::Int
verbosity::Int
end
function GeomSum(;
maxiter=Defaults.fpgrad_maxiter,
tol=Defaults.fpgrad_tol,
tol=Defaults.gradient_alg_tol,
maxiter=Defaults.gradient_alg_maxiter,
verbosity=0,
iterscheme=Defaults.iterscheme,
iterscheme=Defaults.gradient_alg_iterscheme,
)
return GeomSum{iterscheme}(maxiter, tol, verbosity)
return GeomSum{iterscheme}(tol, maxiter, verbosity)
end

"""
struct ManualIter(; maxiter=Defaults.fpgrad_maxiter, tol=Defaults.fpgrad_tol,
verbosity=0, iterscheme=Defaults.iterscheme) <: GradMode{iterscheme}
struct ManualIter(; tol=Defaults.gradient_alg_tol, maxiter=Defaults.gradient_alg_maxiter,
verbosity=0, iterscheme=Defaults.gradient_alg_iterscheme) <: GradMode{iterscheme}

Gradient mode for CTMRG using manual iteration to solve the linear problem.

@@ -41,21 +41,21 @@ the differentiated iteration consists of a CTMRG iteration and a subsequent gauge fixing step,
such that `gauge_fix` will also be differentiated every time a CTMRG derivative is computed.
"""
struct ManualIter{F} <: GradMode{F}
maxiter::Int
tol::Real
maxiter::Int
verbosity::Int
end
function ManualIter(;
maxiter=Defaults.fpgrad_maxiter,
tol=Defaults.fpgrad_tol,
tol=Defaults.gradient_alg_tol,
maxiter=Defaults.gradient_alg_maxiter,
verbosity=0,
iterscheme=Defaults.iterscheme,
iterscheme=Defaults.gradient_alg_iterscheme,
)
return ManualIter{iterscheme}(maxiter, tol, verbosity)
return ManualIter{iterscheme}(tol, maxiter, verbosity)
end

"""
struct LinSolver(; solver=KrylovKit.GMRES(), iterscheme=Defaults.iterscheme) <: GradMode{iterscheme}
struct LinSolver(; solver=KrylovKit.GMRES(), iterscheme=Defaults.gradient_alg_iterscheme) <: GradMode{iterscheme}

Gradient mode wrapper around `KrylovKit.LinearSolver` for solving the gradient linear
problem using iterative solvers.
@@ -70,14 +70,14 @@ struct LinSolver{F} <: GradMode{F}
solver::KrylovKit.LinearSolver
end
function LinSolver(;
solver=KrylovKit.BiCGStab(; maxiter=Defaults.fpgrad_maxiter, tol=Defaults.fpgrad_tol),
solver=Defaults.gradient_linsolver,
iterscheme=Defaults.iterscheme,
)
return LinSolver{iterscheme}(solver)
end

"""
struct EigSolver(; solver=KrylovKit.Arnoldi(), iterscheme=Defaults.iterscheme) <: GradMode{iterscheme}
struct EigSolver(; solver=Defaults.gradient_eigsolver, iterscheme=Defaults.gradient_alg_iterscheme) <: GradMode{iterscheme}

Gradient mode wrapper around `KrylovKit.KrylovAlgorithm` for solving the gradient linear
problem as an eigenvalue problem.
@@ -91,7 +91,7 @@ such that `gauge_fix` will also be differentiated every time a CTMRG derivative is computed.
struct EigSolver{F} <: GradMode{F}
solver::KrylovKit.KrylovAlgorithm
end
function EigSolver(; solver=Defauls.gradient_eigsolver, iterscheme=Defaults.iterscheme)
function EigSolver(; solver=Defaults.gradient_eigsolver, iterscheme=Defaults.iterscheme)
return EigSolver{iterscheme}(solver)
end
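As a quick usage sketch of the renamed gradient-mode constructors above: the keyword names are taken from this diff, the tolerance and iteration values are illustrative, and `:diffgauge`/`:fixed` are the two `iterscheme` options described in the docstrings.

```julia
using PEPSKit, KrylovKit

# Explicit geometric-sum evaluation, differentiating through the gauge fix.
geomsum_mode = GeomSum(; tol=1e-6, maxiter=30, verbosity=0, iterscheme=:diffgauge)

# Krylov linear-solver mode, matching the new Defaults.gradient_linsolver setup.
linsolver_mode = LinSolver(;
    solver=KrylovKit.BiCGStab(; tol=1e-6, maxiter=30),
    iterscheme=:fixed,
)
```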

119 changes: 115 additions & 4 deletions src/algorithms/optimization/peps_optimization.jl
@@ -43,7 +43,7 @@ function PEPSOptimize(;
end

"""
fixedpoint(operator, peps₀::InfinitePEPS, env₀::CTMRGEnv; kwargs...)
fixedpoint(operator, peps₀::InfinitePEPS, env₀::CTMRGEnv; kwargs...) # TODO
fixedpoint(operator, peps₀::InfinitePEPS, env₀::CTMRGEnv, alg::PEPSOptimize;
finalize!=OptimKit._finalize!)

@@ -71,8 +71,8 @@ information `NamedTuple` which contains the following entries:
"""
function fixedpoint(operator, peps₀::InfinitePEPS, env₀::CTMRGEnv; kwargs...)
throw(error("method not yet implemented"))
alg = fixedpoint_selector(; kwargs...) # TODO: implement fixedpoint_selector
return fixedpoint(operator, peps₀, env₀, alg)
alg, finalize! = fixedpoint_selector(; kwargs...)
return fixedpoint(operator, peps₀, env₀, alg; finalize!)
end
function fixedpoint(
operator,
@@ -131,7 +131,7 @@ function fixedpoint(
return E, g
end

info = (
info = (;
last_gradient=∂cost,
fg_evaluations=numfg,
costs=convergence_history[:, 1],
@@ -144,6 +144,117 @@ function fixedpoint(
return peps_final, env_final, cost, info
end

"""
fixedpoint_selector(;
boundary_tol=Defaults.ctmrg_tol,
boundary_miniter=Defaults.ctmrg_miniter,
boundary_maxiter=Defaults.ctmrg_maxiter,
boundary_alg_type=Defaults.ctmrg_alg_type,
trscheme=Defaults.trscheme,
svd_fwd_alg=Defaults.svd_fwd_alg,
svd_rrule_alg=Defaults.svd_rrule_alg,
projector_alg_type=Defaults.projector_alg_type,
iterscheme=Defaults.gradient_alg_iterscheme,
reuse_env=Defaults.reuse_env,
gradient_alg_tol=Defaults.gradient_alg_tol,
gradient_alg_maxiter=Defaults.gradient_alg_maxiter,
gradient_alg_type=typeof(Defaults.gradient_alg),
optimizer_tol=Defaults.optimizer_tol,
optimizer_maxiter=Defaults.optimizer_maxiter,
lbfgs_memory=Defaults.lbfgs_memory,
symmetrization=nothing,
verbosity=1,
(finalize!)=OptimKit._finalize!,
)

Parse optimization keyword arguments onto the corresponding algorithm structs and return
a final `PEPSOptimize` to be used in `fixedpoint`. For a description of the keyword
arguments, see [`fixedpoint`](@ref).
"""
function fixedpoint_selector(;
boundary_tol=Defaults.ctmrg_tol,
boundary_miniter=Defaults.ctmrg_miniter,
boundary_maxiter=Defaults.ctmrg_maxiter,
boundary_alg_type=Defaults.ctmrg_alg_type,
trscheme=Defaults.trscheme,
svd_fwd_alg=Defaults.svd_fwd_alg,
svd_rrule_alg=Defaults.svd_rrule_alg,
projector_alg_type=Defaults.projector_alg_type,
iterscheme=Defaults.gradient_alg_iterscheme,
reuse_env=Defaults.reuse_env,
gradient_alg_tol=Defaults.gradient_alg_tol,
gradient_alg_maxiter=Defaults.gradient_alg_maxiter,
gradient_alg_type=typeof(Defaults.gradient_alg),
optimizer_tol=Defaults.optimizer_tol,
optimizer_maxiter=Defaults.optimizer_maxiter,
lbfgs_memory=Defaults.lbfgs_memory,
symmetrization=nothing,
verbosity=1,
(finalize!)=OptimKit._finalize!,
)
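# Map the single user-facing `verbosity` onto per-component verbosity levels:
# 0 silences all output; 1 shows optimizer steps and degeneracy warnings;
# 2 additionally prints boundary (CTMRG) convergence info; 3 enables full debug
# output including gradient and SVD reverse-rule diagnostics.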
if verbosity ≤ 0 # disable output
optimizer_verbosity = -1
boundary_verbosity = -1
projector_verbosity = -1
gradient_alg_verbosity = -1
svd_rrule_verbosity = -1
elseif verbosity == 1 # output only optimization steps and degeneracy warnings
optimizer_verbosity = 3
boundary_verbosity = -1
projector_verbosity = 1
gradient_alg_verbosity = -1
svd_rrule_verbosity = -1
elseif verbosity == 2 # output optimization and boundary information
optimizer_verbosity = 3
boundary_verbosity = 2
projector_verbosity = 1
gradient_alg_verbosity = -1
svd_rrule_verbosity = -1
elseif verbosity == 3 # verbose debug output
optimizer_verbosity = 3
boundary_verbosity = 3
projector_verbosity = 1
gradient_alg_verbosity = 3
svd_rrule_verbosity = 3
end

svd_alg = SVDAdjoint(; fwd_alg=svd_fwd_alg, rrule_alg=svd_rrule_alg)
projector_alg = projector_alg_type(svd_alg, trscheme, projector_verbosity)
boundary_alg = boundary_alg_type(
boundary_tol, boundary_maxiter, boundary_miniter, boundary_verbosity, projector_alg
)
gradient_alg = if gradient_alg_type <: Union{GeomSum,ManualIter}
gradient_alg_type(;
tol=gradient_alg_tol,
maxiter=gradient_alg_maxiter,
verbosity=gradient_alg_verbosity,
iterscheme,
)
elseif gradient_alg_type <: LinSolver
solver = Defaults.gradient_linsolver
@reset solver.maxiter = gradient_alg_maxiter
@reset solver.tol = gradient_alg_tol
@reset solver.verbosity = gradient_alg_verbosity
LinSolver(; solver, iterscheme)
elseif gradient_alg_type <: EigSolver
solver = Defaults.gradient_eigsolver
@reset solver.maxiter = gradient_alg_maxiter
@reset solver.tol = gradient_alg_tol
@reset solver.verbosity = gradient_alg_verbosity
EigSolver(; solver, iterscheme)
end
optimizer = LBFGS(
lbfgs_memory;
gradtol=optimizer_tol,
maxiter=optimizer_maxiter,
verbosity=optimizer_verbosity,
)
optimization_alg = PEPSOptimize(;
boundary_alg, gradient_alg, optimizer, reuse_env, symmetrization
)
return optimization_alg, finalize!
end
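To illustrate the call path this PR wires up — `fixedpoint` forwarding its keywords through `fixedpoint_selector` into a `PEPSOptimize` — here is a hypothetical invocation. The Heisenberg setup and the dimensions are placeholders for illustration, not part of this diff:

```julia
using PEPSKit, TensorKit

# Hypothetical setup: a Heisenberg model on an infinite square lattice with
# bond dimension 2 and environment dimension 16 (values illustrative only).
H = heisenberg_XYZ(InfiniteSquare())
peps₀ = InfinitePEPS(2, 2)
env₀ = CTMRGEnv(peps₀, ComplexSpace(16))

# Keywords are parsed by `fixedpoint_selector` into a `PEPSOptimize`;
# anything left unspecified falls back to the Defaults shown earlier.
peps, env, E, info = fixedpoint(
    H, peps₀, env₀;
    boundary_tol=1e-10,
    gradient_alg_tol=1e-6,
    optimizer_maxiter=50,
    verbosity=2,
)
```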

# Update PEPS unit cell in non-mutating way
# Note: Both x and η are InfinitePEPS during optimization
function peps_retract(x, η, α)
6 changes: 3 additions & 3 deletions src/utility/svd.jl
@@ -10,7 +10,7 @@ using TensorKit:
const CRCExt = Base.get_extension(KrylovKit, :KrylovKitChainRulesCoreExt)

"""
struct SVDAdjoint(; fwd_alg=Defaults.fwd_alg, rrule_alg=Defaults.rrule_alg,
struct SVDAdjoint(; fwd_alg=Defaults.svd_fwd_alg, rrule_alg=Defaults.svd_rrule_alg,
broadening=nothing)

Wrapper for an SVD algorithm `fwd_alg` with a defined reverse rule `rrule_alg`.
@@ -19,8 +19,8 @@ In case of degenerate singular values, one might need a `broadening` scheme which
removes the divergences from the adjoint.
"""
@kwdef struct SVDAdjoint{F,R,B}
fwd_alg::F = Defaults.fwd_alg
rrule_alg::R = Defaults.rrule_alg
fwd_alg::F = Defaults.svd_fwd_alg
rrule_alg::R = Defaults.svd_rrule_alg
broadening::B = nothing
end # Keep truncation algorithm separate to be able to specify CTMRG dependent information
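A small construction sketch for the wrapper, assuming the renamed defaults above; the explicit override values are illustrative:

```julia
using PEPSKit, TensorKit, KrylovKit

# Default construction: picks up Defaults.svd_fwd_alg and Defaults.svd_rrule_alg.
svd_default = SVDAdjoint()

# Explicit override: full SVD forward pass with a GMRES-based reverse rule.
svd_custom = SVDAdjoint(;
    fwd_alg=TensorKit.SVD(),
    rrule_alg=KrylovKit.GMRES(; tol=1e-9),
)
```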
