Implement keyword selector #130

Open · wants to merge 57 commits into base: master

Commits (57):
cb4cc22
Rename fwd_alg and rrule_alg in Defaults
pbrehmer Feb 12, 2025
90e1647
Update optimizer Defaults
pbrehmer Feb 12, 2025
99604b9
Update CTMRG Defaults
pbrehmer Feb 12, 2025
d20d04e
Update gradient algorithm defaults
pbrehmer Feb 12, 2025
80961cf
Add `fixedpoint_selector`
pbrehmer Feb 12, 2025
aab50ff
Add leading_boundary selector (for CTMRG)
pbrehmer Feb 13, 2025
3b21f40
Bundle kwargs in fixedpoint selector
pbrehmer Feb 13, 2025
a5606ed
Make kwarg-based leading_boundary and fixedpoint runnable
pbrehmer Feb 13, 2025
ef99d4c
Add docstrings
pbrehmer Feb 13, 2025
46baa3c
Add more docstrings
pbrehmer Feb 14, 2025
e9c35d6
Use kwarg-based methods in tests
pbrehmer Feb 14, 2025
44d9945
Merge branch 'master' into pb-kwarg-selector
pbrehmer Feb 17, 2025
7fb7cfe
Format leading_boundary and fixedpoint docstrings
pbrehmer Feb 17, 2025
4a61855
Implement select_algorithm scheme
pbrehmer Feb 17, 2025
f6e00f7
Fix leading_boundary
pbrehmer Feb 17, 2025
1a7cd36
Fix Heisenberg SU teset
pbrehmer Feb 17, 2025
c638018
Make selector compatible with svd_rrule_alg=nothing and improve alg w…
pbrehmer Feb 17, 2025
974bbc7
Merge branch 'master' into pb-kwarg-selector
leburgel Feb 21, 2025
4fd41cb
Properly merge...
leburgel Feb 21, 2025
0df39ee
Merge branch 'master' into pb-kwarg-selector
leburgel Feb 25, 2025
202e70a
Update src/algorithms/optimization/peps_optimization.jl
pbrehmer Feb 26, 2025
347dc10
Update src/PEPSKit.jl
pbrehmer Feb 26, 2025
bc3fe02
Apply most suggestions
pbrehmer Feb 26, 2025
d4a47d5
Merge branch 'pb-kwarg-selector' of github.com:quantumghent/PEPSKit.j…
pbrehmer Feb 26, 2025
ec2c675
Apply more suggestions
pbrehmer Feb 26, 2025
e7e9a50
Set eager=false in svd_rrule_alg again
pbrehmer Feb 26, 2025
2fd2333
example `select_algorithm` docstring
lkdvos Feb 27, 2025
46a6724
Interpolate docstrings and update `fixedpoint` and `leading_boundary`…
pbrehmer Mar 3, 2025
9d56f11
Move out Defaults and make types concrete
pbrehmer Mar 3, 2025
8aa6ac6
Only use concrete types in Defaults, use select_algorithm to map symb…
pbrehmer Mar 3, 2025
09dfa59
Rename gradient algorithm defaults
pbrehmer Mar 4, 2025
5634a34
Add remaining select_algorithm methods and algorithm Symbols
pbrehmer Mar 4, 2025
c4326ec
Rearrange files and update kwarg constructors
pbrehmer Mar 4, 2025
105a939
Remove Symbol Dicts and make kwarg constructors use select_algorithm
pbrehmer Mar 5, 2025
8f86cb7
Add CTMRGAlgorithm, PEPSOptimize and OptimizationAlgorithm select_alg…
pbrehmer Mar 5, 2025
c5277a6
Fix fixedpoint select_algorithm
pbrehmer Mar 5, 2025
e74f77f
Fix IterSVD docstring
pbrehmer Mar 5, 2025
5e94397
Make runnable
pbrehmer Mar 5, 2025
48c92c0
Fix formatting
pbrehmer Mar 5, 2025
3baea08
Merge branch 'master' into pb-kwarg-selector
pbrehmer Mar 5, 2025
75d1fab
Update fixedpoint docstring
pbrehmer Mar 5, 2025
adf66e9
Rename solver to solver_alg in LinSolver and EigSolver
pbrehmer Mar 6, 2025
c492f13
Fix gradients test
pbrehmer Mar 6, 2025
1bb64e9
Fix more tests
pbrehmer Mar 6, 2025
e1cf081
Update docstrings
pbrehmer Mar 6, 2025
52af682
Fix typo in TruncationScheme select_algorithm
pbrehmer Mar 6, 2025
cd0b781
Hopefully stabilize SU Heisenberg test on Windows
pbrehmer Mar 6, 2025
feef4a6
Rename optimizer to optimizer_alg
pbrehmer Mar 7, 2025
efe4319
Update more docstrings
pbrehmer Mar 7, 2025
1becbb4
Defaults markdown formatting updates
lkdvos Mar 7, 2025
05d8713
Consistently import `leading_boundary`
lkdvos Mar 7, 2025
c7b7d58
Replace if-else with `IdDict` for improved extensibility
lkdvos Mar 7, 2025
59ea7e4
Remove rrule `@reset` in select_algorithm(fixedpoint; ...)
pbrehmer Mar 7, 2025
73922b6
Remove ::Type{Algorithm} syntax from select_algorithm
pbrehmer Mar 7, 2025
6348656
Adapt docstrings
pbrehmer Mar 7, 2025
78f35e7
Adapt tests
pbrehmer Mar 7, 2025
c06c20b
Fix flavors.jl test
pbrehmer Mar 7, 2025
2 changes: 1 addition & 1 deletion README.md
@@ -47,7 +47,7 @@
chi = 20
ctm_alg = SimultaneousCTMRG(; tol=1e-10, trscheme=truncdim(chi))
opt_alg = PEPSOptimize(;
boundary_alg=ctm_alg,
- optimizer=LBFGS(4; maxiter=100, gradtol=1e-4, verbosity=3),
+ optimizer_alg=LBFGS(4; maxiter=100, gradtol=1e-4, verbosity=3),
gradient_alg=LinSolver(),
reuse_env=true,
)
2 changes: 1 addition & 1 deletion docs/src/index.md
@@ -29,7 +29,7 @@
chi = 20
ctm_alg = SimultaneousCTMRG(; tol=1e-10, trscheme=truncdim(chi))
opt_alg = PEPSOptimize(;
boundary_alg=ctm_alg,
- optimizer=LBFGS(4; maxiter=100, gradtol=1e-4, verbosity=3),
+ optimizer_alg=LBFGS(4; maxiter=100, gradtol=1e-4, verbosity=3),
gradient_alg=LinSolver(),
reuse_env=true,
)
4 changes: 2 additions & 2 deletions examples/heisenberg.jl
@@ -14,8 +14,8 @@
H = heisenberg_XYZ(InfiniteSquare(); Jx=-1, Jy=1, Jz=-1)
ctm_alg = SimultaneousCTMRG(; tol=1e-10, verbosity=2)
opt_alg = PEPSOptimize(;
boundary_alg=ctm_alg,
- optimizer=LBFGS(4; maxiter=100, gradtol=1e-4, verbosity=3),
- gradient_alg=LinSolver(; solver=GMRES(; tol=1e-6, maxiter=100)),
+ optimizer_alg=LBFGS(4; maxiter=100, gradtol=1e-4, verbosity=3),
+ gradient_alg=LinSolver(; solver_alg=GMRES(; tol=1e-6, maxiter=100)),
reuse_env=true,
)

139 changes: 139 additions & 0 deletions src/Defaults.jl
@@ -0,0 +1,139 @@
"""
module Defaults

Module containing default algorithm parameter values and arguments.

## CTMRG

- `ctmrg_tol=$(Defaults.ctmrg_tol)` : Tolerance checking singular value and norm convergence.
- `ctmrg_maxiter=$(Defaults.ctmrg_maxiter)` : Maximal number of CTMRG iterations per run.
- `ctmrg_miniter=$(Defaults.ctmrg_miniter)` : Minimal number of CTMRG iterations carried out.
- `ctmrg_alg=:$(Defaults.ctmrg_alg)` : Default CTMRG algorithm variant.
- `ctmrg_verbosity=$(Defaults.ctmrg_verbosity)` : CTMRG output information verbosity.

## SVD forward & reverse

- `trscheme=:$(Defaults.trscheme)` : Truncation scheme for SVDs and other decompositions.
- `svd_fwd_alg=:$(Defaults.svd_fwd_alg)` : SVD algorithm that is used in the forward pass.
- `svd_rrule_tol=$(Defaults.svd_rrule_tol)` : Accuracy of SVD reverse-rule.
- `svd_rrule_min_krylovdim=$(Defaults.svd_rrule_min_krylovdim)` : Minimal Krylov dimension of the reverse-rule algorithm (if it is a Krylov algorithm).
- `svd_rrule_verbosity=$(Defaults.svd_rrule_verbosity)` : SVD gradient output verbosity.
- `svd_rrule_alg=:$(Defaults.svd_rrule_alg)` : Reverse-rule algorithm for the SVD gradient.

## Projectors

- `projector_alg=:$(Defaults.projector_alg)` : Default variant of the CTMRG projector algorithm.
- `projector_verbosity=$(Defaults.projector_verbosity)` : Projector output information verbosity.

## Fixed-point gradient

- `gradient_tol=$(Defaults.gradient_tol)` : Convergence tolerance for the fixed-point gradient iteration.
- `gradient_maxiter=$(Defaults.gradient_maxiter)` : Maximal number of iterations for computing the CTMRG fixed-point gradient.
- `gradient_verbosity=$(Defaults.gradient_verbosity)` : Gradient output information verbosity.
- `gradient_linsolver=:$(Defaults.gradient_linsolver)` : Default linear solver for the `LinSolver` gradient algorithm.
- `gradient_eigsolver=:$(Defaults.gradient_eigsolver)` : Default eigensolver for the `EigSolver` gradient algorithm.
- `gradient_eigsolver_eager=$(Defaults.gradient_eigsolver_eager)` : Enables `EigSolver` algorithm to finish before the full Krylov dimension is reached.
- `gradient_iterscheme=:$(Defaults.gradient_iterscheme)` : Scheme for differentiating one CTMRG iteration.
- `gradient_alg=:$(Defaults.gradient_alg)` : Algorithm variant for computing the gradient fixed-point.

## Optimization

- `reuse_env=$(Defaults.reuse_env)` : If `true`, the current optimization step is initialized on the previous environment, otherwise a random environment is used.
- `optimizer_tol=$(Defaults.optimizer_tol)` : Gradient norm tolerance of the optimizer.
- `optimizer_maxiter=$(Defaults.optimizer_maxiter)` : Maximal number of optimization steps.
- `optimizer_verbosity=$(Defaults.optimizer_verbosity)` : Optimizer output information verbosity.
- `optimizer_alg=:$(Defaults.optimizer_alg)` : Default `OptimKit.OptimizerAlgorithm` for PEPS optimization.
- `lbfgs_memory=$(Defaults.lbfgs_memory)` : Size of limited memory representation of BFGS Hessian matrix.

## OhMyThreads scheduler

- `scheduler=Ref{Scheduler}(...)` : Multi-threading scheduler which can be accessed via `set_scheduler!`.
"""
module Defaults

export set_scheduler!

using OhMyThreads

# CTMRG
const ctmrg_tol = 1e-8
const ctmrg_maxiter = 100
const ctmrg_miniter = 4
const ctmrg_alg = :simultaneous # ∈ {:simultaneous, :sequential}
const ctmrg_verbosity = 2
const sparse = false # TODO: implement sparse CTMRG

# SVD forward & reverse
const trscheme = :fixedspace # ∈ {:fixedspace, :notrunc, :truncerr, :truncspace, :truncbelow}
const svd_fwd_alg = :sdd # ∈ {:sdd, :svd, :iterative}
const svd_rrule_tol = ctmrg_tol
const svd_rrule_min_krylovdim = 48
const svd_rrule_verbosity = -1
const svd_rrule_alg = :arnoldi # ∈ {:gmres, :bicgstab, :arnoldi}
const krylovdim_factor = 1.4

# Projectors
const projector_alg = :halfinfinite # ∈ {:halfinfinite, :fullinfinite}
const projector_verbosity = 0

# Fixed-point gradient
const gradient_tol = 1e-6
const gradient_maxiter = 30
const gradient_verbosity = -1
const gradient_linsolver = :bicgstab # ∈ {:gmres, :bicgstab}
const gradient_eigsolver = :arnoldi
const gradient_eigsolver_eager = true
const gradient_iterscheme = :fixed # ∈ {:fixed, :diffgauge}
const gradient_alg = :linsolver # ∈ {:geomsum, :manualiter, :linsolver, :eigsolver}

# Optimization
const reuse_env = true
const optimizer_tol = 1e-4
const optimizer_maxiter = 100
const optimizer_verbosity = 3
const optimizer_alg = :lbfgs
const lbfgs_memory = 20

# OhMyThreads scheduler defaults
const scheduler = Ref{Scheduler}()

"""
set_scheduler!([scheduler]; kwargs...)

Set `OhMyThreads` multi-threading scheduler parameters.

The function accepts a `scheduler` either as an `OhMyThreads.Scheduler` or as a
`Symbol`, where the corresponding parameters are specified as keyword arguments.
For instance, a static scheduler that uses four tasks with chunking enabled
can be set via
```
set_scheduler!(StaticScheduler(; ntasks=4, chunking=true))
```
or equivalently with
```
set_scheduler!(:static; ntasks=4, chunking=true)
```
For a detailed description of all schedulers and their keyword arguments consult the
[`OhMyThreads` documentation](https://juliafolds2.github.io/OhMyThreads.jl/stable/refs/api/#Schedulers).

If no `scheduler` is passed and only kwargs are provided, the `DynamicScheduler`
constructor is used with the provided kwargs.

To reset the scheduler to its default value, call `set_scheduler!` without arguments;
this selects the default `DynamicScheduler()`, falling back to `SerialScheduler()` when
only a single thread is in use.
"""
function set_scheduler!(sc=OhMyThreads.Implementation.NotGiven(); kwargs...)
if isempty(kwargs) && sc isa OhMyThreads.Implementation.NotGiven
scheduler[] = Threads.nthreads() == 1 ? SerialScheduler() : DynamicScheduler()
else
scheduler[] = OhMyThreads.Implementation._scheduler_from_userinput(sc; kwargs...)

# Codecov annotation: added line src/Defaults.jl#L130 not covered by tests
end
return nothing
end

function __init__()
return set_scheduler!()
end

end
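The symbol-to-algorithm mapping at the heart of this PR (commits "Implement select_algorithm scheme" and "Replace if-else with `IdDict` for improved extensibility") can be illustrated with a self-contained sketch. The types and the `select_gradient_alg` function below are hypothetical simplifications for illustration, not PEPSKit's actual API; only the default values (`gradient_tol = 1e-6`, `gradient_maxiter = 30`) are taken from the `Defaults` above:

```julia
# Illustrative sketch, not PEPSKit's actual API: hypothetical simplified types.
abstract type GradientAlgSketch end
struct GeomSumSketch <: GradientAlgSketch
    tol::Float64
    maxiter::Int
end
struct LinSolverSketch <: GradientAlgSketch
    tol::Float64
    maxiter::Int
end

# IdDict from keyword Symbols to constructors; easier to extend than an
# if-else chain (cf. commit c7b7d58), since registering a new algorithm
# is a single entry rather than another branch.
const GRADIENT_ALGS = IdDict{Symbol,Any}(
    :geomsum => GeomSumSketch, :linsolver => LinSolverSketch
)

# Map a Symbol plus keyword arguments onto a concrete algorithm struct,
# using the documented defaults (gradient_tol = 1e-6, gradient_maxiter = 30).
function select_gradient_alg(alg::Symbol=:linsolver; tol=1e-6, maxiter=30)
    haskey(GRADIENT_ALGS, alg) ||
        throw(ArgumentError("unknown gradient algorithm: $alg"))
    return GRADIENT_ALGS[alg](tol, maxiter)
end

select_gradient_alg(:geomsum; maxiter=50)  # GeomSumSketch(1.0e-6, 50)
```

The user-facing keyword interface (`gradient_alg=:linsolver` etc.) then only needs to forward Symbols and kwargs to such a selector, which keeps `Defaults` free of concrete algorithm instances.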
147 changes: 4 additions & 143 deletions src/PEPSKit.jl
@@ -1,18 +1,19 @@
module PEPSKit

using LinearAlgebra, Statistics, Base.Threads, Base.Iterators, Printf
using Base: @kwdef
using Compat
using Accessors: @set, @reset
using VectorInterface
using TensorKit, KrylovKit, MPSKit, OptimKit, TensorOperations
using ChainRulesCore, Zygote
using LoggingExtras
- using MPSKit: loginit!, logiter!, logfinish!, logcancel!
+ import MPSKit: leading_boundary, loginit!, logiter!, logfinish!, logcancel!
using MPSKitModels
using FiniteDifferences
using OhMyThreads: tmap

include("Defaults.jl") # Include first to allow for docstring interpolation with Defaults values

include("utility/util.jl")
include("utility/diffable_threads.jl")
include("utility/svd.jl")
@@ -65,147 +66,7 @@
include("utility/symmetrization.jl")
include("algorithms/optimization/fixed_point_differentiation.jl")
include("algorithms/optimization/peps_optimization.jl")

"""
module Defaults

Module containing default algorithm parameter values and arguments.

# CTMRG
- `ctmrg_tol=1e-8`: Tolerance checking singular value and norm convergence
- `ctmrg_maxiter=100`: Maximal number of CTMRG iterations per run
- `ctmrg_miniter=4`: Minimal number of CTMRG carried out
- `trscheme=FixedSpaceTruncation()`: Truncation scheme for SVDs and other decompositions
- `fwd_alg=TensorKit.SDD()`: SVD algorithm that is used in the forward pass
- `rrule_alg`: Reverse-rule for differentiating that SVD

```
rrule_alg = Arnoldi(; tol=ctmrg_tol, krylovdim=48, verbosity=-1)
```

- `svd_alg=SVDAdjoint(; fwd_alg, rrule_alg)`: Combination of `fwd_alg` and `rrule_alg`
- `projector_alg_type=HalfInfiniteProjector`: Default type of projector algorithm
- `projector_alg`: Algorithm to compute CTMRG projectors

```
projector_alg = projector_alg_type(; svd_alg, trscheme, verbosity=0)
```

- `ctmrg_alg`: Algorithm for performing CTMRG runs

```
ctmrg_alg = SimultaneousCTMRG(
ctmrg_tol, ctmrg_maxiter, ctmrg_miniter, 2, projector_alg
)
```

# Optimization
- `fpgrad_maxiter=30`: Maximal number of iterations for computing the CTMRG fixed-point gradient
- `fpgrad_tol=1e-6`: Convergence tolerance for the fixed-point gradient iteration
- `iterscheme=:fixed`: Scheme for differentiating one CTMRG iteration
- `gradient_linsolver`: Default linear solver for the `LinSolver` gradient algorithm

```
gradient_linsolver=KrylovKit.BiCGStab(; maxiter=fpgrad_maxiter, tol=fpgrad_tol)
```

- `gradient_eigsolve`: Default eigsolver for the `EigSolver` gradient algorithm

```
gradient_eigsolver = KrylovKit.Arnoldi(; maxiter=fpgrad_maxiter, tol=fpgrad_tol, eager=true)
```

- `gradient_alg`: Algorithm to compute the gradient fixed-point

```
gradient_alg = LinSolver(; solver=gradient_linsolver, iterscheme)
```

- `reuse_env=true`: If `true`, the current optimization step is initialized on the previous environment
- `optimizer=LBFGS(32; maxiter=100, gradtol=1e-4, verbosity=3)`: Default `OptimKit.OptimizerAlgorithm` for PEPS optimization

# OhMyThreads scheduler
- `scheduler=Ref{Scheduler}(...)`: Multi-threading scheduler which can be accessed via `set_scheduler!`
"""
module Defaults
using TensorKit, KrylovKit, OptimKit, OhMyThreads
using PEPSKit:
LinSolver,
FixedSpaceTruncation,
SVDAdjoint,
HalfInfiniteProjector,
SimultaneousCTMRG

# CTMRG
const ctmrg_tol = 1e-8
const ctmrg_maxiter = 100
const ctmrg_miniter = 4
const sparse = false
const trscheme = FixedSpaceTruncation()
const fwd_alg = TensorKit.SDD()
const rrule_alg = Arnoldi(; tol=ctmrg_tol, krylovdim=48, verbosity=-1)
const svd_alg = SVDAdjoint(; fwd_alg, rrule_alg)
const projector_alg_type = HalfInfiniteProjector
const projector_alg = projector_alg_type(; svd_alg, trscheme, verbosity=0)
const ctmrg_alg = SimultaneousCTMRG(
ctmrg_tol, ctmrg_maxiter, ctmrg_miniter, 2, projector_alg
)

# Optimization
const fpgrad_maxiter = 30
const fpgrad_tol = 1e-6
const gradient_linsolver = KrylovKit.BiCGStab(; maxiter=fpgrad_maxiter, tol=fpgrad_tol)
const gradient_eigsolver = KrylovKit.Arnoldi(;
maxiter=fpgrad_maxiter, tol=fpgrad_tol, eager=true
)
const iterscheme = :fixed
const gradient_alg = LinSolver(; solver=gradient_linsolver, iterscheme)
const reuse_env = true
const optimizer = LBFGS(32; maxiter=100, gradtol=1e-4, verbosity=3)

# OhMyThreads scheduler defaults
const scheduler = Ref{Scheduler}()
"""
set_scheduler!([scheduler]; kwargs...)

Set `OhMyThreads` multi-threading scheduler parameters.

The function either accepts a `scheduler` as an `OhMyThreads.Scheduler` or
as a symbol where the corresponding parameters are specificed as keyword arguments.
For instance, a static scheduler that uses four tasks with chunking enabled
can be set via
```
set_scheduler!(StaticScheduler(; ntasks=4, chunking=true))
```
or equivalently with
```
set_scheduler!(:static; ntasks=4, chunking=true)
```
For a detailed description of all schedulers and their keyword arguments consult the
[`OhMyThreads` documentation](https://juliafolds2.github.io/OhMyThreads.jl/stable/refs/api/#Schedulers).

If no `scheduler` is passed and only kwargs are provided, the `DynamicScheduler`
constructor is used with the provided kwargs.

To reset the scheduler to its default value, one calls `set_scheduler!` without passing
arguments which then uses the default `DynamicScheduler()`. If the number of used threads is
just one it falls back to `SerialScheduler()`.
"""
function set_scheduler!(sc=OhMyThreads.Implementation.NotGiven(); kwargs...)
if isempty(kwargs) && sc isa OhMyThreads.Implementation.NotGiven
scheduler[] = Threads.nthreads() == 1 ? SerialScheduler() : DynamicScheduler()
else
scheduler[] = OhMyThreads.Implementation._scheduler_from_userinput(
sc; kwargs...
)
end
return nothing
end
export set_scheduler!

function __init__()
return set_scheduler!()
end
end
include("algorithms/select_algorithm.jl")

using .Defaults: set_scheduler!
export set_scheduler!
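The new include ordering in `src/PEPSKit.jl` ("Include first to allow for docstring interpolation with Defaults values") relies on Julia's `$`-interpolation inside docstrings, the same mechanism used throughout the new `Defaults` docstring. A minimal self-contained sketch (module and function names are hypothetical, chosen only to mirror the pattern):

```julia
# The constants must already be defined when the docstring below is parsed,
# since $-interpolation bakes their current values into the documentation.
module MiniDefaults
const ctmrg_tol = 1e-8
end

"""
    run_ctmrg(; tol=$(MiniDefaults.ctmrg_tol))

Run CTMRG until the convergence criterion drops below `tol`
(default `$(MiniDefaults.ctmrg_tol)`).
"""
run_ctmrg(; tol=MiniDefaults.ctmrg_tol) = tol
```

This is why swapping the include order would break the build: a docstring interpolating `Defaults` values cannot be evaluated before the `Defaults` module exists.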