Implement cluster expansion MPOs #252
Conversation
Perhaps also using
Keep in mind that `@generated` tends to increase compilation time, since it has to compile new functions for all types.

Having had a quick glance at this, it seems to me that the compile time is this large mostly because a lot of the functions are very type-unstable. One thing that might help is to use `Val(N)` whenever you are referring to the lengths of the finite MPOs. Since you are instantiating them as contracted tensors, this yields `AbstractTensorMap{T,S,N,N}` or similar objects, which becomes a problem when `N` is not known at compile time. (This coincidentally also requires more `@generated` functions, since `ncon` is also not type stable 😁)

This being said, am I right in thinking that the main missing component in terms of contractions is the instantiation or application of `FiniteMPO` with static lengths? If so, we might just create an `SFiniteMPO` in analogy to `SVector` (StaticArrays.jl) and implement this generically. This has additional benefits as well, since the derivatives would fit in the same framework.
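The `Val(N)` pattern can be sketched in isolation with plain Julia (no TensorKit; all function names here are hypothetical, not part of this PR):

```julia
# Hypothetical sketch of the Val(N) pattern, independent of TensorKit.
# When the length only exists as a runtime value, the result type cannot
# be inferred:
make_tuple_unstable(n::Int) = ntuple(identity, n)   # length unknown to the compiler

# Lifting the length into the type domain makes the result fully inferable:
make_tuple_stable(::Val{N}) where {N} = ntuple(identity, Val(N))  # NTuple{N,Int}

# Callers pay one dynamic dispatch at the Val(n) boundary, after which
# everything downstream is specialized on N:
apply_finite(n::Int) = make_tuple_stable(Val(n))
```

The same idea applied to the MPOs would make `N` in `AbstractTensorMap{T,S,N,N}` a compile-time constant for all downstream contractions.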
```julia
    :(return @tensor $t_out := $t_left * $t_right))
end

function _make_apply_functions(O::SparseBlockTensorMap, n::Int)
```
I'm not entirely sure returning an anonymous function is a great idea here. If you could refactor this into a callable struct, or any struct that you can make work with KrylovKit.jl, that would avoid having to recompile these functions every time you call this.
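A minimal sketch of the callable-struct pattern, with plain matrices standing in for the environment tensors (`ApplyO`, `Al`, `Ar` are hypothetical names, not from this PR):

```julia
# Hypothetical sketch: a callable struct instead of an anonymous closure.
# Every anonymous function has its own type, so code receiving a fresh
# closure per call is recompiled each time; a struct has one fixed type.
struct ApplyO{TL,TR}
    Al::TL
    Ar::TR
end

# Calling the struct applies the (placeholder) sandwich contraction.
(f::ApplyO)(x) = f.Al * x * f.Ar

# Usage: any callable works as the linear map in KrylovKit, e.g.
#   eigsolve(ApplyO(Al, Ar), x0, 1, :LM)
```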
```julia
    nT = (n - 1) ÷ 2
    Al, Ar = make_envs(O, n)

    fun = if iseven(n)
```
Splitting on the value of `n` is again very type-unstable...
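One way around this (a sketch only, with hypothetical function names) is to push the even/odd split into dispatch, so each branch becomes its own fully typed method:

```julia
# Hypothetical sketch: branch on iseven(n) in the type domain instead of
# a runtime if/else, so each parity gets its own compiled method.
_apply(x, n::Int) = _apply(x, Val(iseven(n)))  # one dynamic dispatch here

_apply(x, ::Val{true})  = 2x   # placeholder for the even-n contraction
_apply(x, ::Val{false}) = 3x   # placeholder for the odd-n contraction
```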
```julia
    lmax = N ÷ 2 # largest level
    τ = -im * dt
    # spaces
    P = physicalspace(H)[1]
```
Suggested change:
```diff
- P = physicalspace(H)[1]
+ P = physicalspace(H, 1)
```
```julia
    τ = -im * dt
    # spaces
    P = physicalspace(H)[1]
    D = dim(physicalspace(H)[1]) # physical dimension
```
Suggested change:
```diff
- D = dim(physicalspace(H)[1]) # physical dimension
+ D = dim(P) # physical dimension
```
```julia
    # spaces
    P = physicalspace(H)[1]
    D = dim(physicalspace(H)[1]) # physical dimension
    V = BlockTensorKit.oplus([ℂ^(D^2l) for l in 0:lmax]...)
```
This seems oddly specific to non-symmetric tensors. Not having looked at the paper, is it just `oplus([fuse(P^(2l)) for l in 0:lmax])`?
```julia
    V = BlockTensorKit.oplus([ℂ^(D^2l) for l in 0:lmax]...)

    TT = tensormaptype(ComplexSpace, 2, 2, ComplexF64)
    O = SparseBlockTensorMap{TT}(undef, V ⊗ P ← P ⊗ V)
```
Can this be written with `similar`?
```julia
        # assign the new level to O
        O[jnl, 1, 1, jnl] = Onew
    elseif iseven(n) # linear problem + svd
        U, S, V = tsvd(Onew, (1, 2, 4), (3, 5, 6))
```
Suggested change:
```diff
- U, S, V = tsvd(Onew, (1, 2, 4), (3, 5, 6))
+ U, S, V = tsvd!(Onew, (1, 2, 4), (3, 5, 6))
```
```julia
    elseif iseven(n) # linear problem + svd
        U, S, V = tsvd(Onew, (1, 2, 4), (3, 5, 6))
        # assign the new levels to O
        O[jnl - 1, 1, 1, jnl] = permute(U * sqrt(S), ((1, 2), (3, 4)))
```
Suggested change:
```diff
- O[jnl - 1, 1, 1, jnl] = permute(U * sqrt(S), ((1, 2), (3, 4)))
+ @plansor O[jnl - 1, 1, 1, jnl][-1 -2; -3 -4] := U[-1 -2 -3; 1] * sqrt(S)[1, -4]
```
Writing this with `@plansor` calls makes it compatible with fermions, and can help with allocations in the future.
```julia
function _fully_contract_O(O::SparseBlockTensorMap, n::Int)
    # Make projector onto the 0 subspace of the virtual level
    Pl = zeros(ComplexF64, ℂ^1 ← space(O, 1))
    Pl[1] = 1
```
This pattern seems wrong (and will probably break once I actually get around to fixing that), and probably only works for non-symmetric tensors... If I understand correctly, you want a trivial unit tensor as the first element, so you should probably just instantiate that tensor directly. There are some examples of this scattered throughout MPSKit 😉
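The idea can be illustrated with plain arrays (a sketch only; the TensorKit equivalent would construct the unit tensor on the trivial space directly, and the names here are hypothetical):

```julia
# Hypothetical plain-array analogue of the projector pattern.
d = 4

# Pattern in the PR: allocate zeros, then poke in a single entry.
Pl_poke = zeros(ComplexF64, 1, d)
Pl_poke[1] = 1

# Instantiating the unit row vector directly states the intent,
# and has an obvious symmetric-tensor counterpart.
Pl_unit = ComplexF64[j == 1 for _ in 1:1, j in 1:d]
```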
```julia
    return be - bo
end

function _fully_contract_O(O::SparseBlockTensorMap, n::Int)
```
Why is this function different from the `_make_Hamterms` implementation? It seems to me like the exact same contraction as `convert(TensorMap, mpo)`, unless I'm missing something?
This is my initial implementation of the "symmetric cluster expansions" introduced in https://doi.org/10.1103/PhysRevA.103.L020402. They are currently only implemented for infinite systems. The current implementation is still undergoing testing and suffers from rather long compilation times.

Things that still need to be done: