
Fix tensors function for Operator type #78

Closed

wants to merge 2 commits
Conversation

jofrevalles
Member

Summary

This PR fixes a bug in the tensors function, which raised a MethodError when called on a TensorNetwork of Operator type. The root cause was that the function signature was written exclusively for the State type. The fix extends the signature to handle both Operator and State types.

In addition to the fix, this PR introduces new tests designed to cover this scenario and prevent regressions.
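The fix itself is not shown in this thread, but the signature extension described above can be illustrated with a self-contained sketch. The type hierarchy and function names below are simplified stand-ins for illustration, not Tenet.jl's actual definitions:

```julia
# Toy hierarchy mimicking the Plug types; these are stand-ins,
# not Tenet.jl's actual definitions.
abstract type Plug end
abstract type State <: Plug end
abstract type Operator <: Plug end

# Before the fix: a method defined only for State...
tensors_old(::Type{State}, data, site) = data[site]

# ...so calls dispatched with Operator raise a MethodError.
# The fix widens the signature to accept both plug types:
tensors_new(::Type{T}, data, site) where {T<:Union{State,Operator}} = data[site]

data = [rand(2, 2) for _ in 1:3]
tensors_new(Operator, data, 1)  # now dispatches instead of erroring
```

Defining the method over Union{State,Operator} lets both plug types dispatch to the same body, which works as long as the operation really is identical for both (the point mofeing raises below).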

Example

Here is an example of the previous error:

julia> _arrays = [rand(1, 1, 2, 2), rand(1, 1, 2, 2), rand(1, 1, 2, 2)]
3-element Vector{Array{Float64, 4}}:
 [0.5446558537456561;;; 0.43215020981519403;;;; 0.032683161122349036;;; 0.6659303187609796]
 [0.08054317169533542;;; 0.48062834256926257;;;; 0.28742074946598595;;; 0.17849508460251406]
 [0.7073563711310278;;; 0.12764406681391194;;;; 0.34387035001349886;;; 0.8791235663956318]

julia> ψ = MatrixProduct{Operator,Periodic}(_arrays, order = (:l, :r, :i, :o))
TensorNetwork{MatrixProduct{Operator, Periodic}, NamedTuple{(:plug, :interlayer, ), Tuple{Type{<:Plug}, Vector{Bijections.Bijection{Int64, Symbol}}, Union{Nothing, Int64}}}}(#tensors=3, #labels=9)

julia> tensors(ψ, 1)
ERROR: MethodError: no method matching tensors(::Type{Operator}, ::TensorNetwork{MatrixProduct{Operator, Periodic}, NamedTuple{(:plug, :interlayer, ), Tuple{Type{<:Plug}, Vector{Bijections.Bijection{Int64, Symbol}}, Union{Nothing, Int64}}}}, ::Int64)

Closest candidates are:
  tensors(::Type{Operator}, ::TensorNetwork{<:Quantum}, ::Any, ::Symbol)
   @ Tenet ~/.julia/packages/ValSplit/MMCz3/src/ValSplit.jl:141
  tensors(::Type{State}, ::TensorNetwork{<:Quantum}, ::Any)
   @ Tenet ~/git/Tenet.jl/src/Quantum/Quantum.jl:84
  tensors(::TensorNetwork{<:Quantum}, ::Integer, ::Any...)
   @ Tenet ~/git/Tenet.jl/src/Quantum/Quantum.jl:83

Stacktrace:
 [1] tensors(::TensorNetwork{MatrixProduct{Operator, Periodic}, NamedTuple{(:plug, :interlayer, ), Tuple{Type{<:Plug}, Vector{Bijections.Bijection{Int64, Symbol}}, Union{Nothing, Int64}}}}, ::Int64)
   @ Tenet ~/git/Tenet.jl/src/Quantum/Quantum.jl:83
 [2] top-level scope
   @ REPL[9]:1

@codecov

codecov bot commented Aug 1, 2023

Codecov Report

Merging #78 (18b9620) into master (9e5349d) will decrease coverage by 0.45%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master      #78      +/-   ##
==========================================
- Coverage   78.27%   77.82%   -0.45%     
==========================================
  Files           9        9              
  Lines         695      699       +4     
==========================================
  Hits          544      544              
- Misses        151      155       +4     
Files Changed            Coverage           Δ
src/Quantum/Quantum.jl   73.73% <100.00%>   (-2.31%) ⬇️

... and 1 file with indirect coverage changes

@mofeing
Member

mofeing commented Aug 1, 2023

There is a reason why tensors is only defined for State and not Operator: it is not as straightforward for Operators when the input and output sites do not match. This is the case for the Spaced Matrix Product Operator, for example.

I'm closing this PR without merging because I don't think this is the right solution, but thanks for the reminder that I have yet to fix this.

@jofrevalles do you mind opening an issue?

If you need a quick fix, you can use the interlayer property of Quantum TNs along with the select function. Also, please mark it with a "# TODO: refactor this when tensors is implemented for Operators" comment.
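For illustration only, a rough sketch of that workaround might look like the following. The exact select call and the interlayer lookup are assumptions about the Tenet.jl API, not confirmed by this thread:

```julia
# Hypothetical workaround sketch -- `select` usage and the interlayer
# lookup are assumptions, not confirmed Tenet.jl API.
# TODO: refactor this when tensors is implemented for Operators
site = 1
label = first(ψ.interlayer)[site]  # Bijection: site index -> tensor label
tensor = only(select(ψ, label))    # pick the tensor carrying that label
```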

@jofrevalles
Member Author

Okay. Since MPO and PEPO currently only support the same number of inputs and outputs, I thought this was a good way to implement it. Nevertheless, let's discuss it in an issue.
