Incorrect result when using input tensor to store result of contraction involving sum #192

Closed
@leburgel

Description


I encountered what I think is a pretty sneaky bug when assigning the result of a tensor contraction to an existing tensor used in the contraction:

```julia
using LinearAlgebra
using TensorOperations

A = randn(ComplexF64, 8, 4, 8)
L = randn(ComplexF64, 8, 8)
R = randn(ComplexF64, 8, 8)
e = randn(ComplexF64)

@tensor B[-1, -2, -3] := L[-1, 1] * A[1, -2, 2] * R[2, -3] + e * A[-1, -2, -3]

@tensor A[-1, -2, -3] = L[-1, 1] * A[1, -2, 2] * R[2, -3] + e * A[-1, -2, -3]

@show norm(B - A) # gives a different result
```

When the sum is removed, both cases give the same result.

I assume this pattern creates an issue similar to something like

```julia
x = randn(ComplexF64, 10)
A = randn(ComplexF64, 10, 10)

@tensor x[-1] = A[-1, 1] * x[1] # throws `ERROR: ArgumentError: output tensor must not be aliased with input tensor`
```

However, in the first example this is never caught, probably because the actual tensor contraction creates a distinct temporary, which circumvents the aliasing check.
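As an analogy outside TensorOperations, the failure mode can be sketched in NumPy. The names and setup below are purely illustrative (not taken from the issue): if each entry of `A @ x + e * x` is written back into `x` while later entries still need the original `x`, the extra `e * x` term picks up already-overwritten values:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))  # plays the role of the contraction
x = rng.standard_normal(4)       # input that is also used as output
e = 2.0                          # scalar weight of the added term

# Safe version: the full result is computed into a fresh buffer first.
expected = A @ x + e * x

# Aliased version: each entry is written back into x while later
# entries of the computation still need the original x.
x_aliased = x.copy()
for i in range(len(x_aliased)):
    x_aliased[i] = A[i] @ x_aliased + e * x_aliased[i]

# Entry 0 agrees (it only reads original values), but every later
# entry reads already-overwritten data, so the results diverge.
print(np.allclose(expected, x_aliased))  # almost surely False
```

The safe variant corresponds to `:=` (a fresh output), the aliased variant to `=` into an input tensor; the sum term is what makes the stale reads matter.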

I don't know whether this is a bug that can be fixed or just a pathological pattern that shouldn't be used. But in the latter case, it should definitely result in some kind of explicit error being thrown.
