[FEA] Better error reporting #858

Closed
@jwmelto

Description


Is your feature request related to a problem? Please describe.
I ran an FFT and got this opaque exception:

matxException (matxCufftError: error == CUFFT_SUCCESS)

This occurs at `fft/fft_cuda.h:437` in v0.9.0.

The exception is raised from the MATX_ASSERT macro, which does not capture enough detail to describe the actual failure condition.

Describe the solution you'd like
At a minimum, the actual cuFFT error code is required to diagnose the problem; the corresponding error string would also be helpful. Something like the ubiquitous cudaCheck macro.
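For illustration, here is a minimal sketch of what such a check could look like, modeled on the common cudaCheck pattern. The `CUFFT_CHECK` and `cufftErrorString` names are hypothetical, and the `cufftResult_t` enum below is a stub of a few real codes from `<cufft.h>` so the sketch compiles without the CUDA toolkit:

```cpp
#include <cstdio>
#include <stdexcept>
#include <string>

// Stub of a few cufftResult codes (values match <cufft.h>),
// so this sketch builds standalone without the CUDA toolkit.
enum cufftResult_t {
  CUFFT_SUCCESS        = 0,
  CUFFT_INVALID_PLAN   = 1,
  CUFFT_ALLOC_FAILED   = 2,
  CUFFT_INVALID_VALUE  = 4,
  CUFFT_INTERNAL_ERROR = 5,
  CUFFT_EXEC_FAILED    = 6
};

// Map an error code to its symbolic name (illustrative subset).
inline const char* cufftErrorString(cufftResult_t err) {
  switch (err) {
    case CUFFT_SUCCESS:        return "CUFFT_SUCCESS";
    case CUFFT_INVALID_PLAN:   return "CUFFT_INVALID_PLAN";
    case CUFFT_ALLOC_FAILED:   return "CUFFT_ALLOC_FAILED";
    case CUFFT_INVALID_VALUE:  return "CUFFT_INVALID_VALUE";
    case CUFFT_INTERNAL_ERROR: return "CUFFT_INTERNAL_ERROR";
    case CUFFT_EXEC_FAILED:    return "CUFFT_EXEC_FAILED";
    default:                   return "unknown cufftResult";
  }
}

// Hypothetical CUFFT_CHECK: on failure, throw with the numeric code,
// the symbolic name, and the source location baked into the message.
#define CUFFT_CHECK(call)                                               \
  do {                                                                  \
    cufftResult_t err_ = (call);                                        \
    if (err_ != CUFFT_SUCCESS) {                                        \
      throw std::runtime_error(                                         \
          std::string("cuFFT error ") + std::to_string(err_) + " (" +   \
          cufftErrorString(err_) + ") at " + __FILE__ + ":" +           \
          std::to_string(__LINE__));                                    \
    }                                                                   \
  } while (0)
```

Wrapping each cuFFT call site (e.g. `CUFFT_CHECK(cufftExecC2C(...))`) would surface both the code and its name in the exception message, rather than only `error == CUFFT_SUCCESS`.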

Describe alternatives you've considered
I'm currently at a loss as to how to get the error code out without modifying the MatX code directly.

Additional context
I can guess at the issue: I created a tensor over host memory and passed it into the FFT, so I surmise that it's a host/device memory issue.

using HostComplex = std::complex<float>;
using Complex = cuda::std::complex<float>;

auto exec = matx::cudaExecutor{};

auto wipeoff = matx::make_tensor<Complex>( { M, N } );
(wipeoff = /* details out of scope */).run(exec);

std::vector<HostComplex> data = /* get source data */;
auto dataP = reinterpret_cast<Complex*>( data.data() );
auto dataT = matx::make_tensor<Complex>( dataP, { N } );

// Here is where it fails
auto vals = matx::make_tensor<Complex>( wipeoff.Shape() );
(vals = matx::fft( (wipeoff * dataT) )).run(exec);
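If that guess is right, one possible workaround is to stage the host data into a MatX-allocated (device-accessible) tensor before using it in the FFT expression. This is an unverified sketch against the snippet above, not a confirmed fix:

// Sketch of a workaround, assuming the failure is cuFFT being handed
// host-pageable memory: copy the std::vector contents into a tensor
// allocated by MatX (managed memory by default) instead of wrapping
// the host pointer directly.
auto dataT = matx::make_tensor<Complex>( { N } );  // MatX-owned allocation
cudaMemcpy( dataT.Data(), data.data(), N * sizeof(Complex),
            cudaMemcpyHostToDevice );

auto vals = matx::make_tensor<Complex>( wipeoff.Shape() );
(vals = matx::fft( (wipeoff * dataT) )).run(exec);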

I took a look at v01.1.1, but my project had numerous compilation errors with that version; further investigation is ongoing.
