
Build does not feature-detect kFP8 #991

Open · jonhoo opened this issue Aug 22, 2024 · 0 comments

jonhoo commented Aug 22, 2024

Description

Note: this duplicates #922, but adds significantly more detail, so I recommend closing that and using this instead.

The nvinfer1::DataType::kFP8 enumerator was added to TensorRT between versions 8.5.3 and 8.6.0, but onnx-tensorrt currently uses it unconditionally. As a result, the library no longer builds against TensorRT < 8.6.0, which affects platforms such as NVIDIA JetPack 5 (still shipping TensorRT 8.5.2). On those platforms, onnx-tensorrt now fails to build with:

[ 86%] Building CXX object _deps/onnx_tensorrt-build/CMakeFiles/nvonnxparser_static.dir/NvOnnxParser.cpp.o
In file included from /nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/onnx2trt.hpp:10,
                 from /nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/ImporterContext.hpp:7,
                 from /nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/ModelImporter.hpp:7,
                 from /nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/NvOnnxParser.cpp:6:
/nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/TensorOrWeights.hpp: In member function 'std::string onnx2trt::TensorOrWeights::getType() const':
/nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/TensorOrWeights.hpp:118:38: error: 'kFP8' is not a member of 'nvinfer1::DataType'
  118 |             case nvinfer1::DataType::kFP8: return "FP8";
      |                                      ^~~~
In file included from /nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/onnx2trt_utils.hpp:10,
                 from /nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/ImporterContext.hpp:8,
                 from /nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/ModelImporter.hpp:7,
                 from /nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/NvOnnxParser.cpp:6:
/nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/trt_utils.hpp: In function 'int onnx2trt::getDtypeSize(nvinfer1::DataType)':
/nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/trt_utils.hpp:26:30: error: 'kFP8' is not a member of 'nvinfer1::DataType'
   26 |     case nvinfer1::DataType::kFP8: return 1;
      |                              ^~~~
/nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/trt_utils.hpp: In function 'onnx::TensorProto_DataType onnx2trt::trtDataTypeToONNX(nvinfer1::DataType)':
/nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/trt_utils.hpp:160:30: error: 'kFP8' is not a member of 'nvinfer1::DataType'
  160 |     case nvinfer1::DataType::kFP8: break;
      |                              ^~~~
In file included from /nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/ImporterContext.hpp:8,
                 from /nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/ModelImporter.hpp:7,
                 from /nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/NvOnnxParser.cpp:6:
/nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/onnx2trt_utils.hpp: In function 'std::ostream& nvinfer1::operator<<(std::ostream&, const nvinfer1::DataType&)':
/nix/store/skkb1qrwi5rxjf3f4j4cmf4r4cfq9nnl-source/onnx2trt_utils.hpp:78:30: error: 'kFP8' is not a member of 'nvinfer1::DataType'
   78 |     case nvinfer1::DataType::kFP8: return stream << "float8";
      |                              ^~~~
make[2]: *** [_deps/onnx_tensorrt-build/CMakeFiles/nvonnxparser_static.dir/build.make:76: _deps/onnx_tensorrt-build/CMakeFiles/nvonnxparser_static.dir/NvOnnxParser.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:5213: _deps/onnx_tensorrt-build/CMakeFiles/nvonnxparser_static.dir/all] Error 2

The use of kFP8 should probably be guarded the same way it is in TensorFlow itself (tensorflow/tensorflow#60046), namely with something like:

#if IS_TRT_VERSION_GE(8, 6, 0, 0)
    case nvinfer1::DataType::kFP8:
      return "kFP8";
#endif

Environment

TensorRT Version: 8.5.2
ONNX-TensorRT Version / Branch: 8.6-GA
GPU Type: Jetson AGX Orin
Nvidia Driver Version: JetPack 5.1.2
CUDA Version: 11.4.19
CUDNN Version: 8.6.0
Operating System + Version: JetPack 5.1.2
Python Version (if applicable): 3.12
TensorFlow + TF2ONNX Version (if applicable): 2.16.2

Steps To Reproduce

  • Build onnx-tensorrt against TensorRT 8.5.2
jonhoo changed the title from "Library does not feature-detect float8" to "Build does not feature-detect kFP8" on Aug 22, 2024