Is your feature request related to a problem? Please describe.
CUDA 12.8 introduced several new extended floating point types:
NVFP4
__nv_fp4_e2m1
This data format is a 4-bit floating-point format with 2 bits for exponent and 1 bit for mantissa. The e2m1 encoding does not support infinity or NaN.
NVFP6
__nv_fp6_e2m3
This data format is a 6-bit floating-point format with 2 bits for exponent and 3 bits for mantissa. The e2m3 encoding does not support infinity or NaN.
__nv_fp6_e3m2
This data format is a 6-bit floating-point format with 3 bits for exponent and 2 bits for mantissa. The e3m2 encoding does not support infinity or NaN.
NVFP8
__nv_fp8_e8m0
This data format is an 8-bit unsigned floating-point format with 8 bits for exponent and 0 bits for mantissa. The ue8m0 encoding does not support infinity; the NaN value is limited to 0xff.
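To make the layouts above concrete, here is a small host-side C++ sketch that decodes these bit patterns. The exponent biases (2^(E-1) - 1, i.e. 1 for e2m1/e2m3, 3 for e3m2, 127 for ue8m0) and the subnormal rule are assumptions following the usual IEEE-754-style conventions; they are not stated above, and the helper name `decode_minifloat` is made up for illustration, not part of any CUDA header.

```cpp
// Illustrative host-side sketch only: decodes bit patterns laid out as
// [sign | E exponent bits | M mantissa bits], as described above.
// Bias = 2^(E-1) - 1 and the subnormal rule are assumed (IEEE-754-style).
#include <cmath>
#include <cstdint>
#include <cstdio>

double decode_minifloat(std::uint8_t bits, int exp_bits, int man_bits) {
    int bias = (1 << (exp_bits - 1)) - 1;               // e2 -> 1, e3 -> 3
    int sign = (bits >> (exp_bits + man_bits)) & 1;
    int exp  = (bits >> man_bits) & ((1 << exp_bits) - 1);
    int man  = bits & ((1 << man_bits) - 1);

    double value;
    if (exp == 0) {                                      // subnormal: no implicit leading 1
        value = std::ldexp(static_cast<double>(man), 1 - bias - man_bits);
    } else {                                             // normal: implicit leading 1
        value = std::ldexp(1.0 + static_cast<double>(man) / (1 << man_bits), exp - bias);
    }
    return sign ? -value : value;
}

int main() {
    // e2m1 (4-bit): the eight non-negative codes 0b0000..0b0111
    for (std::uint8_t b = 0; b < 8; ++b) {
        std::printf("e2m1 0x%x -> %g\n", unsigned(b), decode_minifloat(b, /*exp_bits=*/2, /*man_bits=*/1));
    }
    // e2m3 and e3m2 (6-bit): largest positive code of each
    std::printf("e2m3 0x1f -> %g\n", decode_minifloat(0x1f, 2, 3));   // expect 7.5
    std::printf("e3m2 0x1f -> %g\n", decode_minifloat(0x1f, 3, 2));   // expect 28
    // ue8m0 (8-bit, unsigned, exponent-only): value is 2^(bits - 127), 0xff is NaN
    std::uint8_t e8m0 = 130;
    std::printf("e8m0 %u -> %g\n", unsigned(e8m0),
                e8m0 == 0xff ? NAN : std::ldexp(1.0, e8m0 - 127));
    return 0;
}
```

Note that, consistent with the descriptions above, none of the signed encodings reserve patterns for infinity or NaN, so every code decodes to a finite value.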
Describe the solution you'd like
- Add feature macros for NVFP4 and NVFP6 types, similar to NVFP8
- Specialize cuda::std::numeric_limits for these types (a rough sketch of plausible values follows this list)
- Implement overloads in <cuda/std/cmath>
- Implement specializations of cuda::std::complex
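For the numeric_limits item, the following is a rough sketch of the values such a specialization could expose for __nv_fp4_e2m1. It is written as a standalone struct returning float so it compiles without the new CUDA headers; the constants (max = 6, epsilon = 0.5, etc.) are derived from the e2m1 layout above under an assumed bias of 1, and the struct name is hypothetical, not a proposal for the actual API.

```cpp
// Illustrative only: plausible values for a numeric_limits-style trait of
// the e2m1 format (assumed bias 1). A real cuda::std::numeric_limits
// specialization would return __nv_fp4_e2m1, not float.
#include <cstdio>

struct fp4_e2m1_limits_sketch {
    static constexpr bool is_specialized = true;
    static constexpr bool is_signed      = true;
    static constexpr bool has_infinity   = false;  // e2m1 has no Inf encoding
    static constexpr bool has_quiet_NaN  = false;  // ...and no NaN encoding
    static constexpr int  radix          = 2;
    static constexpr int  digits         = 2;      // 1 mantissa bit + implicit leading 1
    static constexpr int  min_exponent   = 1;      // 2^(min_exponent-1) = 1.0 is the smallest normal
    static constexpr int  max_exponent   = 3;      // 2^(max_exponent-1) = 4.0 is representable
    static constexpr float max()        { return 6.0f;  }  // 1.5 * 2^2
    static constexpr float lowest()     { return -6.0f; }
    static constexpr float min()        { return 1.0f;  }  // smallest positive normal
    static constexpr float denorm_min() { return 0.5f;  }  // only subnormal magnitude
    static constexpr float epsilon()    { return 0.5f;  }  // next value after 1.0 is 1.5
};

int main() {
    std::printf("fp4 e2m1: max=%g min=%g denorm_min=%g epsilon=%g\n",
                fp4_e2m1_limits_sketch::max(), fp4_e2m1_limits_sketch::min(),
                fp4_e2m1_limits_sketch::denorm_min(), fp4_e2m1_limits_sketch::epsilon());
    return 0;
}
```

The same derivation would give the corresponding constants for the two FP6 encodings and for ue8m0.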
Describe alternatives you've considered
No response
Additional context
No response
Is this a duplicate?
Area
libcu++