…ompatible for XPU (#665)
This fixes issues from #653.
The cases verify an expected assertion message raised in the kernel. The
XPU backend produces a different message: a different keyword (ours does
not contain 'CUDA') and a different index (XPU reports the work item).
The original test cases use CUDA-specific calls and expect the assert
message returned by CUDA. The test cases are now rewritten in
`test_nn_xpu.py` to use XPU-specific counterparts.
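For context, the rewritten tests have to select the expected assert keyword per backend, since CUDA kernels report a thread index while XPU (SYCL) kernels report a work item. A minimal sketch of that selection logic (the function name and the exact XPU wording here are illustrative assumptions, not taken from `test_nn_xpu.py`):

```python
# Hypothetical helper: pick the substring expected in a device-side
# assert message, depending on the backend. The exact XPU message text
# is an assumption for illustration.
def expected_assert_keyword(device_type: str) -> str:
    """Return the keyword a device-side assert message should contain."""
    if device_type == "cuda":
        # CUDA kernel asserts mention 'CUDA' and the failing thread/block.
        return "CUDA"
    if device_type == "xpu":
        # XPU kernel asserts reference the failing work item instead.
        return "work item"
    raise ValueError(f"unsupported device type: {device_type}")


# A test would then match against the backend-appropriate keyword:
keyword = expected_assert_keyword("xpu")
print(keyword)
```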
- "test_cross_entropy_loss_2d_out_of_bounds_class_index_xpu_float16",
- "test_cross_entropy_loss_2d_out_of_bounds_class_index_xpu_float32",
🐛 Describe the bug
#258
These cases target the 2.5 release.
LossNLL2d does not raise the correct assert:
"test_cross_entropy_loss_2d_out_of_bounds_class_index_xpu_float16",
"test_cross_entropy_loss_2d_out_of_bounds_class_index_xpu_float32", - Jianghang
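For reference, a minimal CPU reproduction of the failure mode these two cases exercise (assuming a recent PyTorch build): passing a class index >= C to `cross_entropy` must raise an error; on CUDA/XPU this surfaces as a device-side assert rather than a host-side exception.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(2, 3, 4, 4)                     # N=2, C=3 classes, 4x4 spatial
target = torch.full((2, 4, 4), 3, dtype=torch.long)  # class 3 is out of bounds for C=3

raised = False
try:
    F.cross_entropy(logits, target)
except (IndexError, RuntimeError):
    # CPU reports the out-of-bounds target eagerly; on CUDA/XPU the
    # check happens inside the kernel as a device-side assert.
    raised = True
print(raised)
```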
native_group_norm: RuntimeError: Expected X.is_contiguous(memory_format) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
"test_GroupNorm_memory_format_xpu", - Resolve the memory format issue of GroupNorm #677
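What the skipped test covers, as a CPU sketch: feeding a `channels_last` input through `GroupNorm`. The CPU path handles this; the XPU kernel was the one hitting the `X.is_contiguous(memory_format)` check above.

```python
import torch
import torch.nn as nn

gn = nn.GroupNorm(num_groups=2, num_channels=4)

# channels_last input: this is the memory format that tripped the
# is_contiguous(memory_format) check in the XPU native_group_norm kernel.
x = torch.randn(1, 4, 8, 8).to(memory_format=torch.channels_last)
y = gn(x)  # runs on CPU without the RuntimeError
print(tuple(y.shape))
```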
upsamplingNearest2d: Failed: Unexpected success
"test_upsamplingNearest2d_launch_fail_xpu", - grid size check
These do not align with CUDA; to be fixed by "fix cuda bias code in test_nn" #656:
"test_upsamplingBiMode2d_consistency",
"test_upsamplingBiLinear2d_consistency_interp_size_bug",
Caused by CUDA hard-coding; to be fixed by "fix cuda bias code in test_nn" #656:
"test_device_mask_xpu",
"test_overwrite_module_params_on_conversion_cpu_device_xpu",
The grid-size check at https://github.com/pytorch/pytorch/blob/1fb498d6e34e0e9b43b2c26dc0a18a4fc3a52605/aten/src/ATen/native/cuda/UpSampleNearest2d.cu#L303 is CUDA-specific; keep these cases in the skip list.
Versions
main