
remaining XXX and FIXME comments #172

Open
ev-br opened this issue Aug 3, 2023 · 1 comment

Comments


ev-br commented Aug 3, 2023

$ grep XXX -rn ./torch_np/
./torch_np/testing/utils.py:223:    # XXX: catch ValueError for subclasses of ndarray where iscomplex fail
./torch_np/testing/utils.py:398:    # XXX: catch ValueError for subclasses of ndarray where iscomplex fail
./torch_np/tests/test_function_base.py:32:    @pytest.mark.xfail(reason="XXX: arange(start=0, stop, step=1)")
./torch_np/tests/test_function_base.py:46:    @pytest.mark.xfail(reason="XXX: arange(..., dtype=bool)")
./torch_np/tests/test_ndarray_methods.py:49:        # XXX: move out to dedicated test(s)
./torch_np/tests/test_ndarray_methods.py:67:# XXX : order='C' / 'F'
./torch_np/tests/test_ndarray_methods.py:262:    @pytest.mark.skipif(reason="XXX: need ndarray.chooses")
./torch_np/tests/test_dtype.py:20:        marks=pytest.mark.xfail(reason="XXX: np.dtype() objects not supported"),
./torch_np/tests/test_dtype.py:30:            marks=pytest.mark.xfail(reason="XXX: namespaced dtypes not supported"),
./torch_np/tests/test_dtype.py:38:            marks=pytest.mark.xfail(reason="XXX: np.dtype() objects not supported"),
./torch_np/tests/test_basic.py:24:    # w.bincount,     # XXX: input dtypes
./torch_np/tests/test_ufuncs_basic.py:53:    @pytest.mark.skip(reason="XXX: unary ufuncs ignore the dtype=... parameter")
./torch_np/tests/test_ufuncs_basic.py:60:    @pytest.mark.skip(reason="XXX: unary ufuncs ignore the dtype=... parameter")
./torch_np/tests/test_ufuncs_basic.py:287:            with assert_raises((TypeError, RuntimeError)):  # XXX np.UFuncTypeError
./torch_np/tests/test_ufuncs_basic.py:323:            with assert_raises((TypeError, RuntimeError)):  # XXX np.UFuncTypeError
./torch_np/tests/test_ufuncs_basic.py:351:        with assert_raises((ValueError, RuntimeError)):  # XXX ValueError in numpy
./torch_np/tests/test_xps.py:113:    # dtypes. XXX: Remove the below sanity check and subsequent casting when
./torch_np/tests/test_xps.py:149:@pytest.mark.xfail(reason="XXX: support converting namespaced dtypes")
./torch_np/tests/numpy_tests/lib/test_shape_base_.py:623:    @pytest.mark.skip(reason="XXX: order='F' not implemented")
./torch_np/tests/numpy_tests/lib/test_shape_base_.py:632:    @pytest.mark.xfail(reason="XXX: noop in torch, while numpy raises")
./torch_np/tests/numpy_tests/lib/test_type_check.py:110:        # assert_(not isinstance(out, np.ndarray))  # XXX: 0D tensor, not scalar
./torch_np/tests/numpy_tests/lib/test_type_check.py:124:        # assert_(not isinstance(out, np.ndarray))  # XXX: 0D tensor, not scalar
./torch_np/tests/numpy_tests/lib/test_type_check.py:141:        # assert_(not isinstance(out, np.ndarray))  # XXX: 0D tensor, not scalar
./torch_np/tests/numpy_tests/lib/test_type_check.py:155:        # assert_(not isinstance(out, np.ndarray))  # XXX: 0D tensor, not scalar
./torch_np/tests/numpy_tests/lib/test_nanfunctions.py:429: ## XXX  _v.setflags(write=False)
./torch_np/tests/numpy_tests/core/test_scalar_methods.py:17:@pytest.mark.skip(reason='XXX: scalar.as_integer_ratio not implemented')
./torch_np/tests/numpy_tests/core/test_scalar_methods.py:123:@pytest.mark.skip(reason='XXX: implementation details of the type system differ')
./torch_np/tests/numpy_tests/core/test_shape_base.py:337:        # XXX: a single argument; relies on an ndarray being a sequence
./torch_np/tests/numpy_tests/core/test_dtype.py:67: # XXX: what is 'q'? on my 64-bit ubuntu maching it's int64, same as 'l'
./torch_np/tests/numpy_tests/core/test_dtype.py:137:   ## XXX: out dtypes do not have .descr
./torch_np/tests/numpy_tests/core/test_dtype.py:161:@pytest.mark.skip(reason="XXX: value-based promotions, we don't have.")
./torch_np/tests/numpy_tests/core/test_numeric.py:2097:        # XXX: test modified since there are array scalars
./torch_np/tests/numpy_tests/core/test_numeric.py:2160:        self.orders = {'C': 'c_contiguous'} # XXX: reeenable when implemented, 'F': 'f_contiguous'}
./torch_np/tests/numpy_tests/core/test_indexing.py:407:    @pytest.mark.xfail(reason="XXX: low-prio behaviour to support")
./torch_np/tests/numpy_tests/core/test_indexing.py:561:        reason="XXX: low-prio to support assigning complex values on floating arrays"
./torch_np/tests/numpy_tests/core/test_indexing.py:584:@pytest.mark.xfail(reason="XXX: requires broadcast() and broadcast_to()")
./torch_np/tests/numpy_tests/core/test_scalar_ctors.py:13:    @pytest.mark.xfail(reason='XXX: floats from strings')
./torch_np/tests/numpy_tests/core/test_scalar_ctors.py:21:    @pytest.mark.xfail(reason='XXX: floats from strings')
./torch_np/tests/numpy_tests/core/test_scalarmath.py:624:        # XXX: TypeError from numpy, RuntimeError from torch
./torch_np/tests/numpy_tests/core/test_scalarmath.py:643:        with assert_raises((TypeError, RuntimeError)):    # XXX: TypeError from numpy
./torch_np/tests/numpy_tests/core/test_multiarray.py:548:        # assert_(np.asfortranarray(d).flags.f_contiguous)   # XXX: f ordering
./torch_np/tests/numpy_tests/core/test_multiarray.py:1933:        # bounds check : XXX torch does not raise
./torch_np/tests/numpy_tests/core/test_multiarray.py:3845:    @pytest.mark.xfail(reason="XXX: take(..., mode='clip')")
./torch_np/tests/numpy_tests/core/test_multiarray.py:3852:    @pytest.mark.xfail(reason="XXX: take(..., mode='wrap')")
./torch_np/tests/numpy_tests/core/test_multiarray.py:3860:    @pytest.mark.xfail(reason="XXX: take(mode='wrap')")
./torch_np/tests/numpy_tests/core/test_multiarray.py:7450:    @pytest.mark.xfail(reason="XXX: place()")
./torch_np/tests/numpy_tests/core/test_multiarray.py:7465:    @pytest.mark.xfail(reason="XXX: putmask()")
./torch_np/tests/numpy_tests/core/test_multiarray.py:7487:    @pytest.mark.xfail(reason="XXX: ndarray.flat")
./torch_np/tests/numpy_tests/core/test_multiarray.py:7501:    @pytest.mark.skip(reason="XXX: npy_create_writebackifcopy()")
./torch_np/tests/numpy_tests/core/test_multiarray.py:7520:    @pytest.mark.skip(reason="XXX: npy_create_writebackifcopy()")
./torch_np/tests/numpy_tests/core/test_multiarray.py:7531:    @pytest.mark.skip(reason="XXX: npy_create_writebackifcopy()")
./torch_np/tests/test_reductions.py:73:        # XXX: numpy emits a warning on empty slice
./torch_np/tests/test_reductions.py:104:    @pytest.mark.xfail(reason="XXX: mean(..., where=...) not implemented")
./torch_np/_dtypes.py:168:    "longlong": int64,  # XXX: is this correct?
./torch_np/_dtypes.py:208:    "b": int8,  # XXX: srsly?
./torch_np/_reductions.py:224:        # XXX revisit when the pytorch version has pytorch/pytorch#95166
./torch_np/linalg.py:140:    # XXX: NumPy does this: https://github.com/numpy/numpy/blob/v1.24.0/numpy/linalg/linalg.py#L1744
./torch_np/_funcs_impl.py:154:    # XXX: in numpy 1.24 dstack does not have dtype and casting keywords
./torch_np/_funcs_impl.py:167:    # XXX: in numpy 1.24 column_stack does not have dtype and casting keywords
./torch_np/_funcs_impl.py:308:    # XXX: raises TypeError if start or stop are not scalars
./torch_np/_funcs_impl.py:359:        # XXX: this breaks if start is passed as a kwarg:
./torch_np/_funcs_impl.py:368:    # XXX: default values do not get normalized
./torch_np/_funcs_impl.py:443:    # XXX: fill_value broadcasts
./torch_np/_funcs_impl.py:679:    # XXX: semantic difference: np.flip returns a view, torch.flip copies
./torch_np/_funcs_impl.py:1063:    # XXX: scalar repeats; ArrayLikeOrScalar ?

and

$ grep FIXME -rn ./torch_np/
./torch_np/testing/__init__.py:17:# from .testing import assert_allclose    # FIXME
./torch_np/tests/numpy_tests/linalg/test_linalg.py:957:        #FIXME the 'e' dtype might work in future
./torch_np/tests/numpy_tests/lib/test_function_base.py:27:# FIXME: make from torch_np
./torch_np/tests/numpy_tests/lib/test_function_base.py:1173:#         np.array([0, 2**63, 0]),     # FIXME
./torch_np/tests/numpy_tests/lib/test_shape_base_.py:42:#  FIXME           (np.partition, np.argpartition, dict(kth=2)),
./torch_np/tests/numpy_tests/core/test_multiarray.py:80:# FIXME
./torch_np/tests/numpy_tests/core/test_multiarray.py:4705:        # FIXME:
./torch_np/_dtypes.py:161:# name aliases : FIXME (OS, bitness)
./torch_np/_reductions.py:400:    # FIXME(Mario) Doesn't np.quantile accept a tuple?
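For triage, per-marker totals are handy alongside the raw listings above. A minimal sketch that mirrors the two grep invocations in pure Python (the `.py`-file filter and the walk-from-root approach are assumptions, not from the issue):

```python
import os

def count_markers(root, markers=("XXX", "FIXME")):
    """Walk *root* and count lines containing each marker in .py files."""
    counts = {m: 0 for m in markers}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for line in f:
                    for m in markers:
                        if m in line:
                            counts[m] += 1
    return counts
```

Like `grep`, this counts matching lines, so a line containing both markers contributes once to each count.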

lezcano commented Aug 3, 2023

Less than I expected. I reckon it should take us less than a day to triage all of these into won't-fix and fixable. Let's follow up after the merge.
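Most of the XXX entries above live inside pytest markers, so the "fixable" half of the triage usually amounts to deleting a marker once torch_np matches NumPy's behavior. A hedged sketch of the convention (the test name and reason string here are illustrative, modeled on the `test_function_base.py:46` entry):

```python
import pytest

# A known behavioral gap is recorded as an xfail with an "XXX:" reason.
# Triaging it as "fixable" means the marker gets removed when the gap
# closes; "won't fix" means the reason is reworded to document a
# permanent divergence from NumPy.
@pytest.mark.xfail(reason="XXX: arange(..., dtype=bool) not supported")
def test_arange_bool_dtype():
    ...
```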
