I was exploring the interoperability of the Intel Python packages and wanted to try building the dpctl / dpnp / numba_dpex stack with custom SYCL libraries.
dpctl seems to be built with interoperability by design, and indeed I can get it to detect a CUDA device effortlessly:
In [2]: dpctl.get_devices()
Out[2]: [<dpctl.SyclDevice [backend_type.cuda, device_type.gpu, NVIDIA GeForce GTX 1070] at 0x7fbe8c87b070>]
I imagined it could be the same with dpnp, since dpctl and dpnp go hand in hand (dpctl.tensor does not exist unless dpnp works), and the conda build seems to suggest that using a custom dpcpp would be possible, but I can't get dpnp to build.
So the question here is: is there a way around this? Is it planned to be supported in the future? Or maybe there's another way of using dpctl?
(What follows is feedback about what I thought would be possible:
I'm building everything from source, using the latest commits of the intel/llvm sycl branch, the latest releases of oneTBB and oneDPL, and MKL from the basekit. I set the DPCPPROOT, MKLROOT, TBBROOT and DPLROOT variables so that the file trees at those paths match what the install scripts expect (mimicking what is found in the Intel oneAPI basekit).
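For reference, a minimal sketch of that environment setup. The paths are assumptions inferred from the CMake log below; the oneTBB path in particular is hypothetical:

```shell
# Sketch of the environment, assuming the paths that appear in the CMake log
# below; the TBBROOT path is hypothetical. Each *ROOT variable is expected to
# contain the same file tree as the corresponding oneAPI basekit component.
export DPCPPROOT=/opt/sycl/install
export MKLROOT=/opt/customoneapi/onemkl
export TBBROOT=/opt/customoneapi/onetbb      # hypothetical path
export DPLROOT=/opt/customoneapi/onedpl/linux
```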
Then I try to run python setup.py build_clib, but:
the command expects dpcpp to be available on the system regardless of environment variables (e.g. CC). I thought maybe symlinking dpcpp to clang could do the job 🤷 .
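That symlink idea can be sketched like this. Illustration only: a temp dir and a placeholder file stand in for the real install tree, where dpcpp would point at the clang++ built from the intel/llvm sycl branch:

```shell
# Illustration with stand-in files; in the real tree DPCPP_BIN would be
# /opt/sycl/install/bin and clang++ the custom intel/llvm build (assumption).
DPCPP_BIN="$(mktemp -d)"
touch "$DPCPP_BIN/clang++"                       # placeholder for the custom clang++
chmod +x "$DPCPP_BIN/clang++"
ln -s "$DPCPP_BIN/clang++" "$DPCPP_BIN/dpcpp"    # dpcpp -> clang++
export PATH="$DPCPP_BIN:$PATH"                   # so the build finds "dpcpp" by name
```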
Then I get the following output:
-- CMAKE_VERSION: 3.24.2
-- CMAKE_GENERATOR: Ninja
-- CMAKE_HOST_SYSTEM_NAME: Linux
-- ========== User controlled variables list ==========
-- DPNP_ONEAPI_ROOT:
-- DPNP_STATIC_LIB_ENABLE: OFF
-- DPNP_DEBUG_ENABLE: OFF
-- DPNP_BACKEND_TESTS: OFF
-- DPNP_INSTALL_STRUCTURED: OFF
-- DPNP_SYCL_QUEUE_MGR_ENABLE: ON
-- |- DPNP_QUEUEMGR_INCLUDE_DIR: /opt/venv/lib/python3.9/site-packages/dpctl/include
-- |- DPNP_QUEUEMGR_LIB_DIR: /opt/venv/lib/python3.9/site-packages/dpctl
-- ======= End of user controlled variables list ======
-- Found MathLib: (include: /opt/customoneapi/onemkl/include, library: /opt/customoneapi/onemkl/lib/intel64)
-- Found DPL: (include: /opt/customoneapi/onedpl/linux/include)
-- CMAKE_SYSTEM: Linux-6.0.2-arch1-1
-- CMAKE_SYSTEM_VERSION: 6.0.2-arch1-1
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- CMAKE_BUILD_TYPE: Release
-- CXX_STANDARD: 17
-- CMAKE_CXX_COMPILER_ID:
-- CMAKE_CXX_COMPILER_VERSION:
-- CMAKE_CXX_COMPILER: /opt/sycl/install/bin/dpcpp
-- CMAKE_LINKER: /usr/bin/ld
-- CMAKE_SOURCE_DIR: /tmp/dpnp/dpnp/backend
-- DPNP_INSTALL_PREFIX: /tmp/dpnp/dpnp
-- CMAKE_VERBOSE_MAKEFILE: ON
-- Configuring done
CMake Error at CMakeLists.txt:226 (add_library):
The install of the dpnp_backend_c target requires changing an RPATH from
the build tree, but this is not supported with the Ninja generator unless
on an ELF-based or XCOFF-based platform. The
CMAKE_BUILD_WITH_INSTALL_RPATH variable may be set to avoid this relinking
step.
and I can't figure out how to get past this RPATH error unless I use the dpcpp from the oneAPI basekit.
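Going by the error message itself, one thing worth trying is setting CMAKE_BUILD_WITH_INSTALL_RPATH when configuring the backend. This is a sketch, not a verified fix; it assumes the CMake project under dpnp/backend can be configured directly, bypassing setup.py, with the paths taken from the log above:

```cmake
# Hedged sketch: configure the backend directly, passing the variable the
# error message suggests. The build directory name is arbitrary.
#   cmake -S /tmp/dpnp/dpnp/backend -B /tmp/dpnp/build -G Ninja \
#         -DCMAKE_BUILD_WITH_INSTALL_RPATH=ON \
#         -DCMAKE_CXX_COMPILER=/opt/sycl/install/bin/dpcpp
# Equivalently, inside the project's CMakeLists.txt before add_library():
set(CMAKE_BUILD_WITH_INSTALL_RPATH ON)
```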
Side question about MKL/oneMKL: it looks like Intel MKL and oneMKL are not at all similar, and Intel MKL must be used anyway?)
Question to clarify the experiment you report above:
Did you rebuild dpcpp and the runtime dependencies of dpnp under /opt/sycl and /opt/customoneapi/ from source to use a CUDA backend, but dpnp is still attempting to use the one from the basekit instead?
> Did you rebuild dpcpp and the runtime dependencies of dpnp under /opt/sycl and /opt/customoneapi/ from source to use a CUDA backend

Yes, at least it's built in a way that CUDA devices are detected as SYCL devices.

> but dpnp is still attempting to use the one from the basekit instead?

I don't know exactly what dpnp is attempting to use or do. There are several layers of configuration between the user-facing command (python setup.py build_clib) and the CMake error. I think this error is triggered by the use of a custom dpcpp build (e.g. using the clang binary from the intel/llvm sycl branch).