Merge pull request #1097 from IntelPython/docs/release-0.20-rev01
More revisions and additions to the documentation
diptorupd authored Jul 25, 2023
2 parents 4d4fc12 + 174b4f7 commit b3d6cdc
Showing 35 changed files with 765 additions and 392 deletions.
3 changes: 3 additions & 0 deletions docs/Makefile
@@ -18,3 +18,6 @@ help:
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

+clean:
+	rm -rf "$(BUILDDIR)"
77 changes: 41 additions & 36 deletions docs/backups/docker.rst
@@ -1,25 +1,25 @@
.. _docker:
.. include:: ./../ext_links.txt

Docker Support
==============

-Numba dpex now delivers docker support.
-Dockerfile is capable of building numba-dpex as well as direct dependencies for it:
-dpctl and dpnp.
-There are several prebuilt images available: for trying numba_dpex and
-for building numba-dpex.
+Numba dpex now delivers docker support. The Dockerfile can build numba-dpex as
+well as its direct dependencies: dpctl and dpnp. There are several prebuilt
+images available: one for trying out numba_dpex and one for building
+numba-dpex.

Building
--------

-Numba dpex ships with multistage Dockerfile, which means there are
-different `targets <https://docs.docker.com/build/building/multi-stage/#stop-at-a-specific-build-stage>`_ available for build. The most useful ones:
+Numba dpex ships with a multistage Dockerfile, which means there are different
+`targets
+<https://docs.docker.com/build/building/multi-stage/#stop-at-a-specific-build-stage>`_
+available for the build. The most useful ones are:

-- runtime
-- runtime-gpu
-- numba-dpex-builder-runtime
-- numba-dpex-builder-runtime-gpu
+- ``runtime``
+- ``runtime-gpu``
+- ``numba-dpex-builder-runtime``
+- ``numba-dpex-builder-runtime-gpu``

To build docker image

@@ -36,28 +36,30 @@ To run docker image
.. note::

-If you are building docker image with gpu support it will calls github api to get
-latest versions of intel gpu drivers. You may face Github API call limits. To avoid
-this, you can pass your github credentials to increase this limit. You can do it
-by providing
-`build args <https://docs.docker.com/engine/reference/commandline/build/#build-arg>`_
-``GITHUB_USER`` and ``GITHUB_PASSWORD``. You can use
-`access token <https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token>`
+If you are building a docker image with GPU support, it will call the GitHub
+API to get the latest versions of the Intel GPU drivers. You may face GitHub
+API call limits. To avoid this, you can pass your GitHub credentials to
+increase the limit by providing the `build args
+<https://docs.docker.com/engine/reference/commandline/build/#build-arg>`_
+``GITHUB_USER`` and ``GITHUB_PASSWORD``. You can use an `access token
+<https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token>`_
instead of the password:

.. note::

-In case you are building docker image behind firewall and your internet access
-requires proxy, you can provide proxy
-`build args <https://docs.docker.com/engine/reference/commandline/build/#build-arg>`_
-``http_proxy`` and ``https_proxy``. Please note, that these args must be lowercase.
+In case you are building a docker image behind a firewall and your internet
+access requires a proxy, you can provide the proxy `build args
+<https://docs.docker.com/engine/reference/commandline/build/#build-arg>`_
+``http_proxy`` and ``https_proxy``. Please note that these args must be
+lowercase.

-Dockerfile supports different python versions. To select the one you want, simply set
-``PYTHON_VERSION`` build arg. By default docker image is based on official python image
-based on slim debian, so the requested python version must be from the available python
-docker images. In case you want to build on images on custom image you have to pass
-``BASE_IMAGE`` environment variable. Be aware that Dockerfile is based on debian so any
-base image should be debian based, like debian or ubuntu.
+The Dockerfile supports different python versions. To select the one you want,
+simply set the ``PYTHON_VERSION`` build arg. By default the docker image is
+based on the official slim-debian python image, so the requested python version
+must be one of the available python docker images. In case you want to build on
+a custom base image, you have to pass the ``BASE_IMAGE`` environment variable.
+Be aware that the Dockerfile is debian based, so any base image should also be
+debian based, like debian or ubuntu.

Build arguments that could be useful:

@@ -90,8 +92,8 @@ Refer to Dockerfile to see all available
Running prebuilt images
-----------------------

-An easy way you can try ``numba_dpex`` is by using prebuilt images.
-There are several prebuilt images available:
+An easy way to try ``numba_dpex`` is by using the prebuilt images. There are
+several prebuilt images available:

- ``runtime`` package that provides runtime experience
.. code-block:: text
@@ -103,7 +105,8 @@ There are several prebuilt images available:
ghcr.io/intelpython/numba-dpex/builder:<numba_dpex_version>-py<python_version>[-gpu]
-- you can also see ``stages`` package, but it is used mostly for building stages.
+- you can also see the ``stages`` package, but it is used mostly for building
+  stages.
You can use them to build your own docker that is built on top of one of them.

To try out numba dpex simply run:
@@ -126,13 +129,15 @@ or
.. note::

-If you want to enable GPU you need to pass it within container and use ``*-gpu`` tag.
+If you want to enable the GPU you need to pass it into the container and use
+the ``*-gpu`` tag.

For passing GPU into container on linux use arguments ``--device=/dev/dri``.
-However if you are using WSL you need to pass
-``--device=/dev/dxg -v /usr/lib/wsl:/usr/lib/wsl`` instead.
+However, if you are using WSL, you need to pass ``--device=/dev/dxg -v
+/usr/lib/wsl:/usr/lib/wsl`` instead.

-So, for example, if you want to run numba dpex container with GPU support on WSL:
+So, for example, if you want to run the numba dpex container with GPU support
+on WSL:

.. code-block:: bash
File renamed without changes.
File renamed without changes.
3 changes: 3 additions & 0 deletions docs/make.bat
@@ -31,5 +31,8 @@ goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%

+:clean
+rd /s %BUILDDIR%

:end
popd
2 changes: 2 additions & 0 deletions docs/source/api_reference/index.rst
@@ -4,3 +4,5 @@

API Reference
=============

+Coming soon
1 change: 1 addition & 0 deletions docs/source/ext_links.txt
@@ -24,3 +24,4 @@
.. _Data Parallel Extensions for Python*: https://intelpython.github.io/DPEP/main/
.. _Intel VTune Profiler: https://www.intel.com/content/www/us/en/developer/tools/oneapi/vtune-profiler.html
.. _Intel Advisor: https://www.intel.com/content/www/us/en/developer/tools/oneapi/advisor.html
+.. _oneMKL: https://www.intel.com/content/www/us/en/docs/oneapi/programming-guide/2023-2/intel-oneapi-math-kernel-library-onemkl.html
58 changes: 42 additions & 16 deletions docs/source/getting_started.rst
@@ -9,26 +9,40 @@ Getting Started
===============


-Installing pre-built packages
------------------------------
+Installing pre-built conda packages
+-----------------------------------

``numba-dpex`` along with its dependencies can be installed using ``conda``.
It is recommended to use conda packages from the ``anaconda.org/intel`` channel
-to get the latest production releases. Nighly builds of ``numba-dpex`` are
-available on the ``dppy/label/dev`` conda channel.
+to get the latest production releases.

.. code-block:: bash

-  conda create -n numba-dpex-env numba-dpex dpnp dpctl dpcpp-llvm-spirv spirv-tools -c intel -c conda-forge
+  conda create -n numba-dpex-env \
+      numba-dpex dpnp dpctl dpcpp-llvm-spirv spirv-tools \
+      -c intel -c conda-forge

+To try out the bleeding edge, the latest packages built from the tip of the
+main source trunk can be installed from the ``dppy/label/dev`` conda channel.

+.. code-block:: bash

+  conda create -n numba-dpex-env \
+      numba-dpex dpnp dpctl dpcpp-llvm-spirv spirv-tools \
+      -c dppy/label/dev -c intel -c conda-forge
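
As a quick sanity check (an editorial sketch, not part of the changed
documentation; it assumes one of the environments above installed cleanly), the
installation can be verified by listing the SYCL devices visible to ``dpctl``:

.. code-block:: python

  # Run inside the activated numba-dpex-env environment.
  import dpctl
  import numba_dpex

  # Print every SYCL device the runtime can see; an empty list usually
  # means a device driver is missing.
  print(dpctl.get_devices())
  print(numba_dpex.__version__)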
Building from source
--------------------

-``numba-dpex`` can be built from source using either ``conda-build`` or ``setuptools``.
+``numba-dpex`` can be built from source using either ``conda-build`` or
+``setuptools``.

Steps to build using ``conda-build``:

-1. Create a conda environment
+1. Ensure ``conda-build`` is installed in the ``base`` conda environment or
+   create a new conda environment with ``conda-build`` installed.

.. code-block:: bash
@@ -45,22 +59,34 @@ Steps to build using ``conda-build``:

.. code-block:: bash

-  conda install numba-dpex
+  conda install -c local numba-dpex
Steps to build using ``setup.py``:

+As before, a conda environment with all necessary dependencies is the suggested
+first step.

.. code-block:: bash

-  conda create -n numba-dpex-env dpctl dpnp numba spirv-tools dpcpp-llvm-spirv llvmdev pytest -c intel -c conda-forge
+  # Create a conda environment that has the needed dependencies installed
+  conda create -n numba-dpex-env \
+      dpctl dpnp numba spirv-tools dpcpp-llvm-spirv llvmdev pytest \
+      -c intel -c conda-forge
+  # Activate the environment
+  conda activate numba-dpex-env
+  # Clone the numba-dpex repository
+  git clone https://github.com/IntelPython/numba-dpex.git
+  cd numba-dpex
+  python setup.py develop
Building inside Docker
----------------------

-A Dockerfile is provided on the GitHub repository to easily build ``numba-dpex``
+A Dockerfile is provided on the GitHub repository to build ``numba-dpex``
as well as its direct dependencies: ``dpctl`` and ``dpnp``. Users can either use
one of the pre-built images on the ``numba-dpex`` GitHub page or use the
-bundled Dockerfile to build ``numba-dpex`` from source.
+bundled Dockerfile to build ``numba-dpex`` from source. Using the Dockerfile
+also ensures that all device drivers and runtime libraries are pre-installed.

Building
~~~~~~~~
@@ -69,10 +95,10 @@ Numba dpex ships with multistage Dockerfile, which means there are
different `targets <https://docs.docker.com/build/building/multi-stage/#stop-at-a-specific-build-stage>`_
available for build. The most useful ones:

-- runtime
-- runtime-gpu
-- numba-dpex-builder-runtime
-- numba-dpex-builder-runtime-gpu
+- ``runtime``
+- ``runtime-gpu``
+- ``numba-dpex-builder-runtime``
+- ``numba-dpex-builder-runtime-gpu``

To build docker image

@@ -96,7 +122,7 @@ To run docker image
``GITHUB_USER`` and ``GITHUB_PASSWORD``
`build args <https://docs.docker.com/engine/reference/commandline/build/#build-arg>`_
to increase the call limit. A GitHub
-`access token <https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token>`
+`access token <https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token>`_
can also be used instead of the password.

.. note::
52 changes: 27 additions & 25 deletions docs/source/overview.rst
@@ -15,23 +15,23 @@ implementation of `NumPy*`_'s API using the `SYCL*`_ language.
.. the same time automatically running such code parallelly on various types of
.. architecture.
-``numba-dpex`` is developed as part of `Intel AI Analytics Toolkit`_ and
-is distributed with the `Intel Distribution for Python*`_. The extension is
-available on Anaconda cloud and as a Docker image on GitHub. Please refer the
-:doc:`getting_started` page to learn more.
+``numba-dpex`` is an open-source project and can be installed as part of `Intel
+AI Analytics Toolkit`_ or the `Intel Distribution for Python*`_. The package is
+also available on Anaconda cloud and as a Docker image on GitHub. Please refer
+to the :doc:`getting_started` page to learn more.

Main Features
-------------

Portable Kernel Programming
~~~~~~~~~~~~~~~~~~~~~~~~~~~

-The ``numba-dpex`` kernel API has a design and API similar to Numba's
+The ``numba-dpex`` kernel programming API has a design similar to Numba's
``cuda.jit`` sub-module. The API is modeled after the `SYCL*`_ language and uses
the `DPC++`_ SYCL runtime. Currently, compilation of kernels is supported for
SPIR-V-based OpenCL and `oneAPI Level Zero`_ devices CPU and GPU devices. In the
-future, the API can be extended to other architectures that are supported by
-DPC++.
+future, compilation support for other types of hardware supported by DPC++
+will be added.

The following example illustrates a vector addition kernel written with
``numba-dpex`` kernel API.
@@ -56,31 +56,33 @@ The following example illustrates a vector addition kernel written with
print(c)
In the above example, three arrays are allocated on a default ``gpu`` device
-using the ``dpnp`` library. These arrays are then passed as input arguments to
-the kernel function. The compilation target and the subsequent execution of the
-kernel is determined completely by the input arguments and follow the
+using the ``dpnp`` library. The arrays are then passed as input arguments to the
+kernel function. The compilation target and the subsequent execution of the
+kernel are determined by the input arguments and follow the
"compute-follows-data" programming model as specified in the `Python* Array API
Standard`_. To change the execution target to a CPU, the device keyword needs to
be changed to ``cpu`` when allocating the ``dpnp`` arrays. It is also possible
to leave the ``device`` keyword undefined and let the ``dpnp`` library select a
default device based on environment flag settings. Refer the
:doc:`user_guide/kernel_programming/index` for further details.
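
As an editorial illustration of the API described above (a sketch only; the
document's actual example is collapsed in this diff, and the kernel name used
here is hypothetical), a vector addition kernel looks roughly like this:

.. code-block:: python

  import dpnp
  import numba_dpex as ndpx

  @ndpx.kernel
  def vecadd(a, b, c):
      # Each work item computes one element of the result.
      i = ndpx.get_global_id(0)
      c[i] = a[i] + b[i]

  # Allocating the arrays on a device selects the compilation target
  # ("compute follows data"); device="cpu" would target the CPU instead.
  a = dpnp.ones(1024, device="gpu")
  b = dpnp.ones(1024, device="gpu")
  c = dpnp.zeros(1024, device="gpu")

  vecadd[ndpx.Range(1024)](a, b, c)
  print(c)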

-``dpnp`` compilation support
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-``numba-dpex`` extends Numba's type system and compilation pipeline to compile
-``dpnp`` functions and expressions in the same way as NumPy. Unlike Numba's
-NumPy compilation that is serial by default, ``numba-dpex`` always compiles
-``dpnp`` expressions into data-parallel kernels and executes them in parallel.
-The ``dpnp`` compilation feature is provided using a decorator ``dpjit`` that
-behaves identically to ``numba.njit(parallel=True)`` with the addition of
-``dpnp`` compilation and kernel offloading. Offloading by ``numba-dpex`` is not
-just restricted to CPUs and supports all devices that are presently supported by
-the kernel API. ``dpjit`` allows using NumPy and ``dpnp`` expressions in the
-same function. All NumPy compilation and parallelization is done via the default
-Numba code-generation pipeline, whereas ``dpnp`` expressions are compiled using
-the ``numba-dpex`` pipeline.
+``dpjit`` decorator
+~~~~~~~~~~~~~~~~~~~

+The ``numba-dpex`` package provides a new decorator ``dpjit`` that extends
+Numba's ``njit`` decorator. The new decorator is equivalent to
+``numba.njit(parallel=True)``, but additionally supports compiling ``dpnp``
+functions, ``prange`` loops, and array expressions that use ``dpnp.ndarray``
+objects.

+Unlike Numba's NumPy parallelization that only supports CPUs, ``dpnp``
+expressions are first converted to data-parallel kernels and can then be
+`offloaded` to different types of devices. As ``dpnp`` implements the same API
+as NumPy*, an existing ``numba.njit`` decorated function that uses
+``numpy.ndarray`` may be refactored to use ``dpnp.ndarray`` and decorated with
+``dpjit``. Such a refactoring can allow the parallel regions to be offloaded
+to a supported GPU device, providing users an additional option to execute
+their code in parallel.

The vector addition example depicted using the kernel API can also be
expressed in several different ways using ``dpjit``.
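
As another editorial sketch (the document's own ``dpjit`` examples are
collapsed in this diff; the function names here are hypothetical), two such
variants are a ``dpnp`` array expression and an explicit ``prange`` loop:

.. code-block:: python

  import dpnp
  from numba import prange
  from numba_dpex import dpjit

  @dpjit
  def vecadd_expr(a, b):
      # The dpnp array expression is compiled into a data-parallel kernel.
      return a + b

  @dpjit
  def vecadd_prange(a, b, c):
      # An explicit parallel loop over dpnp arrays is offloaded the same way.
      for i in prange(a.shape[0]):
          c[i] = a[i] + b[i]

  a = dpnp.ones(1024, device="gpu")
  b = dpnp.ones(1024, device="gpu")
  c = dpnp.zeros(1024, device="gpu")

  print(vecadd_expr(a, b))
  vecadd_prange(a, b, c)
  print(c)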