
Intel(R) Extension for Scikit-learn*

Installation   |   Documentation   |   Examples   |   Support   |   FAQ   


With Intel(R) Extension for Scikit-learn you can accelerate your Scikit-learn applications and still have full conformance with all Scikit-Learn APIs and algorithms. This free software AI accelerator delivers 10-100X acceleration across a variety of applications, and you do not even need to change your existing code!

How it works

Intel(R) Extension for Scikit-learn offers you a way to accelerate existing scikit-learn code. The acceleration is achieved through patching: replacing the stock scikit-learn algorithms with their optimized versions provided by the extension.

One way to patch scikit-learn is by modifying your code. First, import an additional Python package (sklearnex) and enable the optimizations via sklearnex.patch_sklearn(). Then import scikit-learn estimators as usual:

  • Enable Intel CPU optimizations

    import numpy as np
    from sklearnex import patch_sklearn
    patch_sklearn()
    
    from sklearn.cluster import DBSCAN
    
    X = np.array([[1., 2.], [2., 2.], [2., 3.],
                  [8., 7.], [8., 8.], [25., 80.]], dtype=np.float32)
    clustering = DBSCAN(eps=3, min_samples=2).fit(X)
  • Enable Intel GPU optimizations

    import numpy as np
    import dpctl
    from sklearnex import patch_sklearn, config_context
    patch_sklearn()
    
    from sklearn.cluster import DBSCAN
    
    X = np.array([[1., 2.], [2., 2.], [2., 3.],
                  [8., 7.], [8., 8.], [25., 80.]], dtype=np.float32)
    with config_context(target_offload="gpu:0"):
        clustering = DBSCAN(eps=3, min_samples=2).fit(X)

👀 Read about other ways to patch scikit-learn and other methods for offloading to GPU devices. Check out available notebooks for more examples.
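For illustration, here is a minimal sketch of two of those alternatives, based on the patching documentation (verify the exact names against your installed version): running a script through the sklearnex module patches it without any code changes, and patch_sklearn() can be restricted to selected estimators, with unpatch_sklearn() restoring the stock implementations.

    # Patching without modifying the code (run from a shell):
    #     python -m sklearnex my_application.py

    # Patching only selected estimators, then undoing the patch:
    from sklearnex import patch_sklearn, unpatch_sklearn

    patch_sklearn(["DBSCAN"])           # assumption: only DBSCAN is replaced by its optimized version
    from sklearn.cluster import DBSCAN  # now resolves to the accelerated estimator

    unpatch_sklearn()                   # restore the stock scikit-learn implementations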

This software acceleration is achieved through the use of vector instructions, IA hardware-specific memory optimizations, threading, and optimizations for all upcoming Intel platforms at launch time.

Supported Algorithms

❗ The patching only affects selected algorithms and their parameters.

You may still use algorithms and parameters not supported by Intel(R) Extension for Scikit-learn in your code; you will not get an error if you do. When you use algorithms or parameters the extension does not support, the package falls back to the original stock version of scikit-learn.
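For illustration, a minimal sketch of this fallback, assuming GaussianProcessClassifier is one of the estimators the extension does not accelerate (check the supported-algorithms list for your version): the call still succeeds and is simply served by stock scikit-learn.

    import numpy as np
    from sklearnex import patch_sklearn
    patch_sklearn()

    # Assumption: GaussianProcessClassifier has no optimized version in the extension.
    # After patching it still works; the stock scikit-learn implementation is used.
    from sklearn.gaussian_process import GaussianProcessClassifier

    X = np.array([[1., 2.], [2., 2.], [8., 7.], [8., 8.]])
    y = np.array([0, 0, 1, 1])
    model = GaussianProcessClassifier().fit(X, y)  # no error, no code changes needed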

🚀 Acceleration

Configurations:

  • HW: c5.24xlarge AWS EC2 Instance using an Intel Xeon Platinum 8275CL with 2 sockets and 24 cores per socket
  • SW: scikit-learn version 0.24.2, scikit-learn-intelex version 2021.2.3, Python 3.8

Benchmarks code

🛠 Installation

System Requirements   |   Install via pip or conda   |   Build from sources

Intel(R) Extension for Scikit-learn is available on the Python Package Index and on Anaconda Cloud in the conda-forge and Intel channels. You can also build the extension from sources.

The extension is also available as a part of Intel® AI Analytics Toolkit (AI Kit). If you already have AI Kit installed, you do not need to install the extension.

Installation via the pip package manager is recommended by default:

pip install scikit-learn-intelex
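If you use conda instead, a typical install from the conda-forge channel looks like the following (the Intel channel is an alternative; check the installation documentation for the channel you prefer):

conda install -c conda-forge scikit-learn-intelex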

🔗 Important Links

👀 Follow us on Medium

We publish blogs on Medium, so follow us to learn tips and tricks for more efficient data analysis with the help of Intel(R) Extension for Scikit-learn. Here are our latest blogs:

❔ FAQ

[See answers to frequently asked questions]

❓ Are all algorithms affected by patching?

No. The patching only affects selected algorithms and their parameters.

❓ What happens if I use parameters not supported by the extension?

When unsupported parameters are used, the package falls back to the original stock version of scikit-learn. You will not get an error.

❓ What happens if I run algorithms not supported by the extension?

If you use algorithms for which no optimizations are available, their original versions from stock scikit-learn are used.

❓ Can I see which implementation of the algorithm is currently used?

Yes. To find out which implementation of the algorithm is currently used (Intel(R) Extension for Scikit-learn or original Scikit-learn), use the verbose mode.
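A minimal sketch of one way to enable it, assuming the extension reports its dispatch decisions through the standard Python logging interface under the "sklearnex" logger (see the verbose-mode documentation for the exact mechanism and message format):

    import logging
    import numpy as np

    # Send sklearnex log records to the console at INFO level.
    logging.basicConfig()
    logging.getLogger("sklearnex").setLevel(logging.INFO)

    from sklearnex import patch_sklearn
    patch_sklearn()

    from sklearn.cluster import DBSCAN

    X = np.array([[1., 2.], [2., 2.], [8., 7.], [8., 8.]], dtype=np.float32)
    # With verbose logging enabled, fit() is expected to report whether the
    # accelerated or the stock implementation handled the call.
    DBSCAN(eps=3, min_samples=2).fit(X)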

❓ How much faster is scikit-learn after patching?

We compare the performance of Intel(R) Extension for Scikit-Learn to other frameworks in Machine Learning Benchmarks. Read our blogs on Medium if you are interested in the detailed comparison.

❓ What if the patching does not cover my scenario?

If the patching does not cover your scenario, submit an issue on GitHub with a description of what you would like to have.

💬 Support

Report issues, ask questions, and provide suggestions using:

You may reach out to project maintainers privately at [email protected]

oneAPI

Intel(R) Extension for Scikit-learn is part of oneAPI and Intel® AI Analytics Toolkit (AI Kit).

daal4py and oneDAL

The acceleration is achieved through the use of the Intel(R) oneAPI Data Analytics Library (oneDAL). Learn more:


⚠️ Intel(R) Extension for Scikit-learn contains the scikit-learn patching functionality that was originally available in the daal4py package. All future updates for the patches will be available only in Intel(R) Extension for Scikit-learn. We recommend using the scikit-learn-intelex package instead of daal4py. You can learn more about daal4py in the daal4py documentation.

