oneAPI Deep Neural Network Library (oneDNN)
===========================================

oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform
performance library of basic building blocks for deep learning applications.
oneDNN is part of oneAPI (https://oneapi.io).

The library is optimized for Intel(R) Architecture Processors
and Intel Graphics.

oneDNN is intended for deep learning applications and framework
developers interested in improving application performance on CPUs and GPUs.
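
For illustration only, the minimal sketch below shows what these building
blocks look like with the oneDNN C++ API shipped in this package: a CPU
engine, a stream, and an in-place ReLU (eltwise) primitive. The tensor shape
and fill value are arbitrary placeholders, not part of this package.

    #include <vector>
    #include "dnnl.hpp"

    int main() {
        // A CPU engine and a stream to execute primitives on.
        dnnl::engine eng(dnnl::engine::kind::cpu, 0);
        dnnl::stream strm(eng);

        // A 2x3x4x5 f32 tensor backed by user-owned memory (arbitrary data).
        dnnl::memory::dims dims = {2, 3, 4, 5};
        std::vector<float> buf(2 * 3 * 4 * 5, -1.0f);
        auto md = dnnl::memory::desc(dims, dnnl::memory::data_type::f32,
                dnnl::memory::format_tag::nchw);
        auto mem = dnnl::memory(md, eng, buf.data());

        // Forward-inference ReLU built from a primitive descriptor and
        // executed in place (source and destination share one buffer).
        auto pd = dnnl::eltwise_forward::primitive_desc(eng,
                dnnl::prop_kind::forward_inference,
                dnnl::algorithm::eltwise_relu, md, md, 0.f, 0.f);
        dnnl::eltwise_forward(pd).execute(strm,
                {{DNNL_ARG_SRC, mem}, {DNNL_ARG_DST, mem}});
        strm.wait();
        return 0;
    }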

This package contains oneDNN v@PROJECT_VERSION@ (@DNNL_VERSION_HASH@).
You can find information about the latest version and release notes
on the oneDNN GitHub releases page (https://github.com/oneapi-src/oneDNN/releases).

Documentation
-------------

* Developer guide
  (https://oneapi-src.github.io/oneDNN/v@DNNL_VERSION_MAJOR@.@DNNL_VERSION_MINOR@)
  explains the programming model, supported functionality, and implementation
  details, and includes annotated examples.

* API reference
  (https://oneapi-src.github.io/oneDNN/v@DNNL_VERSION_MAJOR@.@DNNL_VERSION_MINOR@/modules.html)
  provides a comprehensive reference of the library API.

System Requirements
-------------------

oneDNN supports systems based on Intel 64 or AMD64 architectures.

The library is optimized for the following CPUs:
* Intel Atom(R) processor (at least Intel SSE4.1 support is required)
* Intel Core(TM) processor (at least Intel SSE4.1 support is required)
* Intel Xeon(R) processor E3, E5, and E7 family (formerly Sandy Bridge,
  Ivy Bridge, Haswell, and Broadwell)
* Intel Xeon Scalable processor (formerly Skylake, Cascade Lake, Cooper
  Lake, Ice Lake, Sapphire Rapids, and Emerald Rapids)
* Intel Xeon CPU Max Series (formerly Sapphire Rapids HBM)
* Intel Core Ultra processors (formerly Meteor Lake, Arrow Lake,
  and Lunar Lake)
* Intel Xeon 6 processors (formerly Sierra Forest and Granite Rapids)

oneDNN detects the instruction set architecture (ISA) at runtime and uses
just-in-time (JIT) code generation to deploy the code optimized
for the latest supported ISA. Future ISAs may have initial support in the
library disabled by default and require the use of run-time controls to enable
them. See CPU dispatcher control
(https://oneapi-src.github.io/oneDNN/dev_guide_cpu_dispatcher_control.html)
for more details.
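
As a minimal sketch of these run-time controls, the snippet below caps JIT
code generation through the C++ API; the same cap can be set with the
ONEDNN_MAX_CPU_ISA environment variable (for example, ONEDNN_MAX_CPU_ISA=AVX2).
The AVX2 cap here is an arbitrary example, not a recommendation.

    #include <iostream>
    #include "dnnl.hpp"

    int main() {
        // Cap JIT code generation at AVX2; this must run before the first
        // primitive is created to take effect.
        dnnl::set_max_cpu_isa(dnnl::cpu_isa::avx2);

        // Query the ISA the dispatcher actually selected on this machine.
        std::cout << "effective ISA enum value: "
                  << static_cast<int>(dnnl::get_effective_cpu_isa()) << "\n";
        return 0;
    }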

The library is optimized for the following GPUs:
* Intel Graphics for 11th-14th Generation Intel Core Processors
* Intel Iris Xe MAX Graphics (formerly DG1)
* Intel Arc(TM) graphics (formerly Alchemist)
* Intel Data Center GPU Flex Series (formerly Arctic Sound)
* Intel Data Center GPU Max Series (formerly Ponte Vecchio)
* Intel Graphics and Intel Arc graphics for Intel Core Ultra processors
  (formerly Meteor Lake, Arrow Lake, and Lunar Lake)
* future Intel Arc graphics (code name Battlemage)

Support
-------

Submit questions, feature requests, and bug reports on the
GitHub issues page (https://github.com/oneapi-src/oneDNN/issues).

License
-------

oneDNN is licensed under Apache License Version 2.0. Refer to the "LICENSE"
file for the full license text and copyright notice.

This distribution includes third party software governed by separate license
terms.

3-clause BSD license:
* Xbyak (https://github.com/herumi/xbyak)
* Instrumentation and Tracing Technology API (ITT API)
  (https://github.com/intel/ittapi)
* CMake (https://github.com/Kitware/CMake)

Boost Software License, Version 1.0:
* Boost C++ Libraries (https://www.boost.org/)

MIT License:
* Intel Graphics Compute Runtime for oneAPI Level Zero and OpenCL Driver
  (https://github.com/intel/compute-runtime)
* Intel Graphics Compiler (https://github.com/intel/intel-graphics-compiler)
* oneAPI Level Zero (https://github.com/oneapi-src/level-zero)
* Intel Metrics Discovery Application Programming Interface
  (https://github.com/intel/metrics-discovery)

This third party software, even if included with the distribution of
the Intel software, may be governed by separate license terms, including
without limitation, third party license terms, other Intel software license
terms, and open source software license terms. These separate license terms
govern your use of the third party programs as set forth in the
"THIRD-PARTY-PROGRAMS" file.

Security
--------

The Security Policy (https://github.com/oneapi-src/oneDNN/blob/main/SECURITY.md)
outlines our guidelines and procedures for ensuring the highest level
of security and trust for users of oneDNN.

Trademark Information
---------------------

Intel, the Intel logo, Arc, Intel Atom, Intel Core, Iris,
OpenVINO, the OpenVINO logo, Pentium, VTune, and Xeon are trademarks
of Intel Corporation or its subsidiaries.

* Other names and brands may be claimed as the property of others.

Microsoft, Windows, and the Windows logo are trademarks, or registered
trademarks of Microsoft Corporation in the United States and/or other
countries.

OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission
by Khronos.

(C) Intel Corporation