README.md (1 addition, 1 deletion)
@@ -40,7 +40,7 @@ to use more than one thread per core. When less than required cores are specifie
 limit execution of OpenMP threads to specified cores only.
 
 ## Best performance solution
-Please read [release notes](https://github.com/intel/caffe/blob/master/docs/release_notes.md) for our recommendations and configuration to achieve best performance on Intel CPUs.
+Please read [our Wiki](https://github.com/intel/caffe/wiki/Recommendations-to-achieve-best-performance) for our recommendations and configuration to achieve best performance on Intel CPUs.
 
 ## Multinode Training
 Intel® Distribution of Caffe* multi-node allows you to execute deep neural network training on multiple machines.
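
The hunk above keeps the README's note on restricting OpenMP threads to specific cores. As a minimal sketch of that advice in practice (`OMP_NUM_THREADS` and `KMP_AFFINITY` are standard Intel OpenMP runtime controls; the thread count and model path are illustrative):

```sh
# Run one OpenMP thread per physical core and pin them, skipping hyper-threads.
export OMP_NUM_THREADS=22
export KMP_AFFINITY=granularity=fine,compact,1,0
# Benchmark with the stock caffe CLI; any model prototxt works here.
./build/tools/caffe time --model=models/bvlc_alexnet/deploy.prototxt
```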
docs/release_notes.md
+All contributions by the University of California:
+Copyright (c) 2014, 2015, The Regents of the University of California (Regents)
+All rights reserved.
+
+All other contributions:
+Copyright (c) 2014, 2015, the respective contributors
+All rights reserved.
+For the list of contributors go to https://github.com/BVLC/caffe/blob/master/CONTRIBUTORS.md
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+* Redistributions of source code must retain the above copyright notice,
+this list of conditions and the following disclaimer.
+* Redistributions in binary form must reproduce the above copyright
+notice, this list of conditions and the following disclaimer in the
+documentation and/or other materials provided with the distribution.
+* Neither the name of Intel Corporation nor the names of its contributors
+may be used to endorse or promote products derived from this software
+without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+---
 
 ---
 title: Release Notes
@@ -216,9 +214,14 @@ Berkeley Vision runs Caffe with K40s, K20s, and Titans including models at Image
 There is an unofficial Windows port of Caffe at [niuzhiheng/caffe:windows](https://github.com/niuzhiheng/caffe). Thanks [@niuzhiheng](https://github.com/niuzhiheng)!
 
 # Change log
+03-11-2016
+* integration with MKL2017 update 1 (providing a better-performing solution)
+* minor changes to provide optimal performance with the default prototxt files describing topologies (for AlexNet, GoogleNet v2)
+* fixed Dockerfiles for Ubuntu and CentOS
+
 1-09-2016
 * added RNN support
-* moved form MKL2017 beta update 1 engine to MKL2017 (providing better performance solution)
+* moved from the MKL2017 beta update 1 engine to MKL2017
 * added official support for ResNet50, GoogleNet v2, and VGG-19 (list of currently supported topologies: AlexNet, GoogleNet, GoogleNet v2, ResNet50, VGG-19)
 * added official support for multinode training on GoogleNet with the MKL2017 engine
 * added DataLayer optimizations
@@ -234,12 +237,15 @@ Workaround: For older processors use MKL2017 GEMM engine: set USE_MKL2017_AS_DEF
 Workaround: Use the GEMM engine in the normalization layer (in the prototxt file, set `engine:=caffe` for that layer) for topologies that use LRN within channel, like CIFAR.
 
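To illustrate the LRN workaround above: the `engine` field on `LRNParameter` exists in stock BVLC Caffe, and the CIFAR-10 path below is from the standard examples; treat the exact spelling as a sketch rather than the fork's canonical syntax.

```sh
# Append "engine: CAFFE" to each within-channel LRN block, i.e. rewrite
#   lrn_param { norm_region: WITHIN_CHANNEL ... }
# so the normalization layer falls back to the GEMM (Caffe) engine.
sed -i 's/norm_region: WITHIN_CHANNEL/norm_region: WITHIN_CHANNEL\n    engine: CAFFE/' \
  examples/cifar10/cifar10_full_train_test.prototxt
```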
 * Performance results may be lower when the Data Layer is provided in txt files (an uncompressed list of jpg files)
-Workaround: We recommend to always use LMDB Data Layer
+Workaround: We recommend always using a compressed LMDB Data Layer
 
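For the LMDB recommendation just above, a sketch with Caffe's stock converter (paths and list file are illustrative; `--encoded` keeps the images jpg-compressed inside the LMDB, matching the "compressed" wording):

```sh
# Build an LMDB from a txt list of "relative/path.jpg label" lines, then point
# the Data layer's source at train_lmdb with backend: LMDB.
GLOG_logtostderr=1 ./build/tools/convert_imageset --encoded --shuffle \
  /path/to/jpegs/ train_list.txt train_lmdb
```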
 * LeNet, CIFAR, and SqueezeNet are currently not optimized for performance in Intel MKL2017
 Workaround: better performance results might be achieved with the GEMM engine: set `USE_MKL2017_AS_DEFAULT_ENGINE := 0` in `Makefile.config`.
 
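A sketch of the `Makefile.config` switch named in the workaround above (the flag name comes from the note itself; the `:= 1` starting value is an assumption):

```sh
# Fall back to the GEMM (Caffe) engine by default, then rebuild.
sed -i 's/^USE_MKL2017_AS_DEFAULT_ENGINE := 1/USE_MKL2017_AS_DEFAULT_ENGINE := 0/' Makefile.config
make clean && make all -j"$(nproc)"
```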
-* We observe convergence problems with some publicly presented hyper parameters (recommended for GPUs) for Googlenet and ResNet50. For CPU tuning of hyper parameters might be needed.
+* We observe convergence problems with some publicly presented hyperparameters (recommended for GPUs) for GoogleNet, ResNet50, and VGG-19. For CPU, tuning of hyperparameters might be needed.
+
+* MKL2017 doesn't allow access to the mean & variance statistics in the batch normalization layer, which prohibits their accumulation (in global-stats mode). This affects batch-1 scoring accuracy for topologies that use a batch normalization layer (ResNet50, GoogleNet v2).
+Workaround: use batch 32 or higher for accuracy measurements.
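A sketch of the batch-32 workaround above (`caffe test` is the stock scoring tool; the model path, weights file, and original `batch_size: 1` are illustrative assumptions):

```sh
# Raise the TEST-phase batch size to 32 before measuring accuracy.
sed -i '/phase: TEST/,/batch_size/ s/batch_size: 1/batch_size: 32/' \
  models/resnet50/train_val.prototxt
./build/tools/caffe test --model=models/resnet50/train_val.prototxt \
  --weights=resnet50.caffemodel --iterations=100
```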
 
 
 # Recommendations to achieve best performance
@@ -296,4 +302,4 @@ In folder `/examples/imagenet/` we provide scripts and instructions `readme.md`
 Caffe is released under the [BSD 2-Clause license](https://github.com/BVLC/caffe/blob/master/LICENSE). The BVLC reference models are released for unrestricted use.
 
 ***
-*Other names and brands may be claimed as the property of others
+*Other names and brands may be claimed as the property of others