Deployment bug-fix #1036

Open · wants to merge 148 commits into base: devel

148 commits (the diff below shows the changes from 1 commit):
10ba7ae
Added AdaGrad and AdaDelta solvers. First implementation of solver in…
Jan 27, 2016
88d4b63
Merge branch 'master' into new-solvers
Jan 27, 2016
26e7172
Added nnsolvers unit test, to check the convergence of new solvers.
Jan 27, 2016
2b6f2ed
Added solver interface to cnn_train_dag.
Jan 27, 2016
dc6f749
Improved nnsolvers unit test, added DAG tests.
Jan 27, 2016
85cc7f7
Merge branch 'master' into new-solvers
Jan 27, 2016
e723803
Fixed nnsolvers in GPU mode.
Jan 27, 2016
6d4c944
Added RMSProp. Tweaked the epsilon position in AdaGrad (literature va…
Jan 28, 2016
7fef669
Merge branch 'master' into new-solvers
May 6, 2016
0c429d8
documentation, backwards-compatible momentum option
May 6, 2016
1c21b66
update unit tests
May 6, 2016
bbe444d
different default hyper-params for each solver, same as in torch. imp…
May 8, 2016
394f639
renamed momentum and state to solverState in cnn_train/dag; fixed uni…
May 11, 2016
b3598e0
remove unneded code
May 11, 2016
0ec8114
add a moving average option to AdaGrad, which avoids the monotonic de…
May 12, 2016
09c771e
AdaGrad: running sum with decay instead of moving average (which does…
May 12, 2016
fb0b626
correct default hyperparam
May 12, 2016
f866715
fix options for custom solvers
May 13, 2016
8088635
Merge branch 'master-bb' into devel
lenck Jul 13, 2016
9ad2008
Merge branch 'devel' into new-solvers
lenck Jul 13, 2016
0711d24
Revamped the custom solvers. Properly merged to the new cnn_train.
lenck Jul 13, 2016
8ccdc96
Merge branch 'master' of bitbucket.org:ovl/matconvnet
Sep 17, 2016
05c36ab
Makefile: bug fix: make sure that specified CUDA libraries are linked…
vedaldi Sep 16, 2016
5bdb22a
fast r-cnn: adds region of interest pooling
Sep 25, 2016
3eb8102
fast r-cnn: adds demo
Sep 25, 2016
041e8a9
fast r-cnn: adds import script
Sep 25, 2016
d579313
cnn_train_*: skips void updates
Sep 25, 2016
86b7a1c
Xcode: sync
vedaldi Sep 25, 2016
c2cb3a9
build: adds `-D_FORCE_INLINES` flag
vedaldi Sep 25, 2016
733b083
fast-rcnn: cleanup
vedaldi Sep 25, 2016
b098d06
beta22->beta23
vedaldi Sep 25, 2016
b62b96d
roipooling: fix recent compute capability
vedaldi Sep 25, 2016
e791bae
Merge branch 'master' of bitbucket.org:ovl/matconvnet
Sep 26, 2016
7990df5
Fix names
Sep 26, 2016
ab371b2
Add xVOCap
Sep 26, 2016
0be8088
Change eval to FRCNN style
Sep 27, 2016
eedfc01
Add comments to eval
Sep 27, 2016
2d7bcee
Fix bbox regression
Sep 29, 2016
64e9b45
Remove ap_auc
Sep 29, 2016
b9415b3
switch to torch resnet sampling factor
vedaldi Sep 27, 2016
86f3af7
Merge branch 'master' of bitbucket.org:ovl/matconvnet
Sep 29, 2016
e0004ac
minor typo fixes to vl_imreadjpeg docs
albanie Sep 30, 2016
3336632
Incorporate bbox meanstd
Sep 30, 2016
afe1be7
Clean up
Sep 30, 2016
8262097
Fix bbox reg
Sep 30, 2016
df7d476
more on default models
vedaldi Sep 26, 2016
5372113
fixed model filename: *-pascal07-*.mat
thelinuxmaniac Sep 28, 2016
1846751
Fix a minor bug in roipool
Sep 28, 2016
b428184
Minor improvement
Sep 28, 2016
72389fe
Demo shows on original
Sep 29, 2016
381905a
fixed missing *-pascal07-*.mat
thelinuxmaniac Sep 29, 2016
cf1c043
fast-rcnn: fixes evaluation script paths
vedaldi Sep 29, 2016
bff8e8c
utils/import-caffe.py: save input sizes in metadata
vedaldi Sep 29, 2016
6337e3f
dagnn.ROIPooling: adds missing `getOuptutSizes` method
vedaldi Sep 29, 2016
6ebaa6e
dagnn.getVarSizes: correctly expand empty variable sizes to [0 0 0 0]
vedaldi Sep 29, 2016
7ca8f06
model2dot.m: works with Fast R-CNN by using new metadata
vedaldi Sep 29, 2016
17f802f
doc/pretrained.md: adds Fast R-CNN performance
vedaldi Sep 29, 2016
9b9f1bb
pretrained.md: fixes
vedaldi Sep 29, 2016
6aba90b
utils/import-caffe.py: fix normalization metadata
vedaldi Sep 30, 2016
f9154a1
DagNN.fromSimpleNN: deals properly with `net.meta.inputs`
vedaldi Sep 30, 2016
1d3dae9
models2dot.m: compatible with latest models
vedaldi Sep 30, 2016
f0f1d4c
fast-rcnn: updates pretrained models
vedaldi Sep 30, 2016
4ef4e52
mode2dot.m: bugfix
vedaldi Sep 30, 2016
01242b3
doc: pretrained.md: fixes
vedaldi Sep 30, 2016
452f087
fast_rcnn_evaluate.m: fix bug introduced with last merge
vedaldi Sep 30, 2016
4ce2871
doc: pretrained.md: updates MD5s
vedaldi Sep 30, 2016
32bf09c
Merge remote-tracking branch 'origin/master' into smallTypos
albanie Oct 3, 2016
e347c75
fixed small typos in lrn normalization cuda
albanie Oct 3, 2016
35a23bd
COPYING: fixes
vedaldi Oct 4, 2016
26eed46
Merge remote-tracking branch 'origin/master' into smallTypos
albanie Oct 4, 2016
a55ceb2
fixes small typo in vl_nnpool
albanie Oct 4, 2016
ad3ed7a
more minor doc typo fixes to vl_nnpool
albanie Oct 4, 2016
c820607
fixed pooling comment in doc
albanie Oct 4, 2016
ed214c5
rename a parameter in dagnn
Oct 5, 2016
2f4a815
fixes summations in spatial normalization derivatives
albanie Oct 6, 2016
3a9afa6
Merge branch 'smallTypos' of https://bitbucket.org/ovl/matconvnet int…
albanie Oct 6, 2016
effc875
added opts.postEpochFn=@myfunc option to cnn_train, which may be used…
Oct 24, 2016
ff513ed
fix vl_nnloss doc ('logisticlog' -> 'logistic')
Nov 8, 2016
3ac854b
Fixed #722 #743
lenck Nov 11, 2016
a5db5e9
Fix for linux compilation.
lenck Nov 11, 2016
563edcd
Added more info about the linux dependencies.
lenck Nov 11, 2016
f90ed2d
Fixes #694, libjpeg not aborted once a libjpeg error found.
lenck Nov 22, 2016
829798d
Fixes #744.
lenck Nov 22, 2016
7ee8d8a
Fixes once more the issue #552.
lenck Nov 22, 2016
d248531
Merge branch 'master' into master-bb
lenck Nov 22, 2016
efa978b
Fixed the `vl_imreadjpeg.m` help string.
lenck Nov 24, 2016
e53837a
Merge branch 'master'
Nov 30, 2016
c8d650d
fix some errors introduced when merging
Nov 30, 2016
ecf6e00
fix subtle bug in Nesterov momentum (by using standard instead of in-…
Nov 30, 2016
784967a
further improvements to solvers
Nov 30, 2016
661b149
formatting
Nov 30, 2016
110092b
fix plotDiagnostics for solvers with non-numeric state
Nov 30, 2016
5b4af28
same random seed for consistent unit tests
Nov 30, 2016
7b1712a
ADAM solver
aravindhm Nov 30, 2016
d8c173d
Unit tests Adam solver
aravindhm Dec 1, 2016
4e2f662
Merge branch 'master'
Dec 2, 2016
4adcf54
Merged in new-solvers (pull request #21)
Dec 2, 2016
237b5dd
Update semilogy and plot
oya163 Dec 9, 2016
ef68d3d
Update cnn_mnist_experiment
oya163 Dec 9, 2016
5cb496f
Minor
Dec 18, 2016
4d73baf
Merge branch 'master' of bitbucket.org:ovl/matconvnet
Dec 18, 2016
48c7aa9
added dilation to caffe Conv import (fixes #816)
albanie Dec 18, 2016
663d545
Merge pull request #805 from oya163/patch-1
lenck Dec 19, 2016
0a235f3
Fixed spelling mistake.
lenck Jan 18, 2017
f3f3a7f
Added a fix for the new version vl_imreadjpeg.
lenck Jan 18, 2017
3bd1306
Small improvements of cnn_train:
lenck Jan 18, 2017
c0381d0
Merge branch 'master' of github.com:vlfeat/matconvnet
lenck Jan 18, 2017
d6518b5
Merge branch 'master' into master-bb
lenck Jan 18, 2017
0f6412c
Bugfix.
lenck Jan 18, 2017
c038636
Merged in caffe-dilate-fix (pull request #26)
lenck Jan 18, 2017
b570d83
fixed mnist bnorm initialization, where the previous conv layer's bia…
Jan 20, 2017
4eec6fa
Merge branch 'master' of bitbucket.org:ovl/matconvnet
Feb 8, 2017
b4c57a0
Fix bug in support for empty train/val sets
Feb 9, 2017
a9798bf
fix bug that caused setting a solver to be ignored
Feb 13, 2017
5681a0b
fix random seed in mnist unit tests, for more reproducible tests
Feb 13, 2017
1a6a0e7
Merged in smallTypos (pull request #23)
albanie Feb 15, 2017
1505d50
Fixed a bug introduced in b4c57a0, isequal(~,nan) is always false.
lenck Feb 22, 2017
bf310fc
Added epochSize parameter to dagnn too.
lenck Mar 13, 2017
19e5057
Merge branch 'master' of ssh://bitbucket.org/ovl/matconvnet
lenck Mar 13, 2017
32aee3b
fixed small issue ignoring doubles on MNIST unit test
Mar 13, 2017
69d07fd
Small bugfix.
lenck Mar 15, 2017
ece81ef
Merge branch 'master' of ssh://bitbucket.org/ovl/matconvnet
lenck Mar 15, 2017
48ef227
Added one more output to vl_imreadjpeg.cu verbose.
lenck Mar 16, 2017
cfc2c47
Added a 'display' parameter to dagnn.Loss. Moved accummulation to a s…
lenck Mar 16, 2017
e46fb0a
Added a PDist wrapper...
lenck Mar 16, 2017
c04415d
refactoring.
lenck Mar 17, 2017
269d9bf
Changed a bit the PDist interface - aggregates outputs by default.
lenck Mar 17, 2017
198faec
Merged in loss-improvements (pull request #30)
lenck Mar 17, 2017
fbe2a45
Fixed interface quirk of vl_nnloss and vl_nnrelu (different from othe…
Mar 22, 2017
2695827
Fixed bug in vl_taccum when argument b is a scalar
Mar 22, 2017
c19748a
Allow redirecting vl_testnn to different test suite dirs
Mar 22, 2017
677e51f
Removed unnecessary try/catch.
lenck Mar 23, 2017
4987ded
Small bug fixes for loss.
lenck Mar 23, 2017
5dba39e
Merge branch 'master' of ssh://bitbucket.org/ovl/matconvnet
lenck Mar 23, 2017
5cdfa2f
Fix of the numerical derivatives for the vl_nnpdist.
lenck Mar 23, 2017
8420b7e
Merged in toy-data-example (pull request #28)
Mar 23, 2017
23e9d96
Merged in vl_argparse-improvements (pull request #31)
Mar 23, 2017
049e258
Merged in docFix (pull request #25)
albanie Mar 23, 2017
03442ce
prepare for beta24
vedaldi Mar 23, 2017
2e5d60e
doc: fixes
vedaldi Mar 23, 2017
3387ca9
Fixed compilation warning on MSVC/GPU compilation.
lenck Mar 24, 2017
24b8182
Commented out unused bool variable which causes "Unused" warnings.
lenck Mar 24, 2017
1b5cc3f
doc: minor fixes
vedaldi Mar 24, 2017
0461a96
Fixed the nnroipool bug for older MSVC compilers.
lenck Mar 24, 2017
0ea06fd
Merge branch 'master' of bitbucket.org:ovl/matconvnet into master-bb
lenck Mar 24, 2017
0c70ada
xtest/nnconv: reduce memory usage for a test
vedaldi Mar 24, 2017
6221423
doc: typo
vedaldi Mar 24, 2017
f4ea5c3
Biases update bug-fix in mergeBatchNorm
sosiris Aug 15, 2017

AdaGrad: running sum with decay instead of moving average (which doesn't quite work)
Joao Henriques committed May 12, 2016

commit 09c771e0f67cf4f8616cc23a7354bbfc91901a1c
19 changes: 7 additions & 12 deletions examples/solver_adagrad.m
@@ -11,19 +11,18 @@
 % `epsilon`:: 1e-10
 %   Small additive constant to regularize variance estimate.
 %
-% `rho`:: []
+% `rho`:: 1
 %   Moving average window for variance update, between 0 and 1 (larger
-%   values result in slower/more stable updating). This has the same
-%   effect as RHO for AdaDelta and RMSProp. However, because it is not
-%   part of standard AdaGrad, it is disabled by default (RHO is empty).
+%   values result in slower/more stable updating). This is similar to
+%   RHO in AdaDelta and RMSProp. Standard AdaGrad is obtained with a RHO
+%   value of 1 (use total average instead of a moving average).
 %
 % A possibly undesirable effect of standard AdaGrad is that the update
 % will monotonically decrease to 0, until training eventually stops. This
 % is because the AdaGrad update is inversely proportional to the total
 % variance of the gradients seen so far.
-% By setting RHO to non-empty, a moving average window of the variance
-% is used instead of the total variance, so the update does not
-% monotonically decrease to 0.
+% With RHO smaller than 1, a moving average is used instead. This
+% prevents the final update from monotonically decreasing to 0.

 % Copyright (C) 2016 Joao F. Henriques.
 % All rights reserved.
@@ -35,10 +34,6 @@
   g_sqr = 0 ;
 end

-if isempty(opts.rho) % standard, accumulate total variance
-  g_sqr = g_sqr + grad.^2 ;
-else % moving average window, similar to RMSProp/AdaDelta
-  g_sqr = g_sqr * rho + grad.^2 * (1 - rho) ;
-end
+g_sqr = g_sqr * opts.rho + grad.^2 ;

 weights = weights - lr * grad ./ (sqrt(g_sqr) + opts.epsilon) ;
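
For context, the accepted rule is easy to try in isolation. Below is a minimal MATLAB sketch of AdaGrad with a decayed running sum; the toy quadratic objective and the names (w, grad, lr, rho, epsilon) are illustrative assumptions, not the actual cnn_train solver interface from this PR:

```matlab
% Minimal sketch: AdaGrad with a decayed running sum of squared gradients.
% rho = 1 recovers standard AdaGrad (a plain running sum), whose effective
% step lr ./ sqrt(g_sqr) can only shrink as g_sqr grows monotonically.
lr = 0.001 ;       % learning rate
rho = 0.95 ;       % decay of the accumulator (1 = standard AdaGrad)
epsilon = 1e-10 ;  % small constant regularizing the variance estimate

w = randn(10, 1) ; % toy parameters
g_sqr = 0 ;        % squared-gradient accumulator, lazily initialized to 0

for t = 1:100
  grad = 2 * w ;   % toy gradient of the objective sum(w.^2)

  % Running sum with decay: decay the history, add the *undamped* new term.
  % (A moving average would weight the new term by (1 - rho) instead.)
  g_sqr = g_sqr * rho + grad.^2 ;

  % Per-element step scaled by the inverse root of the accumulated variance.
  w = w - lr * grad ./ (sqrt(g_sqr) + epsilon) ;
end
```

The deleted else branch in the diff shows the rejected variant: it damped the new term by (1 - rho), which is the only difference, and the commit message records that it "doesn't quite work". With the undamped term, setting rho to 1 collapses the decayed sum exactly to standard AdaGrad, matching the updated help text.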