Commit
Update learning rate for DS2.
To get better accuracy for the 3-layer GRU with 2560 hidden
units, the learning rate has been lowered from 0.00075 to 0.0001. This
allows the network to achieve a WER of around 34.

Further hyperparameter tuning is needed to bring the WER down further.
alugupta committed Aug 29, 2018
1 parent 68e5ecf commit 9c6b8b3
Showing 1 changed file with 1 addition and 1 deletion.
speech_recognition/pytorch/params.py (1 addition, 1 deletion)
@@ -33,7 +33,7 @@
 # Training parameters
 epochs = 10 # Number of training epochs
 learning_anneal = 1.1 # Annealing applied to learning rate every epoch
-lr = 0.00075 # initial learning rate
+lr = 0.0001 # initial learning rate
 momentum = 0.9 # momentum
 max_norm = 400 # Norm cutoff to prevent explosion of gradients
 l2 = 0 # L2 regularization
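
For context, a minimal sketch of how the parameters in this hunk are commonly wired into a PyTorch training loop: SGD with momentum, gradient-norm clipping at max_norm, and a per-epoch learning-rate anneal by learning_anneal. The model, data, and loss below are placeholder assumptions for illustration, not the repository's actual DS2 network or train.py.

import torch

# Values taken from params.py as of this commit.
lr = 0.0001            # initial learning rate (lowered from 0.00075 here)
learning_anneal = 1.1  # annealing factor applied every epoch
momentum = 0.9
max_norm = 400         # gradient-norm cutoff
epochs = 10

# Placeholder model: stands in for the 3-layer, 2560-unit GRU network.
model = torch.nn.Linear(161, 29)
optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)

for epoch in range(epochs):
    # Placeholder batch and loss in place of the real forward pass + CTC loss.
    x = torch.randn(8, 161)
    target = torch.randn(8, 29)
    loss = torch.nn.functional.mse_loss(model(x), target)

    optimizer.zero_grad()
    loss.backward()
    # Clip the gradient norm to prevent explosion of gradients.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    optimizer.step()

    # Anneal: shrink the learning rate by learning_anneal after each epoch.
    for group in optimizer.param_groups:
        group['lr'] = group['lr'] / learning_anneal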
