Description
Hi,
I have read through your implementation of Deep Voice 3, and it is really clean. Have you gotten any good results yet?
I also have a few questions that you could perhaps help me clear up.
-
'modules.py', line 24. Why do we need to set the first row of the embedding matrix to the zero vector?
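My guess is that index 0 is reserved for padding, so zeroing the first row makes padded positions embed to the zero vector and contribute nothing downstream. A minimal sketch of that reading (names are mine, not necessarily the repo's):

```python
import tensorflow as tf

def embed(inputs, vocab_size, num_units, zero_pad=True):
    # Usual zero-pad trick: reserve index 0 for padding and overwrite the first
    # row with zeros, so padded positions map to the zero vector.
    lookup_table = tf.get_variable("lookup_table",
                                   shape=[vocab_size, num_units],
                                   initializer=tf.truncated_normal_initializer(stddev=0.01))
    if zero_pad:
        lookup_table = tf.concat((tf.zeros(shape=[1, num_units]),
                                  lookup_table[1:, :]), axis=0)
    return tf.nn.embedding_lookup(lookup_table, inputs)
```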
-
'modules.py', line 270. I checked the paper, but I could not find any details about the 'scale' option...
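My own guess is that it is the usual scaled dot-product trick, i.e. dividing the attention logits by sqrt(d) so their variance does not grow with the key dimension, but that is only an assumption on my part:

```python
import tensorflow as tf

def dot_product_attention(queries, keys, scale=True):
    # queries: [N, T_q, d], keys: [N, T_k, d].
    # Hypothetical reading of `scale`: divide the logits by sqrt(d) before the
    # softmax so their magnitude stays roughly constant as d grows.
    logits = tf.matmul(queries, keys, transpose_b=True)   # [N, T_q, T_k]
    if scale:
        d = tf.to_float(tf.shape(queries)[-1])
        logits *= tf.rsqrt(d)
    return tf.nn.softmax(logits)
```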
-
'modules.py', lines 338 and 343. The paper says, 'For a single speaker, ωs is set to one for the decoder and fixed for the encoder to the ratio of output timesteps to input timesteps.' So maybe for the queries, position_rate should be 1, and for the keys, position_rate should be hp.T_y/hp.T_x?
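For reference, this is how I read the ωs (position rate) term in the paper's positional encoding; hp.T_x, hp.T_y are your hyperparameters, everything else is just placeholder naming on my side:

```python
import numpy as np

def positional_encoding(num_timesteps, num_units, position_rate=1.0):
    # Sinusoidal positional encoding with a position rate w:
    #   PE(i, 2k)   = sin(w * i / 10000^(2k / d))
    #   PE(i, 2k+1) = cos(w * i / 10000^(2k / d))
    pe = np.array([
        [position_rate * i / np.power(10000.0, 2.0 * (k // 2) / num_units)
         for k in range(num_units)]
        for i in range(num_timesteps)])
    pe[:, 0::2] = np.sin(pe[:, 0::2])
    pe[:, 1::2] = np.cos(pe[:, 1::2])
    return pe

# As I read the single-speaker setting:
#   queries (decoder): position_rate = 1.0
#   keys (encoder):    position_rate = hp.T_y / hp.T_x  (output / input timesteps)
```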
-
'modules.py', line 384. I think this line is performing context normalization, and maybe the denominator should be the square root of the total number of input timesteps, something like tf.sqrt(tf.to_float(tf.shape(val)[1]))?
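In other words, something along these lines (just my sketch of the fix I have in mind):

```python
import tensorflow as tf

def attention_context(alignments, val):
    # alignments: [N, T_q, T_x] attention weights, val: [N, T_x, d] value vectors.
    # Scale the weighted sum by 1/sqrt(T_x), the square root of the number of
    # encoder timesteps, rather than by T_x itself.
    context = tf.matmul(alignments, val)            # [N, T_q, d]
    num_input_timesteps = tf.to_float(tf.shape(val)[1])
    return context * tf.rsqrt(num_input_timesteps)
```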
-
'synthesis.py', line 38. Maybe the total number of decoding steps should be hp.T_y//hp.r, since the decoder predicts hp.r frames per step?
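A toy illustration of the loop length I mean (the numbers and step_fn are dummies standing in for hp.* and one decoder step of your graph):

```python
import numpy as np

T_y, r, n_mels = 200, 4, 80                      # dummy values standing in for hp.*
step_fn = lambda prev: np.zeros((r, n_mels))     # placeholder for one decoder step
mels = np.zeros((T_y, n_mels))
# The decoder emits r frames per step, so the loop runs T_y // r times, not T_y times.
for t in range(T_y // r):
    mels[t * r:(t + 1) * r] = step_fn(mels[:t * r])
```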
Thanks