Bugfix: Delay pattern mask is applied twice #110
base: main
Changes from 3 commits: 12e455a · 6124a46 · cc3b190 · fbaf621
```diff
@@ -3387,7 +3387,8 @@ def generate(
         )

         # build the delay pattern mask for offsetting each codebook prediction by 1 (this behaviour is specific to Parler-TTS)
-        input_ids, decoder_delay_pattern_mask = self.decoder.build_delay_pattern_mask(
+        # but don't overwrite the input_ids tensor with the delay pattern mask. We perform that later
+        _, decoder_delay_pattern_mask = self.decoder.build_delay_pattern_mask(
             input_ids,
             bos_token_id=generation_config._bos_token_tensor,
             pad_token_id=generation_config._pad_token_tensor,
```

Review comments on lines +3390 to +3391:

> As pointed out, this is a redundant operation that has no impact on the results!

> Hmm, I think this line does indeed change the results when using enrolled tokens. Perhaps your setup is working because it is slightly different, as you've described below. I shall try this and get back to you.

> Ok, so my testing shows that this fix is required to get the right audio when doing the enrolment. Here is an example audio file generated with and without the fix:
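To see why building the delay pattern mask on already-delayed ids is harmful, here is a minimal sketch. This is a simplified MusicGen-style delay pattern, not Parler-TTS's actual `build_delay_pattern_mask` implementation; the shapes, the `-1` sentinel, and the helper name are assumptions for illustration only.

```python
import numpy as np

def build_delay_pattern_mask(input_ids, pad_token_id, max_len):
    """Simplified delay pattern: codebook k is shifted right by k steps;
    -1 marks positions left for the model to generate later."""
    num_codebooks, seq_len = input_ids.shape
    mask = np.full((num_codebooks, max_len), -1, dtype=input_ids.dtype)
    for k in range(num_codebooks):
        mask[k, :k] = pad_token_id           # leading pads introduced by the delay
        mask[k, k : k + seq_len] = input_ids[k]
    # the first seq_len columns are the delayed prompt ids
    return mask[:, :seq_len], mask

# two codebooks, two prompt tokens each
prompt = np.array([[1, 2],
                   [3, 4]])

delayed_once, _ = build_delay_pattern_mask(prompt, pad_token_id=0, max_len=4)
# Feeding the already-delayed ids back in (the double application this PR
# removes) shifts codebook 1 a second time, pushing token 3 out of the
# visible prompt window entirely.
delayed_twice, _ = build_delay_pattern_mask(delayed_once, pad_token_id=0, max_len=4)

print(delayed_once.tolist())   # [[1, 2], [0, 3]]
print(delayed_twice.tolist())  # [[1, 2], [0, 0]]
```

With plain text-to-speech generation the prompt is a single BOS column, so the double shift is invisible; with enrolled audio tokens it destroys real prompt content, which matches the reviewer's observation above.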
```diff
@@ -3442,6 +3443,7 @@ def generate(
             generation_config=generation_config,
             synced_gpus=synced_gpus,
             streamer=streamer,
+            logits_warper=None,
             **model_kwargs,
         )
```

Review comments on the `logits_warper=None` line:

> You should keep the logits_warper, I'm not sure why you removed it!

> I didn't remove it! Originally,
> I've been able to test this with the following code, which also requires a small modification of the DAC code (adding `main_input_name = "input_values"` as a class attribute of `DACModel`):
>
> I found that Parler has difficulty generalizing to unseen speakers (meaning a speaker that has not been seen during training or that has not been generated by Parler), so there's no real advantage to using it for voice cloning. However, from my experiment, it works quite well with Parler-generated audio!
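The DAC-side change the comment describes is a one-line class attribute. The class body below is a hypothetical stand-in (the real `DACModel` in Parler-TTS subclasses a `transformers` `PreTrainedModel` and wraps the DAC codec); only the attribute itself comes from the comment.

```python
# Sketch of the modification: expose which keyword argument carries the audio,
# so transformers' generation/forward dispatch knows the model's main input.
class DACModel:
    main_input_name = "input_values"
```

`main_input_name` is the attribute `transformers` consults when it needs to identify a model's primary input tensor by name.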
> Hey @ylacombe, I tried the above code sample with both the mini and large models, but the generated audio file is noisy and inconsistent. I used input audio generated through Parler-TTS itself.
> This is a clean snippet! When calling `model.generate(...)`, is there a preference for using `input_values=input_values`? I was originally doing something along the lines of `decoder_input_ids=input_values.squeeze().long()`.
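For context on the conversion the commenter mentions, here is what `squeeze().long()` does to a batched tensor of codec tokens. The shape `(1, num_codebooks, seq_len)` and the variable names are assumptions for illustration, not taken from the PR's code.

```python
import torch

# stand-in for encoded audio codes with a singleton batch dimension
codes = torch.tensor([[[1.0, 2.0, 3.0],
                       [4.0, 5.0, 6.0]]])

# squeeze() drops the size-1 batch dim; .long() casts to int64, the dtype
# expected for token-id arguments such as decoder_input_ids
decoder_input_ids = codes.squeeze().long()
print(decoder_input_ids.shape)  # torch.Size([2, 3])
```

Passing `input_values` instead hands the raw tensor to the model's own preprocessing (via `main_input_name` above), whereas `decoder_input_ids` bypasses it, which may explain the difference the commenter is asking about.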