uis-rnn doesn't work for a long-utterance dataset? #50
Comments
Hi, we haven't tested uis-rnn on AMI. We found the audio quality of this dataset not good enough, so we didn't use it. As for the poor performance on AMI, it's likely due to LSTM/GRU not being able to handle ultra-long sequences.
@AnzCol I think you can answer these better than me.
Hi @wq2012,
I'm not familiar with that :(
Step c) is necessary to complete the probability distribution.
Yes, you model P(X,Y,Z), a generative approach. Other works use a discriminative approach, P(Y|X) = P(Y|Z,X) * P(Z|X) = SAP * SCD. I think the generative approach P(X,Y,Z) is nearly optimal when you can train it on an extremely large dataset, as Transformer-based models such as BERT and GPT-2 do.
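To make the comparison concrete, here is one way to write out the two factorizations; the notation (X observations, Y speaker labels, Z speaker-change indicators, SCD = speaker change detection, SAP = speaker assignment) is my reading of the comment above, not notation taken from the paper.

```latex
% Generative (uis-rnn style): model the joint and decode by maximizing it.
Y^{*} = \arg\max_{Y} \max_{Z} P(X, Y, Z)

% Discriminative pipeline: marginalize over change hypotheses Z,
% in practice approximated by the single best hypothesis \hat{Z}.
P(Y \mid X) = \sum_{Z} \underbrace{P(Y \mid Z, X)}_{\text{SAP}}\,
              \underbrace{P(Z \mid X)}_{\text{SCD}}
            \approx P(Y \mid \hat{Z}, X)\, P(\hat{Z} \mid X)
```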
In the unsupervised setting, I found that your spectral clustering algorithm works quite well on many audios.
It's a good point. I think that's an interesting direction for future efforts.
Indeed, spectral clustering is by far the best unsupervised approach that we found. The only drawback is that it's a bit sensitive to its parameters. So we usually tune the parameters for specific domains that we want to deploy the system to.
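As a rough illustration of the parameter sensitivity being discussed, below is a minimal sketch of offline spectral clustering over d-vectors using scikit-learn; the affinity refinement and the p_percentile threshold are placeholders inspired by the "Speaker Diarization with LSTM" recipe, not the exact pipeline the authors used.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_dvectors(embeddings, n_clusters, p_percentile=0.95):
    """Cluster per-segment d-vectors with spectral clustering.

    embeddings: (num_segments, dim) array of L2-normalized d-vectors.
    p_percentile: row-wise thresholding percentile for the affinity
        matrix; this is one of the parameters that typically needs
        per-domain tuning (an assumption, not an official recipe).
    """
    # Cosine affinity mapped into [0, 1] for unit-norm embeddings.
    affinity = (np.dot(embeddings, embeddings.T) + 1.0) / 2.0

    # Row-wise thresholding: zero out weak similarities to reduce noise.
    for i in range(affinity.shape[0]):
        threshold = np.percentile(affinity[i], p_percentile * 100)
        affinity[i, affinity[i] < threshold] = 0.0

    # Symmetrize again after the row-wise operation.
    affinity = np.maximum(affinity, affinity.T)

    labels = SpectralClustering(
        n_clusters=n_clusters,
        affinity="precomputed",
        assign_labels="kmeans",
        random_state=0,
    ).fit_predict(affinity)
    return labels
```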
@wq2012
@wrongbattery Sorry, I didn't keep any of those logs. But I can usually see the loss function decreasing and finally converging. We never had any success on the AMI dataset. The acoustic conditions of AMI are really different from our typical training data for the Voice Activity Detector, the speaker recognition model, and UIS-RNN. Specifically, the volume of the AMI dataset is really low, so the VAD has a very large false reject rate. ICSI is a great dataset. I don't remember whether we tried to predict on it (very likely not), but we tried to train on it and predict on other datasets, and it worked pretty well.
This is my log file from training on the ICSI dataset. The loss just gets stuck at around -750 to -720. I also implemented a variant based on your code that allows the number of clusters as input, but the results on some YouTube audios are not good. What is your maximum sequence length?
It's weird that the loss becomes NaN at some point:
Not sure what is going on. I didn't try to run diarization experiments on YouTube data, since I don't have any well-annotated YouTube datasets. But I've heard other teams complaining that diarization on YouTube is super difficult. Personally I haven't heard of any success stories on diarization with YouTube yet. The experiments we carried out are mostly on audios shorter than 5 minutes.
I think your model converges quite fast after a few iterations. If we know the oracle number of speakers beforehand, does spectral clustering perform far better than uis-rnn? Do you agree with this?
I don't know. We currently don't have a good implementation in uis-rnn to limit the number of speakers, and we haven't tried much in this direction. Also, if you know the number of speakers beforehand, it is no longer the STANDARD speaker diarization problem, so comparing uis-rnn and spectral clustering in this case might not be very fair. Besides, the performance of uis-rnn depends significantly on the quality of the training data.
It could be true, but I wouldn't be too assertive about it. Our current uis-rnn implementation is more of a prototype than a product; it's not mature. There is still a lot of room to improve, and for other researchers to contribute.
Thanks. Your idea of one network per speaker is quite interesting. I'm trying to solve the online diarization problem in a production environment.
Hello,
Dear @BarCodeReader,
Hi @wrongbattery,
Hi! I've divided the interviews from ICSI into approximately 5-minute wavs and tried to use d-vectors from https://github.com/CorentinJ/Real-Time-Voice-Cloning for training uis-rnn. But I have the same problem: the loss becomes NaN at some point. Can you tell me what kind of d-vectors you have used?
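I can't say this is the cause here, but one hedged sanity check before calling fit is to look for NaN/Inf values and zero-norm segments in the d-vectors, since a single degenerate embedding (e.g. from a silent window) can drive the loss to NaN. The helper below is only an illustrative sketch; the thresholds and the re-normalization step are assumptions, not advice from the uis-rnn authors.

```python
import numpy as np

def check_train_sequence(train_sequence, eps=1e-8):
    """Basic sanity checks on a (num_frames, dim) d-vector array
    before feeding it to uis-rnn. Purely illustrative."""
    train_sequence = np.asarray(train_sequence, dtype=np.float64)

    # Reject rows containing NaN or Inf values.
    if not np.isfinite(train_sequence).all():
        bad_rows = np.where(~np.isfinite(train_sequence).all(axis=1))[0]
        raise ValueError(f"Non-finite d-vectors at rows: {bad_rows[:10]}")

    # Reject near-zero embeddings, e.g. from silent windows.
    norms = np.linalg.norm(train_sequence, axis=1)
    if (norms < eps).any():
        bad_rows = np.where(norms < eps)[0]
        raise ValueError(f"Near-zero d-vectors at rows: {bad_rows[:10]}")

    # Re-normalize to unit length so all segments have comparable scale
    # (assumption: the embedder may not normalize its outputs).
    return train_sequence / norms[:, None]
```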
@innarid
@innarid |
@wq2012
Even if I train on only the first 5 sequences of train_sequences, the loss becomes NaN from the first row:
Do you know how to solve the problem? Update: Issue solved!
Describe the question
In the diarization task, I train on the AMI train-dev set and the ICSI corpus, and test on the AMI test set. Both datasets contain audios with 3-5 speakers that are 50-70 minutes long. My d-vector embedding model is trained on VoxCeleb 1 and 2 with EER = 4.55%. I train uis-rnn with a window size of 0.24 s, 50% overlap, and a segment size of 0.4 s. The results are poor on both the train and test sets.
I also read all your code for uis-rnn, and I don't understand: 1) Why do you split up the original utterances and concatenate them by speaker, then use that as the training input? 2) Why does the input ignore which audio an utterance belongs to, merging all utterances as if they came from one single audio? This process seems completely different from the inference process, and it also reduces the ability to use larger batch sizes if one speaker talks too much.
For a 1-hour audio, the output has 20-30 speakers instead of 3-5 speakers, no matter how small crp_alpha is.
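For reference, here is a minimal sketch of how the uis-rnn API consumes the concatenated training data and produces per-segment labels; it follows the interface shown in this repository's README (parse_arguments, fit, predict), with d-vector extraction omitted, the .npy file names purely hypothetical, and the labeling scheme stated as an assumption.

```python
import numpy as np
import uisrnn

# Default configuration objects from this repo.
model_args, training_args, inference_args = uisrnn.parse_arguments()

# train_sequence: (num_segments, observation_dim) d-vectors, with
# segments from the same training utterance kept contiguous.
# train_cluster_id: per-segment speaker labels such as 'utt1_spk0';
# prefixing labels with the utterance id keeps speakers from different
# recordings distinct (an assumption about the labeling scheme).
train_sequence = np.load('train_sequence.npy')      # hypothetical file
train_cluster_id = np.load('train_cluster_id.npy')  # hypothetical file

model = uisrnn.UISRNN(model_args)
model.fit(train_sequence, train_cluster_id, training_args)

# Prediction runs on one (num_segments, dim) array per test utterance.
test_sequence = np.load('test_sequence.npy')        # hypothetical file
predicted_cluster_ids = model.predict(test_sequence, inference_args)
print(predicted_cluster_ids)
```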
My background
Have I read the README.md file?
Have I searched for similar questions from closed issues?
Have I tried to find the answers in the paper Fully Supervised Speaker Diarization?
Have I tried to find the answers in the reference Speaker Diarization with LSTM?
Have I tried to find the answers in the reference Generalized End-to-End Loss for Speaker Verification?