AgglomerativeClustering not honoring num_cluster parameter #1525

Closed
olvb opened this issue Nov 2, 2023 · 2 comments · Fixed by #1531

Comments

olvb (Contributor) commented Nov 2, 2023

I have a very simple test case on which the "pyannote/speaker-diarization-3.0" pipeline fails: a short audio file with two very different voices, each speaking one turn, separated by about one second of silence. I pass it to SpeakerDiarization.apply() with both min_speakers and max_speakers set to 2. The pipeline detects 2 speech segments but only one speaker.

Funnily enough, calling SpeakerDiarization.apply() without min_speakers and max_speakers gives the expected result (2 speech segments with 2 speakers).
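
For reference, the pipeline-level calls look roughly like this (a minimal sketch, not my exact script; the file name and token are placeholders):

from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.0",
    use_auth_token="HF_TOKEN",  # placeholder: the model is gated on Hugging Face
)

# constrained to exactly 2 speakers -> 2 speech segments but a single speaker label
diarization = pipeline("two_speakers.wav", min_speakers=2, max_speakers=2)

# unconstrained -> 2 speech segments with 2 speakers, as expected
diarization = pipeline("two_speakers.wav")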

I managed to narrow the issue down to the clustering stage:

import numpy as np
from pyannote.audio.pipelines.clustering import AgglomerativeClustering

clustering = AgglomerativeClustering().instantiate(
    {
        "method": "centroid",
        "min_cluster_size": 0,
        "threshold": 0.0,
    }
)

# two embeddings that are different enough to form separate clusters
embeddings = np.asarray([[1.0, 1.0, 1.0, 1.0], [1.0, 2.0, 1.0, 2.0]])

# call without num_clusters
clusters = clustering.cluster(
    embeddings=embeddings, min_clusters=2, max_clusters=2, num_clusters=None
)
# succeeds
assert clusters.tolist() == [0, 1]

# call with num_clusters=2
clusters = clustering.cluster(
    embeddings=embeddings, min_clusters=2, max_clusters=2, num_clusters=2
)
# fails (we get [0, 0])
assert clusters.tolist() == [0, 1]

I won't pretend to understand everything that's going on in AgglomerativeClustering.cluster(), but the problem seems to arise in the branch beginning at https://github.com/pyannote/pyannote-audio/blob/develop/pyannote/audio/pipelines/clustering.py#L389. Before that step we already have the expected number of clusters, yet the code still tries to match the target number of clusters even though it doesn't need to. Changing the condition to num_clusters is not None and num_large_clusters != num_clusters does the trick here (see the sketch below), but I don't know if there is a deeper underlying issue in the algorithm.
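
To make the suggested change concrete, here is the condition in isolation (needs_adjustment is a hypothetical helper I'm using for illustration, not the actual code in clustering.py):

def needs_adjustment(num_clusters, num_large_clusters):
    # the current condition is roughly `num_clusters is not None`, which also
    # fires when clustering already produced the requested number of clusters
    return num_clusters is not None and num_large_clusters != num_clusters

# with the two embeddings above: 2 large clusters, target of 2 -> skip the branch
assert not needs_adjustment(num_clusters=2, num_large_clusters=2)
# the branch should still run when the counts actually differ
assert needs_adjustment(num_clusters=2, num_large_clusters=1)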

github-actions bot commented Nov 2, 2023

Thank you for your issue. You might want to check the FAQ if you haven't done so already.

Feel free to close this issue if you found an answer in the FAQ.

If your issue is a feature request, please read this first and update your request accordingly, if needed.

If your issue is a bug report, please provide a minimal reproducible example as a link to a self-contained Google Colab notebook containing everything needed to reproduce the bug:

  • installation
  • data preparation
  • model download
  • etc.

Providing an MRE will increase your chance of getting an answer from the community (either maintainers or other power users).

Companies relying on pyannote.audio in production may contact me via email regarding:

  • paid scientific consulting around speaker diarization and speech processing in general;
  • custom models and tailored features (via the local tech transfer office).

This is an automated reply, generated by FAQtory

hbredin (Member) commented Nov 5, 2023

Changing the condition to num_clusters is not None and num_large_clusters != num_clusters does the trick here but I don't know if there is a deeper underlying issue in the algorithm.

Thanks, I think this should do the trick. Can you contribute this change via a PR?
