Outlier scores - possible bug in GLOSH computation #628

Open
azizkayumov opened this issue Mar 20, 2024 · 1 comment
azizkayumov commented Mar 20, 2024

I am curious whether the GLOSH implementation in this repository correctly follows the paper's definition of "outlierness".
According to the HDBSCAN* paper (R. J. G. B. Campello et al. 2015, page 25):

In order to compute GLOSH in Equation (8), one needs only the first (last) cluster to which object xi belongs bottom-up (top-down) through the hierarchy, the lowest radius at which xi still belongs to this cluster (and below which xi is labeled as noise), ε(xi), and the lowest radius at which this cluster or any of its subclusters still exist (and below which all its objects are labeled as noise), εmax(xi).
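For reference, if I am reading Equation (8) correctly, it reduces to

GLOSH(xi) = 1 - εmax(xi) / ε(xi),

so everything below hinges on which radius the implementation ends up using as εmax(xi).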

Looking at the max_lambdas function used to compute εmax(xi), I think the quoted explanation above is not interpreted correctly. It seems that max_lambdas only considers the death of a point's parent cluster and ignores the later deaths of that cluster's subclusters; see the sketch below.
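
To make the claim concrete, here is a minimal sketch (my own code, not the library's) of what I believe εmax(xi) should be derived from. It reads the public condensed tree array; the field names parent, child, lambda_val and child_size are the ones condensed_tree_.to_numpy() exposes, while subtree_max_lambdas is just a name I made up for illustration:

import numpy as np

def subtree_max_lambdas(condensed_tree):
    """For every cluster id, return the largest lambda_val seen in that cluster
    or any of its descendant clusters. Since lambda = 1 / eps, this is 1 / eps_max:
    the lowest radius at which the cluster or any of its subclusters still exists."""
    tree = condensed_tree.to_numpy()  # fields: parent, child, lambda_val, child_size
    result = {}

    def walk(cluster):
        rows = tree[tree['parent'] == cluster]
        death = rows['lambda_val'].max()          # largest lambda at which anything leaves this cluster
        for child in rows['child'][rows['child_size'] > 1]:
            death = max(death, walk(int(child)))  # also include the deaths of its subclusters
        result[cluster] = death
        return death

    walk(int(tree['parent'].min()))               # root cluster = smallest parent id in the condensed tree
    return result

For a fitted clusterer, subtree_max_lambdas(clusterer.condensed_tree_) would then give, per cluster id, the largest death lambda of its subtree, i.e. 1/εmax(xi) for the points that fall out of that cluster; as far as I can tell, the current max_lambdas only does the per-cluster part of this and skips the subcluster deaths.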

To reproduce this issue, please run the following code:

import hdbscan
import numpy as np
import matplotlib.pyplot as plt


# Step 1. Generate 3 clusters of random data and some uniform noise
data = []
np.random.seed(1)
for i in range(3):
    data.extend(np.random.randn(100, 2) * 0.5 + np.random.randn(1, 2) * 3)
data.extend(np.random.rand(100, 2) * 20 - 10)

# Step 2. Cluster the data
k = 15
clusterer = hdbscan.HDBSCAN(
    alpha=1.0,
    approx_min_span_tree=False,
    gen_min_span_tree=True,
    metric='euclidean',
    min_cluster_size=k,
    min_samples=k,
    match_reference_implementation=True)
clusterer.fit(data)

# Step 3. Plot the outlier scores
outlier_scores = clusterer.outlier_scores_
plt.scatter([x[0] for x in data], [x[1] for x in data], s=25, c=outlier_scores, cmap='viridis')
plt.colorbar()
plt.title("Outlier scores")
plt.show()

This should show the following plot:

(figure: the generated data colored by outlier_scores_)

As you can see from the plot, the outlier scores assigned to the points lying between the clusters (note the yellow points between the clusters) do not look like "natural" outliers compared to the other outliers. From my understanding of the paper, the far-away points should receive the higher scores (much like reading a topographical map), while the points between clusters should receive lower outlier scores.
I think GLOSH is supposed to give us this instead:

(figure: the expected outlier scores, with the far-away points scoring highest)

A fix to GLOSH may also help with #116. I would like to open a PR, but I am currently having trouble building the code. Please let me know if there is something I might be missing.

azizkayumov (Author) commented:

For future readers and users of outlier scores in Python HDBSCAN, here is the simplest test case I could find on which Python HDBSCAN computes the outlier scores incorrectly:
(figure: the toy data set with outlier scores and top-outlier rankings from Python HDBSCAN)
As you can see from the plot, the point ranked 1 should have been ranked second, and the point ranked 2 should have been ranked first.

This happens because of the bug in how the max lambda values of clusters are calculated, as reported above.
To reproduce the plot, please run the following:

import hdbscan
import numpy as np
import matplotlib.pyplot as plt


# Step 1: Example data
data = [
    # cluster 1 (formed at eps = √2)
    [1, 1],
    [1, 2],
    [2, 1],
    [2, 2],
    # cluster 2 (formed at eps = √2)
    [4, 1],
    [4, 2],
    [5, 1],
    [5, 2],
    # cluster 3 (formed at eps = √2)
    [9, 1],
    [9, 2],
    [10, 1],
    [10, 2],
    [11, 1],
    [11, 2],
    [2, 5], # outlier1: cd = √13, joins cluster1 and cluster2 at eps = √13
    [10, 8], # outlier2: cd = √37, joins the root cluster at eps = √37
]

# Then the outlier scores should be:
# glosh(outlier1) = 1 - √2 / √13 = 0.60776772972
# glosh(outlier2) = 1 - √2 / √37 = 0.76750472251

# But Python HDBSCAN gives the following outlier scores:
# glosh(outlier1) = 1 - 2 / √13 = 0.44529980377 (cluster1 and cluster2 join at eps = 2, ignoring that both are already formed at eps = √2)
# glosh(outlier2) = 1 - 4 / √37 = 0.34240405077 (cluster3 and cluster1 ∪ cluster2 join at eps = 4)

# Step 2: Compute the outlier scores
k = 4
clusterer = hdbscan.HDBSCAN(
    alpha=1.0, 
    approx_min_span_tree=False,
    gen_min_span_tree=True,
    metric='euclidean', 
    min_cluster_size=k, 
    min_samples=k,
    allow_single_cluster=False,
    match_reference_implementation=True)
clusterer.fit(data)
mst = clusterer.single_linkage_tree_.to_numpy()
mst_weight = sum([x[2] for x in mst])
print("MST weight: ", mst_weight) # Should be 30.83044942

# Step 3. Plot the data and the outlier scores
outlier_scores = clusterer.outlier_scores_
plt.scatter([x[0] for x in data], [x[1] for x in data], s=25, c=outlier_scores, cmap='viridis')
plt.colorbar()

# Print the outlier score of every point
for (i, score) in enumerate(outlier_scores):
    print(f"Point {i+1} outlier score: {score}")

# Step 4: Assign rankings and plot top outliers
indices = [i for i in range(len(outlier_scores))]
indices.sort(key=lambda x: outlier_scores[x])
ranks = indices[-30:]
ranks = reversed(ranks)
for i, idx in enumerate(ranks):
    plt.text(data[idx][0], data[idx][1], str(i+1), fontsize=10, color='black')
plt.title("PyHDBSCAN: outlier scores & rankings")
plt.axis('equal')
plt.show()
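
For completeness, the expected values can also be checked by hand, independently of the library. This is just the arithmetic from the comments in the snippet above, using GLOSH(x) = 1 - εmax(x) / ε(x); it assumes the two planted outliers are the last two rows of data (indices 13 and 14), which they are in the list above.

import math

# Hand computation of the expected GLOSH scores for the two planted outliers,
# using glosh(x) = 1 - eps_max(x) / eps(x) with the radii from the comments above.
eps_max = math.sqrt(2)        # all three clusters are fully formed at eps = sqrt(2)
eps_outlier1 = math.sqrt(13)  # outlier1 joins cluster1/cluster2 at eps = sqrt(13)
eps_outlier2 = math.sqrt(37)  # outlier2 joins the root cluster at eps = sqrt(37)

print("expected glosh(outlier1):", 1 - eps_max / eps_outlier1)  # ~0.6078
print("expected glosh(outlier2):", 1 - eps_max / eps_outlier2)  # ~0.7675
print("reported by hdbscan:     ", outlier_scores[13], outlier_scores[14])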

As the project is being moved into scikit-learn's main library, it would be good to fix this misinterpretation of GLOSH altogether, or at least to warn users not to rely on outlier_scores_ for their data analysis.
