PyLoOP (LoOP: Local Outlier Probabilities)

LoOP is a local density-based outlier detection method by Kriegel, Kröger, Schubert, and Zimek that provides outlier scores in the range [0,1], directly interpretable as the probability of a sample being an outlier.

The outlier score of each sample is called the Local Outlier Probability. Like the Local Outlier Factor (LOF), it measures the local deviation in density of a given sample with respect to its neighbors, but it provides normalized outlier scores in the range [0,1]. These scores are directly interpretable as the probability of an object being an outlier, leaving practitioners free to interpret the results according to the application.

Like LOF, it is local in that the anomaly score depends on how isolated a sample is with respect to its surrounding neighborhood. Locality is given by the k-nearest neighbors, whose distances are used to estimate the local density. By comparing the local density of a sample to the local densities of its neighbors, one can identify samples that lie in regions of lower density than their neighbors and thus may be outliers according to their Local Outlier Probability.

The authors' 2009 paper detailing LoOP's theory, formulation, and application, LoOP: Local Outlier Probabilities, is provided by the Ludwig-Maximilians University Munich Institute for Informatics.

Implementation

This Python 3 implementation uses Numpy and the formulas outlined in LoOP: Local Outlier Probabilities to calculate the Local Outlier Probability of each sample.
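For intuition, those formulas can be sketched in a few lines of Numpy. The sketch below is illustrative only and is not this library's implementation: it takes the paper's λ multiplier directly as lam (λ = 3 corresponds roughly to extent = 0.997, an assumption about the mapping), and loop_scores is a hypothetical helper.

import numpy as np
from math import erf

def loop_scores(X, n_neighbors=10, lam=3.0):
    # pairwise Euclidean distances between all samples
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # each sample's k nearest neighbors (column 0 is the sample itself)
    knn = np.argsort(dists, axis=1)[:, 1:n_neighbors + 1]
    knn_dists = dists[np.arange(len(X))[:, None], knn]
    # probabilistic set distance: lambda times the root-mean-square
    # ("standard") distance to the neighborhood
    pdist = lam * np.sqrt((knn_dists ** 2).mean(axis=1))
    # PLOF: a sample's pdist relative to the mean pdist of its neighbors
    plof = pdist / pdist[knn].mean(axis=1) - 1.0
    # normalize by the aggregate nPLOF and squash through erf into [0, 1]
    nplof = lam * np.sqrt((plof ** 2).mean())
    z = plof / (nplof * np.sqrt(2.0))
    return np.maximum(0.0, np.array([erf(v) for v in z]))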

Prerequisites

  • Python 3.5.2 or greater
  • Numpy 1.12.0 or greater
  • Pandas 0.19.2 or greater (optional)

Quick Start

First, place loop.py in the same directory as the Python file you're working with. Then:

from loop import LocalOutlierProbability
scores = LocalOutlierProbability(data).fit()
print(scores)

where data is an N x M (N rows, M columns) set of data, provided as either a Pandas DataFrame or Numpy array.
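For example, with a small hypothetical Numpy array of five samples and two features, where the last sample is an obvious outlier:

import numpy as np
from loop import LocalOutlierProbability

# hypothetical toy data: the last sample sits far from the rest
data = np.array([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [1.0, 1.0], [5.0, 5.0]])

# n_neighbors must be smaller than the number of samples here
scores = LocalOutlierProbability(data, n_neighbors=3).fit()
print(scores)  # the last score should be the largest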

LoOP takes two parameters: extent (in the range [0,1]) and n_neighbors (which must be greater than 0), with default values of 0.997 and 10, respectively. You're free to set these parameters on your own, as below:

from loop import LocalOutlierProbability
scores = LocalOutlierProbability(data, extent=0.95, n_neighbors=20).fit()
print(scores)

The extent parameter controls the sensitivity of the scoring in practice, with values closer to 0 yielding higher sensitivity. The n_neighbors parameter defines the neighborhood size, i.e. the number of neighbors considered around each sample when determining its Local Outlier Probability with respect to the density of that neighborhood.
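To illustrate, you could score the same data at two extent settings and compare the results (a hypothetical comparison, reusing the toy data from above):

from loop import LocalOutlierProbability

# lower extent -> higher sensitivity (outliers flagged more aggressively)
scores_default = LocalOutlierProbability(data, extent=0.997, n_neighbors=3).fit()
scores_sensitive = LocalOutlierProbability(data, extent=0.90, n_neighbors=3).fit()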

This implementation of LoOP also includes an optional cluster_labels parameter. This is useful in cases where regions of varying density occur within the same set of data. When using cluster_labels, the Local Outlier Probability of a sample is calculated with respect to its cluster assignment.

from loop import LocalOutlierProbability
from sklearn.cluster import DBSCAN
db = DBSCAN(eps=0.6, min_samples=50).fit(data)
scores = LocalOutlierProbability(data, extent=0.95, n_neighbors=20, cluster_labels=db.labels_).fit()
print(scores)

NOTE: Unless your data is all on the same scale, it may be a good idea to normalize your data with z-scores or another normalization scheme prior to using LoOP, especially when working with multiple dimensions of varying scale. You must also handle missing values appropriately prior to using LoOP, as LoOP does not support Pandas DataFrames or Numpy arrays with missing values. While LoOP will execute with missing values present, any observations containing missing values are returned with empty (nan) outlier scores in the final result.
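As a sketch of that preprocessing (assuming your data is a Pandas DataFrame named data), you might drop incomplete rows and z-score each column before scoring:

from loop import LocalOutlierProbability

# drop observations with missing values, then z-score each column
data_clean = data.dropna()
data_z = (data_clean - data_clean.mean()) / data_clean.std()

scores = LocalOutlierProbability(data_z).fit()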

Iris Data Example

We'll be using the well-known Iris dataset to show LoOP's capabilities. There are a few things you'll need for this example beyond the standard prerequisites listed above:

  • matplotlib 2.0.0 or greater
  • PyDataset 0.2.0 or greater
  • scikit-learn 0.18.1 or greater

First, let's import the packages and libraries we will need for this example.

from loop import LocalOutlierProbability
import pandas as pd
from pydataset import data
import numpy as np
from sklearn.cluster import DBSCAN
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

Now let's create two sets of Iris data for scoring: one with clustering and the other without.

# import the data and remove any non-numeric columns
iris = pd.DataFrame(data('iris'))
iris = iris.drop('Species', axis=1)

Next, let's cluster the data using DBSCAN and generate two sets of scores. In both cases, we will use the default values for both extent (0.997) and n_neighbors (10).

db = DBSCAN(eps=0.9, min_samples=10).fit(iris)
scores_noclust = LocalOutlierProbability(iris).fit()
scores_clust = LocalOutlierProbability(iris, cluster_labels=db.labels_).fit()

Organize the data into two separate Pandas DataFrames.

iris_clust = pd.DataFrame(iris.copy())
iris_clust['scores'] = scores_clust
iris_clust['labels'] = db.labels_
iris['scores'] = scores_noclust

And finally, let's visualize the scores provided by LoOP in both cases (with and without clustering).

fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(iris['Sepal.Width'], iris['Petal.Width'], iris['Sepal.Length'], 
c=iris['scores'], cmap='seismic', s=50)
ax.set_xlabel('Sepal.Width')
ax.set_ylabel('Petal.Width')
ax.set_zlabel('Sepal.Length')
plt.show()
plt.clf()
plt.cla()
plt.close()

fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(iris_clust['Sepal.Width'], iris_clust['Petal.Width'], iris_clust['Sepal.Length'], 
c=iris_clust['scores'], cmap='seismic', s=50)
ax.set_xlabel('Sepal.Width')
ax.set_ylabel('Petal.Width')
ax.set_zlabel('Sepal.Length')
plt.show()
plt.clf()
plt.cla()
plt.close()

fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(iris_clust['Sepal.Width'], iris_clust['Petal.Width'], iris_clust['Sepal.Length'], 
c=iris_clust['labels'], cmap='Set1', s=50)
ax.set_xlabel('Sepal.Width')
ax.set_ylabel('Petal.Width')
ax.set_zlabel('Sepal.Length')
plt.show()
plt.clf()
plt.cla()
plt.close()

Your results should look like the following:

LoOP Scores without Clustering

LoOP Scores with Clustering

DBSCAN Cluster Assignments

Note the differences between using LoOP with and without clustering. In the example without clustering, samples are scored according to the distribution of the entire data set. In the example with clustering, each sample is scored according to the distribution of each cluster. Which approach is suitable depends on the use case.

NOTE: Data was not normalized in this example, but it's probably a good idea to do so in practice.

Contributing

If you would like to contribute, please fork the repository and make any changes locally prior to submitting a pull request. Some next steps:

  • Organizing and improving the code base.
  • Publishing PyLoOP on the Python Package Index.
  • Characterizing the computational complexity and improving overall speed.
  • Outlining known disadvantages in the readme.
  • Outlining useful applications in the readme.
  • Introducing modifications that allow LoOP to be applied more effectively to noisy data without losing the ability to identify outliers or anomalies.

Versioning

Semantic versioning is used for this project. If contributing, please conform to semantic versioning guidelines when submitting a pull request.

License

This project is licensed under the Apache 2.0 license.

References

  1. Aggarwal C., Hinneburg A., Keim D. On the Surprising Behavior of Distance Metrics in High Dimensional Space. ICDT Conference (2001).
  2. Breunig M., Kriegel H.-P., Ng R., Sander J. LOF: Identifying Density-based Local Outliers. ACM SIGMOD International Conference on Management of Data (2000).
  3. Kriegel H.-P., Kröger P., Schubert E., Zimek A. LoOP: Local Outlier Probabilities. 18th ACM Conference on Information and Knowledge Management, CIKM (2009).

Acknowledgements