[toc]
## The Laplacian operator
These finite differences are computed along the principal directions (the $x$ and $y$ axes of the pixel grid) :

$$
\Delta u(x, y) \approx \frac{u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h) - 4\, u(x, y)}{h^2}
$$

where $h$ is the sampling step, that is the distance between two adjacent pixels.

This method has the benefit of being reasonably fast to evaluate; however, this kernel is not isotropic. While suitable for slowly varying functions, it becomes unstable for quickly varying ones, especially when their scale of variation comes close to the sampling step $h$.
As an illustration, let us try it on Vermeer's *Art of Painting*^2^, scanned at 2281×1920 px :
See the Annex below for the preparation of the image into a luminance map. Here is the absolute value of the Laplacian computed on luminance along the principal directions with the kernel :

$$
K = \begin{bmatrix}
0 & 1 & 0 \\
1 & -4 & 1 \\
0 & 1 & 0
\end{bmatrix}
$$
Notice the mosaicked pattern in smooth areas, especially on the bright background wall. To evaluate the rotation-invariance of the discrete Laplacian, we use the following scheme :
```mermaid
graph TD;
    L([Luminance]) --> R1["Rotation +45°"] & D1[Laplacian]
    R1 --> D2[Laplacian]
    D2 --> R2["Rotation -45°"]
    R2 & D1 --> diff["-"]
    diff --> error["L² norm"]
    error --> Z([error])
```
The rotated output is correlated against the non-rotated one to match the pixel locality between both outputs, and the edge pixels (first and last row and column) are discarded to remove boundary effects. The rotations use cubic spline interpolation with a spline prefilter. The $L^2$ norm of the difference between both outputs gives the rotational error.
For reproduction, here is the Python code used for this comparison (utility functions and imports are given in the Annex) :
```python
# Anisotropic 3×3 Laplacian kernel (2nd order finite differences
# along the principal directions)
K = np.array([[0., 1., 0.],
              [1., -4., 1.],
              [0., 1., 0.]])

# Laplacian computed directly on the luminance
grad_direct = signal.convolve2d(im, K, boundary='fill', mode='valid')

# Laplacian computed on the image rotated by +45°, then rotated back by -45°
im_rot = rotate(im, 45)
grad_indirect = signal.convolve2d(im_rot, K, boundary='fill', mode='valid')
grad_indirect = rotate(grad_indirect, -45)

# Trim and align the rotated variant on the direct one (see the Annex)
grad_indirect = correlate(grad_direct, grad_indirect)

# Rotational error: L² norm of the difference between both variants
diff = grad_indirect - grad_direct
print(np.linalg.norm(diff))

plt.figure(1)
plt.axis('off')
plt.imshow(apply_gamma(diff**2), cmap="Greys_r")
plt.savefig("anisotropic-laplacian-error.jpg", bbox_inches="tight", pil_kwargs={'optimize': True})

plt.figure(2)
plt.axis('off')
plt.imshow(apply_gamma(np.abs(grad_direct)), cmap="Greys_r")
plt.savefig("anisotropic-laplacian.jpg", bbox_inches="tight", pil_kwargs={'optimize': True})
```
The core idea of the isotropic second order finite differences is to regularise the anisotropic finite differences on the nearest neighbours with the contribution of the next-nearest (diagonal) neighbours. Using Taylor-Young expansions, one can correct the discrete Laplacian to introduce that contribution.^3^
In 1988, Oono and Puri^4^ proposed :
$$
\Delta u = u * \frac{1}{h^2}
\begin{bmatrix}
0.25 & 0.5 & 0.25 \\
0.5 & -3 & 0.5 \\
0.25 & 0.5 & 0.25
\end{bmatrix}
$$
Another, somewhat similar kernel, known as the Mehrstellen discretisation, had been proposed in 1966^5^ :
$$
\Delta u = u * \frac{1}{h^2}
\begin{bmatrix}
1/6 & 2/3 & 1/6 \\
2/3 & -10/3 & 2/3 \\
1/6 & 2/3 & 1/6
\end{bmatrix}
$$
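Both stencils can be read as convex blends of the axial 5-point stencil with its diagonal counterpart (the diagonal neighbours sit $\sqrt{2} h$ apart, hence the $1/2$ factor) : Oono & Puri weight them $1/2$–$1/2$, Mehrstellen $2/3$–$1/3$. Here is a minimal sanity check of that reading, assuming a unit grid step :

```python
import numpy as np
from scipy import signal

# Axial 5-point stencil and its diagonal counterpart (diagonal spacing is √2)
axial = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
diagonal = np.array([[1., 0., 1.], [0., -4., 0.], [1., 0., 1.]]) / 2.

oono_puri = 0.5 * axial + 0.5 * diagonal            # [[0.25, 0.5, 0.25], [0.5, -3, 0.5], ...]
mehrstellen = 2. / 3. * axial + 1. / 3. * diagonal  # [[1/6, 2/3, 1/6], [2/3, -10/3, 2/3], ...]

# Any consistent Laplacian stencil must map u(x, y) = x² + y² to the constant 4
x, y = np.meshgrid(np.arange(16.), np.arange(16.))
u = x**2 + y**2
for K in (oono_puri, mehrstellen):
    assert np.allclose(signal.convolve2d(u, K, mode='valid'), 4.)
```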
Re-using the same image as above, here is the output of the isotropic Laplacian (the code stays the same except for the values of the kernel `K`) :
| | Oono & Puri (1988) | Mehrstellen (1966) |
|---|---|---|
| Laplacian output | | |
| Quadratic error map between rotated and non-rotated variants | | |
| Rotational error ($L^2$ norm) | 118 | 128 |
The kernel `K` is simply updated in the code. The second order isotropic finite differences have a truncation error in $O(h^2)$, like the anisotropic ones. Higher-order isotropic discretisations have also been derived, notably by Patra and Karttunen^6^; two $4^{th}$ order $5 \times 5$ stencils are :

$$
\Delta u = u * \frac{1}{h^2}
\begin{bmatrix}
-1/120 & 0 & -1/15 & 0 & -1/120 \\
0 & 2/15 & 16/15 & 2/15 & 0 \\
-1/15 & 16/15 & -9/2 & 16/15 & -1/15 \\
0 & 2/15 & 16/15 & 2/15 & 0 \\
-1/120 & 0 & -1/15 & 0 & -1/120
\end{bmatrix}
\label{high-order-1}\tag{1}
$$

$$
\Delta u = u * \frac{1}{h^2}
\begin{bmatrix}
0 & -1/30 & -1/60 & -1/30 & 0 \\
-1/30 & 4/15 & 13/15 & 4/15 & -1/30 \\
-1/60 & 13/15 & -21/5 & 13/15 & -1/60 \\
-1/30 & 4/15 & 13/15 & 4/15 & -1/30 \\
0 & -1/30 & -1/60 & -1/30 & 0
\end{bmatrix}
\label{high-order-2}\tag{2}
$$

For reproduction, here are the Python-formatted matrices, so you just have to copy-paste them :
```python
K_h_1 = np.array([[-1/120,     0, -1/15,     0, -1/120],
                  [     0,  2/15, 16/15,  2/15,      0],
                  [ -1/15, 16/15,  -9/2, 16/15,  -1/15],
                  [     0,  2/15, 16/15,  2/15,      0],
                  [-1/120,     0, -1/15,     0, -1/120]])

K_h_2 = np.array([[    0, -1/30, -1/60, -1/30,     0],
                  [-1/30,  4/15, 13/15,  4/15, -1/30],
                  [-1/60, 13/15, -21/5, 13/15, -1/60],
                  [-1/30,  4/15, 13/15,  4/15, -1/30],
                  [    0, -1/30, -1/60, -1/30,     0]])
```
The $1/h^2$ factor is left out of these matrices. Many implementations assume $h = 1$, i.e. a grid step of one pixel, in which case the factor vanishes.
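Continuing from the snippet above, the same sanity check as for the 3×3 kernels, assuming $h = 1$, confirms that both stencils behave as Laplacians (weights summing to zero, and $\Delta(x^2 + y^2) = 4$) :

```python
# Verify the 5×5 stencils: zero-sum weights and exact Laplacian of x² + y²
x, y = np.meshgrid(np.arange(16.), np.arange(16.))
u = x**2 + y**2
for K in (K_h_1, K_h_2):
    assert np.isclose(K.sum(), 0.)
    assert np.allclose(signal.convolve2d(u, K, mode='valid'), 4.)
```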
Let us compute the Gaussian-based Laplacian for a standard deviation $\sigma = 1.0553651328015339$ and for $2\sigma$ :
| | Gaussian($\sigma$) | Gaussian($2\sigma$) |
|---|---|---|
| Laplacian output | | |
| Quadratic error map between rotated and non-rotated variants | | |
| Rotational error ($L^2$ norm) | 31 | 35 |
The Gaussian function being radial by nature, it comes as no surprise that we get the lowest rotational error here. However, we are quite far from a true Laplacian, because the relative $L^2$ error against the discrete Laplacians remains high, as the comparison below shows.
Here is the Python code for reproduction :
```python
sigma = 1.0553651328015339

# Difference between the Gaussian-blurred image and the original
grad_direct = -im + gaussian_filter(im, sigma, mode="constant", order=0, truncate=4)

# Same computation on the image rotated by +45°, rotated back afterwards
im_rot = rotate(im, 45)
grad_indirect = -im_rot + gaussian_filter(im_rot, sigma, mode="constant", order=0, truncate=4)
grad_indirect = rotate(grad_indirect, -45)
grad_indirect = correlate(grad_direct, grad_indirect)

# Rotational error, discarding the edge pixels
diff = grad_indirect[1:-1, 1:-1] - grad_direct[1:-1, 1:-1]
print(np.linalg.norm(diff))

plt.figure(1)
plt.axis('off')
plt.imshow(apply_gamma(diff**2), cmap="Greys_r")
plt.savefig("gaussian-laplacian-error.jpg", bbox_inches="tight", pil_kwargs={'optimize': True})

plt.figure(2)
plt.axis('off')
plt.imshow(apply_gamma(np.abs(grad_direct)), cmap="Greys_r")
plt.savefig("gaussian-laplacian.jpg", bbox_inches="tight", pil_kwargs={'optimize': True})
```
Since we don't have a ground-truth discrete Laplacian to compare against, the best we can do is compare the methods against each other.
Matrix of covariance between all methods :

| | Anisotropic 2^nd^ order | Isotropic 2^nd^ order Oono & Puri | Isotropic 4^th^ order $\eqref{high-order-2}$ | Difference with Gaussian($\sigma$) | Difference with Gaussian($2\sigma$) |
|---|---|---|---|---|---|
| Anisotropic 2^nd^ order | 7.63 | 6.15 | 8.32 | 1.86 | 2.20 |
| Isotropic 2^nd^ order Oono & Puri | 6.15 | 5.13 | 6.83 | 1.57 | 1.90 |
| Isotropic 4^th^ order $\eqref{high-order-2}$ | 8.32 | 6.83 | 9.18 | 2.06 | 2.43 |
| Difference with Gaussian($\sigma$) | 1.86 | 1.57 | 2.06 | 0.50 | 0.64 |
| Difference with Gaussian($2\sigma$) | 2.20 | 1.90 | 2.43 | 0.64 | 0.92 |
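For reference, such a covariance matrix can be assembled with `np.cov` by flattening each method's output, once all outputs are cropped to a common shape. A minimal sketch, where the variable name `outputs` and the absence of any extra scaling are assumptions, not the exact script behind the table above :

```python
# outputs: list of 2D Laplacian maps, one per method, cropped to the same shape
stack = np.stack([out.ravel() for out in outputs])
cov = np.cov(stack)  # cov[i, j] is the covariance between methods i and j
print(np.round(cov, 2))
```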
Matrix of relative $L^2$ errors between all methods :

| | Anisotropic 2^nd^ order | Isotropic 2^nd^ order Oono & Puri | Isotropic 4^th^ order $\eqref{high-order-2}$ | Difference with Gaussian($\sigma$) | Difference with Gaussian($2\sigma$) |
|---|---|---|---|---|---|
| Anisotropic 2^nd^ order | 0 | 44 | 28 | 139 | 135 |
| Isotropic 2^nd^ order Oono & Puri | 44 | 0 | 53 | 104 | 99 |
| Isotropic 4^th^ order $\eqref{high-order-2}$ | 28 | 53 | 0 | 156 | 151 |
| Difference with Gaussian($\sigma$) | 139 | 104 | 156 | 0 | 25 |
| Difference with Gaussian($2\sigma$) | 135 | 99 | 151 | 25 | 0 |
We notice all the discrete Laplacian methods have, between each other, a relative $L^2$ error ranging from 28 to 53, while the Gaussian-based differences sit much further away from them (between 99 and 156).
The 4^th^ order isotropic discretisation has the best correlation with the others as well as the highest variance, which is a good sign regarding its accuracy; but the fact that it lies closest to the anisotropic discretisation and has one of the worst rotational invariances raises questions regarding its real isotropic nature.
However, regarding the Gaussian method, the error against the discrete Laplacians is mostly a matter of magnitude : the simple difference with a Gaussian is not scaled like a true Laplacian.
The rest of this article will be about refining the Gaussian approach to better match discrete Laplacians while retaining the low rotational error.
Let us redefine the Laplacian from the difference of a Gaussian-blurred image with the original image, scaled to match the magnitude of a true Laplacian :

$$
\Delta u \approx \frac{2 \pi}{\sqrt{\pi}\, \sigma^2} \left( G(\sigma) * u - u \right)
$$
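A minimal sketch of this scaled difference, reusing the `gaussian_filter` settings from the snippets above (the function name is mine, and the Annex imports are assumed) :

```python
def scaled_gaussian_laplacian(im, sigma):
    # 2π / (√π σ²) scales the Gaussian difference to Laplacian magnitudes
    coeff = 2. * np.pi / (np.sqrt(np.pi) * sigma**2)
    return coeff * (gaussian_filter(im, sigma, mode="constant", order=0, truncate=4) - im)
```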
The result of this method, reusing $\sigma = 1.0553651328015339$ for compatibility with the B-spline approach, is :
The quadratic error map gives :
Now, let us study the influence of varying the standard deviation $\sigma$ :
The Laplacian error is computed as the Euclidean norm of the relative $L^2$ errors against :

- the 2^nd^ order anisotropic discretisation,
- the 2^nd^ order isotropic discretisation from Oono & Puri,
- the 4^th^ order isotropic discretisation given in $\eqref{high-order-2}$.

Then, the global error is the Euclidean norm of the Laplacian error and the rotational error, as illustrated just below.
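As a worked example, taking the values from the first row of the error table further down ($\sigma = 0.395$) :

```python
# Relative L² errors against the three reference discretisations: 32, 24, 52
lap_err = np.sqrt(32.**2 + 24.**2 + 52.**2)  # ≈ 66 (Laplacian error)
global_err = np.hypot(lap_err, 117.)         # ≈ 134 (global error, with rotational error 117)
```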
The variance is given as a metric of the filter sensitivity, since the Laplacian operator acts as a "local anomaly" detector. We have seen above, in the conclusion about the state of the art, that the most accurate method (4^th^ order isotropic) yields the highest variance, which seems a useful property of such a high-pass filter. However, the peak variance is correlated with the peak rotational error, which could be explained by the highest sensitivity being reached when no low-pass effect arises, and the lack of low-pass filtering is what makes this filter very sensitive to discretisation discrepancies as well. The peak variance is also correlated with the peak Laplacian error, which would suggest the Gaussian is not isotropic either at this value.
So it appears that, rather than trying to match our scaled difference with a Gaussian method against other Laplace discretisations, our end goal should be to maximise the variance at the output of the filter while keeping the rotational error under control, for which the Gaussian framework gives us an obvious trade-off setting in the form of the standard deviation $\sigma$.
The graph above shows 4 particular loci :

- the local maximum of all curves, variance and errors, centred around $\sigma = 0.562267$,
- two local minima of the Laplacian error function, at $\sigma = 0.395$ and $\sigma = 0.895$,
- the point where the Laplacian error is equal to the rotational error, at $\sigma = 1.0518535$.
Let us analyse them in detail.
Matrix of covariance between the reference discretisations and the scaled Gaussian differences :

| | Anisotropic 2^nd^ order | Isotropic 2^nd^ order Oono & Puri | Isotropic 4^th^ order $\eqref{high-order-2}$ | Scaled difference with Gaussian($\sigma = 0.395$) | Scaled difference with Gaussian($\sigma = 0.562267$) | Scaled difference with Gaussian($\sigma = 0.895$) | Scaled difference with Gaussian($\sigma = 1.0518535$) |
|---|---|---|---|---|---|---|---|
| Scaled difference with Gaussian($\sigma = 0.395$) | 6.32 | 5.12 | 6.90 | 5.23 | 9.24 | 5.33 | 4.94 |
| Scaled difference with Gaussian($\sigma = 0.562267$) | 11.14 | 9.15 | 12.26 | 9.24 | 16.42 | 9.59 | 8.90 |
| Scaled difference with Gaussian($\sigma = 0.895$) | 6.40 | 5.40 | 7.10 | 5.33 | 9.59 | 5.86 | 5.46 |
| Scaled difference with Gaussian($\sigma = 1.0518535$) | 5.93 | 5.02 | 6.58 | 4.94 | 8.90 | 5.46 | 5.09 |
Matrix of relative $L^2$ errors :

| | Anisotropic 2^nd^ order | Isotropic 2^nd^ order Oono & Puri | Isotropic 4^th^ order $\eqref{high-order-2}$ | Laplacian error | Rotational error | Global error |
|---|---|---|---|---|---|---|
| Scaled difference with Gaussian($\sigma = 0.395$) | 32 | 24 | 52 | 66 | 117 | 134 |
| Scaled difference with Gaussian($\sigma = 0.562267$) | 88 | 119 | 69 | 163 | 200 | 258 |
| Scaled difference with Gaussian($\sigma = 0.895$) | 55 | 28 | 60 | 86 | 129 | 155 |
| Scaled difference with Gaussian($\sigma = 1.0518535$) | 61 | 29 | 70 | 97 | 100 | 139 |
The Laplacian error is minimised at $\sigma = 0.395$ and $\sigma = 0.895$, while $\sigma = 1.0518535$ evens out the Laplacian and rotational errors and gives the second-lowest global error.
So it appears that we are left with two candidates :

- $\sigma = 0.395$ if close conformity to classic discrete Laplacians is required. This value also yields a rotational error lower than any discrete Laplacian (but not by much).
- $\sigma = 1.0518535$ if a balanced trade-off between rotation-invariance and conformity to discrete Laplacians is required.
Both yield a rather low variance.
Considering the image as a multi-scale collection of gradients, we can process the Laplacian at increasingly coarser scales and collect the contribution of each scale as a linear combination, weighted with the coefficient computed in the previous section. To do so, we use a wavelet decomposition scheme :
```mermaid
graph TD;
    L([luminance]) --> G1["G(s)"]
    G1 & L --> M1["-"]
    M1 --> Mul1
    C1(["gauss_coeff((√2)⁰ s)"]) --> Mul1["×"]
    Mul1 --> P
    G1 --> G2["G(s)"]
    G1 & G2 --> M2["-"]
    M2 --> Mul2
    C2(["gauss_coeff((√2)¹ s)"]) --> Mul2["×"]
    Mul2 --> P
    P["+"] --> Lap([Multi-scale Laplacian])
```
Only the first two scales are represented on the graph, where the Gaussian standard deviation `s` is set to $\sigma = 1.0518535$ for compatibility with the previous section. So we apply an iterative blur step at each scale and accumulate the scaled differences between consecutive scales.
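In equation form, with $u_0 = u$ and $u_{i+1} = G(\sigma) * u_i$, the code below accumulates five scales as :

$$
\Delta u \approx \sum_{i = 0}^{4} \frac{2 \pi}{\sqrt{\pi} \left( (\sqrt{2})^i \sigma \right)^2} \left( u_{i+1} - u_i \right)
$$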
```python
def gauss_coeff(sigma):
    # Scaling coefficient matching the Gaussian difference to a Laplacian
    return (2. * np.pi / (np.sqrt(np.pi) * sigma**2))

def find_details_density(im):
    result = np.zeros_like(im)
    sigma = 1.0518535
    im_copy = im.copy()

    for i in range(5):
        # Iterative blur: each scale blurs the output of the previous one
        blur = gaussian_filter(im_copy, sigma, mode="constant", order=0, truncate=4)
        # Accumulate the scaled difference between consecutive scales
        result += gauss_coeff(np.sqrt(2)**i * sigma) * (blur - im_copy)
        im_copy = blur

    return result
```
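Usage follows the same pattern as the single-scale snippets above (the output file name is an assumption) :

```python
laplacian = find_details_density(im)
plt.axis('off')
plt.imshow(apply_gamma(np.abs(laplacian)), cmap="Greys_r")
plt.savefig("multiscale-laplacian.jpg", bbox_inches="tight", pil_kwargs={'optimize': True})
```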
Test pic :
Find edges with discrete Laplacian by orthogonal 2^nd^ order finite differences of the Gaussian scales :
## Annex

Find here the utility functions called in the code snippets above :
```python
import numpy as np
from PIL import Image
from matplotlib import pyplot as plt
from scipy.ndimage import gaussian_filter, median_filter, rotate
from scipy import signal

def find_luminance_density(im):
    # convert linear Rec709/sRGB to Y of CIE XYZ 1931 2° standard observer
    return 0.2126 * im[:, :, 0] + 0.7152 * im[:, :, 1] + 0.0722 * im[:, :, 2]

def remove_gamma(im):
    # remove gamma assuming sRGB encoding and output linear RGB
    return (im <= 0.04045) * im / 12.92 + (im > 0.04045) * ((im + 0.055) / (1 + 0.055))**2.4

def apply_gamma(im):
    # apply sRGB gamma back and output non-linear RGB
    return (im <= 0.0031308) * im * 12.92 + (im > 0.0031308) * (1.055 * im**(1 / 2.4) - 0.055)

def correlate(ref_mat, test_mat):
    # trim and align test_mat (array padded by the rotations) to match ref_mat (reference array)
    y_rot, x_rot = test_mat.shape
    y, x = ref_mat.shape
    x_off = int((x_rot - x) / 2)
    y_off = int((y_rot - y) / 2)
    return test_mat[y_off:y_off + y, x_off:x_off + x]

# Open image and convert to luminance
image = Image.open('Jan_Vermeer_-_The_Art_of_Painting_-_Google_Art_Project.jpg').convert('RGB')
im = np.asarray(image) / 255.
im = find_luminance_density(remove_gamma(im))

# Display luminance for control
plt.axis('off')
plt.imshow(apply_gamma(im), cmap="Greys_r")
plt.savefig("luminance.jpg", bbox_inches="tight", pil_kwargs={'optimize': True})
```
Output (luminance) :
## Footnotes
1. YOUNG, Richard A. The Gaussian derivative model for spatial vision: I. Retinal mechanisms. Spatial Vision, 1987, vol. 2, no. 4, p. 273–293.
2. VERMEER, Jan. The Art of Painting (1666–1668). Google Art Project, 2012. <https://commons.wikimedia.org/wiki/File:Jan_Vermeer_-_The_Art_of_Painting_-_Google_Art_Project.jpg>
3. PROVATAS, Nikolas and ELDER, Ken. Phase-Field Methods in Materials Science and Engineering. Wiley-VCH Verlag GmbH & Co. KGaA, 2010, p. 219.
4. OONO, Yoshitsugu and PURI, Sanjay. Study of phase-separation dynamics by use of cell dynamical systems. I. Modeling. Physical Review A, 1988, vol. 38, no. 1, p. 434.
5. COLLATZ, Lothar. The Numerical Treatment of Differential Equations. Springer-Verlag, Berlin-Heidelberg-New York, 1966.
6. PATRA, Michael and KARTTUNEN, Mikko. Stencils with isotropic discretization error for differential operators. Numerical Methods for Partial Differential Equations, 2006, vol. 22, no. 4, p. 936–953.
7. BROMILEY, Paul. Products and convolutions of Gaussian probability density functions. Tina-Vision Memo, 2003, vol. 3, no. 4, p. 1.