The Math Behind Kernel Density Estimation
by Zackary Nay

The following derivation takes inspiration from Bruce E. Hansen's "Lecture Notes on Nonparametrics" (2009). If you are interested in learning more, you can refer to his original lecture notes here.

Suppose we wanted to estimate a probability density function, f(t), from a sample of data. A good starting place would be to estimate the cumulative distribution function, F(t), using the empirical distribution function (EDF). Let X1, …, Xn be independent, identically distributed real random variables with the common cumulative distribution function F(t). The EDF is defined as:
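$$\hat{F}_n(t) = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\{X_i \le t\}$$

where $\mathbf{1}\{\cdot\}$ denotes the indicator function.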

Then, by the strong law of large numbers, as n approaches infinity, the EDF converges almost surely to F(t). Now, the EDF is a step function that could look like the following:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# Generate sample data
np.random.seed(14)
data = np.random.normal(loc=0, scale=1, size=40)

# Sort the data
data_sorted = np.sort(data)

# Compute ECDF values
ecdf_y = np.arange(1, len(data_sorted) + 1) / len(data_sorted)

# Generate x values for the standard normal CDF
x = np.linspace(-4, 4, 1000)
cdf_y = norm.cdf(x)

# Create the plot
plt.figure(figsize=(6, 4))
plt.step(data_sorted, ecdf_y, where='post', color='blue', label='ECDF')
plt.plot(x, cdf_y, color='grey', label='Normal CDF')
plt.plot(data_sorted, np.zeros_like(data_sorted), '|', color='black', label='Data points')

# Label axes
plt.xlabel('X')
plt.ylabel('Cumulative Probability')

# Add grid
plt.grid(True)

# Set limits
plt.xlim([-4, 4])
plt.ylim([0, 1])

# Add legend
plt.legend()

# Show plot
plt.show()

Therefore, if we were to attempt to find an estimator for f(t) by taking the derivative of the EDF, we would get a scaled sum of Dirac delta functions, which is not very helpful. Instead, let us consider using the two-point central difference formula as an approximation of the derivative. For a small h > 0, we get:
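$$\hat{f}(t) \approx \frac{\hat{F}_n(t+h) - \hat{F}_n(t-h)}{2h} = \frac{1}{2nh}\sum_{i=1}^{n} \mathbf{1}\{t-h < X_i \le t+h\}$$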

Now define the function k(u) as follows:
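$$k(u) = \begin{cases} \dfrac{1}{2} & \text{if } |u| \le 1 \\[4pt] 0 & \text{otherwise} \end{cases}$$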

Then we have that:
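$$\hat{f}(t) = \frac{1}{nh}\sum_{i=1}^{n} k\!\left(\frac{X_i - t}{h}\right)$$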

This is a special case of the kernel density estimator, where k is the uniform kernel function. More generally, a kernel function is a non-negative function from the reals to the reals which satisfies:
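$$\int_{-\infty}^{\infty} k(u)\, du = 1$$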

We will assume that all kernels discussed in this article are symmetric, hence we have k(-u) = k(u).
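To make the estimator concrete, here is a minimal NumPy sketch (not from the original article; the names uniform_kernel and kde_estimate are purely illustrative) that evaluates the estimator above on a grid of points using the uniform kernel:

import numpy as np

def uniform_kernel(u):
    # k(u) = 1/2 for |u| <= 1, and 0 otherwise
    return 0.5 * (np.abs(u) <= 1)

def kde_estimate(t, data, h, kernel=uniform_kernel):
    # f_hat(t) = (1 / (n*h)) * sum_i k((X_i - t) / h), evaluated at every point in t
    u = (data[None, :] - np.atleast_1d(t)[:, None]) / h
    return kernel(u).mean(axis=1) / h

# Example usage on a standard normal sample
rng = np.random.default_rng(14)
sample = rng.normal(size=100)
grid = np.linspace(-4, 4, 200)
density = kde_estimate(grid, sample, h=0.5)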

The j-th moment of a kernel, which provides insight into the shape and behavior of the kernel function, is defined as the following:
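$$\kappa_j(k) = \int_{-\infty}^{\infty} u^{j} k(u)\, du$$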

Lastly, the order of a kernel, ν, is defined as the first non-zero moment.
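In symbols, $\nu = \min\{\, j \ge 1 : \kappa_j(k) \ne 0 \,\}$. Since we are assuming symmetric kernels, every odd moment vanishes, so the standard kernels used in practice (uniform, Epanechnikov, Gaussian) are second-order kernels.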

We can only reduce the error of the kernel density estimator by changing either the h value (bandwidth) or the kernel function. The bandwidth parameter has a much larger impact on the resulting estimate than the kernel function, but it is also much more difficult to choose. To demonstrate the influence of the h value, take the following two kernel density estimates. A Gaussian kernel was used to estimate a sample generated from a standard normal distribution; the only difference between the estimators is the chosen h value.

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# Generate sample data
np.random.seed(14)
data = np.random.normal(loc=0, scale=1, size=100)

# Define the bandwidths
bandwidths = [0.1, 0.3]

# Plot the histogram and KDE for each bandwidth
plt.figure(figsize=(12, 8))
plt.hist(data, bins=30, density=True, color='grey', alpha=0.3, label='Histogram')

x = np.linspace(-5, 5, 1000)
for bw in bandwidths:
    kde = gaussian_kde(data, bw_method=bw)
    plt.plot(x, kde(x), label=f'Bandwidth = {bw}')

# Add labels and title
plt.title('Impact of Bandwidth Selection on KDE')
plt.xlabel('Value')
plt.ylabel('Density')
plt.legend()
plt.show()

Quite a dramatic difference.

Now let us look at the impact of changing the kernel function while keeping the bandwidth constant.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import KernelDensity

# Generate sample data
np.random.seed(14)
data = np.random.normal(loc=0, scale=1, size=100)[:, np.newaxis]  # reshape for sklearn

# Initialize a constant bandwidth
bandwidth = 0.6

# Define different kernel functions
kernels = ["gaussian", "epanechnikov", "exponential", "linear"]

# Plot the histogram (transparent) and KDE for each kernel
plt.figure(figsize=(12, 8))

# Plot the histogram
plt.hist(data, bins=30, density=True, color="grey", alpha=0.3, label="Histogram")

# Plot KDE for each kernel function
x = np.linspace(-5, 5, 1000)[:, np.newaxis]
for kernel in kernels:
    kde = KernelDensity(bandwidth=bandwidth, kernel=kernel)
    kde.fit(data)
    log_density = kde.score_samples(x)
    plt.plot(x[:, 0], np.exp(log_density), label=f"Kernel = {kernel}")

plt.title("Impact of Different Kernel Functions on KDE")
plt.xlabel("Value")
plt.ylabel("Density")
plt.legend()
plt.show()

While visually there is a large difference in the tails, the overall shape of the estimators is similar across the different kernel functions. Therefore, I will focus primarily on finding the optimal bandwidth for the estimator. Now, let's explore some of the properties of the kernel density estimator, including its bias and variance.


