The goal of kldest is to estimate Kullback-Leibler (KL) divergence $D_{KL}(P||Q)$ between two probability distributions $P$ and $Q$ based on:

- a sample from $P$ and the probability density of $Q$, or
- samples from $P$ and from $Q$.

The distributions $P$ and $Q$ may be uni- or multivariate, and they may be discrete, continuous or mixed discrete/continuous.
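For continuous distributions with densities $p$ and $q$, the quantity being estimated is the usual

$$
D_{KL}(P||Q) = \int p(x) \, \log\frac{p(x)}{q(x)} \, dx,
$$

with the integral replaced by a sum over the support in the discrete case.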
Different estimation algorithms are provided for continuous distributions, based either on nearest neighbour density estimation or on kernel density estimation. Confidence intervals for KL divergence can also be computed, either via subsampling (preferred) or via bootstrapping.
## Installation
You can install kldest from CRAN:
```r
install.packages("kldest")
```
Alternatively, you can install the development version of kldest from GitHub with:

```r
# install.packages("devtools")
devtools::install_github("niklhart/kldest")
```
## A minimal example for KL divergence estimation
KL divergence estimation based on nearest neighbour density estimates is the most flexible approach.
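The examples below assume that kldest has been attached:

```r
# attach the package so that its functions are available by name
library(kldest)
```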
Set a seed for reproducibility
```r
set.seed(0)
```
### KL divergence between 1-D Gaussians
Analytical KL divergence:
```r
kld_gaussian(mu1 = 0, sigma1 = 1, mu2 = 1, sigma2 = 2^2)
#> [1] 0.4431472
```
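As a sanity check, this value agrees with the closed-form expression for two univariate Gaussians, $\log(\sigma_2/\sigma_1) + \bigl(\sigma_1^2 + (\mu_1-\mu_2)^2\bigr)/(2\sigma_2^2) - 1/2$, with standard deviations $\sigma_1 = 1$ and $\sigma_2 = 2$ (note that the `sigma1` and `sigma2` arguments above are variances, hence `sigma2 = 2^2`):

```r
# closed-form KL divergence between N(0, 1) and N(1, sd = 2),
# written in terms of standard deviations sd1 = 1 and sd2 = 2
log(2/1) + (1^2 + (0 - 1)^2) / (2 * 2^2) - 1/2
#> [1] 0.4431472
```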
Estimate based on two samples from these Gaussians:
```r
X <- rnorm(100)
Y <- rnorm(100, mean = 1, sd = 2)
kld_est_nn(X, Y)
#> [1] 0.2169136
```
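With only 100 samples per distribution, the two-sample estimate is still far from the analytical value of 0.4431; it typically gets closer as the sample sizes grow. A minimal sketch (output omitted, since it depends on the random draw):

```r
# same estimator, larger samples from the same two Gaussians
X_big <- rnorm(10000)
Y_big <- rnorm(10000, mean = 1, sd = 2)
kld_est_nn(X_big, Y_big)
```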
Estimate based on a sample from the first Gaussian and the density of the second:
```r
q <- function(x) dnorm(x, mean = 1, sd = 2)
kld_est_nn(X, q = q)
#> [1] 0.6374628
```
Uncertainty quantification via subsampling:
```r
kld_ci_subsampling(X, q = q)
#> $est
#> [1] 0.6374628
#> 
#> $ci
#>      2.5%     97.5% 
#> 0.2601375 0.9008446
```
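Note that this confidence interval covers the analytical value of 0.4431 computed above. Subsampling-based uncertainty quantification should work analogously in the two-sample setting; a minimal sketch, assuming that `kld_ci_subsampling()` accepts a second sample `Y` in the same way as `kld_est_nn()` (output omitted, since it depends on the random draw and on the resampling):

```r
# subsampling CI around the two-sample nearest-neighbour estimate
# (two-sample interface assumed to mirror kld_est_nn(); see ?kld_ci_subsampling)
kld_ci_subsampling(X, Y)
```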