Kullback–Leibler Divergence Calculator

JJ Ben-Joseph

Understanding KL Divergence

Kullback–Leibler (KL) divergence measures how one probability distribution diverges from a second, reference distribution. Given discrete distributions P and Q over the same set, the KL divergence from P to Q is defined as

D(P \| Q) = \sum_{i} P(i) \ln\left(\frac{P(i)}{Q(i)}\right)

Intuitively, the formula quantifies the extra information required to encode events sampled from P if we use a code optimized for Q. When P and Q are identical, the divergence is zero. As the distributions diverge, the value grows, reflecting the inefficiency of using Q to approximate P.
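To make the definition concrete, here is a minimal TypeScript sketch of the discrete KL divergence; the function name and error handling are illustrative assumptions, not the calculator's actual source.

// Minimal sketch of discrete KL divergence (in nats). Illustrative only;
// not the page's real script.
function klDivergence(p: number[], q: number[]): number {
  if (p.length !== q.length) {
    throw new Error("P and Q must have the same number of entries");
  }
  let sum = 0;
  for (let i = 0; i < p.length; i++) {
    if (p[i] === 0) continue;        // x ln(x) tends to 0 as x -> 0
    if (q[i] === 0) return Infinity; // P puts mass where Q has none
    sum += p[i] * Math.log(p[i] / q[i]);
  }
  return sum;
}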

Common Applications

KL divergence appears throughout machine learning and information theory. In variational inference, models minimize KL divergence to find a simpler approximate distribution that mimics a complicated posterior. In reinforcement learning, policies are often constrained by a maximum KL divergence from prior policies to ensure stable updates. The concept also helps compare language models, evaluate generative networks, and track training progress in classification problems.

Because KL divergence is asymmetric, D(P ∥ Q) generally differs from D(Q ∥ P). This asymmetry underscores its interpretation as a measure of relative entropy: the expected extra message length when P is encoded with Q's code.

Using the Calculator

Enter probabilities for P and Q separated by commas. Each list should contain the same number of values and sum to 1; the script normalizes them if necessary. When you press the compute button, it iterates through the arrays, sums P(i) ln(P(i)/Q(i)), and displays the result. Probabilities equal to zero contribute nothing because x ln(x) tends to zero as x vanishes.
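As a sketch of the input handling described above (the function name and exact validation rules are assumptions, not the page's real script), parsing and normalization might look like this:

// Split on commas, keep finite non-negative numbers, then normalize to sum 1.
function parseDistribution(text: string): number[] {
  const values = text
    .split(",")
    .map((s) => Number(s.trim()))
    .filter((x) => Number.isFinite(x) && x >= 0);
  const total = values.reduce((a, b) => a + b, 0);
  if (values.length === 0 || total === 0) {
    throw new Error("Enter at least one positive probability");
  }
  return values.map((x) => x / total); // entries now sum to 1
}

const q = parseDistribution("1, 1"); // normalized to [0.5, 0.5]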

Example

Suppose P assigns probabilities {0.6,0.4} while Q assigns {0.5,0.5}. Plugging these into the formula yields

D(P \| Q) = 0.6\ln(1.2) + 0.4\ln(0.8) \approx 0.020

Small values indicate the distributions are close, while large values highlight stark differences.
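Using the klDivergence sketch from earlier, this worked example can be checked directly:

// Verifies the {0.6, 0.4} vs {0.5, 0.5} example above.
console.log(klDivergence([0.6, 0.4], [0.5, 0.5]).toFixed(3)); // "0.020" nats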

Perspectives

By experimenting with the inputs, you can see how skewing probability mass increases the divergence. Extreme mismatches quickly produce large values. This sensitivity to improbable events is a hallmark of KL divergence and influences its use in robust statistics. In practice, KL divergence informs algorithms ranging from expectation-maximization to reinforcement learning policy updates, showcasing its broad relevance.

Properties and Intuition

Several important properties make KL divergence central to information theory. The value is always non‑negative thanks to Gibbs’ inequality and becomes zero only when the two distributions are identical. Because the logarithm appears inside the summation, very small probabilities can have an outsized impact; assigning positive mass to an event that never occurs under the reference distribution incurs an infinite cost. KL divergence is also asymmetric and fails to satisfy the triangle inequality, so it is not a true metric despite behaving like a distance in some contexts.
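For readers who want the reasoning behind the non-negativity claim, a standard one-line argument applies Jensen's inequality to the concave logarithm:

-D(P \| Q) = \sum_i P(i) \ln\frac{Q(i)}{P(i)} \le \ln\left(\sum_i P(i)\,\frac{Q(i)}{P(i)}\right) = \ln\left(\sum_i Q(i)\right) \le \ln 1 = 0,

so D(P \| Q) \ge 0, with equality exactly when P = Q (Gibbs' inequality).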

One intuitive interpretation comes from coding theory. If you encode symbols produced by P using a code optimized for Q, each symbol requires, on average, D(P ∥ Q) extra nats or bits, depending on the logarithm base. The total expected message length equals the entropy of P plus this divergence, linking the concept closely to cross‑entropy.

Cross-Entropy and Jensen–Shannon Divergence

Our calculator also reports the cross‑entropy H_P(Q), which equals H(P) + D(P ∥ Q). Cross‑entropy measures the expected number of code units needed when events drawn from P are encoded with probabilities from Q. To offer a symmetric alternative, the script computes the Jensen–Shannon (JS) divergence, which averages the KL divergence in both directions after forming the midpoint distribution M = ½(P + Q). Unlike KL divergence, JS divergence is always finite and its square root satisfies the triangle inequality.
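A hedged sketch of these two quantities, reusing the klDivergence helper above (names and structure are illustrative, not the page's actual code):

// Cross-entropy H_P(Q) = H(P) + D(P||Q) = -sum_i P(i) ln Q(i), in nats.
function crossEntropy(p: number[], q: number[]): number {
  return p.reduce((acc, pi, i) => (pi > 0 ? acc - pi * Math.log(q[i]) : acc), 0);
}

// Jensen-Shannon divergence via the midpoint distribution M = (P + Q) / 2.
function jensenShannon(p: number[], q: number[]): number {
  const m = p.map((pi, i) => (pi + q[i]) / 2);
  return 0.5 * klDivergence(p, m) + 0.5 * klDivergence(q, m);
}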

Step-by-Step Example

Consider P = {0.7, 0.2, 0.1} and Q = {0.5, 0.3, 0.2}. Because the probabilities already sum to one, normalization is unnecessary. Using natural logs, compute P_i \ln \frac{P_i}{Q_i} for each component: 0.7\ln(1.4) + 0.2\ln(2/3) + 0.1\ln(1/2). Summing the terms gives approximately 0.085 nats. Reversing the roles of P and Q yields about 0.092 nats, and computing both directions against the midpoint distribution M gives a JS divergence of roughly 0.022 nats. Switching the log base to two converts these values into bits.
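The same numbers fall out of the sketches above, which can serve as a quick sanity check:

const P = [0.7, 0.2, 0.1];
const Q = [0.5, 0.3, 0.2];
console.log(klDivergence(P, Q).toFixed(3));  // "0.085" nats
console.log(klDivergence(Q, P).toFixed(3));  // "0.092" nats
console.log(jensenShannon(P, Q).toFixed(3)); // "0.022" nats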

Handling Inputs Carefully

Zero probabilities cause complications, so the calculator normalizes inputs and ensures all entries are non‑negative. If any location has P(i) > 0 while the corresponding Q(i) = 0, the divergence is undefined and a warning appears. Practitioners often apply smoothing, adding tiny pseudocounts, to avoid infinite values and to reflect uncertainty about truly impossible events.
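A minimal sketch of such additive smoothing, assuming a small pseudocount epsilon (the value 1e-9 is an arbitrary illustration):

// Add a tiny pseudocount to every entry and renormalize.
function smooth(dist: number[], epsilon: number = 1e-9): number[] {
  const padded = dist.map((x) => x + epsilon);
  const total = padded.reduce((a, b) => a + b, 0);
  return padded.map((x) => x / total);
}

// Without smoothing this would be infinite; with it, large but finite (about 9.7 nats).
console.log(klDivergence(smooth([0.5, 0.5]), smooth([1, 0])));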

The drop‑down menu chooses between natural and base‑2 logarithms. The latter expresses results in bits, which is handy for information‑theoretic interpretations. After computation, a Copy Result button appears so you can paste the output into reports or notebooks.
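Conceptually, switching units is just a constant-factor conversion, as in this sketch using the earlier helper:

const nats = klDivergence([0.6, 0.4], [0.5, 0.5]); // about 0.020 nats
const bits = nats / Math.LN2;                      // about 0.029 bits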

Broader Context

KL divergence underlies numerous algorithms. Minimizing D(P ∥ Q) between an empirical data distribution P and a model Q is equivalent to maximum likelihood estimation. In natural language processing it quantifies how well one language model predicts tokens produced by another. Neuroscientists compare firing-rate distributions using KL divergence, and ecologists assess differences in species abundance across habitats. The measure's versatility stems from its ability to compare any categorical or discretized distribution in a principled way.

Related Calculators

Divergence and Curl Calculator - Analyze Vector Fields

Compute the divergence and curl of a vector field at a point.

Divergence Theorem Calculator - Relate Flux and Volume Integral

Compute the flux of a vector field through a rectangular box and compare it with the triple integral of its divergence to illustrate Gauss's divergence theorem.

Shannon Entropy Calculator - Information Content

Compute Shannon entropy from probability values to quantify uncertainty in data.
