Bayes’ theorem is the standard way to update a belief in a hypothesis after you observe new evidence. Instead of treating a probability as fixed, Bayesian updating treats it as a quantity you revise as information arrives. This is useful in everyday “is it true?” questions (spam filtering, A/B testing, fraud detection) and in high-stakes settings (medical testing, quality control, risk assessment).
This calculator implements the classic binary form of Bayes’ theorem: a hypothesis is either true (H) or not true (¬H), and the evidence E is either observed (for example, a “positive” test) or not. You provide three inputs—the prior probability of the hypothesis, the likelihood of the evidence if the hypothesis is true, and the likelihood of the evidence if the hypothesis is false—and the calculator returns the posterior probability of the hypothesis after seeing the evidence.
- P(H): your belief that the hypothesis is true before seeing the evidence. In medical testing this is often the prevalence or base rate.
- P(E|H): the chance of observing the evidence if the hypothesis is true. In diagnostic testing, when E is “test is positive,” this is the sensitivity / true positive rate.
- P(E|¬H): the chance of observing the evidence even though the hypothesis is false. In diagnostic testing, this is the false positive rate, which equals 1 − specificity.
- Tip: Many users accidentally enter specificity into “Likelihood if False.” If you have specificity instead, convert it first: false positive rate = 100% − specificity (see the one-liner below).
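If your source quotes specificity rather than a false positive rate, the conversion is a one-line Python sketch (the 0.90 value here is purely illustrative):

```python
specificity = 0.90                       # e.g. the test correctly rules out 90% of non-cases
false_positive_rate = 1.0 - specificity  # this is the P(E|¬H) input, here 0.10
```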
In symbols, the posterior probability is:

P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|¬H)P(¬H)]

where P(¬H) = 1 − P(H). The denominator is a “normalizing” term that keeps the result between 0 and 1. It is also the overall probability of seeing the evidence:
P(E) = P(E|H)P(H) + P(E|¬H)P(¬H)
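As a concrete sketch, the whole update fits in a few lines of Python (the function name `bayes_posterior` is ours, not from any library):

```python
def bayes_posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Binary Bayes update: return P(H|E) given P(H), P(E|H), and P(E|¬H).

    All three inputs are probabilities in [0, 1].
    """
    # P(E): total probability of seeing the evidence (the normalizing denominator)
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    if p_e == 0.0:
        # Evidence impossible under both H and ¬H: the posterior is undefined.
        raise ValueError("P(E) = 0, so P(H|E) is undefined")
    return p_e_given_h * prior / p_e
```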
The output P(H|E) is the probability that the hypothesis is true given that you observed the evidence. If the result is higher than your prior, the evidence supports the hypothesis. If it’s lower than your prior, the evidence argues against it.
Two common interpretation pitfalls:

- Confusing P(H|E) with P(E|H). Sensitivity/likelihood is not the same as the posterior probability.
- Neglecting the base rate. When the hypothesis is rare, even a strong positive result can leave the posterior low, as the example below shows.

Suppose a disease affects 1% of the population. A test returns positive 95% of the time when the disease is present, and returns positive 10% of the time when the disease is absent.
The calculator inputs are:

- P(H) = 0.01
- P(E|H) = 0.95
- P(E|¬H) = 0.10

Compute:
P(H|E) = (0.95×0.01) / (0.95×0.01 + 0.10×0.99) ≈ 0.0876
So even after a positive test, the chance the patient truly has the disease is about 8.8%. The test is fairly sensitive, but the disease is rare and false positives accumulate among the many healthy people.
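Plugging the same numbers into the sketch above reproduces the result:

```python
posterior = bayes_posterior(prior=0.01, p_e_given_h=0.95, p_e_given_not_h=0.10)
print(f"P(H|E) ≈ {posterior:.4f}")  # P(H|E) ≈ 0.0876
```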
| Scenario | Prior P(H) | P(E|H) | P(E|¬H) | Posterior P(H|E) | What changed? |
|---|---|---|---|---|---|
| Baseline example | 1% | 95% | 10% | ≈ 8.8% | Rare condition limits posterior |
| Higher prior | 10% | 95% | 10% | ≈ 51.3% | More plausible upfront → higher posterior |
| Lower false positives | 1% | 95% | 1% | ≈ 49.0% | Better specificity (lower FPR) boosts posterior |
| Weaker evidence | 1% | 60% | 10% | ≈ 5.7% | Evidence less diagnostic |
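The table rows can be reproduced by sweeping the same function over the four input sets (scenario labels copied from the table above):

```python
scenarios = [
    ("Baseline example",      0.01, 0.95, 0.10),
    ("Higher prior",          0.10, 0.95, 0.10),
    ("Lower false positives", 0.01, 0.95, 0.01),
    ("Weaker evidence",       0.01, 0.60, 0.10),
]
for name, prior, tpr, fpr in scenarios:
    print(f"{name}: {bayes_posterior(prior, tpr, fpr):.2%}")
# Baseline example: 8.76%
# Higher prior: 51.35%
# Lower false positives: 48.97%
# Weaker evidence: 5.71%
```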
A few cautions about the inputs:

- The likelihood fields expect P(E|H) and P(E|¬H), not P(H|E). Mixing these up yields misleading results.
- If the evidence you observed is the absence of E (for example, a negative test), use P(not E|H) and P(not E|¬H) instead.
- If P(E) becomes 0 (for example, both likelihoods are 0), the posterior is undefined because the evidence is impossible under both scenarios; the snippet below demonstrates this case.
- If you enter 0% or 100% for priors/likelihoods, you are making absolute claims that can force posteriors to 0% or 100%.

Bayes’ theorem is a standard identity in probability theory relating the conditional probabilities P(H|E) and P(E|H). Terminology like sensitivity/specificity is common in diagnostic testing and maps directly to the likelihood inputs above.
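The degenerate P(E) = 0 case from the list above, shown with the same sketch:

```python
try:
    bayes_posterior(prior=0.5, p_e_given_h=0.0, p_e_given_not_h=0.0)
except ValueError as err:
    print(err)  # P(E) = 0, so P(H|E) is undefined
```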