The geometric distribution describes the number of independent Bernoulli trials needed to obtain the first success. Each trial results in success with probability \(p\) and failure with probability \(1 - p\). If we let the random variable \(X\) denote the trial on which the first success occurs, then \(X\) follows a geometric distribution with parameter \(p\). This model applies whenever we repeat an experiment until a positive outcome happens, such as flipping a coin until it lands heads or testing lightbulbs until one works.
The probability mass function is
\(P(X = k) = (1 - p)^{k-1}\,p\)
for integers \(k \ge 1\).
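In code, the PMF is a one-liner. Here is a minimal Python sketch under the trial-counting convention used on this page (the name `geometric_pmf` is ours, purely illustrative):

```python
def geometric_pmf(k: int, p: float) -> float:
    """P(X = k): first success occurs on trial k, with per-trial success probability p."""
    if k < 1 or not 0 < p <= 1:
        raise ValueError("require k >= 1 and 0 < p <= 1")
    return (1 - p) ** (k - 1) * p

print(geometric_pmf(3, 0.5))  # 0.125: two failures, then a success
```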
The geometric distribution is memoryless, meaning the probability of success on the next trial does not depend on past failures. Formally, \(P(X > m + n \mid X > m) = P(X > n)\) for all nonnegative integers \(m\) and \(n\). This property simplifies many probabilistic analyses, as we can treat each trial as if we're starting anew after each failure.
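The identity is easy to verify numerically from the survival probability \(P(X > n) = (1 - p)^n\); a small sketch, with arbitrary illustrative values of \(p\), \(m\), and \(n\):

```python
p, m, n = 0.3, 4, 5

def survival(j: int) -> float:
    """P(X > j): probability of j consecutive failures."""
    return (1 - p) ** j

# P(X > m + n | X > m) should equal P(X > n).
conditional = survival(m + n) / survival(m)
print(conditional, survival(n))  # both 0.16807
```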
We might also ask for the probability that the first success occurs within the first \(k\) trials:
\(P(X \le k) = 1 - (1 - p)^k\).
This cumulative distribution function (CDF) is easily computed by summing the geometric series of decreasing failure probabilities. Because each failure multiplies the chance of no success by another factor of \(1 - p\), the probability that we have not seen a success by the \(k\)th trial is \((1 - p)^k\).
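Summing the series term by term and using the closed form give the same answer; a quick sketch with illustrative values:

```python
p, k = 0.25, 6

# Sum the PMF over the first k trials...
series_sum = sum((1 - p) ** (j - 1) * p for j in range(1, k + 1))

# ...and compare with the closed-form CDF.
closed_form = 1 - (1 - p) ** k
print(series_sum, closed_form)  # both 0.822021484375
```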
The geometric distribution exhibits a simple expectation and variance. We have
\(E[X] = \dfrac{1}{p}\) and \(\operatorname{Var}(X) = \dfrac{1 - p}{p^2}\).
This means that if the success chance is small, we expect to wait many trials, and the variance, which measures how widely the number of trials can spread, becomes large. Conversely, with a high success probability, the expected waiting time shrinks considerably. The calculator echoes these two summary statistics after each computation so that you can compare them with empirical data. When the variance in your data differs markedly from \(\bar{x}(\bar{x} - 1)\), the value the geometric model implies for a sample mean \(\bar{x}\), it may signal that the geometric model is insufficient and that additional sources of variability are at play.
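These formulas are easy to sanity-check against simulation. A sketch assuming NumPy is available (NumPy's `geometric` sampler counts trials, matching this page's convention):

```python
import numpy as np

p = 0.2
rng = np.random.default_rng(0)
samples = rng.geometric(p, size=100_000)

print("theory:", 1 / p, (1 - p) / p**2)          # 5.0 and 20.0
print("sample:", samples.mean(), samples.var())  # close to 5 and 20
```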
Two common parameterizations exist for the geometric distribution. In this page, we adopt the convention that \(X\) counts the trial on which the first success occurs and therefore takes values \(1, 2, 3, \ldots\). Some texts instead define \(Y\) as the number of failures before the first success, yielding values starting at zero. The formulas are closely related: if \(Y\) counts failures, then \(Y = X - 1\). When comparing results across sources or using statistical software, keep this distinction in mind to avoid off-by-one errors.
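The shift is visible directly in the two PMFs; a small sketch:

```python
def pmf_trials(k: int, p: float) -> float:
    """P(X = k) when X counts trials, k = 1, 2, 3, ..."""
    return (1 - p) ** (k - 1) * p

def pmf_failures(j: int, p: float) -> float:
    """P(Y = j) when Y counts failures before the first success, j = 0, 1, 2, ..."""
    return (1 - p) ** j * p

# Since Y = X - 1, the PMFs agree after shifting the argument by one.
print(pmf_trials(4, 0.3), pmf_failures(3, 0.3))  # both approximately 0.1029
```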
Enter the probability of success \(p\) (a number between 0 and 1) and the integer \(k\) representing the number of trials. After clicking "Compute," the calculator returns three probabilities: the chance that the first success occurs exactly on trial \(k\), the probability that the success happens on or before trial \(k\), and the probability that you must wait beyond \(k\) trials. The mean and variance for the entered \(p\) are displayed as well, giving you a fuller statistical picture. You can experiment with different inputs to see how smaller values of \(p\) produce a longer tail in the distribution, while larger values concentrate the probability mass near lower counts.
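All five quantities follow from the formulas above, so they are easy to reproduce offline; a sketch (the function name `geometric_summary` is ours, not the calculator's):

```python
def geometric_summary(p: float, k: int) -> dict:
    """The quantities reported by the calculator for success probability p and trial k."""
    tail = (1 - p) ** k  # P(X > k): k consecutive failures
    return {
        "P(X = k)":  (1 - p) ** (k - 1) * p,
        "P(X <= k)": 1 - tail,
        "P(X > k)":  tail,
        "mean":      1 / p,
        "variance":  (1 - p) / p**2,
    }

print(geometric_summary(0.5, 3))
```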
Results are also made copyable. Clicking the output box places the displayed numbers onto your clipboard so you can paste them into homework solutions, reports, or code comments. This small convenience speeds up repetitive workflows, especially when exploring multiple scenarios.
Suppose we flip a fair coin, so \(p = 1/2\). What is the chance that we need exactly three flips to get heads? We calculate \(P(X = 3) = (1/2)^2 \cdot (1/2) = 1/8\). The probability of observing the first head within three flips is \(P(X \le 3) = 1 - (1/2)^3 = 7/8\). This intuitive example illustrates how the geometric distribution models waiting times in simple random experiments.
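The same arithmetic can be checked exactly with rational numbers; a two-line sketch:

```python
from fractions import Fraction

p = Fraction(1, 2)
print((1 - p) ** 2 * p)   # 1/8 = P(X = 3)
print(1 - (1 - p) ** 3)   # 7/8 = P(X <= 3)
```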
The geometric distribution appears in reliability engineering, queueing theory, and computer science. When evaluating the reliability of a component that has a fixed probability of failure each time it is tested, we can model the number of tests until failure. In networking, the number of transmissions required before a packet successfully passes through a noisy channel often follows a geometric distribution. Similarly, in algorithm analysis, the number of iterations needed for a probabilistic algorithm to succeed might be geometrically distributed.
The geometric distribution has been studied since the nineteenth century as part of the development of probability theory. Its memoryless property links it to the exponential distribution in continuous time. In fact, the geometric distribution can be seen as the discrete analogue of the exponential: both describe waiting times, but one counts discrete trials while the other measures continuous time. This connection underlies many results in stochastic processes, such as the relationship between Poisson processes and exponential interarrival times.
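The analogy can be made precise: if \(T\) is exponential with rate \(\lambda\), then rounding \(T\) up to the next integer yields a geometric random variable with \(p = 1 - e^{-\lambda}\). A simulation sketch of this fact, assuming NumPy is available:

```python
import math
import numpy as np

lam = 0.8
rng = np.random.default_rng(1)

# Round continuous exponential waiting times up to whole "trials".
discrete = np.ceil(rng.exponential(scale=1 / lam, size=100_000))

p = 1 - math.exp(-lam)         # implied per-trial success probability
print(discrete.mean(), 1 / p)  # empirical mean vs geometric mean, both about 1.82
```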
One extension is the negative binomial distribution, which counts the number of trials required to achieve a fixed number of successes rather than just one. Another angle is parameter estimation: given sample data, we can estimate \(p\) by the reciprocal of the sample mean, a simple maximum likelihood result. You can also investigate how the geometric distribution interacts with random stopping times and renewal theory. Even though this calculator focuses on the most basic case, it opens the door to a rich field of discrete probability.
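Under the trial-counting convention, the maximum likelihood estimate is \(\hat{p} = 1/\bar{x}\); a sketch assuming NumPy:

```python
import numpy as np

true_p = 0.25
rng = np.random.default_rng(2)
data = rng.geometric(true_p, size=10_000)

p_hat = 1 / data.mean()  # MLE: reciprocal of the sample mean
print(p_hat)             # close to 0.25
```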
You can simulate a geometric random variable with a simple algorithm. Repeatedly draw a uniform random number between 0 and 1; if it is less than \(p\), record the current trial count as the waiting time, otherwise increment the count and draw again. Many programming languages include a built-in geometric function, but implementing it yourself reinforces understanding and allows custom behavior such as truncating after a maximum number of trials.
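The algorithm translates directly into Python; a sketch in which the truncation parameter `max_trials` is our illustrative addition:

```python
import random

def simulate_geometric(p: float, max_trials: int | None = None) -> int | None:
    """Draw uniforms until one falls below p; return the trial count.

    Returns None if no success occurs within max_trials (when set).
    """
    count = 1
    while True:
        if random.random() < p:
            return count
        if max_trials is not None and count >= max_trials:
            return None  # truncated: gave up before seeing a success
        count += 1

waits = [simulate_geometric(0.2) for _ in range(10_000)]
print(sum(waits) / len(waits))  # close to the mean 1/0.2 = 5
```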
Misinterpreting what \(p\) represents is a frequent source of confusion. The parameter \(p\) is the success probability on a single trial, not the probability of seeing a success eventually; under repeated trials, eventual success is guaranteed whenever \(p\) is greater than zero. Another pitfall involves neglecting independence: if the probability of success changes from trial to trial, the geometric model no longer applies. In such cases, you may need a more general model with a trial-dependent success probability or a Markov chain that captures state-dependent probabilities.
By experimenting with the inputs and reading the accompanying math, you gain intuition for random processes governed by repeated trials. Whether you are modeling machine reliability, analyzing randomized algorithms, or studying Markov chains, understanding the geometric distribution equips you with valuable probabilistic insight. The expanded explanations and copyable outputs aim to make this foundational distribution easier to grasp and apply in practice.