The geometric distribution describes the number of independent Bernoulli trials needed to obtain the first success. Each trial results in success with probability \(p\) and failure with probability \(1-p\). If we let the random variable \(X\) denote the trial on which the first success occurs, then \(X\) follows a geometric distribution with parameter \(p\). This model applies whenever we repeat an experiment until a positive outcome happens, such as flipping a coin until it lands heads or testing lightbulbs until one works.
The probability mass function is
\[
P(X = k) = (1-p)^{k-1}\,p
\]
for integers \(k \ge 1\).
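As a concrete illustration, the mass function translates into a one-line computation. The Python sketch below uses a function name of our own choosing (geometric_pmf) rather than any particular library's API.

```python
def geometric_pmf(k: int, p: float) -> float:
    """Probability that the first success lands exactly on trial k (k >= 1)."""
    if k < 1 or not (0 < p <= 1):
        raise ValueError("k must be a positive integer and 0 < p <= 1")
    # k - 1 independent failures, each with probability 1 - p, then one success.
    return (1 - p) ** (k - 1) * p

print(geometric_pmf(3, 0.5))  # 0.125, matching the coin example later on
```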
The geometric distribution is memoryless, meaning the probability of success on the next trial does not depend on past failures. Formally, \(P(X > s+t \mid X > s) = P(X > t)\). This property simplifies many probabilistic analyses, as we can treat each trial as if we're starting anew after each failure.
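A quick numerical check of this identity, written as a plain-Python sketch using the survival function \(P(X > n) = (1-p)^n\) (the helper name is ours, chosen for illustration):

```python
def survival(n: int, p: float) -> float:
    """P(X > n): probability of no success in the first n trials."""
    return (1 - p) ** n

p, s, t = 0.3, 4, 2
lhs = survival(s + t, p) / survival(s, p)  # P(X > s+t | X > s)
rhs = survival(t, p)                       # P(X > t)
print(abs(lhs - rhs) < 1e-12)  # True: past failures carry no information
```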
We might also ask for the probability that the first success occurs within the first \(k\) trials:
\[
P(X \le k) = 1 - (1-p)^{k}.
\]
This cumulative distribution function (CDF) can be derived by summing the geometric series \(\sum_{j=1}^{k} (1-p)^{j-1} p\), or more directly: the first success is missed within \(k\) trials only if all \(k\) independent trials fail, which happens with probability \((1-p)^k\), so the CDF is one minus that quantity.
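The complement argument carries over directly to code; again this is a minimal sketch with a hypothetical function name.

```python
def geometric_cdf(k: int, p: float) -> float:
    """Probability that the first success occurs on or before trial k."""
    # Missing within k trials requires k independent failures in a row.
    return 1 - (1 - p) ** k

print(geometric_cdf(3, 0.5))  # 0.875
```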
The geometric distribution has a simple expectation and variance:
\[
E[X] = \frac{1}{p} \qquad \text{and} \qquad \mathrm{Var}(X) = \frac{1-p}{p^{2}}.
\]
This means that if the success chance is small, we expect to wait many trials, and the variance—how widely the number of trials may vary—becomes large. Conversely, with a high success probability, the expected waiting time shrinks considerably.
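One way to see these formulas at work is to compare them with a Monte Carlo estimate. The sketch below uses only Python's standard random module; sample_geometric is an illustrative helper of our own, not a library function.

```python
import random

def sample_geometric(p: float) -> int:
    """Simulate trials until the first success and return the trial count."""
    k = 1
    while random.random() >= p:  # failure with probability 1 - p
        k += 1
    return k

p, n = 0.2, 100_000
draws = [sample_geometric(p) for _ in range(n)]
mean = sum(draws) / n
var = sum((x - mean) ** 2 for x in draws) / n
print(mean, 1 / p)            # empirical vs. theoretical mean (5.0)
print(var, (1 - p) / p ** 2)  # empirical vs. theoretical variance (20.0)
```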
Enter the probability of success \(p\) (a number between 0 and 1) and the integer \(k\) representing the number of trials. After clicking "Compute," the calculator returns two values: the probability that the first success occurs exactly on trial \(k\), and the probability that it occurs on or before trial \(k\). You can experiment with different inputs to see how smaller values of \(p\) produce a longer tail in the distribution, while larger \(p\) values concentrate the probability mass near lower counts.
Suppose we flip a fair coin, so \(p = 0.5\). What is the chance that we need exactly three flips to get heads? We calculate \((1-0.5)^{2} \cdot 0.5 = 0.125\). The probability of observing the first head within three flips is \(1 - (1-0.5)^3 = 0.875\). This intuitive example illustrates how the geometric distribution models waiting times in simple random experiments.
The geometric distribution appears in reliability engineering, queueing theory, and computer science. When evaluating the reliability of a component that has a fixed probability of failure each time it is tested, we can model the number of tests until failure. In networking, the number of transmissions required before a packet successfully passes through a noisy channel often follows a geometric distribution. Similarly, in algorithm analysis, the number of iterations needed for a probabilistic algorithm to succeed might be geometrically distributed.
The geometric distribution has been studied since the nineteenth century as part of the development of probability theory. Its memoryless property links it to the exponential distribution in continuous time. In fact, the geometric distribution can be seen as the discrete analogue of the exponential: both describe waiting times, but one counts discrete trials while the other measures continuous time. This connection underlies many results in stochastic processes, such as the relationship between Poisson processes and exponential interarrival times.
One extension is the negative binomial distribution, which counts the number of trials required to achieve a fixed number of successes rather than just one. Another angle is parameter estimation: given sample data, we can estimate \(p\) by the reciprocal of the sample mean, a simple maximum likelihood result. You can also investigate how the geometric distribution interacts with random stopping times and renewal theory. Even though this calculator focuses on the most basic case, it opens the door to a rich field of discrete probability.
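As a sketch of the estimation idea, the maximum likelihood estimate \(\hat{p} = 1/\bar{x}\) can be checked against simulated data; sample_geometric below is the same illustrative helper used in the earlier simulation, not part of any library.

```python
import random

def sample_geometric(p: float) -> int:
    """Draw one waiting time: count trials until the first success."""
    k = 1
    while random.random() >= p:
        k += 1
    return k

true_p = 0.25
data = [sample_geometric(true_p) for _ in range(50_000)]
p_hat = 1 / (sum(data) / len(data))  # reciprocal of the sample mean
print(true_p, round(p_hat, 3))       # the estimate should land near 0.25
```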
By experimenting with the inputs and reading the accompanying math, you gain intuition for random processes governed by repeated trials. Whether you are modeling machine reliability, analyzing randomized algorithms, or studying Markov chains, understanding the geometric distribution equips you with valuable probabilistic insight.