The Cholesky decomposition expresses a symmetric, positive-definite matrix A as the product A = LLᵀ, where L is lower triangular with strictly positive diagonal entries. This factorization is valuable because it splits a complicated matrix into the product of simpler pieces, allowing systems of equations to be solved efficiently by forward and backward substitution. The method is also numerically stable for well-conditioned matrices, making it a cornerstone of numerical linear algebra.
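The two-stage solve can be sketched in a few lines of Python. This is a minimal illustration, not a production routine (the function names are my own); given a lower-triangular factor L of A, it solves Ly = b and then Lᵀx = y:

```python
def forward_sub(L, b):
    """Solve L y = b for lower-triangular L by forward substitution."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    return y

def backward_sub(L, y):
    """Solve L^T x = y by backward substitution, reading L^T from L directly."""
    n = len(y)
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[j][i] * x[j] for j in range(i + 1, n))) / L[i][i]
    return x
```

For example, with L = [[2, 0, 0], [6, 1, 0], [-8, 5, 3]] and right-hand side b = [0, 6, 39], the two substitutions return x = [1, 1, 1].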
To see why the decomposition exists, consider that any symmetric positive-definite matrix has all positive eigenvalues. Consequently, it can be viewed as the Gram matrix of a set of linearly independent vectors. The Cholesky algorithm essentially performs a specialized Gram–Schmidt process on those vectors, ensuring orthogonality while retaining symmetry. If A is n×n, we find a lower triangular L such that multiplying L by its transpose reproduces A.
The algorithm proceeds element by element. We begin by setting ℓ₁₁ = √a₁₁. Because A is positive definite, the square root is real and positive. The remaining entries in the first column follow from ℓᵢ₁ = aᵢ₁ / ℓ₁₁ for i > 1. At each step, the algorithm subtracts the squares of previously computed values from the diagonal terms, giving ℓⱼⱼ = √(aⱼⱼ − ℓⱼ₁² − ⋯ − ℓⱼ,ⱼ₋₁²), and positive definiteness ensures the quantity under each root stays positive.
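The element-by-element recipe above can be sketched directly in Python. This is an illustrative reference implementation (the function name and error message are my own choices), proceeding column by column exactly as described:

```python
import math

def cholesky(A):
    """Return lower-triangular L with L L^T = A, for symmetric positive-definite A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Diagonal entry: subtract squares of earlier entries in row j, then take the root.
        d = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if d <= 0.0:
            raise ValueError("matrix is not positive definite")
        L[j][j] = math.sqrt(d)
        # Entries below the diagonal in column j.
        for i in range(j + 1, n):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = s / L[j][j]
    return L
```

The check `d <= 0.0` is exactly the positivity test mentioned above: if a radicand fails to be positive, the input matrix cannot be positive definite.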
Cholesky decomposition is preferred over more general techniques like LU decomposition when the matrix is symmetric and positive definite because it requires roughly half the computation and storage. The triangular matrix L contains only about half of the entries of A, and its transpose automatically completes the factorization. This symmetry leads to algorithms that run quickly and minimize rounding errors, providing accurate results even for large matrices.
Beyond solving linear systems, Cholesky factors play a key role in statistical modeling. In multivariate normal distributions, the covariance matrix Σ is positive definite. Generating correlated random samples often begins with a Cholesky factorization Σ = LLᵀ: a vector z of independent normal samples is transformed through x = Lz to introduce the desired covariance. In optimization algorithms, such as trust-region methods, the Cholesky factor helps ensure stable steps when the Hessian matrix is positive definite.
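The sampling transformation is short enough to sketch. This is a hedged illustration using only the standard library (the function name and `rng` parameter are my own); it draws independent standard normals and applies the lower-triangular factor:

```python
import random

def correlated_sample(L, rng=random):
    """Draw one sample x = L z with covariance L L^T, where z is i.i.d. standard normal."""
    n = len(L)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    # L is lower triangular, so row i only involves z[0..i].
    return [sum(L[i][j] * z[j] for j in range(i + 1)) for i in range(n)]
```

Averaged over many draws, the sample covariance of `correlated_sample(L)` approaches LLᵀ, which is exactly why the factorization is the standard starting point for this task.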
Consider the following 3×3 symmetric matrix:

A = [  4   12  -16 ]
    [ 12   37  -43 ]
    [-16  -43   98 ]

The Cholesky algorithm computes ℓ₁₁ = √4 = 2, then ℓ₂₁ = 12/2 = 6, ℓ₃₁ = −16/2 = −8. The next diagonal entry is ℓ₂₂ = √(37 − 6²) = 1, followed by ℓ₃₂ = (−43 − 6·(−8))/1 = 5. Finally, ℓ₃₃ = √(98 − (−8)² − 5²) = 3. The resulting lower triangular matrix

L = [  2   0   0 ]
    [  6   1   0 ]
    [ -8   5   3 ]

provides the factorization LLᵀ = A.
Using the calculator, enter nine numbers representing your matrix in row-major order. Ensure the matrix is symmetric by making aᵢⱼ equal to aⱼᵢ. If the matrix fails to be positive definite—for example, if the quantity under any square root fails to be positive—the calculator reports an error. Otherwise, it outputs the entries of L rounded to four decimal places. You can verify the result by multiplying L with its transpose and comparing it to your original matrix.
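The verification step is easy to script. A minimal sketch (the function name is my own) that forms LLᵀ for comparison against the original matrix:

```python
def mul_by_transpose(L):
    """Compute L L^T for a square matrix L, given as a list of rows."""
    n = len(L)
    return [[sum(L[i][k] * L[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]
```

If the recovered product matches your input matrix to within rounding, the factor is correct.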
Cholesky decomposition features in numerous applications, from machine learning to structural engineering. In Bayesian statistics, for instance, covariance matrices can be huge and dense. Storing and manipulating the Cholesky factor instead of the full matrix saves memory and speeds up computation. In finite element analysis, where stiffness matrices are symmetric and positive definite, Cholesky factors allow quick solutions to deformation problems. Each time you use the decomposition, the factor captures the essential geometry encoded by the original matrix while simplifying subsequent calculations.
Because Cholesky is so efficient, modern numerical libraries include highly optimized routines that exploit hardware acceleration. These routines can factor matrices with thousands of rows and columns. Understanding the algorithm at a smaller scale, as demonstrated in this calculator, provides insight into how those sophisticated implementations work under the hood. By experimenting with your own matrices, you will deepen your intuition for positive-definite forms, learn to recognize when Cholesky is applicable, and appreciate why this factorization is a trusted tool in computational science.