The classic Hilbert matrix has entries H_{ij} = 1/(i + j − 1) for row i and column j (1-based). Despite its simple formula, this matrix is famously ill conditioned. As n grows, the determinant approaches zero extremely quickly, making inverses numerically unstable. The Hilbert matrix arises in interpolation theory and numerical analysis as an example of how seemingly nice matrices can lead to severe rounding errors.
A condition number measures how sensitive a linear system is to perturbations. For a nonsingular matrix A, the 1-norm condition number is κ₁(A) = ‖A‖₁ · ‖A⁻¹‖₁. Large values indicate that a small change in the input could cause a large change in the solution of Ax = b. Hilbert matrices have condition numbers that grow roughly exponentially with n. Even n = 5 leads to a κ₁ on the order of 10^5.
This calculator generates the full n×n matrix for a chosen size. When you select n, the script fills a JavaScript array by evaluating 1/(i + j − 1) for each pair of indices. The matrix is inherently symmetric and positive definite, properties that play a role in approximation theory. Because Hilbert matrices become badly conditioned, they also make popular test cases for numerical algorithms that solve linear systems.
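The fill step can be sketched in a few lines of plain JavaScript (the function name hilbert is illustrative, not taken from the calculator's source):

```javascript
// Build the n-by-n Hilbert matrix as nested arrays.
// With 0-based indices i and j, the 1-based formula 1/(i + j - 1)
// becomes 1/(i + j + 1).
function hilbert(n) {
  return Array.from({ length: n }, (_, i) =>
    Array.from({ length: n }, (_, j) => 1 / (i + j + 1))
  );
}
```

For example, hilbert(3) yields the rows [1, 1/2, 1/3], [1/2, 1/3, 1/4], and [1/3, 1/4, 1/5].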
The most basic linear algebra courses introduce the Hilbert matrix when discussing Gaussian elimination. In theory, the elimination process yields an exact inverse, but in practice rounding error severely limits accuracy. Consequently, using higher precision arithmetic or specialized algorithms becomes important. The 1-norm is only one possible measure of conditioning; the 2-norm condition number involves singular values, but this calculator sticks with the simpler norm because it requires less computation.
Once the matrix is formed, the script computes its inverse using math.inv from the math.js library. It then obtains the 1-norm via math.norm(matrix, 1). The product of the two norms gives the condition number. When the matrix size is small, this value remains moderate, but as you experiment with larger n, you will see rapid growth. This demonstrates why solving Hilbert systems requires caution even for only five or six dimensions.
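The same computation can be done without any library. The dependency-free sketch below uses Gauss-Jordan elimination with partial pivoting to invert the matrix; the helper names (hilbert, norm1, invert, cond1) are illustrative, not taken from the calculator's source, which relies on math.inv and math.norm instead.

```javascript
// Hilbert matrix with 0-based indices: H[i][j] = 1/(i + j + 1).
function hilbert(n) {
  return Array.from({ length: n }, (_, i) =>
    Array.from({ length: n }, (_, j) => 1 / (i + j + 1))
  );
}

// 1-norm of a matrix: maximum absolute column sum.
function norm1(A) {
  const n = A.length;
  let max = 0;
  for (let j = 0; j < n; j++) {
    let sum = 0;
    for (let i = 0; i < n; i++) sum += Math.abs(A[i][j]);
    if (sum > max) max = sum;
  }
  return max;
}

// Invert A by Gauss-Jordan elimination with partial pivoting on [A | I].
function invert(A) {
  const n = A.length;
  const M = A.map((row, i) =>
    row.concat(Array.from({ length: n }, (_, j) => (i === j ? 1 : 0)))
  );
  for (let col = 0; col < n; col++) {
    // Pick the largest remaining entry in this column as the pivot.
    let piv = col;
    for (let r = col + 1; r < n; r++)
      if (Math.abs(M[r][col]) > Math.abs(M[piv][col])) piv = r;
    [M[col], M[piv]] = [M[piv], M[col]];
    const p = M[col][col];
    for (let j = 0; j < 2 * n; j++) M[col][j] /= p;
    // Eliminate the column from every other row.
    for (let r = 0; r < n; r++) {
      if (r !== col) {
        const f = M[r][col];
        for (let j = 0; j < 2 * n; j++) M[r][j] -= f * M[col][j];
      }
    }
  }
  return M.map(row => row.slice(n)); // right half is now the inverse
}

// 1-norm condition number: product of the two norms.
function cond1(A) {
  return norm1(A) * norm1(invert(A));
}

console.log(cond1(hilbert(3))); // about 748 for the 3x3 Hilbert matrix
```

The partial pivoting step matters here: for nearly singular matrices like these, eliminating without row swaps amplifies rounding error even faster.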
The numerical stability issues occur because Hilbert matrices are nearly singular. Each row closely resembles its neighbors; subtracting one from another yields tiny differences. When pivoting or inversion takes place, these nearly dependent rows magnify any rounding error. If you were to solve Hx = b with standard double precision arithmetic for large n, the computed result could be meaningless.
For n = 3, the matrix is

    1    1/2  1/3
    1/2  1/3  1/4
    1/3  1/4  1/5
The 1-norm condition number for this 3×3 case is already about 748. When you increase n to 5, the value exceeds 10^5. This explosive growth is why Hilbert matrices are a notorious example in textbooks illustrating ill conditioning.
If you attempt polynomial least-squares fitting with the monomial basis on the interval [0, 1], the normal-equations (Gram) matrix is exactly the Hilbert matrix, and the computed coefficients suffer from massive rounding errors. This numerical ill conditioning is distinct from, but often mentioned alongside, Runge's phenomenon, the oscillation of interpolants at equally spaced nodes; both warn against naive polynomial fitting. Understanding the condition number alerts us to these pitfalls before we solve such systems numerically.
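The connection to the monomial basis is easy to check numerically. The sketch below (the function name monomialGram is illustrative) approximates the Gram entries ∫₀¹ x^i · x^j dx with midpoint-rule quadrature and recovers 1/(i + j + 1), i.e. the Hilbert matrix with 0-based indices:

```javascript
// Gram matrix of the monomial basis {1, x, ..., x^(n-1)} on [0, 1]:
// G[i][j] = integral of x^(i+j) over [0, 1] = 1/(i + j + 1),
// approximated here by the midpoint rule with `steps` subintervals.
function monomialGram(n, steps = 20000) {
  const h = 1 / steps;
  const G = [];
  for (let i = 0; i < n; i++) {
    const row = [];
    for (let j = 0; j < n; j++) {
      let sum = 0;
      for (let k = 0; k < steps; k++) {
        const x = (k + 0.5) * h; // midpoint of the k-th subinterval
        sum += Math.pow(x, i + j) * h;
      }
      row.push(sum);
    }
    G.push(row);
  }
  return G;
}
```

Each entry of monomialGram(n) agrees with the corresponding Hilbert entry to quadrature accuracy, confirming that fitting polynomials in the monomial basis inherits the Hilbert matrix's conditioning.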
In statistics, Hilbert matrices appear in the method of moments for estimating parameters. Numerical analysts also study them to benchmark algorithms for LU decomposition and other factorizations. Because their entries decrease smoothly away from the top-left corner, Hilbert matrices capture how subtle correlations can undermine computational accuracy.
The calculator uses purely client-side JavaScript. When you click the button, it reads the integer n, constructs the matrix as nested arrays, and calls math.inv for the inverse. While this approach is easy to understand, it may struggle for n beyond ten or so because of the underlying floating-point limitations in browsers. Nonetheless, it vividly illustrates how the condition number balloons.
Because math.js handles both matrix operations and numeric formatting, we rely on it for concise code. You are welcome to view the page source to see the exact computation steps. The output prints the condition number with a fixed number of digits so you can compare values cleanly as you vary n.
Try computing the condition number for several consecutive values of n. Notice how each increment of n multiplies the condition number by a large, roughly constant factor. Graphing log κ against n therefore reveals a nearly linear relationship, confirming exponential growth. These experiments highlight why choosing better-conditioned bases or orthogonal polynomials can drastically improve numerical stability in interpolation and integral equations.
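This experiment can be automated. The self-contained sketch below (illustrative helper names, not the calculator's own code) prints log₁₀ of the 1-norm condition number for n = 2 through 6; each step adds roughly 1.5 to the logarithm, i.e. a factor of about 30:

```javascript
// Hilbert matrix, 0-based: H[i][j] = 1/(i + j + 1).
function hilbert(n) {
  return Array.from({ length: n }, (_, i) =>
    Array.from({ length: n }, (_, j) => 1 / (i + j + 1))
  );
}

// Maximum absolute column sum (the matrix 1-norm).
function norm1(A) {
  const n = A.length;
  let max = 0;
  for (let j = 0; j < n; j++) {
    let sum = 0;
    for (let i = 0; i < n; i++) sum += Math.abs(A[i][j]);
    if (sum > max) max = sum;
  }
  return max;
}

// Gauss-Jordan inversion with partial pivoting on [A | I].
function invert(A) {
  const n = A.length;
  const M = A.map((row, i) =>
    row.concat(Array.from({ length: n }, (_, j) => (i === j ? 1 : 0)))
  );
  for (let col = 0; col < n; col++) {
    let piv = col;
    for (let r = col + 1; r < n; r++)
      if (Math.abs(M[r][col]) > Math.abs(M[piv][col])) piv = r;
    [M[col], M[piv]] = [M[piv], M[col]];
    const p = M[col][col];
    for (let j = 0; j < 2 * n; j++) M[col][j] /= p;
    for (let r = 0; r < n; r++) {
      if (r !== col) {
        const f = M[r][col];
        for (let j = 0; j < 2 * n; j++) M[r][j] -= f * M[col][j];
      }
    }
  }
  return M.map(row => row.slice(n));
}

function cond1(A) {
  return norm1(A) * norm1(invert(A));
}

// Print the growth table: log10 of the condition number per n.
for (let n = 2; n <= 6; n++) {
  console.log(`n=${n}  log10(cond) = ${Math.log10(cond1(hilbert(n))).toFixed(2)}`);
}
```

The near-constant gap between consecutive lines is the "nearly linear" plot described above, seen numerically.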
While the Hilbert matrix is an extreme case, many applied problems exhibit similar behavior to a lesser degree. Engineers, physicists, and statisticians must remain mindful of conditioning whenever solving linear systems or least-squares problems. Even moderate condition numbers can degrade accuracy when combined with measurement noise or limited precision.
This calculator focuses solely on the 1-norm. Other norms, particularly the 2-norm based on singular values, often provide tighter bounds on error magnification. Implementing a singular-value decomposition in JavaScript is more involved but could yield more insightful results. Additionally, the Hilbert matrix uses simple fractions, but one could modify the code to handle scaled or shifted versions that arise in some applications.
Despite these limitations, exploring the Hilbert matrix encourages a deeper appreciation of numerical stability. When solving real-world problems, you can use preconditioning, pivoting strategies, or higher precision arithmetic to tame ill-conditioning. This small tool offers a taste of the challenges that motivate much of modern numerical analysis.