Gaussian elimination systematically transforms a linear system into triangular form using elementary row operations. Given a matrix A and a vector b, we seek a vector x satisfying Ax = b. The method performs row swaps, scaling, and row replacement to create zeros below the main diagonal, resulting in an upper triangular matrix. This makes back substitution straightforward.
The row operations are: (1) swap two rows, (2) multiply a row by a nonzero scalar, and (3) add a multiple of one row to another. These operations correspond to left multiplication by elementary matrices, which are invertible. Therefore, transforming A into an upper triangular form does not change the solution set of the system.
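As a small illustration (the matrices below are arbitrary sample values), left-multiplying by an elementary matrix carries out a row-replacement step:

```python
import numpy as np

# Sample 2x2 coefficient matrix (illustrative values only).
A = np.array([[2.0, 1.0],
              [4.0, 5.0]])

# Elementary matrix that adds -2 times row 1 to row 2,
# i.e. R2 := R2 - (4/2) * R1, zeroing the entry below the pivot.
E = np.array([[1.0, 0.0],
              [-2.0, 1.0]])

print(E @ A)   # [[2. 1.]
               #  [0. 3.]]   upper triangular
```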
Once in triangular form, a 3×3 system looks like u₁₁x₁ + u₁₂x₂ + u₁₃x₃ = c₁, u₂₂x₂ + u₂₃x₃ = c₂, and u₃₃x₃ = c₃. We then solve for x₃ and substitute upwards. This procedure works for the 2×2 and 3×3 systems handled by this calculator.
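A minimal back-substitution sketch in plain Python, assuming an upper triangular matrix U with nonzero diagonal entries and illustrative values:

```python
def back_substitute(U, c):
    """Solve U x = c where U is upper triangular with a nonzero diagonal."""
    n = len(c)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                       # start from the last row
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))  # known terms
        x[i] = (c[i] - s) / U[i][i]
    return x

# Example: 2x + y = 5 and 3y = 9  ->  y = 3, then x = 1.
print(back_substitute([[2.0, 1.0], [0.0, 3.0]], [5.0, 9.0]))  # [1.0, 3.0]
```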
Carl Friedrich Gauss refined elimination in the early nineteenth century while studying astronomy and geodesy. The approach bears his name because he emphasized its systematic nature and error minimization. Today, Gaussian elimination forms the backbone of many numerical algorithms, from solving finite element problems to computing matrix inverses.
The method is closely related to LU decomposition, in which the matrix A factors into a lower triangular L and an upper triangular U. Forward substitution with L followed by back substitution with U solves the system efficiently. Understanding Gaussian elimination therefore offers insight into matrix factorization and numerical linear algebra more broadly.
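A brief sketch of that connection using SciPy's LU routines, assuming NumPy and SciPy are available; the matrix values are arbitrary:

```python
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([4.0, 10.0, 24.0])

# Factor A = P L U, where P is a permutation from partial pivoting.
P, L, U = lu(A)
print(np.allclose(P @ L @ U, A))   # True

# Solve via the factorization: forward substitution with L, back substitution with U.
x = lu_solve(lu_factor(A), b)
print(x)                           # approximately [1. 1. 1.]
```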
While conceptually simple, elimination is sensitive to rounding errors. Pivoting strategies mitigate these errors by swapping rows to place the largest available element on the diagonal. Partial pivoting, used by many computational libraries, preserves accuracy without the overhead of full pivoting. This calculator performs partial pivoting automatically to produce reliable results for most practical systems.
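The sketch below shows the whole procedure, elimination with partial pivoting followed by back substitution, in plain Python. It is an illustrative implementation, not the calculator's actual code:

```python
def solve_with_partial_pivoting(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    # Build the augmented matrix [A | b].
    M = [list(map(float, row)) + [float(bi)] for row, bi in zip(A, b)]

    for k in range(n):
        # Partial pivoting: bring the largest |entry| in column k up to row k.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        if abs(M[p][k]) < 1e-12:
            raise ValueError("matrix is singular or nearly singular")
        M[k], M[p] = M[p], M[k]

        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]

    # Back substitution on the upper triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Illustrative 3x3 system.
print(solve_with_partial_pivoting(
    [[2, 1, 1], [4, 3, 3], [8, 7, 9]], [4, 10, 24]))  # approximately [1.0, 1.0, 1.0]
```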
Ill-conditioned matrices—those with nearly dependent rows—can still lead to large errors. Solutions may vary drastically with tiny changes in the input. Diagnosing such issues requires computing the condition number or using more stable algorithms like QR decomposition. Nevertheless, Gaussian elimination remains a foundational tool and a gateway to advanced techniques.
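A quick way to see that sensitivity, using NumPy and an arbitrary near-singular matrix:

```python
import numpy as np

# The rows are nearly dependent, so the matrix is ill-conditioned.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

print(np.linalg.cond(A))      # about 4e4, a large condition number
print(np.linalg.solve(A, b))  # approximately [1. 1.]

# A tiny perturbation of b changes the solution drastically.
print(np.linalg.solve(A, b + np.array([0.0, 0.0001])))  # approximately [0. 2.]
```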
Consider a small 2×2 system. Forming the augmented matrix, eliminating the entry below the pivot, and back substituting yields the unknowns one at a time. The calculator executes these steps automatically.
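As one concrete illustration with arbitrary sample coefficients, the snippet below mirrors the hand calculation step by step:

```python
# Illustrative system (sample values): 2x + y = 5 and x + 3y = 10.
# Augmented matrix [A | b]:
M = [[2.0, 1.0, 5.0],
     [1.0, 3.0, 10.0]]

# Eliminate below the pivot: R2 := R2 - (1/2) R1  ->  [0, 2.5, 7.5]
f = M[1][0] / M[0][0]
M[1] = [M[1][j] - f * M[0][j] for j in range(3)]

# Back substitution: y = 7.5 / 2.5 = 3, then x = (5 - 1*3) / 2 = 1.
y = M[1][2] / M[1][1]
x = (M[0][2] - M[0][1] * y) / M[0][0]
print(x, y)   # 1.0 3.0
```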
Enter the coefficients row by row, leaving the third-row fields empty for a 2×2 system. The algorithm builds the augmented matrix, applies partial pivoting, and performs elimination. If a zero pivot remains even after pivoting, the matrix is singular and the system has no unique solution. The solution vector is displayed with four decimal places for clarity.
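A small sketch of that output stage; the helper name format_solution is hypothetical and not part of the calculator:

```python
def format_solution(x, places=4):
    """Render each component of a solution vector with a fixed number of decimals."""
    return ", ".join(f"x{i + 1} = {value:.{places}f}" for i, value in enumerate(x))

print(format_solution([1.0, 3.0]))         # x1 = 1.0000, x2 = 3.0000
print(format_solution([0.5, -2.0, 1.25]))  # x1 = 0.5000, x2 = -2.0000, x3 = 1.2500
```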
By adjusting inputs, you can explore how scaling rows or swapping equations changes the elimination steps. This helps develop intuition for linear independence and the geometry of linear systems, where each equation represents a line in two dimensions or a plane in three.
Gaussian elimination, though centuries old, remains one of the most widely taught algorithms in mathematics and engineering. From basic linear algebra courses to specialized numerical simulations, it exemplifies how systematic manipulation leads to elegant solutions. Mastering it prepares you for more advanced methods and a deeper understanding of matrices, determinants, and eigenvalues.