The Cayley–Hamilton theorem states that every square matrix satisfies its own characteristic equation. For a 2×2 matrix A with trace tr(A) and determinant det(A), the characteristic polynomial is p(λ) = λ² − tr(A)·λ + det(A). According to the theorem, substituting A for λ yields the zero matrix:
A² − tr(A)·A + det(A)·I = 0.
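A minimal numerical check of this identity, using a hypothetical sample matrix (any real 2×2 matrix would do):

```python
import numpy as np

# Hypothetical sample matrix; any 2x2 real matrix works.
A = np.array([[3.0, 1.0],
              [2.0, 4.0]])

tr = np.trace(A)          # tr(A) = 7
det = np.linalg.det(A)    # det(A) = 10
I = np.eye(2)

# Cayley-Hamilton: A^2 - tr(A)*A + det(A)*I should be the zero matrix.
residual = A @ A - tr * A + det * I
print(np.allclose(residual, 0))  # True
```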
This identity links the eigenvalues of A to its algebraic structure. By studying it, we gain insight into how powers of A reduce to simpler combinations of A and the identity.
Enter four numbers for the matrix entries. When you press the verify button, the script computes the characteristic polynomial coefficients tr(A) and det(A), constructs the left-hand side of the Cayley–Hamilton expression, and displays the resulting matrix. Ideally all entries are zero, modulo rounding error.
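The page's actual script is not shown here; the sketch below mirrors what such a verify routine might do, with hypothetical entry names a, b, c, d for the four inputs:

```python
import numpy as np

def cayley_hamilton_residual(a, b, c, d):
    """Build A from four entries and return A^2 - tr(A)*A + det(A)*I."""
    A = np.array([[a, b], [c, d]], dtype=float)
    tr = a + d            # characteristic coefficient: trace
    det = a * d - b * c   # characteristic coefficient: determinant
    return A @ A - tr * A + det * np.eye(2)

# All residual entries should be ~0, up to rounding error.
residual = cayley_hamilton_residual(1, 2, 3, 4)
print(np.allclose(residual, 0))  # True
```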
This demonstration helps visualize a theorem that often seems abstract in textbooks. By trying different matrices, especially those with repeated eigenvalues or complex eigenpairs, you can see how the algebraic identity holds universally.
Arthur Cayley and William Rowan Hamilton formulated this theorem in the 1850s while studying determinants and quaternion algebra. Their work laid a foundation for modern linear algebra and representation theory. The theorem is a cornerstone in the study of matrix functions, making it possible to evaluate powers or exponentials by reducing high-degree terms.
In control theory and differential equations, the Cayley–Hamilton theorem underpins the computation of state-transition matrices and motivates the use of canonical forms. By expressing powers of A through its characteristic coefficients, one can derive closed-form solutions to linear systems.
Every polynomial in a matrix can be reduced using its characteristic equation. For a 2×2 matrix, any power beyond the first can be rewritten as a combination of A and the identity matrix. This reduction simplifies formulas and reveals deeper symmetries. It also ensures that eigenvalues completely determine the behavior of analytic functions of matrices.
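As a concrete instance of this reduction: A² = tr(A)·A − det(A)·I, and substituting once more gives A³ = (tr(A)² − det(A))·A − tr(A)·det(A)·I. A quick numerical confirmation, with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
t, d = np.trace(A), np.linalg.det(A)
I = np.eye(2)

# A^2 = t*A - d*I, so multiplying by A and substituting again:
# A^3 = (t^2 - d)*A - t*d*I
A3_reduced = (t**2 - d) * A - t * d * I
print(np.allclose(A3_reduced, np.linalg.matrix_power(A, 3)))  # True
```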
The theorem also suggests efficient computational strategies. Instead of computing high powers directly, we can recursively apply the polynomial relation to keep the degree low. This technique proves useful in simulations and symbolic manipulation.
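One such strategy, sketched below: track only the two scalar coefficients p, q in A^n = p·A + q·I. Multiplying by A and substituting A² = tr(A)·A − det(A)·I gives the recurrence p ← tr(A)·p + q, q ← −det(A)·p, so no matrix products are needed until the very end.

```python
import numpy as np

def matrix_power_2x2(A, n):
    """Compute A^n for a 2x2 matrix using only scalar recurrences.

    Writes A^n = p*A + q*I; the recurrence follows from
    A^2 = tr(A)*A - det(A)*I.
    """
    t, d = np.trace(A), np.linalg.det(A)
    p, q = 1.0, 0.0                  # A^1 = 1*A + 0*I
    for _ in range(n - 1):
        p, q = t * p + q, -d * p     # A^(k+1) = (t*p + q)*A - d*p*I
    return p * A + q * np.eye(2)

A = np.array([[1.0, 1.0], [0.0, 2.0]])
print(np.allclose(matrix_power_2x2(A, 8), np.linalg.matrix_power(A, 8)))  # True
```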
Consider a rotation matrix R(θ). The trace is 2 cos θ and the determinant is one. Substituting these values reveals the polynomial identity R² − 2 cos θ · R + I = 0, giving insight into rotational dynamics and periodicity.
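Checking the rotation case numerically, for an arbitrary choice of angle:

```python
import numpy as np

theta = 0.7  # arbitrary angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# tr(R) = 2*cos(theta), det(R) = 1, so Cayley-Hamilton gives
# R^2 - 2*cos(theta)*R + I = 0.
residual = R @ R - 2 * np.cos(theta) * R + np.eye(2)
print(np.allclose(residual, 0))  # True
```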
By exploring matrices with different traces and determinants, you can observe how the left-hand side approaches zero. Numerical error may leave tiny residuals, but the pattern remains clear. This hands-on approach cements the abstract theorem in a concrete setting.
The Cayley–Hamilton theorem extends to matrices of any size, though computing the characteristic polynomial becomes more involved. It connects matrix theory to algebraic geometry, where eigenvalues correspond to roots of polynomials. Understanding this link opens pathways to advanced topics like Jordan canonical form and spectral decomposition.
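The n×n case can be checked the same way: obtain the characteristic polynomial's coefficients (here via NumPy's `np.poly`, which computes them from the eigenvalues) and evaluate the polynomial at the matrix itself. A sketch for a random 4×4 matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# np.poly returns the characteristic polynomial's coefficients,
# highest degree first, from the eigenvalues of A.
coeffs = np.poly(A)

# Evaluate p(A) = c0*A^n + c1*A^(n-1) + ... + cn*I via Horner's rule.
n = A.shape[0]
P = np.zeros_like(A)
for c in coeffs:
    P = P @ A + c * np.eye(n)
print(np.allclose(P, 0))  # True, up to floating-point roundoff
```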
By experimenting with this tool, you develop intuition for how matrix identities arise from algebraic constraints. The interplay between determinants, traces, and polynomial expressions reveals the structured nature of linear transformations.