Matrices are a compact way to represent linear relationships. They show up whenever you want to transform vectors, solve multiple linear equations at once, or describe systems in physics, engineering, statistics, and computer graphics. A square matrix A is said to have an inverse (written A⁻¹) if there exists another matrix such that:
A · A⁻¹ = I and A⁻¹ · A = I,
where I is the identity matrix (1s on the diagonal and 0s elsewhere). Conceptually, multiplying by A applies a linear transformation, and multiplying by A⁻¹ “undoes” it. This is the matrix analogue of multiplying by 1/x to undo multiplication by x.
Not every matrix has an inverse. The key test is the determinant. If det(A) ≠ 0, then A is invertible (also called nonsingular). If det(A) = 0, then A is singular and no inverse exists.
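The determinant test is easy to run numerically. A quick check with numpy (assuming numpy is available; the two matrices here are illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # second row is 2x the first

det_A = np.linalg.det(A)  # close to -2: nonzero, so A is invertible
det_S = np.linalg.det(S)  # close to 0: S is singular, no inverse exists
print(det_A, det_S)
```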
For a 2×2 matrix
A = [[a, b], [c, d]], the determinant is det(A) = ad − bc. If ad − bc ≠ 0, then:
A⁻¹ = (1/(ad − bc)) · [[d, −b], [−c, a]].
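The 2×2 closed-form inverse, A⁻¹ = (1/(ad − bc)) · [[d, −b], [−c, a]], is a few lines of Python (a minimal sketch; the function name inverse_2x2 is my own):

```python
def inverse_2x2(a, b, c, d):
    """Invert [[a, b], [c, d]] via the closed-form 2x2 formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular (ad - bc = 0)")
    return [[d / det, -b / det],
            [-c / det, a / det]]

# Example: A = [[1, 2], [3, 4]], det = -2
print(inverse_2x2(1, 2, 3, 4))  # [[-2.0, 1.0], [1.5, -0.5]]
```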
For a 3×3 matrix, a standard symbolic expression is:
A⁻¹ = adj(A) / det(A),
where adj(A) is the adjugate (transpose of the cofactor matrix). Practically, you compute each cofactor (a signed 2×2 minor determinant), assemble the cofactor matrix, transpose it to obtain adj(A), and divide every entry by det(A).
This calculator automates those steps for you.
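Those cofactor-and-adjugate steps can be sketched for a 3×3 matrix in plain Python (an illustrative sketch; the helper names det_2x2 and inverse_3x3 are my own):

```python
def det_2x2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inverse_3x3(A):
    """Invert a 3x3 matrix via cofactors and the adjugate."""
    # Cofactor C[i][j] = (-1)^(i+j) * minor(i, j)
    C = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            minor = [[A[r][c] for c in range(3) if c != j]
                     for r in range(3) if r != i]
            C[i][j] = (-1) ** (i + j) * det_2x2(minor)
    # Expand det(A) along the first row
    det = sum(A[0][j] * C[0][j] for j in range(3))
    if det == 0:
        raise ValueError("matrix is singular")
    # adj(A) is the transpose of C; A^-1 = adj(A) / det(A)
    return [[C[j][i] / det for j in range(3)] for i in range(3)]

A = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]  # det(A) = 1
print(inverse_3x3(A))  # [[-24.0, 18.0, 5.0], [20.0, -15.0, -4.0], [-5.0, 4.0, 1.0]]
```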
Suppose:
A = [[1, 2], [3, 4]].
Compute the determinant:
det(A) = (1)(4) − (2)(3) = 4 − 6 = −2 ≠ 0, so the inverse exists.
Then:
A⁻¹ = (1/−2) · [[4, −2], [−3, 1]] = [[−2, 1], [1.5, −0.5]].
You can verify:
[[1, 2], [3, 4]] · [[−2, 1], [1.5, −0.5]] = [[1, 0], [0, 1]].
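The same worked example can be checked with numpy (assuming numpy is installed):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A_inv = np.linalg.inv(A)
print(A_inv)                               # approx [[-2, 1], [1.5, -0.5]]
print(np.allclose(A @ A_inv, np.eye(2)))   # product is the identity
```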
| Approach | Best for | Pros | Cons |
|---|---|---|---|
| Closed-form 2×2 formula | Hand calculations for 2×2 matrices | Fast, simple, minimal steps | Only applies to 2×2 |
| Adjugate/cofactor method (3×3) | Small matrices; learning linear algebra | Exact symbolic structure; educational | Error-prone by hand; many intermediate determinants |
| Gaussian elimination (row reduction) | General manual procedure; sanity checking | Works beyond 3×3; reveals singularity via pivots | Longer by hand; needs careful arithmetic |
| Numerical methods (pivoting/SVD) | Near-singular or larger matrices in applications | More stable in floating-point arithmetic | More complex; typically needs a library/tooling |
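The row-reduction row of the table can be made concrete: augment A with the identity and reduce [A | I] to [I | A⁻¹]. A minimal Gauss-Jordan sketch with partial pivoting (illustrative, not production code):

```python
def invert(A):
    """Invert a square matrix by reducing [A | I] to [I | A^-1]."""
    n = len(A)
    # Build the augmented matrix [A | I]
    M = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: pick the largest-magnitude pivot in this column
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular (zero pivot)")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row, then eliminate the column elsewhere
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # The right half of the augmented matrix is now A^-1
    return [row[n:] for row in M]

print(invert([[1, 2], [3, 4]]))  # approx [[-2, 1], [1.5, -0.5]]
```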
**When does a matrix have an inverse?** An inverse exists exactly when det(A) ≠ 0.
**What does “singular” mean?** Singular means det(A) = 0: the matrix collapses space in some direction, so no inverse transformation can undo it.
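The “collapse” is visible directly: a singular matrix sends two different inputs to the same output, so no map can recover a unique input. A small numpy illustration (the matrix and vectors are my own examples):

```python
import numpy as np

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # second row = 2 * first row, det(S) = 0
x1 = np.array([2.0, 0.0])
x2 = np.array([0.0, 1.0])

# Two different inputs collapse to the same output...
print(S @ x1, S @ x2)  # both [2. 4.]
# ...so nothing can map [2, 4] back to a unique input: S has no inverse.
```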
**How do I solve Ax = b with an inverse?** If A is invertible, then x = A⁻¹b (matrix-vector multiplication).
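In code, forming A⁻¹b works, though in practice a solver routine is usually preferred over computing the inverse explicitly. A sketch with numpy (the values of A and b are my own example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 6.0])

x_via_inverse = np.linalg.inv(A) @ b   # x = A^-1 b
x_via_solve = np.linalg.solve(A, b)    # preferred: more stable, no explicit inverse
print(x_via_solve)                     # approx [-4, 4.5]
print(np.allclose(x_via_inverse, x_via_solve))
```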
**Why are my results huge or unstable?** That usually means the determinant is close to 0 and the matrix is ill-conditioned: small input errors can cause large output changes.
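Ill-conditioning is easy to demonstrate: perturb the right-hand side of a nearly singular system by a tiny amount and watch the solution jump. A numpy sketch (the matrix here is a contrived example with det ≈ 1e-4):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [1.0, 2.0001]])          # nearly singular
print(np.linalg.det(A))                # tiny: about 1e-4
print(np.linalg.cond(A))               # huge condition number: ill-conditioned

b = np.array([3.0, 3.0001])
b_perturbed = b + np.array([0.0, 1e-4])  # a change in the 4th decimal place

x1 = np.linalg.solve(A, b)             # approx [1, 1]
x2 = np.linalg.solve(A, b_perturbed)   # approx [-1, 2]: a tiny input change
print(x1, x2)                          # flipped the answer entirely
```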
**Are the results exact?** The computation is numerical (floating-point). For integers and “nice” matrices you may see exact-looking decimals, but the underlying computation is approximate.
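If you do need exact answers for rational entries, you can run the same 2×2 formula in exact arithmetic with Python's standard-library fractions module (a sketch, not something this calculator does):

```python
from fractions import Fraction

# A = [[1, 2], [3, 4]] with exact rational arithmetic
a, b, c, d = Fraction(1), Fraction(2), Fraction(3), Fraction(4)
det = a * d - b * c          # exactly -2, no rounding
inv = [[d / det, -b / det],
       [-c / det, a / det]]
print(inv)  # [[Fraction(-2, 1), Fraction(1, 1)], [Fraction(3, 2), Fraction(-1, 2)]]
```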