A Gram matrix (also called a Gramian) captures how a collection of vectors relates through their inner products. If you have vectors $v_1, v_2, \dots, v_k$ in a real inner product space, the Gram matrix is the $k \times k$ matrix whose $(i, j)$-entry is the inner product of the $i$-th and $j$-th vectors.
In symbols, the entries are defined by

$$G_{ij} = \langle v_i, v_j \rangle.$$

For real column vectors this inner product is just the dot product, so you can think of each entry as $G_{ij} = v_i \cdot v_j = v_i^T v_j$. The matrix is always symmetric (because $\langle v_i, v_j \rangle = \langle v_j, v_i \rangle$) and positive semi-definite, which means all its eigenvalues are non-negative.
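If you want to check these two properties numerically, the following NumPy sketch (using arbitrary example vectors, not tied to this calculator) builds a Gram matrix entry by entry and verifies symmetry and non-negative eigenvalues:

```python
import numpy as np

# Arbitrary example vectors whose pairwise inner products we want.
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([4.0, 5.0, 6.0])
v3 = np.array([1.0, 0.0, -1.0])
vectors = [v1, v2, v3]

# Build G entry by entry: G[i, j] = <v_i, v_j>.
k = len(vectors)
G = np.empty((k, k))
for i in range(k):
    for j in range(k):
        G[i, j] = np.dot(vectors[i], vectors[j])

# Symmetry: <v_i, v_j> = <v_j, v_i>.
assert np.allclose(G, G.T)

# Positive semi-definiteness: all eigenvalues are non-negative.
# eigvalsh is appropriate because G is symmetric.
eigenvalues = np.linalg.eigvalsh(G)
assert np.all(eigenvalues >= -1e-10)  # small tolerance for floating-point error
print(G)
```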
Geometrically, the diagonal entries give the squared lengths of the vectors, while each off-diagonal entry, once divided by the lengths of the two vectors involved, gives the cosine of the angle between them. This makes the Gram matrix a compact summary of lengths, angles, and linear dependence in a set of vectors.
Follow these steps to compute the Gram matrix for up to four vectors:
1. Choose how many vectors you want to compare (up to four) and their dimension.
2. Type the components of each vector into the input fields.
3. Run the calculation to see the resulting Gram matrix, with one row and column per vector.
You can interpret the result as a summary of how similar your vectors are. Off-diagonal entries that are large relative to the vectors' lengths indicate that two vectors point in a similar direction, while entries near zero suggest they are nearly orthogonal.
Suppose you enter vectors $v_1, v_2, \dots, v_k$, each living in $\mathbb{R}^n$. Write each vector as $v_i = (v_{i1}, v_{i2}, \dots, v_{in})$. The dot product of two vectors $v_i$ and $v_j$ is

$$v_i \cdot v_j = \sum_{m=1}^{n} v_{im}\, v_{jm}.$$

The Gram matrix is then the $k \times k$ matrix whose entries are

$$G_{ij} = v_i \cdot v_j.$$

In matrix form, if you arrange your vectors as the columns of a matrix $A$, then the Gram matrix is simply $G = A^T A$. Many algorithms in linear algebra, statistics, and machine learning rely on this construction.
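In NumPy the $A^T A$ construction is a single line; the sketch below, with made-up example vectors, confirms that it matches the entry-by-entry sum formula:

```python
import numpy as np

# Stack the vectors as columns of A (shape n x k).
A = np.column_stack([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0],
                     [1.0, 0.0, -1.0]])

# Gram matrix in one step: G = A^T A.
G = A.T @ A

# Cross-check a single entry against the sum formula:
# G[0, 1] = sum_m A[m, 0] * A[m, 1].
assert np.isclose(G[0, 1], np.sum(A[:, 0] * A[:, 1]))
print(G)
```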
Consider two vectors in $\mathbb{R}^3$:

$$u = (1, 2, 3), \qquad v = (4, 5, 6).$$

Compute their dot products:

$$u \cdot u = 1^2 + 2^2 + 3^2 = 14, \qquad u \cdot v = 1 \cdot 4 + 2 \cdot 5 + 3 \cdot 6 = 32, \qquad v \cdot v = 4^2 + 5^2 + 6^2 = 77.$$

The Gram matrix for $u$ and $v$ is therefore

$$G = \begin{pmatrix} 14 & 32 \\ 32 & 77 \end{pmatrix}.$$
The diagonal entries (14 and 77) are the squared lengths $\|u\|^2$ and $\|v\|^2$. The off-diagonal entries (both 32) measure how aligned the vectors are. If $u$ and $v$ were orthogonal, these entries would be zero.
The determinant of this Gram matrix, $\det G = 14 \cdot 77 - 32^2 = 54$, is the squared area of the parallelogram spanned by $u$ and $v$. A determinant of zero would indicate that the vectors are linearly dependent.
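You can reproduce every number in this example with a few lines of NumPy:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

A = np.column_stack([u, v])  # 3 x 2 matrix with u and v as columns
G = A.T @ A

print(G)                           # [[14. 32.] [32. 77.]]
print(np.linalg.det(G))            # 54.0 (up to floating-point error), squared area
print(np.sqrt(np.linalg.det(G)))   # area of the parallelogram spanned by u and v
```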
Once you have computed a Gram matrix, you can read several geometric and algebraic properties directly from it:

- The diagonal entries are the squared lengths of the vectors.
- Each off-diagonal entry, divided by the lengths of the two vectors involved, is the cosine of the angle between them.
- The determinant is the squared volume of the parallelepiped spanned by the vectors (the squared area in the two-vector case).
- The determinant is nonzero exactly when the vectors are linearly independent.
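The short sketch below (again with arbitrary example vectors) reads each of these properties off a computed Gram matrix:

```python
import numpy as np

A = np.column_stack([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0]])
G = A.T @ A

lengths = np.sqrt(np.diag(G))                     # vector lengths from the diagonal
cos_angle = G[0, 1] / (lengths[0] * lengths[1])   # cosine of the angle between them
volume_sq = np.linalg.det(G)                      # squared area of the parallelogram
independent = not np.isclose(volume_sq, 0.0)      # nonzero det <=> independence

print(lengths, cos_angle, volume_sq, independent)
```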
The Gram matrix is closely related to other common matrices in linear algebra and statistics. The table below summarizes some key differences.
| Matrix type | Definition | Captures | Typical use cases |
|---|---|---|---|
| Gram matrix | $G = A^T A$ for a matrix $A$ whose columns are the vectors | Inner products, lengths, angles, linear independence | Geometry of vector sets, kernel methods, basis analysis |
| Covariance matrix | Centered Gram matrix scaled by sample size | Variances and covariances of random variables | Statistics, data analysis, PCA |
| Kernel (Gram) matrix | Entries from a kernel function | Similarities in an implicit feature space | Support vector machines, Gaussian processes, kernel PCA |
In fact, many authors refer to a kernel matrix as a Gram matrix computed in a higher-dimensional feature space. The calculator on this page works with explicit real-valued vectors and the standard dot product.
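To illustrate the kernel variant, here is a minimal sketch using an RBF (Gaussian) kernel; the kernel choice and the bandwidth `gamma` are assumptions for the example, not part of this tool:

```python
import numpy as np

def rbf_kernel_gram(X, gamma=0.5):
    """Kernel Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).

    X has one data point per row; gamma is an assumed bandwidth.
    """
    # Pairwise squared distances via broadcasting.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq_dists)

X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [5.0, 5.0]])
K = rbf_kernel_gram(X)
print(K)  # symmetric and positive semi-definite, like an ordinary Gram matrix
```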
Gram matrices appear in many areas of mathematics, data science, and engineering:

- Studying the geometry of vector sets: lengths, angles, areas, and volumes.
- Testing linear independence and analyzing bases.
- Kernel methods in machine learning, such as support vector machines, Gaussian processes, and kernel PCA.
- Statistics and data analysis, where the covariance matrix is a centered, rescaled Gram matrix.
This online tool is designed to be simple and educational. Keep the following assumptions and limitations in mind when interpreting the results:

- All components must be real numbers; complex vectors and the Hermitian inner product are not supported.
- The standard dot product is used; other inner products are not available.
- At most four vectors can be entered, so the tool is meant for small, hand-checkable examples.
Within these limits, the tool gives a quick and transparent way to explore inner products, lengths, angles, and independence among a small set of vectors.
A Gram matrix summarizes all pairwise inner products of a set of vectors. It is used to study the geometry of vectors (lengths, angles, volumes), analyze linear independence, and form the basis of many algorithms in statistics and machine learning.
A covariance matrix measures how random variables vary together and is built from centered data, while a Gram matrix is just the inner products of raw vectors. For data points arranged as rows of a matrix, the covariance matrix is a centered, rescaled version of the Gram matrix.
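One way to see this relationship concretely, sketched with random example data (the $1/(n-1)$ scaling matches NumPy's default covariance):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))  # 50 data points (rows), 3 variables (columns)

# Center each column, then form the Gram matrix of the columns and rescale.
Xc = X - X.mean(axis=0)
cov_from_gram = (Xc.T @ Xc) / (X.shape[0] - 1)

# Matches NumPy's covariance (rowvar=False treats columns as variables).
assert np.allclose(cov_from_gram, np.cov(X, rowvar=False))
```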
No. This tool assumes all components are real numbers and uses the standard dot product. For complex vectors one would use the Hermitian inner product, which conjugates one of the vectors; that generalization is not implemented here.
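For completeness, here is a minimal sketch of what the complex generalization would look like (conjugating one factor), which this tool does not perform:

```python
import numpy as np

A = np.array([[1 + 2j, 0 + 1j],
              [3 - 1j, 2 + 0j]])  # complex vectors as columns

# Hermitian Gram matrix: conjugate one factor, so G[i, j] = <v_i, v_j>.
G = A.conj().T @ A

# G is Hermitian (equal to its own conjugate transpose) with a real diagonal.
assert np.allclose(G, G.conj().T)
print(np.diag(G).real)  # squared lengths |v_i|^2
```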