When faced with a large system of linear equations $Ax = b$ where $A$ is symmetric and positive definite, direct methods like Gaussian elimination may be computationally costly or memory-intensive. The conjugate gradient (CG) method offers an efficient alternative by iteratively improving an approximate solution without storing a full matrix factorization. This property makes CG indispensable in solving sparse systems that arise from discretizing partial differential equations, especially in engineering and physics simulations.
The CG method treats solving the linear system as minimizing the quadratic function $f(x) = \tfrac{1}{2}x^{T}Ax - b^{T}x$. Starting from an initial guess, the algorithm proceeds along conjugate directions that ensure each step eliminates error components independently with respect to the $A$-inner product. The residual vector $r = b - Ax$ guides these directions, with updates chosen to minimize the quadratic form at each stage. The beauty of CG is that in exact arithmetic it converges in at most $n$ steps for an $n$-dimensional system, though rounding errors in practice typically necessitate more iterations.
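Why minimizing this quadratic is equivalent to solving the linear system follows from a one-line computation: the gradient of $f$ is the negative of the residual,

$$\nabla f(x) = Ax - b = -r, \qquad \text{so} \qquad \nabla f(x) = 0 \iff Ax = b,$$

and because $A$ is positive definite, this stationary point is the unique minimizer.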
After computing the initial residual $r_0 = b - Ax_0$, we choose the first search direction $p_0 = r_0$. The step size is obtained by

$$\alpha_k = \frac{r_k^{T} r_k}{p_k^{T} A p_k}.$$

The new approximation becomes $x_{k+1} = x_k + \alpha_k p_k$. We then update the residual, $r_{k+1} = r_k - \alpha_k A p_k$, and compute a coefficient that ensures the next search direction is conjugate:

$$\beta_k = \frac{r_{k+1}^{T} r_{k+1}}{r_k^{T} r_k}.$$

Finally, $p_{k+1} = r_{k+1} + \beta_k p_k$. Through these updates, the algorithm steadily approaches the true solution while preserving conjugacy between search directions.
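These updates translate almost line for line into code. Here is a minimal sketch in Python with NumPy; the function name and the tolerance parameter are illustrative choices, not the calculator's actual script:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive-definite A via CG."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x                      # initial residual r_0 = b - A x_0
    p = r.copy()                       # first search direction p_0 = r_0
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step size alpha_k
        x = x + alpha * p              # x_{k+1} = x_k + alpha_k p_k
        r = r - alpha * Ap             # r_{k+1} = r_k - alpha_k A p_k
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # stop once the residual is tiny
            break
        beta = rs_new / rs_old         # coefficient enforcing A-conjugacy
        p = r + beta * p               # p_{k+1} = r_{k+1} + beta_k p_k
        rs_old = rs_new
    return x
```

Note that the matrix appears only inside the product `A @ p`, which is why the method extends so naturally to sparse storage formats.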
To illustrate, consider solving a small symmetric positive-definite system. Starting with $x_0 = 0$, the algorithm first sets $r_0$ equal to $b$, and $p_0$ to $r_0$. After calculating $\alpha_0$, the method updates the guess and residual. Within a few iterations, the approximation converges to the exact solution; a concrete run is shown below.
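For a concrete run, here is the sketch above applied to a sample 2×2 system (this particular matrix and right-hand side are illustrative choices, not necessarily the page's own example):

```python
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # symmetric and positive definite
b = np.array([1.0, 2.0])

x = conjugate_gradient(A, b)
print(x)   # ~[0.09090909, 0.63636364], i.e. (1/11, 7/11)
```

Because the system is 2-dimensional, exact arithmetic reaches the solution in two iterations, matching the at-most-$n$-steps guarantee.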
Enter the upper triangular part of your matrix (the calculator assumes symmetry) and the right-hand side vector. You may leave the third row and column blank to solve a 2×2 system. Specify how many iterations to perform. When you press Solve, the script executes the conjugate gradient algorithm starting from the zero vector. The final approximation appears with six decimal places. If the matrix is not positive definite, the method may fail to converge—this simple implementation does not include checks for definiteness.
The conjugate gradient method is celebrated for solving large, sparse, symmetric positive-definite systems with modest computational resources. Because each iteration requires only matrix-vector products, it is well suited for matrices where most entries are zero. Modern scientific computing libraries use CG extensively for finite element analysis, fluid dynamics, and machine learning optimization. Beyond these practical applications, the method illuminates deeper connections between linear algebra and optimization, showcasing how minimizing a quadratic function yields solutions to linear equations.
Experimenting with this calculator will help you appreciate how the residual decreases and how conjugate directions accelerate convergence relative to simpler methods. You can try different matrices and observe the progression of approximations. With practice, you will see how preconditioning—transforming the system to improve convergence—builds upon the CG framework to handle extremely ill-conditioned problems.
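As a taste of that extension, here is a minimal sketch of CG with a Jacobi (diagonal) preconditioner, one of the simplest choices; the preconditioner and the function name are illustrative assumptions, not part of this calculator:

```python
def preconditioned_cg(A, b, tol=1e-10, max_iter=None):
    """CG with a Jacobi (diagonal) preconditioner M = diag(A)."""
    n = len(b)
    M_inv = 1.0 / np.diag(A)           # applying M^{-1} is elementwise here
    x = np.zeros(n)
    r = b - A @ x
    z = M_inv * r                      # preconditioned residual z_0
    p = z.copy()
    rz_old = r @ z
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rz_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r                  # re-apply the preconditioner
        rz_new = r @ z
        p = z + (rz_new / rz_old) * p
        rz_old = rz_new
    return x
```

A good preconditioner clusters the eigenvalues of $M^{-1}A$, which is what accelerates convergence on ill-conditioned problems.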
Developed in the 1950s, the conjugate gradient algorithm remains a cornerstone of numerical linear algebra. Its blend of simplicity, efficiency, and elegance has influenced countless other iterative techniques. By exploring the method here, you gain a stepping stone toward advanced topics like Krylov subspace methods, multi-grid solvers, and optimization strategies used throughout computational science.