Conjugate Gradient Solver

JJ Ben-Joseph


The Need for Conjugate Gradient

When faced with a large system of linear equations $Ax = b$ where $A$ is symmetric and positive definite, direct methods like Gaussian elimination may be computationally costly or memory-intensive. The conjugate gradient (CG) method offers an efficient alternative by iteratively improving an approximate solution without storing a full matrix factorization. This property makes CG indispensable for the sparse systems that arise from discretizing partial differential equations, especially in engineering and physics simulations.

Core Idea of the Algorithm

The CG method treats solving the linear system as minimizing the quadratic function $f(x) = \tfrac{1}{2} x^\top A x - b^\top x$. Starting from an initial guess, the algorithm proceeds along conjugate directions that ensure each step eliminates error components independently with respect to the $A$-inner product. The residual vector $r$ guides these directions, with updates chosen to minimize the quadratic form at each stage. The beauty of CG is that in exact arithmetic it converges in at most $n$ steps for an $n$-dimensional system, though rounding errors in practice typically necessitate more iterations.
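The link between the quadratic and the linear system is worth making explicit: because $A$ is symmetric positive definite, $f$ is strictly convex, so its unique minimizer is exactly the point where the gradient vanishes.

```latex
\nabla f(x) = Ax - b = 0
\quad\Longleftrightarrow\quad
Ax = b
```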

Update Formulas

After computing the residual $r^{(k)} = b - A x^{(k)}$, we choose a search direction $p_k$. The step size $\alpha_k$ is obtained by

$$\alpha_k = \frac{r^{(k)\top} r^{(k)}}{p_k^\top A\, p_k}.$$

The new approximation becomes $x^{(k+1)} = x^{(k)} + \alpha_k p_k$, and the residual is updated via $r^{(k+1)} = r^{(k)} - \alpha_k A p_k$. We then compute a coefficient $\beta_k$ that ensures the next search direction is conjugate to the previous ones:

$$\beta_k = \frac{r^{(k+1)\top} r^{(k+1)}}{r^{(k)\top} r^{(k)}}.$$

Finally, $p_{k+1} = r^{(k+1)} + \beta_k p_k$. Through these updates, the algorithm steadily approaches the true solution while preserving conjugacy between search directions.
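To make the loop concrete, here is a minimal NumPy sketch of these exact updates. The function name, tolerance, and iteration cap are illustrative choices, not part of the calculator above.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, max_iter=100, tol=1e-10):
    """Solve Ax = b for symmetric positive-definite A using the
    update formulas above."""
    x = np.zeros(len(b)) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x            # r(0) = b - A x(0)
    p = r.copy()             # p0 = r(0)
    rs_old = r @ r           # r(k)^T r(k)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)    # alpha_k = r^T r / (p^T A p)
        x = x + alpha * p            # x(k+1) = x(k) + alpha_k p_k
        r = r - alpha * Ap           # r(k+1) = r(k) - alpha_k A p_k
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:    # residual small enough: done
            break
        beta = rs_new / rs_old       # beta_k keeps p(k+1) A-conjugate
        p = r + beta * p             # p(k+1) = r(k+1) + beta_k p_k
        rs_old = rs_new
    return x
```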

Example of the Method

To illustrate, consider solving

$$\begin{pmatrix} 4 & 1 \\ 1 & 3 \end{pmatrix} x = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.$$

Starting with $x^{(0)} = (0, 0)^\top$, the algorithm first sets $r^{(0)}$ equal to $b$, and $p_0$ to $r^{(0)}$. After calculating $\alpha_0$, the method updates the guess and residual. Since the system is two-dimensional, exact arithmetic converges in at most two iterations, and the computed solution $\approx (0.0909, 0.6364)^\top$ matches the exact solution $(1/11, 7/11)^\top$.
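Assuming the conjugate_gradient sketch above is in scope, this example can be checked in a couple of lines:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = conjugate_gradient(A, b)   # sketch defined earlier
print(np.round(x, 4))          # [0.0909 0.6364], i.e. (1/11, 7/11)
```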

Using the Calculator

Enter the upper triangular part of your matrix (the calculator assumes symmetry) and the right-hand side vector. You may leave the third row and column blank to solve a 2×2 system. Specify how many iterations to perform. When you press Solve, the script executes the conjugate gradient algorithm starting from the zero vector. The final approximation appears with six decimal places. If the matrix is not positive definite, the method may fail to converge—this simple implementation does not include checks for definiteness.
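The calculator's source is not reproduced here, so the following is only a plausible sketch, under the assumption that it mirrors the upper-triangular entries you type into a full symmetric matrix before running CG:

```python
import numpy as np

def symmetrize_upper(upper):
    """Mirror upper-triangular entries (diagonal included) across the
    diagonal to build the full symmetric matrix. Entries below the
    diagonal in the input are ignored."""
    U = np.triu(np.asarray(upper, dtype=float))
    return U + U.T - np.diag(np.diag(U))

# Typing only the upper triangle of the 2x2 example:
print(symmetrize_upper([[4, 1],
                        [0, 3]]))
# [[4. 1.]
#  [1. 3.]]
```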

Advantages and Broader Impact

The conjugate gradient method is celebrated for solving large, sparse, symmetric positive-definite systems with modest computational resources. Because each iteration requires only matrix-vector products, it is well suited for matrices where most entries are zero. Modern scientific computing libraries use CG extensively for finite element analysis, fluid dynamics, and machine learning optimization. Beyond these practical applications, the method illuminates deeper connections between linear algebra and optimization, showcasing how minimizing a quadratic function yields solutions to linear equations.
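To see the matrix-vector-product property pay off at scale, one option is SciPy's built-in cg routine, which accepts a sparse matrix and never densifies it. The tridiagonal test system below is made up purely for illustration:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# A large, well-conditioned SPD tridiagonal system: only ~3n of the
# n^2 entries are nonzero, so each CG iteration costs O(n).
n = 100_000
A = diags([-1, 4, -1], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)
print(info)                       # 0 signals convergence
print(np.linalg.norm(A @ x - b))  # residual norm is tiny
```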

Experimenting with this calculator will help you appreciate how the residual decreases and how conjugate directions accelerate convergence relative to simpler methods. You can try different matrices and observe the progression of approximations. With practice, you will see how preconditioning—transforming the system to improve convergence—builds upon the CG framework to handle extremely ill-conditioned problems.
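As a taste of preconditioning, the experiment below contrasts plain CG with the simplest choice, the Jacobi (diagonal) preconditioner, on an artificial SPD matrix whose diagonal varies widely; exact iteration counts depend on SciPy's default tolerances.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 1000
main = 2.0 + np.arange(n)        # diagonal grows from 2 to n + 1
off = -np.ones(n - 1)
A = diags([off, main, off], [-1, 0, 1], format="csr")
b = np.ones(n)

# Jacobi preconditioner: approximate A^-1 by the inverse diagonal.
M = diags(1.0 / A.diagonal())

def iterations(precond):
    count = []
    cg(A, b, M=precond, callback=lambda xk: count.append(1))
    return len(count)

print("plain CG:  ", iterations(None), "iterations")
print("Jacobi PCG:", iterations(M), "iterations")
```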

Developed in the 1950s, the conjugate gradient algorithm remains a cornerstone of numerical linear algebra. Its blend of simplicity, efficiency, and elegance has influenced countless other iterative techniques. By exploring the method here, you gain a stepping stone toward advanced topics like Krylov subspace methods, multi-grid solvers, and optimization strategies used throughout computational science.

Step-by-Step Worked Example

Consider solving the 3×3 system arising from a discretized heat equation. Let

$$\begin{pmatrix} 4 & 1 & 0 \\ 1 & 3 & 1 \\ 0 & 1 & 2 \end{pmatrix} x = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}.$$

Starting from the zero vector, the initial residual equals $b$, so the first search direction is $p_0 = b = (1, 0, 1)^\top$. The formulas above give $\alpha_0 = \frac{r^{(0)\top} r^{(0)}}{p_0^\top A p_0} = \frac{2}{6} = \frac{1}{3}$, and updating the approximation yields $x^{(1)} = (1/3, 0, 1/3)^\top$. The next residual and search direction follow from the same updates, and the second iteration already reaches the exact solution $(1/3, -1/3, 2/3)^\top \approx (0.3333, -0.3333, 0.6667)^\top$, illustrating the method's efficiency.
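Tracing the same arithmetic in code shows the residual collapsing to zero at the second step; the loop below is a standalone transcript of the updates, not the calculator's own implementation.

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 0.0, 1.0])

x = np.zeros(3)
r = b - A @ x                  # equals b for a zero starting vector
p = r.copy()
for k in range(3):             # at most n = 3 steps in exact arithmetic
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    print(f"step {k + 1}: x = {np.round(x, 4)}, |r| = {np.linalg.norm(r_new):.2e}")
    if np.linalg.norm(r_new) < 1e-12:
        break                  # step 2 already hits (1/3, -1/3, 2/3)
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
```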

Comparison With Other Solvers

The table below contrasts common linear solvers. CG outperforms basic iterative methods on SPD systems, while direct methods excel for small dense matrices.

Method | Best For | Memory Use | Convergence Speed
Conjugate Gradient | Sparse SPD systems | Low | Fast
Gauss-Seidel | Diagonally dominant systems | Low | Moderate
Jacobi | Simple parallel cases | Low | Slow
Gaussian Elimination | Small dense systems | High | Direct (no iteration)

While CG scales well, its reliance on positive definiteness means it may fail for indefinite or nonsymmetric systems. Preconditioners such as incomplete Cholesky factorization can dramatically reduce iteration counts, but they require additional setup time.

Limitations and Assumptions

This implementation omits advanced features like preconditioning, adaptive stopping criteria, and checks for definiteness. Round-off error can degrade conjugacy over many iterations, so real-world codes re-orthogonalize directions or restart periodically. Furthermore, the calculator assumes inputs correspond to a symmetric matrix; asymmetry can lead to divergence. Users tackling nonsymmetric systems might explore the biconjugate gradient or GMRES methods instead.
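A real-world wrapper would guard against these failure modes before iterating. One minimal sketch, assuming the small dense matrices this calculator targets, checks symmetry directly and probes definiteness with a Cholesky attempt:

```python
import numpy as np

def check_system(A, tol=1e-10):
    """Raise early rather than let CG diverge on a bad matrix."""
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T, atol=tol):
        raise ValueError("matrix is not symmetric; try BiCG or GMRES instead")
    try:
        np.linalg.cholesky(A)  # succeeds iff A is symmetric positive definite
    except np.linalg.LinAlgError:
        raise ValueError("matrix is not positive definite; CG may diverge")

check_system([[4, 1], [1, 3]])   # passes silently
```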

Related Calculators

To compare iterative approaches, try the Gauss-Seidel Calculator; for direct solving, see the Gaussian Elimination Calculator. These tools illustrate how algorithm choice affects performance and accuracy.
