In parallel computing, we try to make programs run faster by dividing work across multiple processors or cores. Intuitively, it might seem that doubling the number of processors should almost always cut the runtime in half, or that using 32 processors should be roughly 32 times faster than using one. In reality, that rarely happens. Some portions of the code cannot be parallelized, and there is overhead from coordination, communication, and synchronization. These factors place a hard limit on the speedup you can achieve.
Amdahl’s law is a simple but powerful model that describes this limit. It connects three key quantities:

- the fraction of the workload that can be parallelized (p)
- the number of processors (n)
- the resulting speedup S(n)
This calculator implements Amdahl’s law so you can quickly estimate the best-case speedup and efficiency for a given workload and processor count. It is useful for high-performance computing (HPC), parallel programming, performance modeling, and capacity planning.
Consider a program where some part must always run sequentially, while the rest can be perfectly parallelized across n processors. Let:

- p be the fraction of the runtime that can be parallelized (0 ≤ p ≤ 1)
- 1 - p be the inherently serial fraction
- n be the number of processors
If the total runtime on a single processor is normalized to 1, then:

- the serial part still takes time 1 - p, regardless of n
- the parallel part takes time p / n when spread across n processors
So the total runtime on n processors is:
T(n) = (1 - p) + p / n
The speedup is defined as the ratio between the original runtime and the new runtime:
S(n) = T(1) / T(n) = 1 / [ (1 - p) + p / n ]
The central Amdahl’s law speedup formula is:

S(n) = 1 / [ (1 - p) + p / n ]

where:

- S(n) is the theoretical speedup on n processors
- p is the parallelizable fraction of the workload
- n is the number of processors
The parallel efficiency measures how effectively you are using each processor:

E(n) = S(n) / n
Interpreting these values:

- E(n) = 1 means perfect linear speedup, with every processor fully utilized
- E(n) well below 1 means much of the added processing capacity sits idle while the serial portion runs
- a steadily falling E(n) as n grows is the signature of diminishing returns
The calculator typically reports two primary outputs: speedup and efficiency.
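Both outputs follow directly from the formulas above. Here is a minimal Python sketch (the function names are illustrative, not taken from any particular library):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Theoretical speedup S(n) = 1 / ((1 - p) + p / n)."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must be between 0 and 1")
    return 1.0 / ((1.0 - p) + p / n)

def parallel_efficiency(p: float, n: int) -> float:
    """Efficiency E(n) = S(n) / n, i.e. speedup per processor."""
    return amdahl_speedup(p, n) / n

print(amdahl_speedup(0.9, 4))       # ~3.08
print(parallel_efficiency(0.9, 4))  # ~0.77
```

Any implementation along these lines reproduces the numbers the calculator reports.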
As you increase n while holding p constant:

- the speedup S(n) keeps rising, but approaches the ceiling 1 / (1 - p)
- the efficiency E(n) = S(n) / n steadily falls toward 0
This diminishing-return behavior is the core insight of Amdahl’s law: even a small serial fraction can cap your speedup at a relatively low value, no matter how many processors you add.
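The ceiling follows from letting n grow without bound, since p / n vanishes and S(n) approaches 1 / (1 - p). A short sketch (p = 0.95 is an arbitrary illustration) makes the plateau visible:

```python
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.95                      # 5% serial work caps speedup at 20x
ceiling = 1.0 / (1.0 - p)
for n in (8, 64, 512, 4096):
    print(f"n={n:5d}  S={amdahl_speedup(p, n):6.2f}  (ceiling {ceiling:.0f}x)")
```

Even at 4096 processors the speedup never reaches the 20x ceiling.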
Suppose a program has 90% of its workload parallelizable, so p = 0.9, and we run it on n = 4 processors.
T(4) = (1 - 0.9) + 0.9 / 4 = 0.1 + 0.225 = 0.325

S(4) = 1 / 0.325 ≈ 3.08

E = S / n = 3.08 / 4 ≈ 0.77

This means that:

- the program runs about 3.08 times faster on 4 processors than on 1
- on average, each processor is only about 77% utilized
Now consider using n = 16 processors with the same p = 0.9:
T(16) = 0.1 + 0.9 / 16 = 0.1 + 0.05625 = 0.15625

S(16) = 1 / 0.15625 = 6.4

E = 6.4 / 16 = 0.4

Even though you quadrupled the processor count from 4 to 16, the speedup increased only from 3.08 to 6.4, and the efficiency dropped from 0.77 to 0.4. This illustrates the strong limiting effect of the serial part of the workload.
The table below shows theoretical speedup values for selected parallel fractions and processor counts using Amdahl’s law.
| Parallel Fraction (p) | n = 2 | n = 4 | n = 8 | n = 16 |
|---|---|---|---|---|
| 0.5 | 1.33 | 1.60 | 1.78 | 1.88 |
| 0.9 | 1.82 | 3.08 | 4.71 | 6.40 |
| 0.99 | 1.98 | 3.88 | 7.48 | 13.91 |
| 0.999 | 1.998 | 3.988 | 7.94 | 15.76 |
You can see that even with very high parallel fractions, the speedup grows sublinearly with n. For example, with p = 0.9, doubling the processor count from 8 to 16 increases the speedup only from about 4.71 to 6.40, far short of doubling.
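The table above can be reproduced with a few lines of Python, a sketch that simply loops the speedup formula over the same fractions and processor counts:

```python
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

fractions = (0.5, 0.9, 0.99, 0.999)
counts = (2, 4, 8, 16)
for p in fractions:
    cells = "  ".join(f"{amdahl_speedup(p, n):6.2f}" for n in counts)
    print(f"p={p:<6}  {cells}")
```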
Amdahl’s law is intentionally simple. It is most useful as a theoretical upper bound and a way to build intuition, not as a precise performance predictor. This calculator follows the same assumptions:

- the parallel portion divides perfectly across all n processors
- there is no communication, synchronization, or load-balancing overhead
- the problem size stays fixed as processors are added
- all processors are identical in performance
Because of these assumptions, the calculator is best used to:

- estimate an upper bound on the speedup a workload can achieve
- compare scaling behavior across different parallel fractions
- judge the point at which adding more processors stops paying off
Always compare the calculator’s predictions with empirical measurements from profiling and benchmarking on your actual system.
Amdahl’s law assumes a fixed problem size and focuses on how the serial portion of a workload limits speedup as you increase the number of processors. It is most relevant when you keep the total work constant and want to know the maximum acceleration you can expect.
Gustafson’s law, in contrast, assumes that when you have more processors you will typically run larger problems. Under this assumption, it is often possible to achieve near-linear speedup, because the parallel part grows with the number of processors while the serial overhead stays roughly constant. In short, Amdahl’s law emphasizes limits for fixed workloads, while Gustafson’s law emphasizes scaling for growing workloads.
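The contrast is easy to see numerically. In this sketch, Gustafson's scaled speedup is taken as (1 - p) + p * n, where p is the parallel fraction of the scaled workload (note that p plays a slightly different role in the two laws, so the comparison is illustrative rather than exact):

```python
def amdahl(p: float, n: int) -> float:
    """Fixed workload: speedup capped by the serial fraction."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson(p: float, n: int) -> float:
    """Scaled workload: parallel work grows with n."""
    return (1.0 - p) + p * n

p, n = 0.9, 64
print(f"Amdahl:    {amdahl(p, n):.2f}x")     # fixed problem size
print(f"Gustafson: {gustafson(p, n):.2f}x")  # problem grows with n
```

With p = 0.9 and 64 processors, the fixed-size speedup stays under 9x while the scaled speedup is close to 58x.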
The parallelizable fraction p represents the portion of your program’s runtime that can, in principle, be executed in parallel. A value of p = 0.8 means that 80% of the execution time is parallelizable and the remaining 20% is inherently serial.
To estimate p in practice, you can:

- profile the program and sum the time spent in regions that could run in parallel
- measure the speedup at a known processor count and solve Amdahl’s law for p
- inspect the code structure: independent loop iterations and data-parallel stages are typical candidates for the parallel fraction
Because p is an idealized quantity, treat your estimate as a guideline rather than an exact value.
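One practical way to estimate p is to measure the speedup at a known processor count and invert the speedup formula algebraically. A sketch of that inversion (the function name is illustrative):

```python
def estimate_p(measured_speedup: float, n: int) -> float:
    """Invert S = 1 / ((1 - p) + p / n) to recover the implied
    parallel fraction p from a measured speedup on n processors."""
    return (1.0 - 1.0 / measured_speedup) / (1.0 - 1.0 / n)

# E.g. a 6.4x speedup observed on 16 processors implies p = 0.9
print(estimate_p(6.4, 16))
```

Because real measurements include overheads that Amdahl's law ignores, the estimate this yields is effectively a lower bound on the true parallelizable fraction.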
No. Amdahl’s law is a simplified model that gives an upper bound on speedup under ideal conditions. Real performance is almost always lower because of communication overhead, load imbalance, memory contention, and other factors. Use this calculator to understand trends and limits, then validate with actual measurements on your hardware.
Use Amdahl’s law when the total workload is fixed and you want to know whether adding more processors is beneficial for that specific problem size. Use Gustafson’s law when you expect to increase the workload as more processors become available (for example, running higher-resolution simulations or processing larger datasets). Considering both perspectives helps you make more informed scalability decisions.