Amdahl's Law Speedup and Efficiency Calculator

JJ Ben-Joseph

Introduction: Why Amdahl’s Law Matters

In parallel computing, we try to make programs run faster by dividing work across multiple processors or cores. Intuitively, it might seem that doubling the number of processors should almost always cut the runtime in half, or that using 32 processors should be roughly 32 times faster than using one. In reality, that rarely happens. Some portions of the code cannot be parallelized, and there is overhead from coordination, communication, and synchronization. These factors place a hard limit on the speedup you can achieve.

Amdahl’s law is a simple but powerful model that describes this limit. It connects three key quantities:

  - the fraction p of the workload that can be parallelized,
  - the number of processors n, and
  - the resulting speedup S (and efficiency E = S / n).

This calculator implements Amdahl’s law so you can quickly estimate the best-case speedup and efficiency for a given workload and processor count. It is useful for high-performance computing (HPC), parallel programming, performance modeling, and capacity planning.

What Is Amdahl’s Law?

Consider a program where some part must always run sequentially, while the rest can be perfectly parallelized across n processors. Let:

  - p = the fraction of the runtime that can be parallelized (0 ≤ p ≤ 1)
  - 1 - p = the fraction that must run serially
  - n = the number of processors

If the total runtime on a single processor is normalized to 1, then:

  - the serial part takes time 1 - p, and
  - the parallel part takes time p / n.

So the total runtime on n processors is:

T(n) = (1 - p) + p / n

The speedup is defined as the ratio between the original runtime and the new runtime:

S(n) = T(1) / T(n) = 1 / [ (1 - p) + p / n ]
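
These two formulas translate directly into a few lines of Python (the function names here are illustrative, not part of the calculator itself):

```python
def runtime(p: float, n: int) -> float:
    """Normalized runtime T(n) = (1 - p) + p / n."""
    return (1.0 - p) + p / n

def speedup(p: float, n: int) -> float:
    """Amdahl speedup S(n) = T(1) / T(n)."""
    return 1.0 / runtime(p, n)

print(round(speedup(0.9, 4), 2))  # 3.08
```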

Formulas for Speedup and Efficiency

The central Amdahl’s law speedup formula is:

S = 1 / [ (1 - p) + p / n ]

where:

  - S is the theoretical speedup,
  - p is the parallelizable fraction of the workload, and
  - n is the number of processors.

The parallel efficiency measures how effectively you are using each processor:

E = S / n

Interpreting these values:

  - E = 1 means perfect linear scaling: every processor is fully utilized.
  - E close to 0 means most of the added processing capacity is wasted.

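A minimal sketch of the efficiency calculation in Python (the function name is illustrative), sweeping the processor count to show how E falls as n grows:

```python
def efficiency(p: float, n: int) -> float:
    """Parallel efficiency E = S / n under Amdahl's law."""
    s = 1.0 / ((1.0 - p) + p / n)
    return s / n

for n in (1, 2, 8, 32):
    print(f"n = {n:>2}: E = {efficiency(0.9, n):.2f}")
```
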
How to Use This Calculator

  1. Estimate the parallelizable fraction p.
    Profile your code or workload on a single processor. Determine what portion of the total execution time can, in principle, be run in parallel. Express this as a fraction between 0 and 1. For example, if 80% of the time is spent in loops that could run independently on multiple cores, then p = 0.8.
  2. Choose the number of processors n.
    This could be the number of CPU cores, hardware threads, GPU streaming multiprocessors, or cluster nodes you plan to use. Enter an integer value ≥ 1.
  3. Run the calculation.
    The calculator applies Amdahl’s law to compute the theoretical speedup S and the resulting efficiency E.
  4. Interpret the results.
    Compare the computed speedup with the ideal linear speedup n. A large gap indicates that the serial fraction or overhead is limiting scalability. Use the efficiency to judge whether adding more processors is worthwhile.

Interpreting the Results

The calculator typically reports two primary outputs: speedup and efficiency.

As you increase n while holding p constant:

  - the speedup S keeps increasing, but approaches the ceiling 1 / (1 - p), and
  - the efficiency E = S / n steadily decreases.

This diminishing-return behavior is the core insight of Amdahl’s law: even a small serial fraction can cap your speedup at a relatively low value, no matter how many processors you add.

Worked Example

Suppose a program has 90% of its workload parallelizable, so p = 0.9, and we run it on n = 4 processors.

  1. Compute the runtime on 4 processors:
    T(4) = (1 - 0.9) + 0.9 / 4 = 0.1 + 0.225 = 0.325
  2. Compute the speedup:
    S(4) = 1 / 0.325 ≈ 3.08
  3. Compute the efficiency:
    E = S / n = 3.08 / 4 ≈ 0.77

This means that:

  - the program runs about 3.08 times faster on 4 processors than on 1, and
  - each processor is, on average, only about 77% utilized.

Now consider using n = 16 processors with the same p = 0.9:

  1. T(16) = (1 - 0.9) + 0.9 / 16 = 0.1 + 0.05625 = 0.15625
  2. S(16) = 1 / 0.15625 = 6.4
  3. E = 6.4 / 16 = 0.4

Even though you quadrupled the processor count from 4 to 16, the speedup increased only from 3.08 to 6.4, and the efficiency dropped from 0.77 to 0.4. This illustrates the strong limiting effect of the serial part of the workload.
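
The worked example can be checked with a few lines of Python:

```python
def amdahl(p: float, n: int) -> float:
    """Amdahl speedup S = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - p) + p / n)

for n in (4, 16):
    s = amdahl(0.9, n)
    print(f"n = {n:>2}: S = {s:.2f}, E = {s / n:.2f}")
# n =  4: S = 3.08, E = 0.77
# n = 16: S = 6.40, E = 0.40
```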

Comparison Table for Typical Values

The table below shows theoretical speedup values for selected parallel fractions and processor counts using Amdahl’s law.

Parallel Fraction (p)   n = 2   n = 4   n = 8   n = 16
0.5                     1.33    1.60    1.78    1.88
0.9                     1.82    3.08    4.71    6.40
0.99                    1.98    3.88    7.48    13.91
0.999                   2.00    3.99    7.94    15.76

You can see that even with very high parallel fractions, the speedup grows sublinearly with n. For example, with p = 0.9, doubling the processor count from 8 to 16 raises the speedup only from about 4.71 to 6.40, far short of the ideal 16.
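
A table like this can be regenerated with a short script, which also makes it easy to explore other values of p and n:

```python
def amdahl(p: float, n: int) -> float:
    """Amdahl speedup for parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

counts = (2, 4, 8, 16)
print("p       " + "  ".join(f"n = {n:<3}" for n in counts))
for p in (0.5, 0.9, 0.99, 0.999):
    row = "  ".join(f"{amdahl(p, n):<7.2f}" for n in counts)
    print(f"{p:<8}{row}")
```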

Limitations and Assumptions

Amdahl’s law is intentionally simple. It is most useful as a theoretical upper bound and a way to build intuition, not as a precise performance predictor. This calculator follows the same assumptions:

  - The parallel portion divides perfectly across all n processors.
  - There is no communication, synchronization, or scheduling overhead.
  - The problem size stays fixed as n grows.
  - All processors are identical and fully available.

Because of these assumptions, the calculator is best used to:

  - estimate an upper bound on achievable speedup,
  - compare candidate processor counts before committing hardware, and
  - judge when adding more processors stops paying off.

Always compare the calculator’s predictions with empirical measurements from profiling and benchmarking on your actual system.

Frequently Asked Questions

What is the difference between Amdahl’s law and Gustafson’s law?

Amdahl’s law assumes a fixed problem size and focuses on how the serial portion of a workload limits speedup as you increase the number of processors. It is most relevant when you keep the total work constant and want to know the maximum acceleration you can expect.

Gustafson’s law, in contrast, assumes that when you have more processors you will typically run larger problems. Under this assumption, it is often possible to achieve near-linear speedup, because the parallel part grows with the number of processors while the serial overhead stays roughly constant. In short, Amdahl’s law emphasizes limits for fixed workloads, while Gustafson’s law emphasizes scaling for growing workloads.
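
The contrast is easy to see side by side. Gustafson's scaled speedup is S(n) = (1 - p) + p·n, where p is the parallel fraction of the scaled run:

```python
def amdahl(p: float, n: int) -> float:
    """Fixed workload: the serial fraction caps the speedup."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson(p: float, n: int) -> float:
    """Scaled workload: the parallel part grows with n."""
    return (1.0 - p) + p * n

p, n = 0.9, 16
print(f"Amdahl:    {amdahl(p, n):.2f}")     # 6.40
print(f"Gustafson: {gustafson(p, n):.2f}")  # 14.50
```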

How do I interpret the parallelizable fraction input?

The parallelizable fraction p represents the portion of your program’s runtime that can, in principle, be executed in parallel. A value of p = 0.8 means that 80% of the execution time is parallelizable and the remaining 20% is inherently serial.

To estimate p in practice, you can:

  - profile the program and sum the time spent in sections that could run in parallel,
  - time the serial and parallel phases separately with instrumentation, or
  - measure the speedup at a few processor counts and infer p from the results.

Because p is an idealized quantity, treat your estimate as a guideline rather than an exact value.
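
One practical approach is to measure the speedup on n processors and solve Amdahl's law for p; this is the same idea behind the Karp-Flatt metric for the serial fraction. A sketch (the function name is illustrative):

```python
def estimate_p(measured_speedup: float, n: int) -> float:
    """Invert S = 1 / ((1 - p) + p / n) to recover p."""
    return (1.0 - 1.0 / measured_speedup) / (1.0 - 1.0 / n)

# A run that is 3.08x faster on 4 processors implies p of about 0.90.
print(round(estimate_p(3.08, 4), 2))  # 0.9
```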

Can Amdahl’s law predict real-world performance exactly?

No. Amdahl’s law is a simplified model that gives an upper bound on speedup under ideal conditions. Real performance is almost always lower because of communication overhead, load imbalance, memory contention, and other factors. Use this calculator to understand trends and limits, then validate with actual measurements on your hardware.

When should I use Amdahl’s law versus Gustafson’s law?

Use Amdahl’s law when the total workload is fixed and you want to know whether adding more processors is beneficial for that specific problem size. Use Gustafson’s law when you expect to increase the workload as more processors become available (for example, running higher-resolution simulations or processing larger datasets). Considering both perspectives helps you make more informed scalability decisions.

Parallel scaling inputs
Enter p and n to compute speedup, efficiency, and diminishing returns.
