Amdahl’s Law
Amdahl’s Law bounds the speedup from parallelizing a task. A task splits into a serial fraction and a parallel fraction, and the serial fraction is the ceiling: no matter how many processors you add, the serial part still runs sequentially. (From ECE459 L09.)
On $N$ processors, the parallel time is:

$$T_N = T_1 \left( S + \frac{P}{N} \right)$$

where:
- $T_1$ is the serial (single-processor) runtime
- $S$ is the serial fraction of the work
- $P = 1 - S$ is the parallelizable fraction
- $N$ is the number of processors
This rearranges into the speedup form:

$$\text{speedup} = \frac{T_1}{T_N} = \frac{1}{S + \frac{P}{N}}, \qquad \lim_{N \to \infty} \text{speedup} = \frac{1}{S}$$

The limit on the right is the hard ceiling set by the serial fraction alone.
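A quick sanity check is to evaluate the formula directly. Here is a minimal sketch in Python (the function name `amdahl_speedup` is ours, not from L09):

```python
def amdahl_speedup(s: float, n: int) -> float:
    """Amdahl's Law: speedup on n processors given serial fraction s."""
    p = 1.0 - s               # parallelizable fraction
    return 1.0 / (s + p / n)

print(amdahl_speedup(0.05, 18))   # ~9.73
print(1 / 0.05)                   # the hard ceiling 1/S = 20.0
```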
Plugging in $S = 0.05$:

| $N$ | Speedup |
|---|---|
| 1 | $1.00\times$ |
| 2 | $1.90\times$ |
| 4 | $3.48\times$ |
| 8 | $5.93\times$ |
| 18 | $9.73\times$ |

Even a 5% serial fraction caps you at roughly $9.7\times$ on 18 cores.
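The table values follow mechanically; a loop over the `amdahl_speedup` sketch above reproduces them:

```python
for n in [1, 2, 4, 8, 18]:
    print(f"N={n:2d}: speedup = {amdahl_speedup(0.05, n):.2f}")
# N= 1: speedup = 1.00
# N= 2: speedup = 1.90
# N= 4: speedup = 3.48
# N= 8: speedup = 5.93
# N=18: speedup = 9.73
```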
Estimating $S$ empirically from a measured speedup at $N$:

$$S_{\text{estimated}} = \frac{\frac{N}{\text{speedup}_N} - 1}{N - 1}$$

where:
- $\text{speedup}_N$ is the observed speedup at $N$ processors
- $N$ is the number of processors used in the measurement
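This is just the speedup form solved for $S$. A sketch (the name `estimate_serial_fraction` is ours):

```python
def estimate_serial_fraction(speedup: float, n: int) -> float:
    """Invert Amdahl's Law: estimate the serial fraction S from a
    speedup measured on n processors."""
    return (n / speedup - 1.0) / (n - 1.0)

print(estimate_serial_fraction(9.73, 18))   # ~0.05
```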
Generalized to $n$ parts with individual speedups:

$$\text{speedup} = \frac{1}{\sum_{i=1}^{n} \frac{f_i}{s_i}}$$

where:
- $f_i$ is the fraction of original runtime spent in part $i$ (the $f_i$ sum to 1)
- $s_i$ is the speedup applied to part $i$
Use it to pick between optimization options: whichever option yields the larger speedup wins. The speedups don’t have to come from parallelization.
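In code, the generalized form is a one-liner; a minimal sketch (the name `generalized_speedup` is ours):

```python
def generalized_speedup(fractions, speedups):
    """Generalized Amdahl's Law: per-part fractions of original
    runtime and the speedup applied to each part."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return 1.0 / sum(f / s for f, s in zip(fractions, speedups))
```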
Example from L09: a program with 4 parts, where you can implement either Option 1 or Option 2:
| Part | Fraction of runtime | Option 1 speedup | Option 2 speedup |
|---|---|---|---|
| 1 | 0.55 | $1\times$ | $2\times$ |
| 2 | 0.25 | $5\times$ | $1\times$ |
| 3 | 0.15 | $3\times$ | $1\times$ |
| 4 | 0.05 | $10\times$ | $1\times$ |
Plug and chug:

$$\text{speedup}_{\text{Option 1}} = \frac{1}{\frac{0.55}{1} + \frac{0.25}{5} + \frac{0.15}{3} + \frac{0.05}{10}} = \frac{1}{0.655} \approx 1.53$$

$$\text{speedup}_{\text{Option 2}} = \frac{1}{\frac{0.55}{2} + 0.25 + 0.15 + 0.05} = \frac{1}{0.725} \approx 1.38$$
Option 1 wins. Option 2 attacks the biggest fraction but only at $2\times$. Spreading smaller speedups across the other parts adds up to more.
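The same numbers drop straight into the `generalized_speedup` sketch above:

```python
fractions = [0.55, 0.25, 0.15, 0.05]
print(generalized_speedup(fractions, [1, 5, 3, 10]))  # Option 1: ~1.53
print(generalized_speedup(fractions, [2, 1, 1, 1]))   # Option 2: ~1.38
```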
Assumptions that make Amdahl pessimistic
Amdahl assumes a fixed problem size, identical behaviour at 1 vs $N$ processors, and no overheads. It ignores synchronization cost, which drags real speedup down further.
Amdahl asks: what fraction of your code can you parallelize? Gustafson’s Law flips the question: hold the runtime fixed and let the problem grow, because in practice you can just make the problem bigger.
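For reference, the standard statement of Gustafson’s scaled speedup, with $S$, $P$, and $N$ as above (the formula is standard, though not derived in these notes):

$$\text{scaled speedup} = S + P \cdot N = N - S\,(N - 1)$$

With the parallel portion allowed to grow with $N$, speedup scales roughly linearly instead of saturating at $1/S$.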