Amdahl's law is a formula used in parallel computing to find the maximum improvement possible when only part of a system can be improved. It is mainly used to predict the theoretical maximum speedup of a program when it is run on multiple processors. It is named after Gene Amdahl, a computer architect at IBM and later the Amdahl Corporation.
To understand the law, let's consider a single-processor system that completes a job in 10 hours. To improve performance we can run the work on parallel processors and reduce the execution time, but some of the code may be impossible to run in parallel. For the sake of example, suppose 20% of the code cannot be run in parallel.
Let P be the proportion of the program that can be executed in parallel.
Here, P = 80% (0.8).
Amdahl's law states that if P is the proportion of a system or program that can be made parallel, then the maximum speedup achievable using N processors is 1/((1-P)+(P/N)).
Using the law, with 10 processors we can achieve a speedup of 1/((1-0.8)+(0.8/10)) = 1/0.28 ≈ 3.57×.
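The calculation above can be sketched as a small helper function (a minimal illustration; the function name and rounding are our own choices, not part of the law itself):

```python
def amdahl_speedup(p, n):
    """Theoretical maximum speedup per Amdahl's law.

    p: fraction of the program that can run in parallel (0..1)
    n: number of processors
    """
    return 1.0 / ((1.0 - p) + p / n)

# 80% parallelizable code on 10 processors
print(round(amdahl_speedup(0.8, 10), 2))  # prints 3.57
```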
Calculating the maximum speedup:
The maximum improvement is reached only as N tends to infinity (an unlimited number of processors).
For our example, that limit is 1/(1-0.8) = 5×.
A 5× speedup in our example means the 10-hour job could be completed in just 2 hours.
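We can see the speedup flattening toward this 5× ceiling by evaluating the formula for increasingly large processor counts (a quick numerical check; the chosen processor counts are arbitrary):

```python
def amdahl_speedup(p, n):
    """Theoretical maximum speedup per Amdahl's law."""
    return 1.0 / ((1.0 - p) + p / n)

# With P = 0.8, adding processors gives diminishing returns:
for n in (10, 100, 1000, 1_000_000):
    print(f"{n:>9} processors -> speedup {amdahl_speedup(0.8, n):.3f}")
# The speedup approaches, but never exceeds, 1/(1-0.8) = 5.
```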
Graphical representation of the law
The law was derived by Gene Amdahl and was presented at the AFIPS Spring Joint Computer Conference in 1967.
Not everything can be parallelized and optimized.