So, for example, if we can speed up 60% of the system to the point where it requires close to no time, our net speedup will still only be 1/0.4 = 2.5. We saw this
performance with our dictionary program as we replaced insertion sort by quick-
sort. The initial version spent 173.05 of its 177.57 seconds performing insertion
sort, giving α = 0.975. With quicksort, the time spent sorting becomes negligible,
giving a predicted speedup of 39.3. In fact, the actual measured speedup was a
bit less: 177.57/4.72 = 37.6, due to inaccuracies in the profiling measurements. We
were able to gain a large speedup because sorting constituted a very large fraction
of the overall execution time.
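To make the arithmetic above concrete, here is a short C sketch (our own illustration; the function name amdahl_speedup is not from the text) that evaluates Amdahl's law, S = 1/((1 − α) + α/k), for the hypothetical 60% case and for the dictionary-program measurements.

#include <stdio.h>

/* Speedup predicted by Amdahl's law when a fraction alpha of the
   original execution time is sped up by a factor k. */
double amdahl_speedup(double alpha, double k) {
    return 1.0 / ((1.0 - alpha) + alpha / k);
}

int main(void) {
    /* Hypothetical case: 60% of the system made essentially free
       (k very large), giving 1/0.4 = 2.5. */
    printf("60%% case:  %.2f\n", amdahl_speedup(0.6, 1e9));

    /* Dictionary program: 173.05 of 177.57 seconds spent sorting,
       so alpha = 173.05/177.57 = 0.975.  With the sorting time made
       negligible, the predicted speedup is about 39.3. */
    double alpha = 173.05 / 177.57;
    printf("Predicted: %.1f\n", amdahl_speedup(alpha, 1e9));

    /* Measured speedup: original total time over new total time. */
    printf("Measured:  %.1f\n", 177.57 / 4.72);
    return 0;
}

Running this sketch reproduces the 2.5, 39.3, and 37.6 figures quoted above.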
Amdahl’s law describes a general principle for improving any process. In
addition to applying to speeding up computer systems, it can guide a company
trying to reduce the cost of manufacturing razor blades, or a student trying to
improve his or her grade point average. Perhaps it is most meaningful in the world
of computers, where we routinely improve performance by factors of 2 or more.
Such high factors can only be achieved by optimizing large parts of a system.
5.15 Summary
Although most presentations on code optimization describe how compilers can
generate efficient code, much can be done by an application programmer to assist
the compiler in this task. No compiler can replace an inefficient algorithm or data
structure by a good one, and so these aspects of program design should remain
a primary concern for programmers. We have also seen that optimization blockers, such as memory aliasing and procedure calls, seriously restrict the ability of compilers to perform extensive optimizations. Again, the programmer must take primary responsibility for eliminating these. Doing so should simply be considered part of good programming practice, since it serves to eliminate unneeded work.
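As a reminder of what such a blocker looks like, the following minimal sketch (our own example, with hypothetical function names) shows why memory aliasing ties the compiler's hands: the two functions are intended to compute the same update, but they behave differently when xp and yp point to the same location, so the compiler cannot substitute one for the other.

/* Adds *yp to *xp twice, using six memory references per call
   (two reads of *yp, two reads and two writes of *xp). */
void add2_mem(long *xp, long *yp) {
    *xp += *yp;
    *xp += *yp;
}

/* The streamlined form a programmer might intend, with only three
   memory references.  If xp == yp, add2_mem quadruples *xp while
   add2_reg only triples it, so the compiler cannot safely perform
   this transformation on its own. */
void add2_reg(long *xp, long *yp) {
    *xp += 2 * *yp;
}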
Tuning performance beyond a basic level requires some understanding of the processor's microarchitecture, that is, the underlying mechanisms by which the processor implements its instruction set architecture. For the case of out-of-order processors, just knowing something about the operations, latencies, and issue times of the functional units establishes a baseline for predicting program performance.
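As a purely illustrative example with assumed numbers: if the floating-point multiplier has a latency of 5 clock cycles and an issue time of 1 cycle, a product-accumulation loop with a single accumulator is latency bound at roughly 5 cycles per element, while splitting the work across several accumulators can approach the 1-cycle throughput bound. The sketch below (names and unrolling factor are our own choices) shows the two loop structures.

/* Latency-bound version: each multiplication must wait for the
   previous one, so CPE is limited by the multiplier's latency. */
double product_single(const double *a, long n) {
    double acc = 1.0;
    for (long i = 0; i < n; i++)
        acc = acc * a[i];
    return acc;
}

/* Two parallel accumulators break the dependency chain, letting a
   pipelined multiplier overlap operations and approach its issue
   time (the throughput bound).  Combine the accumulators at the end. */
double product_unrolled2x2(const double *a, long n) {
    double acc0 = 1.0, acc1 = 1.0;
    long i;
    for (i = 0; i + 1 < n; i += 2) {
        acc0 = acc0 * a[i];
        acc1 = acc1 * a[i + 1];
    }
    for (; i < n; i++)       /* finish any leftover element */
        acc0 = acc0 * a[i];
    return acc0 * acc1;
}

Because floating-point multiplication is not associative, the reassociated version may round differently; whether that is acceptable is a decision for the programmer, not the compiler.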