In the last 50 years, the field of scientific computing has undergone rapid change – we have experienced a remarkable turnover
of technologies, architectures, vendors, and patterns of system usage. Despite all these changes, the long-term evolution of performance
seems to be steady and continuous, following Moore’s Law rather closely. In 1965, Gordon Moore, one of the founders of Intel,
conjectured that
the number of transistors per square inch on integrated circuits would roughly double every year. It turns
out that the doubling period is not 12 months, but roughly 18 months [8]. Moore predicted that this trend would continue
for the foreseeable future. In Figure 1, we plot the peak performance, over the last five decades, of computers that have been
called “supercomputers.” A broad definition of a supercomputer is that it is one of the fastest computers currently available.
They are systems that provide significantly greater sustained performance than that available from mainstream computer systems.
The value of supercomputers derives from the value of the problems they solve, not from the innovative technology they showcase.
By performance we mean the rate of execution for floating point operations. Here we chart KFlop/s (Kilo-flop/s, thousands
of floating point operations per second), MFlop/s (Mega-flop/s, millions of floating point operations per second), GFlop/s
(Giga-flop/s, billions of floating point operations per second), TFlop/s (Tera-flop/s, trillions of floating point operations
per second), and PFlop/s (Peta-flop/s, quadrillions of floating point operations per second). This chart shows clearly how
well Moore’s Law has held over almost the complete lifespan of modern computing – we see an increase in performance averaging
two orders of magnitude every decade.
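The connection between the two figures is simple compounding arithmetic: a doubling every 18 months amounts to about 6.7 doublings per decade, or roughly a factor of 100. The short Python sketch below is a minimal illustration of that calculation, using only the 18-month doubling period quoted above.

# Sanity check: a doubling every 18 months compounds to about a
# 100-fold (two orders of magnitude) increase per decade.
DOUBLING_PERIOD_MONTHS = 18   # doubling period quoted in the text [8]
MONTHS_PER_DECADE = 120

doublings_per_decade = MONTHS_PER_DECADE / DOUBLING_PERIOD_MONTHS  # ~6.67
growth_per_decade = 2.0 ** doublings_per_decade                    # ~101.6

print(f"doublings per decade:     {doublings_per_decade:.2f}")
print(f"growth factor per decade: {growth_per_decade:.1f}x")

Running this prints a growth factor of about 101.6 per decade, in close agreement with the two-orders-of-magnitude trend visible in Figure 1.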