"High-performance computing" redirects here. For narrower definitions of HPC, see High-throughput computing and Many-task computing. For other uses, see Supercomputer (disambiguation).
The IBM Blue Gene/P supercomputer "Intrepid" at Argonne National Laboratory runs 164,000 processor cores using normal data center air conditioning, grouped in 40 racks/cabinets connected by a high-speed 3-D torus network.[1][2]
A supercomputer is a computer with a high level of computing performance compared to a general-purpose computer. Performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). As of 2017, there are supercomputers which can perform up to nearly a hundred quadrillion FLOPS,[3] i.e., nearly 100 petaFLOPS (PFLOPS).[4] As of November 2017, all of the world's 500 fastest supercomputers run Linux-based operating systems.[5] Additionally, state-of-the-art research is being conducted in China, the United States, the European Union, Taiwan and Japan to build even faster, more powerful and technologically superior exascale supercomputers.[6]
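The prefixed FLOPS units used throughout this article can be made concrete with a short arithmetic sketch. The function name and structure below are purely illustrative (not from any benchmark suite); the numeric values are the ones cited in the text.

```python
# SI prefixes commonly used for supercomputer performance figures.
PREFIX = {
    "mega": 1e6,   # MFLOPS
    "giga": 1e9,   # GFLOPS
    "tera": 1e12,  # TFLOPS
    "peta": 1e15,  # PFLOPS
    "exa":  1e18,  # EFLOPS
}

def to_flops(value, prefix):
    """Convert a prefixed rate (e.g. 93 'peta') to plain FLOPS."""
    return value * PREFIX[prefix]

# Sunway TaihuLight's LINPACK score of 93 PFLOPS, in plain FLOPS:
taihulight = to_flops(93, "peta")
print(f"{taihulight:.2e}")  # 9.30e+16

# An exascale machine sustains at least 1 EFLOPS, i.e. roughly
# ten times TaihuLight's benchmark figure:
print(to_flops(1, "exa") / taihulight)
```

A quadrillion (10^15) operations per second is thus exactly one PFLOPS, which is why the ~100-quadrillion figure above is quoted in PFLOPS.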
Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they have been essential in the field of cryptanalysis.[7]
Supercomputers were introduced in the 1960s, and for several decades the fastest were made by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. The first such machines were highly tuned conventional designs that ran faster than their more general-purpose contemporaries. Through the 1960s, they began to add increasing amounts of parallelism with one to four processors being typical. From the 1970s, the vector computing concept with specialized math units operating on large arrays of data came to dominate. A notable example is the highly successful Cray-1 of 1976. Vector computers remained the dominant design into the 1990s. From then until today, massively parallel supercomputers with tens of thousands of off-the-shelf processors became the norm.[8][9]
The US has long been a leader in the supercomputer field, first through Cray's almost uninterrupted dominance, and later through a variety of technology companies. Japan made major strides in the field in the 1980s and 1990s, but since then China has become increasingly important. As of June 2016, the fastest supercomputer on the TOP500 supercomputer list is the Sunway TaihuLight, in China, with a LINPACK benchmark score of 93 PFLOPS, exceeding the previous record holder, Tianhe-2, by around 59 PFLOPS. Sunway TaihuLight's emergence is also notable for its use of indigenous chips; it is the first Chinese computer to enter the TOP500 list without using hardware from the United States. As of June 2016, China, for the first time, had more computers (167) on the TOP500 list than the United States (165). However, US-built computers held ten of the top 20 positions;[10][11] as of November 2017, the U.S. has four of the top 10 and China two.