Intel and Argonne National Lab on ‘exascale’ and their new Aurora supercomputer
The scale of supercomputing has grown almost too large to comprehend, with millions of compute units performing calculations at rates that require, for the first time, the exa prefix, denoting quintillions of operations per second. How was this accomplished? With careful planning… and a lot of wires, say two people close to the project.
Having noted the news earlier this year that Intel and Argonne National Lab were planning to take the wraps off a new exascale computer called Aurora (one of several being built in the U.S.), I recently got a chance to talk with Trish Damkroger, head of Intel's Extreme Computing Organization, and Rick Stevens, Argonne's associate lab director for computing, environment and life sciences.
The two discussed the technical details of the system at the Supercomputing conference in Denver, where most of the people who can truly say they understand this type of work probably already were. So while you can read about the nuts and bolts of the system, including Intel's new Xe architecture and Ponte Vecchio general-purpose compute chip, in industry journals and the press release, I tried to get a little more of the big picture from the two.
It should surprise no one that this is a project long in the making, but you might not guess exactly how long: more than a decade. Part of the challenge, then, was to plan for computing hardware that would be leagues beyond what was possible at the time.
"Exascale was first being started in 2007. At that time we hadn't even hit the petascale target yet, so we were planning like three to four magnitudes out," said Stevens. "A