The supercomputer war between the US and China is set to intensify, according to research from the University of Warwick that will be presented at the Supercomputing conference in New Orleans next week.
Professor Stephen Jarvis, a Royal Society Industry Fellow at the Department of Computer Science at the University of Warwick, is set to reveal details of what is expected to become a major contest between two designs: China’s general-purpose GPU (GPGPU) approach, used in its world-leading supercomputer, and the US’s focus on scalable interconnects that bring together simpler processing cores, as in IBM’s BlueGene architecture.
The goal for both, and for any other countries that manage to work their way onto the battlefield, is to increase processing power to the exascale: a mind-boggling one quintillion floating-point operations per second, roughly 1,000 times the petascale performance of current machines.
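The arithmetic behind that 1,000-fold jump is straightforward; a short sketch (simply unit conversion, not taken from the research itself) makes the scale explicit:

```python
# Illustrative arithmetic only: comparing the peta- and exascale thresholds.
PETAFLOPS = 10**15  # one quadrillion floating-point operations per second
EXAFLOPS = 10**18   # one quintillion floating-point operations per second

ratio = EXAFLOPS // PETAFLOPS
print(ratio)  # 1000: an exascale machine is 1,000x a petascale one
```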
The target may be many years off, but it’s one that both the US and China are eagerly trying to reach before the other does, despite the potential environmental impact of such monstrous machines. The research suggests that a supercomputer operating at just 1 exaflop would need as much power as a small town.
Jarvis’ research uses mathematical modelling, benchmarking and simulation to determine the kind of performance we can expect from the next generation of supercomputers.
The report suggests that of all the designs, GPGPU and BlueGene are the most likely to reach exascale, but that neither is without problems. Jarvis found that the GPGPU design achieved a significantly smaller fraction of its peak performance than BlueGene, effectively leaving a large section of the supercomputer idle, while the BlueGene design requires more processing elements to achieve the same results as GPU-based systems.
Jarvis discovered that small GPU-based systems solved problems between three and seven times faster than CPU-based systems, but that BlueGene machines scaled better: adding more processing elements improved their performance faster than a similar increase did in a GPU-based machine.
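The trade-off described above can be illustrated with a toy Amdahl’s-law model. To be clear, the function and all parameters below are hypothetical and purely illustrative; they are not drawn from the Warwick research. The point is only that a system with a higher parallel fraction keeps gaining from extra processing elements, while one with more serial overhead plateaus sooner, even if it starts out faster:

```python
def speedup(parallel_fraction: float, elements: int) -> float:
    """Amdahl's-law speedup for a workload split into serial and parallel parts."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / elements)

# Hypothetical parallel fractions, chosen only to show the shape of the curves:
# the 0.999 system continues to scale as elements are added, while the 0.99
# system flattens out well below its peak.
for n in (256, 1024, 4096):
    print(n, round(speedup(0.999, n), 1), round(speedup(0.99, n), 1))
```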
Jarvis also found a large gap between hardware engineering and software development, with engineers pushing ahead with faster and more powerful systems while programmers struggle to write the code needed to actually exploit those advances.
“Given the crossroads at which supercomputing stands, and the national pride at stake in achieving Exascale, this design battle will continue to be hotly contested,” Jarvis said. “It will also need the best modeling techniques that the community can provide to discern good design from bad.”