Tag: exascale

US looking for vendors for two exascale supercomputers

The US Government is ready to seek vendors to build two exascale supercomputers, costing roughly $200 million to $300 million each, with contracts to be awarded by 2019.

The two systems will be built at the same time and will be ready for use by 2023, although it’s possible one of the systems could be ready a year earlier.

However, the boffins and the vendors do not know whether Donald “Prince of Orange” Trump’s administration will change direction. Early indications are that science and supercomputing might not be things his administration is keen on, since they might be used to look at climate change.

At the annual supercomputing conference SC16 last week in Salt Lake City, a panel of government scientists outlined the exascale strategy developed by President Barack Obama’s administration. When the session was opened to questions, the first two were about Trump. One attendee quipped that “pointed-head geeks are not going to be well appreciated”.

Another person in the audience, John Sopka, a high-performance computing software consultant, asked how the science community will defend itself from claims that “you are taking the money from the people and spending it on dreams,” referring to exascale systems.

Paul Messina, a computer scientist and distinguished fellow at Argonne National Laboratory who heads the Exascale Computing Project, said the project’s goal is to boost economic competitiveness and economic security.

Politically, there ought to be a lot in HPC’s favor. A broad array of industries relies on government supercomputers to conduct scientific research, improve products, attack disease, create new energy systems and understand climate, among many other things. Defense and intelligence agencies also rely on large systems.

There is also likely to be a technology race between the US and China. The Chinese want to have an exascale computer ready by 2020, which would challenge America’s tech dominance.

The US plans to award the exascale contracts to vendors with two different architectures. This is not a new approach and is intended to help keep competition at the highest end of the market. Recent supercomputer procurements include systems built on the IBM Power architecture, Nvidia’s Volta GPU and Cray-built systems using Intel chips.

The timing of these exascale systems — ready for 2023 — is also designed to take advantage of the upgrade cycles at the national labs. The large systems that will be installed in the next several years will be ready for replacement by the time exascale systems arrive.

Exascale computing ready in a decade

Intel fellow Shekhar Borkar has told the Semicon West fab-tool tradeshow that exascale computing will become a reality before the decade is out.

But he said that while the technology will be possible, thanks to parallelism and technology scaling, it will not meet its full potential.

According to EE Times, Borkar said that the problem is that exascale computing needs to overcome power consumption barriers.

He said that by about 2018, engineers are expected to create an exascale supercomputer capable of a 1,000-fold performance improvement over today’s petaflop systems.

But an exascale computer will consume vast amounts of power, according to Borkar, and the real challenge will be to build one which consumes only 20 megawatts (MW) of power.

If they manage this, then at the same energy efficiency giga-scale systems consuming only 20 milliwatts of power could be used in small toys, and mega-scale systems consuming only 20 microwatts could be used in heart monitors.
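The arithmetic behind those figures is easy to check. Hitting an exaflop (10^18 operations a second) in 20 MW works out to 20 picojoules per operation; hold that efficiency constant and the power budget scales linearly with machine size. This back-of-the-envelope C sketch (ours, not Borkar's) runs the numbers:

#include <stdio.h>

int main(void)
{
    /* Borkar's target: 20 MW for an exaflop (1e18 flop/s)
       implies 20 picojoules per floating-point operation. */
    const double exaflop_power_w = 20e6;  /* 20 MW target       */
    const double exaflops        = 1e18;  /* flop/s at exascale */
    const double joules_per_flop = exaflop_power_w / exaflops;

    /* At the same energy efficiency, power scales linearly
       with the number of operations per second. */
    printf("energy per flop: %g pJ\n", joules_per_flop * 1e12);          /* 20 pJ */
    printf("giga-scale (1e9 flop/s):  %g mW\n", 1e9 * joules_per_flop * 1e3); /* 20 mW */
    printf("mega-scale (1e6 flop/s):  %g uW\n", 1e6 * joules_per_flop * 1e6); /* 20 uW */
    return 0;
}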

Huang claims GPUs will give him power

The Glorious Leader of Nvidia, Jen-Hsun Huang, claims that the march towards exascale computing will be hindered unless the world bows down to his mighty GPU.

During a keynote rally at the SC11 supercomputing show in Seattle, he said that unless technology vendors turn power over to him, it will be a long march towards exascale capabilities indeed.

He said that exascale computing, “the next frontier for our industry”, will enable faster, more powerful high-performance computing (HPC) applications in such industries as energy, medicine and defence.

According to eWeek, the industry wants to meet the goal of an exaflop by 2019 while staying within a 20 megawatt power limit.

At the moment the world’s fastest computer, Fujitsu’s K Computer, can manage 10.51 petaflops but then needs to have a lie down. An exaflop is 1,000 petaflops, so reaching that level would require roughly 100 times the K’s performance.

Huang said that what will drive HPC is parallel processing, and what will propel parallel processing are graphics processing units, or GPUs; naturally, Nvidia’s GPUs hold the answer.

He said that parallel computing is tricky. Nvidia is forming an Axis of Allies with Cray, The Portland Group and CAPS entreprise to create OpenACC, an effort to create a standard for directive-based parallel programming.

The glorious plan is to enable researchers, scientists and corporations to run applications in a parallel fashion on heterogeneous CPU/GPU systems.

Parallel programmers will be able to annotate their code with directives, and the compiler will do the work of optimising the application for GPU-accelerated environments.
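To give a flavour of the model, here is a minimal sketch in C, assuming an OpenACC-capable compiler such as PGI’s: one directive marks an ordinary SAXPY loop for parallel execution, and the compiler handles the GPU offload. The pragma is a standard OpenACC directive; the code around it is our illustration, not Nvidia’s.

#include <stdio.h>

#define N 1000000

/* SAXPY (y = a*x + y). The directive asks the compiler to offload
   the loop to an accelerator, copying x in and y both ways. A
   compiler without OpenACC support simply ignores the pragma and
   runs the loop serially on the CPU. */
static void saxpy(int n, float a, const float *x, float *y)
{
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    static float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy(N, 3.0f, x, y);
    printf("y[0] = %f\n", y[0]); /* expect 5.000000 */
    return 0;
}

The selling point of the directive approach is that the same source still compiles everywhere; the GPU acceleration is opt-in rather than a rewrite.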

Huang hopes that this will bring a lot more people to parallel computing.