Tag: supercomputer

US looking for vendors for two exascale supercomputers

The US Government is ready to seek vendors to build two exascale supercomputers — costing roughly $200 million to $300 million each — by 2019.

The two systems will be built at the same time and will be ready for use by 2023, although it’s possible one of the systems could be ready a year earlier.

However, the boffins and the vendors do not know if Donald “Prince of Orange” Trump’s administration will change direction. Indications so far are that science and supercomputers might not be something his administration is keen on, as they might be used to look at climate change.

At the annual supercomputing conference SC16 last week in Salt Lake City, a panel of government scientists outlined the exascale strategy developed by President Barack Obama’s administration. When the session was opened to questions, the first two were about Trump. One attendee quipped that “pointed-head geeks are not going to be well appreciated”.

Another person in the audience, John Sopka, a high-performance computing software consultant, asked how the science community will defend itself from claims that “you are taking the money from the people and spending it on dreams,” referring to exascale systems.

Paul Messina, a computer scientist and distinguished fellow at Argonne National Laboratory who heads the Exascale Computing Project, said that the project’s goal is to help economic competitiveness and economic security.

Politically, there ought to be a lot in HPC’s favor. A broad array of industries rely on government supercomputers to conduct scientific research, improve products, attack disease, create new energy systems and understand climate, among many other fields. Defense and intelligence agencies also rely on large systems.

There is also likely to be a technology race between the US and China. The Chinese want to have an exascale computer ready by 2020, which would challenge America’s tech dominance.

The US plans to award the exascale contracts to vendors with two different architectures. This is not a new approach and is intended to help keep competition at the highest end of the market. Recent supercomputer procurements include systems built on the IBM Power architecture, Nvidia’s Volta GPU and Cray-built systems using Intel chips.

The timing of these exascale systems — ready for 2023 — is also designed to take advantage of the upgrade cycles at the national labs. The large systems that will be installed in the next several years will be ready for replacement by the time exascale systems arrive.

Brits develop supercomputer to moan about next year’s weather

UK boffins have developed a supercomputer to help them moan about the weather nearly a year before it happens.

Next to football, moaning about the weather is the most popular sport in the UK, and one at which the Brits are still better than anyone else in the world. So the UK government thought it would be a wizard wheeze to splash out on a new supercomputer that enables weather people to predict the weather a year in advance.

In October 2014 the government confirmed its investment of £97 million in a new high performance computing facility for the Met Office.

Now apparently the first phase of this supercomputer has gone live. A spokesman for the Met Office said that turning research into highly detailed operational forecasts and services will enable it to produce innovative forecasts, for example by focusing high-resolution models on strategically important infrastructure such as airports and flood defences.

More detailed forecasts will make it possible to predict small-scale, high impact weather features with greater skill, such as thunderstorms that have the potential to lead to flash flooding.

The computer is based at the Exeter Science Park. It will run a project called the Earth System Model, which captures all major aspects of the Earth’s climate system: oceans, atmosphere, atmospheric chemistry, the terrestrial carbon cycle and ocean biogeochemistry.

It can improve UK environmental prediction by using weather forecasting models together with other detailed prediction models, such as those for flooding, coastal and river impacts, and atmospheric dispersion, which is used for tracking volcanic ash and the spread of disease.

In a study paper entitled “Skilful predictions of the winter North Atlantic Oscillation one year ahead”, the researchers claim that new supercomputer-powered techniques have helped them develop a system to accurately predict the North Atlantic Oscillation (NAO), the climatic phenomenon which heavily influences winters in the UK.

Using a ‘hindcasting’ technique, the researchers discovered that since 1980 the supercomputer would have been able to predict winter weather a year in advance with 62 percent accuracy.
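
Hindcasting means re-running the forecast system for winters that have already happened and scoring the predictions against what was actually observed. As a toy illustration only, using made-up NAO index values and a simple sign-matching score, which is not necessarily how the Met Office computes its own skill figure, the arithmetic looks something like this:

```python
# Toy hindcast scoring: the fraction of past winters in which the predicted
# sign of the North Atlantic Oscillation (NAO) index matched the observed sign.
# The index values below are invented purely for illustration.

def hindcast_hit_rate(predicted, observed):
    """Fraction of winters where the predicted and observed NAO agree in sign."""
    hits = sum(1 for p, o in zip(predicted, observed) if (p > 0) == (o > 0))
    return hits / len(observed)

# Positive NAO tends to mean a mild, stormy UK winter; negative means cold and blocked.
predicted = [1.2, -0.5, 0.8, -1.1, 0.3, -0.2, 0.9, -0.7]
observed = [0.9, -0.8, 0.4, 0.6, 0.5, -0.9, 1.1, -0.3]

print(f"Hindcast hit rate: {hindcast_hit_rate(predicted, observed):.0%}")  # 88% on this toy data
```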

China makes supercomputer without US chips

The People’s Republic of China has made a huge supercomputer without needing to buy any US chips.

The Sunway TaihuLight has 10.65 million compute cores built entirely with Chinese microprocessors, and there is not a single US computer which matches it. The TaihuLight sticks two fingers up at the US for banning the sale of Intel’s Xeon chips to China.

The supercomputer has a theoretical peak performance of 124.5 petaflops, making it the first system to exceed 100 petaflops.

TaihuLight is installed at China’s National Supercomputing Center in Wuxi and uses ShenWei CPUs developed by the Jiangnan Computing Research Lab in Wuxi. The operating system is a Linux-based Chinese system called Sunway Raise.

It is used for advanced manufacturing, earth systems modelling, life science and big data applications.

The US initiated this ban because China, it claimed, was using its Tianhe-2 system for nuclear explosive testing activities. The US stopped live nuclear testing in 1992 and now relies on computer simulations. Critics in China suspected the US was acting to slow the country’s supercomputing development efforts.

The fastest US supercomputer, number three on the Top500 list, is Titan, a Cray system at the US Department of Energy’s Oak Ridge National Laboratory with a theoretical peak of about 27 petaflops.

Intel releases Knights Landing

Chipzilla has announced its new version of Xeon Phi, otherwise known as Knights Landing, a 72-core coprocessor solution manufactured on a 14nm process with 3D Tri-Gate transistors.

The chips are built around Intel’s Many Integrated Core (MIC) architecture, which combines a whole bunch of cores into a single chip that is itself part of a larger PCI-E add-in card solution for supercomputing applications.

The add-in cards run alongside the main processors, just like NVIDIA’s Tesla GPU accelerators, and help with the number crunching. Knights Landing is pretty good at supercomputing tasks, like working out how many years humanity has left in climate change simulations, genetic analysis, investment portfolio risk management and searching for new energy sources.

Knights Landing succeeds the current version of Xeon Phi, codenamed Knights Corner, which has up to 61 cores. Knights Landing has double-precision performance exceeding three teraflops and over 8 teraflops of single-precision performance. It also has 16GB of on-package MCDRAM memory, which Intel says is five times more power efficient than GDDR5 and three times as dense.
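
As a rough sanity check on that three-teraflop figure, theoretical peak throughput is normally worked out as cores × vector units per core × vector lanes × 2 (for fused multiply-add) × clock speed. The sketch below plugs in two AVX-512 vector units per core and a roughly 1.4GHz clock, neither of which is a figure quoted in the announcement:

```python
# Back-of-envelope peak FLOPS for a many-core chip.
# Assumed (not quoted in the article): 2 vector units per core, 8 double-precision
# lanes per unit, fused multiply-add (2 flops per lane per cycle), ~1.4GHz clock.

def peak_teraflops(cores, vector_units, lanes, flops_per_lane, clock_ghz):
    """Theoretical peak = cores * vector units * lanes * flops per lane * clock (GHz), in teraflops."""
    return cores * vector_units * lanes * flops_per_lane * clock_ghz / 1000

print(f"Estimated double-precision peak: {peak_teraflops(72, 2, 8, 2, 1.4):.1f} teraflops")  # ~3.2
```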

In making the announcement, Charlie Wuischpard, vice president and general manager of the HPC Platform Group at Intel, said that supercomputing was entering a new era and being transformed from a tool for a specific problem into a general tool for many.

“System-level innovations in processing, memory, software and fabric technologies are enabling system capabilities to be designed and optimised for different usages, from traditional HPC to the emerging world of big data analytics and everything in between. We believe the Intel Scalable System Framework is the path forward for designing and delivering the next generation of systems for the ‘HPC everywhere’ era.”

Submerge your supercomputer in liquid

A team of boffins has discovered that if you take your supercomputer and immerse it in tanks of liquid coolant, you can make it super efficient.

The Vienna Science Cluster uses immersion cooling, which involves putting SuperMicro servers into a dielectric fluid similar to mineral oil.

The servers are slid vertically into slots in the tank, which is filled with 250 gallons of ElectroSafe fluid, a liquid that transfers heat almost as well as water but doesn’t conduct an electric charge.

The Vienna Science Cluster 3 system has a mechanical Power Usage Effectiveness rating of just 1.02, meaning the cooling system’s overhead is a mere 2 percent of the energy delivered to the computing equipment.

This means that 600 teraflops of computing power uses just 540 kilowatts of power and 1,000 square feet of space.
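
For anyone who wants to check the sums, a PUE of 1.02 means the cooling draws only about 2 percent on top of the power fed to the servers, and 600 teraflops from 540 kilowatts works out at just over one gigaflop per watt. A quick sketch, assuming the quoted 540 kilowatts is the IT load rather than the total facility draw:

```python
# Data-centre efficiency arithmetic from the quoted Vienna Science Cluster 3 figures.
# PUE = total facility power / IT equipment power, so cooling overhead is (PUE - 1) of the IT load.

it_power_kw = 540       # power delivered to the servers (quoted)
pue = 1.02              # mechanical Power Usage Effectiveness (quoted)
peak_teraflops = 600    # compute capacity (quoted)

cooling_overhead_kw = it_power_kw * (pue - 1)
gflops_per_watt = (peak_teraflops * 1000) / (it_power_kw * 1000)

print(f"Cooling overhead: {cooling_overhead_kw:.1f} kW")            # ~10.8 kW
print(f"Compute efficiency: {gflops_per_watt:.2f} gigaflops/watt")  # ~1.11
```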

Christiaan Best, CEO and founder of Green Revolution Cooling, which designed the immersion cooling system, said: “It is particularly impressive given that it uses zero water. We believe this is a first in the industry.”

Most data centres cool IT equipment using air, while liquid cooling has been used primarily in high-performance computing (HPC). But cloud computing and “big data” could make liquid cooling relevant to a larger pool of data centre operators.

The Vienna design combines a waterless approach with immersion cooling, which has proven effective for cooling high-density server configurations, including high-performance computing clusters for academic computing and seismic imaging for energy companies.

Futuristic Intel chips under the bonnet of supercomputer

A supercomputer being installed at the Argonne National Laboratory will be built around Intel’s Knights Hill architecture, with more than 70 cores per chip, and some other fairly sexy interconnect technology.

According to The Platform, the project, dubbed “Aurora”, will be the first time in 20 years that a chipmaker, as opposed to a systems vendor, has been awarded the contract to build a leading-edge national computing resource.

Aurora will reach a peak performance of 180 petaflops and will be so powerful it can calculate the existence of rice pudding and income tax before it is switched on in 2018.

The machine will be a next-generation variant of Cray’s “Shasta” supercomputer line, which it has been designing in conjunction with Intel since the chip maker bought the Cray interconnect business three years ago for $140 million.

The new $200 million supercomputer is set to be installed at Argonne’s Leadership Computing Facility and will be part of a trio of systems aimed at bolstering nuclear security initiatives.

Aurora, with its 180 petaflop peak, will pull 13 megawatts. That is an 18x improvement in performance for just 2.7x the power.
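
Run the numbers and that comes out at roughly 14 gigaflops per watt, or about a 6.7-fold gain in performance per watt against whatever baseline those multiples are measured from. A quick sketch using only the figures quoted above:

```python
# Performance-per-watt arithmetic from the quoted Aurora figures.
peak_petaflops = 180      # quoted peak performance
power_megawatts = 13      # quoted power draw
perf_multiple = 18        # "18x the performance" (quoted)
power_multiple = 2.7      # "at 2.7x the power" (quoted)

gflops_per_watt = (peak_petaflops * 1e6) / (power_megawatts * 1e6)
perf_per_watt_gain = perf_multiple / power_multiple

print(f"Aurora efficiency: {gflops_per_watt:.1f} gigaflops/watt")   # ~13.8
print(f"Performance-per-watt gain: {perf_per_watt_gain:.1f}x")      # ~6.7x
```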

Supercomputer passes Turing test

A third of Royal Society testers have been fooled by a supercomputer into thinking that it was a 13-year-old boy.

Five machines were tested at the Royal Society in central London to see if they could fool people into thinking they were humans during text-based conversations.

The test was devised in 1950 by computer science pioneer and World War II code breaker Alan Turing, who said that if a machine was indistinguishable from a human, then it was “thinking”.

Until now, no computer had passed the Turing test, which requires 30 percent of human interrogators to be duped during a series of five-minute keyboard conversations.

“Eugene Goostman”, a computer program developed to simulate a 13-year-old boy, managed to convince 33 percent of the judges that it was human, the University of Reading said.

Professor Kevin Warwick, from the University of Reading, said: “In the field of artificial intelligence there is no more iconic and controversial milestone than the Turing test.

“It is fitting that such an important landmark has been reached at the Royal Society in London, the home of British science and the scene of many great advances in human understanding over the centuries. This milestone will go down in history as one of the most exciting.”

The successful machine was created by Russian-born Vladimir Veselov, who lives in the United States, and Ukrainian Eugene Demchenko who lives in Russia.

Of course, a 13-year-old boy is not difficult to simulate. The computer had to have an incredibly slow start-up time, not say much and mumble, but it was a start.

The event on Saturday was poignant as it took place on the 60th anniversary of the death of Turing, who laid the foundations of modern computing. 

Get a supercomputer for $99

Adapteva is selling a $99 parallel-processing board that lets you build a Linux supercomputer for less than $100.

Cheap, underpowered computers have been the new black since the $25 and $35 Raspberry Pi models appeared, so Adapteva is trying to flog a slightly more expensive gizmo in a crowded market.

However, Adapteva’s board is the first super-cheap supercomputer. Dubbed Parallella, it is the size of a Raspberry Pi but packs a significantly more powerful punch.

It has 1GB of RAM, two USB 2.0 ports, a microSD slot, an HDMI connection and a 10/100/1000 Ethernet port, all pretty standard. But Parallella also has an ARM A9 processor paired with a 64-core Epiphany Multicore Accelerator, which helps the board achieve around 90 gigaflops.

The board can’t replace a standard desktop or gaming rig, but it makes a more efficient number cruncher. It uses Ubuntu Linux 12.04 for its operating system.

Adapteva CEO Andreas Olofsson said that conventional computing had improved so quickly that, in most applications, there was no need for massively parallel processing.

“Unfortunately, serial processing performance has now hit a brick wall, and the only practical path to scaling performance in the future is through parallel processing. To make parallel software applications ubiquitous, we will need to make parallel hardware accessible to all programmers, create much more productive parallel programming methods, and convert all serial programmers to parallel programmers,” he said. 

It's back to the good old Crays

The development of Big Data has been a money spinner for the much written-off Cray.

Cray was a big name in the 1970s, when you wanted a computer the size of an office block to run your payroll. However, it disappeared into the background 20 years ago with the rise of the PC.

According to Reuters, Cray is surging back to prominence. Its shares have almost doubled over the last year.

The reason is that the explosion of data and the need to work out what it all means demands greater computing power.

Barry Bolding, a Cray vice president at the company’s Seattle headquarters, said that five years ago people thought they could run simulations on a laptop.

That might have been true at the time, but now raw data is being created in exabytes. More data means a bigger computer, and a bigger computer means more data, he said.

More than 2.5 exabytes of data are now generated every day, and the world’s capacity to store that data is doubling every 40 months, which all plays to Cray’s cunning plans.

Cray cabinets cost $500,000 each, and some of the bigger customers group 200 or more of them into massive supercomputers worth hundreds of millions of dollars.

Titan, completed by Cray last year, is the world’s third-fastest supercomputer; it takes up the space of a basketball court and can perform more than 20,000 trillion calculations a second.

Cray has 900 employees and a market value of around $940 million, and has changed ownership several times. In June it nicked long-term IBM customer the European Centre for Medium-Range Weather Forecasts.

Wall Street analysts are expecting revenue of $519 million this year, up 23 percent from 2012, with a gross profit margin around 34 percent.

Intel wants to examine your DNA

Fashion bag maker Intel is sprucing up supercomputer design so that some top boffins can have half a chance to understand DNA codes.

Chipzilla has signed a deal with imec and five Flemish universities to set up the ExaScience Life Lab on the imec campus in Leuven.

The objective of the collaboration is to come up with new supercomputer ideas that will generate breakthroughs in life sciences and biotechnology.

The supercomputers will be designed to accelerate the processing of entire genome sequences. At the moment an analysis takes approximately 48 hours, and the thinking is that computer design is falling behind, especially with an expected explosion of genome data in the coming years.

According to Intel, the lab will examine the use of computer simulations in the life sciences. Testing hypotheses by simulating cells and tissues on a computer, instead of through wet-lab experiments, saves a considerable amount of the time and cost associated with lab tests.

Intel lab manager Luc Provoost said that Chipzilla has an extensive network of research labs in Europe and that, once operational, the ExaScience Life Lab will be its European centre of excellence for high-performance computing in the life sciences.

Later the plan is to collaborate with Janssen Pharmaceuticals to build even more powerful DNA supercomputers.