Tag: HPC

China triples supercomputer share

The latest TOP500 list, which twice yearly ranks the world’s 500 most powerful supercomputers, shows that China has tripled its number of supercomputers, or high performance systems.

Meanwhile, the authors of the report said that the USA has fallen to its lowest point since the list was first compiled in 1993.

The report notes that China is also fast becoming a serious manufacturer of high performance computers (HPCs), with “multiple” vendors becoming more active.

Top of the list is China’s Tianhe-2, which has kept its position as the world’s most powerful supercomputer. Tianhe-2 was developed by China’s National University of Defence Technology.

Tianhe-2 has a performance of 33.86 petaflop/s on the Linpack benchmark. Second on the list is a Cray XK7 system called Titan, deployed at the US Department of Energy’s Oak Ridge National Laboratory.
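
Petaflop/s figures like that come from the Linpack benchmark, which times how fast a machine can solve a very large dense system of linear equations. The sketch below shows, in rough outline only, how such a figure is derived on a single machine; the problem size and the use of Python with numpy are illustrative assumptions, since the real benchmark runs a heavily tuned, distributed LU factorisation across the whole system.

```python
# Illustrative only: estimate flop/s from a dense solve, the operation Linpack times.
# The problem size n is an arbitrary choice for this sketch; HPL runs use far larger n.
import time
import numpy as np

n = 4000
A = np.random.rand(n, n)
b = np.random.rand(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                # LU factorisation plus triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2  # standard Linpack operation count
print(f"{flops / elapsed / 1e9:.1f} gigaflop/s on this machine")
```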

Britain doesn’t come anywhere in the top 10.

Dell pushes high performance computing

Dell said that it has introduced systems intended to make high performance computing (HPC) and data analytics suitable for mainstream adoption.

The HPC System Portfolio includes machines which are easy to configure and design, domain specific designs tuned by Dell engineers and aimed at scientific, engineering and analytics workloads, and fully validated systems.

Application specific systems include an HPC system for genomic data analysis, developed through Dell’s relationship with the Translational Genomics Research Institute.

HPC for manufacturing is for enterprises running complicated manufacturing design simulations using workstations, clusters or both.

The HPC system for research is, as its name implies, for complex scientific analysis.

Dell also said it has expanded its HPC lab in Austin, Texas, in conjunction with Intel.

Dell said it is the first major original equipment manufacturer (OEM) to join Intel’s Fabric Builders programme, giving it access to Intel’s Xeon Phi CPU family.

Dell claims server shipments booming

A senior executive at Dell claimed that his company has now taken second place globally in shipments of servers and in revenues.

Speaking to Digitimes, Ashley Gorakhpurwalla, general manager of servers at Dell, said his company has narrowed the gap with arch rival Hewlett-Packard.

Gorakhpurwalla claimed that Dell is now the number one server vendor in North America and in China.

Nor does Dell appear to view original design manufacturers (ODMs) such as Quanta as a threat to its own business.

Dell believes its server growth in the high performance computing (HPC) sector is boosting its revenues worldwide.

Open source vendors don’t bother Dell either, said Gorakhpurwalla.

Nvidia sees disruptions in PC market as an opportunity

Nvidia seems to think that no crisis should ever go to waste, hence it believes it can capitalize on disruptions in the PC market and weather the storm with ease. We are of course talking about Tegra, Nvidia’s foray into the ARM SoC market, which got off to a slow start but seems to be here to stay.

When the going gets tough, AMD can fall back on its console design wins, whereas Nvidia had to be a bit more creative. Speaking at a Barclays event, meticulously transcribed by Seeking Alpha, Nvidia VP of investor relations Rob Csongor said the company has the ability to make “disciplined” investments for growth by leveraging its core R&D. In other words, the company can simply rehash GPU R&D to come up with competitive mobile SoC designs, at a fraction of the cost.

Csongor said Nvidia views disruptions in the PC market as opportunities and has strategies to “drive” the disruptions to achieve its own goals, which sounds a bit like the good old “if you can’t beat them, join them” approach. 

“The net effect of all of this, with mobile and cloud disrupting the PC, we believe this has created enormous opportunities for Nvidia. Essentially by 2015, we believe there will be over 3 billion HD devices – in other words, imagine any device that’s high definition becomes an opportunity for Nvidia to extend its GPU into,” he said.

Csongor went on to point out that Nvidia has spent over $6 billion in R&D for visual computing over the last decade. 

“When we develop a Tegra processor to go and target the mobile market, we’re leveraging a lot of R&D investments that we’ve done in GPU,” he said.

While this is true, it should be noted that the first three generations of Tegra chips had rather disappointing graphics and were routinely outperformed by SoCs designed by Apple, Samsung, Qualcomm and others. However, with Tegra 4, Nvidia hopes to turn things around, as it is supposed to feature the fastest GPU in the ARM universe. In addition, the Tegra 4i and future Tegra chips will also have LTE on board.

Csongor pointed out that Nvidia is rapidly expanding beyond its traditional PC market, with its first handheld console and other Tegra 4 devices, including car infotainment systems, hybrid tablets and high performance computers based on GPU-derived chips. 

US government calls for help from Intel, AMD

Intel and AMD have both landed multi-million dollar contracts from the US Department of Energy to crank up research into high performance computing.

AMD announced earlier this week that it will undertake development of HPC technologies to move towards the future of exascale computing. This means that AMD will be handed a cheque for $12.6 million to continue research into processor and memory technologies.

Now Intel has announced that it too has bagged a contract from the DOE to conduct exascale computing research. As part of the government’s Fast Forward programme, Intel’s Intel Federal subsidiary has been given $19 million for its own research.

Intel says that it will be pushing towards exascale computing by the end of the decade, with research going into its Xeon chips.

We doubt that the two firms will be sharing too many notes as part of the programme, though both will be joining up with a number of academics and industry figures as part of the joint public-private project.

Advances in HPC should both boost national infrastructure and create benefits for business, giving access to computational power thousands of times higher than today’s, with similar power consumption.

In the UK a number of HPC facilities aimed at business use have also been unveiled recently.

UK's fastest GPU-based supercomputer unveiled

Two of the UK’s most powerful supercomputers will whirr into action tomorrow, giving businesses and academics access to cutting-edge processing power.

One is an HP machine with Nvidia’s Tesla GPU accelerators on board, named Emerald, which will come online alongside the Iridis 3, and will be used for a variety of number-crunching applications.

These will include healthcare studies into Tamiflu and swine flu, as well as helping deploy the world’s most powerful telescope, simulating 4G communications networks, and more.

They will also be used for business applications, giving SMEs a leg up in using the power of high performance computing, and can be used for testing new products.

The supercomputers were funded by a £3.7 million grant from the Engineering and Physical Sciences Research Council – part of a £145 million government investment in e-infrastructure. Both the supercomputers will be unveiled at the Science and Technology Facilities Council’s Rutherford Appleton Laboratory (RAL).

This is where Emerald will be housed, while the Iridis 3 is being hosted by the University of Southampton.

Dr Lesley Thompson, director of EPSRC’s Research Base, said that the new supercomputers are crucial to “maintaining the UK’s leading science base and underpinning our national competitiveness and economic recovery”.

Universities minister David Willetts heralded the supercomputers as keeping the UK at the cutting edge of science.

The announcement comes just a few days after the news that Cambridge University and Imperial College London have teamed up to unveil another Nvidia supported supercomputer – the CORE. 

British Academia unveils business supercomputer

Cambridge University and Imperial College London have launched a supercomputer named CORE, which they say is the UK’s most advanced high performance computing platform and will be available to both private industry and academia.

The system is powered by over 22,000 Intel processor cores, which run alongside more than 3 petabytes of high performance file storage and one of the UK’s biggest Nvidia GPU clusters. Its cloud platform amounts to 300 teraflops, the university says.

CORE is part of the Department for Business, Innovation and Skills e-infrastructure expansion programme, which is seeking to introduce HPC and big data systems into a wider UK framework, accessible to both industry and universities.

As well as running complex simulations, CORE has already been used by partners such as Rolls-Royce to build what the university says is a “real competitive advantage”.

Dr Peter Haynes, CORE director from the departments of materials and physics at Imperial College London, said that for the UK, CORE is “completely unique in terms of its scale and breadth of knowledge”. More significantly, Haynes believes, the system has built a strong track record to date and is one of the most effective business-ready e-infrastructure services available in the UK.

The approach to public-private supercomputing is not new, but it is an idea that is gaining momentum. 

CORE will be for hire to partners who want to deploy their own in-house systems, the university said. That will include consultancy from procurement and design up to project management, analytics, and optimisation.

Dr Paul Calleja, CORE director at Cambridge, claimed that CORE proves UK leadership in high performance computing and big data design, for SMBs and enterprise-scale customers, as well as in life sciences and materials modelling.  

Intel takes majority of supercomputer top 500

The Top 500 supercomputer list gave ample opportunity for bitter rivals Intel and AMD to talk up all the good they’re doing in that sector.

AMD issued a statement headlined: “AMD supercomputing leadership continues with broader developer ecosystem and latest top 500”. It keenly pointed out that 24 of the top 100 are powered by AMD, though Intel was pleased to announce its architecture powered 74 percent of all systems, and 77 percent of all new entries.

Intel talked up the fact that Europe’s fastest supercomputer – SuperMUC in Germany – makes use of the Intel Xeon E5 family. The system delivers 2.9 petaflops, which is a fair amount of flops. Meanwhile, the company has decided on the brand for its Many Integrated Core architecture products. They will be out by the end of 2012. Chipzilla has abandoned the Knights Corner name to go with ‘Xeon Phi’. Xeon Phi will be available in the PCIe form factor, holding over 50 cores and a minimum of 8GB of GDDR5 memory, plus 512-bit wide SIMD support.
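
As a rough guide to what those specifications could add up to, the sketch below multiplies core count, vector width and clock speed into a theoretical peak figure. Only the 512-bit vector width comes from the announcement; the 60-core count, 1.05GHz clock and the assumption of double precision with fused multiply-add are our own illustrative guesses, not Intel figures.

```python
# Hedged, illustrative arithmetic: peak double precision throughput from vector
# width, core count and clock. Core count and clock below are assumptions.
VECTOR_BITS = 512
DOUBLE_BITS = 64

def peak_gflops(cores, clock_ghz, flops_per_lane_per_cycle=2):
    # 2 flops per lane per cycle assumes fused multiply-add
    lanes = VECTOR_BITS // DOUBLE_BITS      # 8 doubles per 512-bit vector
    return cores * lanes * flops_per_lane_per_cycle * clock_ghz

# Assumed figures purely for illustration: 60 cores at 1.05GHz
print(peak_gflops(cores=60, clock_ghz=1.05))   # 1008.0 gigaflop/s, about a teraflop
```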

AMD pointed out that, working closely with its partners in HPC, there have been several new developments, including support in LSTC’s LS-DYNA simulation software for the AMD Opteron 6200 series, as well as programming options for AMD GPUs from CAPS, and Mellanox’s Connect-IB announcement that promises to bring FDR 56Gb/s InfiniBand to AMD portfolios.

LS-DYNA is a finite element program that simulates complex problems and is used by the auto, aerospace, construction, military, manufacturing, and bioengineering industries. The beta version is available now, while general availability should come in the third quarter of 2012.

Intel claims that, with its technology, the HPC industry will be tackling challenges like mapping the full human genome in 12 hours at under $1,000, compared to the two weeks and $60,000 it takes now. The company promises to deliver exascale performance by 2018.
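
Taken at face value, the claim amounts to roughly a 28-fold cut in time and a 60-fold cut in cost; the quick back-of-the-envelope check below uses only the figures quoted above.

```python
# Back-of-the-envelope check of Intel's genome claim, using the figures above.
current_hours, current_cost = 14 * 24, 60_000   # two weeks at $60,000
target_hours, target_cost = 12, 1_000           # 12 hours at under $1,000

print(current_hours / target_hours)   # 28.0 -> about 28x faster
print(current_cost / target_cost)     # 60.0 -> 60x cheaper
```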

IBM nabbed the slot for most powerful supercomputer on the list.

UK gov splashes out £11m on climate change supercomputing

The British government is set to spend £11 million on IBM supercomputing capacity to model climate change more effectively.

As part of a £60 million investment, the Department of Energy and Climate Change (DECC) is looking to improve the UK’s ability to understand and prepare for climate change.

Much of the cash for development, around £50 million, will go to the Met Office Hadley Centre to aid climate research and modelling up to 2015.

Approximately £11 million has been spent on High Performance Computing.  This basically means supercomputing capacity and the hardware necessary for climate modelling. With a load of new kit, it seems that DECC is hoping to give an even more accurate reading of when the world’s penguin population will croak and just when progress can begin for vineyards in the Hull region.

According to DECC, the new investment has taken the form of eight supernodes (32 drawers) of IBM Power775 supercomputer servers.

It also includes data archive storage for extra HPC hardware – 33 petabytes of storage, three servers, 5760 media tapes and two tape frames.

The full DECC contribution was £7.43 million, for six supernodes, with the Department for Environment, Food, and Rural Affairs (DEFRA) chipping in £3.76 million for two supernodes and the data archive storage.

The idea behind the climate modelling is to help make businesses understand threats better, and provide more evidence to support greater use of renewable energy.

Universities Minister David “Two Brains” Willetts said that supercomputing is “fundamental to modern research”, especially with increasing data complexity.

TechEye approached the Met Office about ‘The Penguin Question’ – to find out what the supercomputer will do exactly, and whether it can give us a precise date for the ice caps melting – but we are yet to receive comment. 

Nvidia invites global boffins to GPU love-fest

Nvidia is wheeling out its CEO Jen-Hsun Huang to kick-start the GTC Asia event, part of its GPU Technology Conference series, looking at and promoting – guess what? – GPUs.

All manner of boffins will be attending, including representatives from HP Labs, Harvard University, Shanghai Jiao Tong University, Swiss National Supercomputing Center, Tokyo Institute of Technology, Chinese Academy of Sciences, Institute of Process Engineering and more.

The whole idea of GTC Asia, Nvidia says, is to provide a platform for developers, programmers and research scientists to share their findings on complex computational problems tackled using GPUs.

As you can imagine, with that many boffins in tow to speak, there will be a lot of talk about how GPUs can advance scientific research and other areas of academia.

There will also be an emerging companies summit, which Nvidia promises will show off start-ups using GPUs to push forward modern computing.

Potential CUDA boffins of the future are invited to indulge in a CUDA student workshop.

Nvidia is certain that the world needs its almighty GPUs if it’s going to achieve that lofty goal of exascale computing.