Chipzilla’s billion-dollar investment in Nervana might be the key to making its server chips more intelligent.
Intel is laying out its roadmap to advance artificial intelligence performance across the board and Nervana technology appears to be everywhere.
The high-performance silicon market is dominated by GPUs. However, with Nervana inside, Intel hopes its new corporate tech, with a fully optimised software and hardware stack, will give that business model a good kicking.
Nervana hardware will initially be available as an add-in card that plugs into a PCIe slot. The first Nervana silicon, codenamed Lake Crest, will make its way to select Intel customers in H1 2017.
Intel is also talking about Knights Mill, the next generation of the Xeon Phi processor family. Intel said that Knights Mill will deliver a 4x increase in deep learning performance compared to existing Xeon Phi processors, and that the combined solution with Nervana will offer orders-of-magnitude gains in deep learning performance.
Diane Bryant, Executive VP of Intel’s Data Center Group, said that the Intel Nervana platform will produce breakthrough performance and dramatic reductions in the time needed to train complex neural networks.
Intel CEO Brian Krzanich said that Nervana’s technologies will produce a 100-fold increase in performance in the next three years to train complex neural networks, enabling data scientists to solve their biggest AI challenges faster.
A partly Dell-powered supercomputer, Stampede, is being lauded tomorrow at its home, the Texas Advanced Computing Center (TACC) at the University of Texas at Austin.
Stampede has Dell’s PowerEdge servers under the bonnet and is the largest of the company’s public production cluster deployments so far. The supercomputer is supported by the National Science Foundation and has a 2.2 petaflop base cluster.
TACC also has what is currently the largest configuration of Intel Xeon Phi parallel coprocessors, managing just over seven petaflops of performance. All in, the integrated system has almost 10 petaflops, which Dell points out means the supercomputer can run nearly 10 quadrillion math operations a second.
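As a back-of-the-envelope check, the quoted figures do add up (a rough sketch using only the numbers in this article; one petaflop is 10^15 operations per second, and one quadrillion is also 10^15):

```python
# Sanity-check Stampede's combined throughput from the figures quoted
# above: a 2.2 petaflop base cluster plus "just over seven" petaflops
# from the Xeon Phi coprocessors (7.1 here is an illustrative value).
PETA = 10**15                 # one petaflop = 10^15 FLOPS; quadrillion = 10^15

base_pflops = 2.2             # Dell PowerEdge base cluster
phi_pflops = 7.1              # Xeon Phi coprocessors, "just over seven"

total_ops_per_sec = (base_pflops + phi_pflops) * PETA
print(f"{total_ops_per_sec:.2e} operations per second")
# prints 9.30e+15 -- i.e. almost 10 quadrillion maths operations a second
```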
Research is already being carried out on Stampede to predict the frequency of earthquakes in California, as well as to identify and image brain tumours, mixing MRI scan data with other biophysical models to chart tumour growth. The research is also focusing on designing nanocatalysts to capture CO2 from exhaust, to be converted into valuable substances used for industrial applications, Dell said.
While Stampede is not gunning for the top spot in the international supercomputing arms race, it’s clear the system has interesting applications. Supercomputers are increasingly being used to tackle tough questions and make complex calculations quickly, where traditional methods could take weeks for similar results – for instance, in DNA mapping.
Last year, our favourite supercomputer story was about a system running calculations to see if the entire universe is, in fact, a simulation created by supercomputers in the distant future.
The dedication event for Stampede will be held tomorrow at the J J Pickle Research Campus, Austin. Key stakeholders will be speaking at the event, including Michael Dell, Intel’s Diane Bryant, Congressman Lamar Smith and University of Texas at Austin president William Powers.
The Top 500 supercomputer list gave ample opportunity for bitter rivals Intel and AMD to talk up all the good they’re doing in that sector.
AMD issued a statement headlined: “AMD supercomputing leadership continues with broader developer ecosystem and latest top 500”. It keenly pointed out that 24 of the top 100 are powered by AMD, though Intel was pleased to announce its architecture powered 74 percent of all systems, and 77 percent of all new entries.
Intel talked up the fact that Europe’s fastest supercomputer – SuperMUC in Germany – makes use of the Intel Xeon E5 family. The system delivers 2.9 petaflops, which is a fair amount of flops. Meanwhile, the company has decided on the brand for its Many Integrated Core architecture products, which will be out by the end of 2012. Chipzilla has abandoned the Knights Corner name to go with ‘Xeon Phi’. Xeon Phi will be available in the PCIe form factor, packing more than 50 cores, a minimum of 8GB of GDDR5 memory, and 512-bit wide SIMD support.
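For context, that 512-bit SIMD width means each vector instruction can operate on a batch of values at once. The lane counts below are plain bit arithmetic, not Intel-supplied figures:

```python
# Lane counts for a 512-bit wide SIMD unit (bit arithmetic only;
# Intel has not detailed the instruction set here).
simd_width_bits = 512

doubles_per_vector = simd_width_bits // 64   # 64-bit double precision
singles_per_vector = simd_width_bits // 32   # 32-bit single precision

print(doubles_per_vector)  # prints 8
print(singles_per_vector)  # prints 16
```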
AMD pointed out that, working closely with its partners in HPC, there have been several new developments: LSTC’s LS-DYNA simulation software has been tuned for the AMD Opteron 6200 series, CAPS has added programming options for AMD GPUs, and Mellanox’s Connect-IB announcement promises to bring FDR 56Gb/s InfiniBand to AMD portfolios.
LS-DYNA is a finite element program that simulates complex problems and is used by the automotive, aerospace, construction, military, manufacturing, and bioengineering industries. The beta version is available now, while general availability should come in the third quarter of 2012.
Intel claims that, with its technology, the HPC industry will be tackling challenges like mapping the full human genome in 12 hours at under $1,000, compared to the two weeks at $60,000 that is necessary now. The company promises to deliver exascale performance by 2018.
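Taking Intel’s numbers at face value, the claim works out to roughly a 28-fold speed-up and a 60-fold cost reduction (a quick sketch using only the before/after figures Intel quotes above):

```python
# Compare today's genome-mapping figures with Intel's target,
# using the numbers quoted above.
current_hours, current_cost = 14 * 24, 60_000   # two weeks, $60,000
target_hours, target_cost = 12, 1_000           # 12 hours, under $1,000

print(f"speed-up: {current_hours / target_hours:.0f}x")        # prints speed-up: 28x
print(f"cost reduction: {current_cost / target_cost:.0f}x")    # prints cost reduction: 60x
```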
IBM nabbed the slot for most powerful supercomputer on the list.