Nvidia creates “miracle” deep learning chip

Nvidia chief executive Jen-Hsun Huang announced that the company has created a new chip for deep-learning computing, the Tesla P100, which he says took “five miracles” to build.

With 15 billion transistors, it’s the biggest FinFET chip Nvidia has ever made, Huang told the throngs at the GPU Technology Conference in San Jose, California. He unveiled the chip after noting that deep-learning artificial intelligence chips have already become the company’s fastest-growing business.

Huang is claiming a lot for the chip, saying it took “five miracles” to build. Not quite Jesus’s 37, but clearly Nvidia is catching up – although Huang’s definition of a miracle might be a little different from the Christian one.

“Three years ago, when we went all in, it was a leap of faith,” Huang said. “If we build it, they will come. But if we don’t build it, they won’t come.”

The chip has 15 billion transistors, or three times as many as most processors or graphics chips on the market, packed onto a 600 square millimetre die, and it can run at 21.2 teraflops at half precision. Huang said that several thousand engineers laboured on it for years.

“We decided to go all-in on A.I.,” Huang said. “This is the largest FinFET chip that has ever been done.”

Nvidia says it is shipping the P100 to IBM, HPE, Dell and Cray, as well as to AI and cognitive cloud players and key research institutions.

Huang showed a demo from Facebook that used deep learning to train a neural network to recognize landscape paintings. The trained network was then used to generate a landscape painting of its own.
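The generative trick Huang described is a standard one: a network trained to recognize images can be run “in reverse” by nudging an input image to amplify whatever the network responds to. The sketch below is not Facebook’s actual demo – the pretrained VGG16 model, the layer choice and the step sizes are all assumptions for illustration – but it shows the DeepDream-style version of that idea in PyTorch.

```python
# Minimal sketch: reuse a recognition network generatively by gradient
# ascent on its own activations (DeepDream-style). Model and all
# hyperparameters here are illustrative assumptions, not Nvidia's or
# Facebook's actual pipeline.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained image-recognition network; we only need its feature extractor.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

def dream(image_path, layer=20, steps=50, lr=0.05):
    """Amplify what the network 'sees' in an image at a chosen layer."""
    img = preprocess(Image.open(image_path).convert("RGB"))
    img = img.unsqueeze(0).to(device).requires_grad_(True)
    for _ in range(steps):
        activations = img
        for i, module in enumerate(vgg):
            activations = module(activations)
            if i == layer:
                break
        loss = activations.norm()   # push the chosen layer's response higher
        loss.backward()
        with torch.no_grad():
            # Normalised gradient-ascent step on the image itself.
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.clamp_(0, 1)
            img.grad.zero_()
    return T.ToPILImage()(img.squeeze(0).detach().cpu())

# Example usage (hypothetical file names):
# dream("landscape.jpg").save("dreamed_landscape.jpg")
```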

He said that deep learning has become a new computing platform, and the company is working with hundreds of startups in the space that plan to take advantage of it.

“Our strategy is to accelerate deep learning everywhere,” Huang said.

Nvidia has also built the DGX-1, a 170-teraflop supercomputer that packs eight of the Tesla P100 chips.

“This is a beast of a machine, the densest computer ever made,” he said.