Nvidia builds huge neural network

Graphics chip maker Nvidia has revealed that it has helped Stanford University create the world’s largest artificial neural network built to model how the human brain learns.

The network is 6.5 times bigger than the previous record-setting network developed by Google in 2012.

Neural networks are capable of “learning” how to model the behaviour of the brain. They can recognise objects, characters, voices and audio in the same way that humans do.
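The "learning" the article describes can be sketched at the smallest possible scale. The following Python/NumPy snippet (an illustration, not the Stanford or Google code) trains a single artificial neuron, by gradient descent, to reproduce the logical AND function; the billion-parameter networks in the story apply the same principle at vastly greater scale.

```python
# Minimal sketch: one artificial neuron "learning" logical AND.
# The real networks in the article have billions of such parameters.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])  # target outputs: logical AND

w = rng.normal(size=2)  # weights: the parameters the network learns
b = 0.0                 # bias term

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    p = sigmoid(X @ w + b)          # forward pass: current predictions
    grad = p - y                    # gradient of the cross-entropy loss
    w -= 0.5 * (X.T @ grad) / 4.0   # nudge weights to reduce the error
    b -= 0.5 * grad.mean()          # nudge bias likewise

preds = (sigmoid(X @ w + b) > 0.5).astype(int)
print(preds.tolist())
```

After training, the neuron's predictions match the AND truth table: it has "learned" the task purely by adjusting its parameters from examples, which is the behaviour the large-scale networks exhibit for objects, characters and voices.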

Creating large-scale neural networks is extremely computationally expensive. Google used 1,000 CPU-based servers, or 16,000 CPU cores, to develop its neural network, which taught itself to recognise cats in a series of YouTube videos.

The Stanford team, led by Andrew Ng, director of the university’s Artificial Intelligence Lab, created an equally large network with only three servers using Nvidia GPUs to accelerate the processing of the big data generated by the network.

Using 16 Nvidia GPU-accelerated servers, the team then created an 11.2 billion-parameter neural network, 6.5 times bigger than Google's.

The bigger and more powerful the neural network, the more accurate it is likely to be in tasks such as object recognition, enabling computers to model more human-like behaviour.

Sumit Gupta, Nvidia’s general manager of the Tesla Accelerated Computing Business Unit, said GPU accelerators can bring large-scale neural network modelling to the masses.

Now any researcher or company can use machine learning to solve all kinds of real-life problems with just a few GPU-accelerated servers.