Google straps 16,000 processors together

Researchers working for the search engine outfit Google have created one of the world’s largest neural networks, which can identify whether or not something is a cat.

For several years Google has been building a brain simulator by connecting 16,000 computer processors, which they turned loose on the internet to learn on its own.

Rather than hunting for Sarah Connor, or plotting the end of humankind, the brain simulator hunted around the net for pictures of cats.

According to the New York Times, this is trickier than it sounds. The AI had to pick from a list of 20,000 distinct object categories before it could say “that is a cat”.

The Google research team, led by the Stanford University computer scientist Andrew Ng and the Google fellow Jeff Dean, used an array of 16,000 processors to create a neural network with more than one billion connections.

This was fed random image thumbnails extracted from 10 million YouTube videos.

The software-based neural network mirrored theories developed by biologists that suggest neurons are trained inside the brain to detect important objects.

Most machine vision technology depends on having humans “supervise” the learning process by labelling features. In this case the computer was given no help in identifying features.
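The unsupervised approach can be sketched with a toy autoencoder: a network trained only to reconstruct its unlabelled input, so the reconstruction error itself is the teaching signal and no human ever tags a feature. The data, sizes and training loop below are illustrative assumptions, nothing like Google’s billion-connection setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "images": 200 unlabelled 8x8 patches, flattened to 64-vectors.
X = rng.normal(size=(200, 64))

n_hidden = 16                      # number of feature detectors to learn
W = rng.normal(scale=0.1, size=(64, n_hidden))
lr = 0.01

def recon_loss(W):
    """Mean squared reconstruction error -- the only training signal."""
    H = np.tanh(X @ W)             # encode: detect features
    return np.mean((H @ W.T - X) ** 2)   # decode with tied weights

loss_before = recon_loss(W)
for _ in range(200):
    H = np.tanh(X @ W)
    err = H @ W.T - X              # no labels anywhere, just reconstruction
    # Gradient of the loss w.r.t. the tied weights: one term through
    # the encoder (via tanh), one through the decoder.
    grad = X.T @ ((err @ W) * (1 - H ** 2)) + err.T @ H
    W -= lr * grad / len(X)
loss_after = recon_loss(W)         # lower than loss_before after training
```

The columns of W end up as feature detectors shaped purely by the statistics of the data, which is the sense in which the software "learns on its own".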

Ng said it was a matter of throwing shedloads of data at the algorithm, letting the data speak, and having the software learn from it automatically.

At no point was the computer told “this is a cat”; it basically invented the concept of a cat.

The Google computer assembled a dreamlike digital image of a cat by employing a hierarchy of memory locations to successively cull out general features after being exposed to millions of images, which researchers believe mirrors what takes place in the brain’s visual cortex.
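That dreamlike image comes from asking the network a question in reverse: which input most strongly excites the cat neuron? For a single linear neuron with a norm constraint on the input, projected gradient ascent converges to the neuron’s own weight vector, rescaled. A minimal sketch, with a made-up 64-dimensional “neuron” standing in for Google’s real one:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=64)            # pretend "cat neuron" weights (assumption)

# Start from a random input on the unit sphere and repeatedly nudge it
# in the direction that raises the neuron's activation w @ x.
x = rng.normal(size=64)
x /= np.linalg.norm(x)
for _ in range(100):
    x += 0.1 * w                   # gradient of the activation w @ x is w
    x /= np.linalg.norm(x)         # project back onto the unit sphere

# x has converged to w / ||w||: the neuron's preferred stimulus.
```

In a deep network the same optimisation is run through many layers at once, and the resulting preferred stimulus is the ghostly cat face.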

Google will present the results of their work at a conference in Edinburgh, Scotland, where there are a lot of cats.