Google claims its TPU improves machine learning

Google claims that its Tensor Processing Unit (TPU) advances machine learning capability by roughly three chip generations.

Google CEO Sundar Pichai told Google's I/O developer conference that TPUs deliver an order of magnitude higher performance per watt than any commercially available GPU or FPGA.

Pichai said the chips powered the AlphaGo computer that beat Lee Sedol, the world champion of the notoriously complex board game Go. Google is still not going into details of the Tensor Processing Unit, but the company did disclose a little more information in a blog post.

“We’ve been running TPUs inside our data centres for more than a year, and have found them to deliver an order of magnitude better-optimised performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore’s Law),” the blog said. “TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation. Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models, and apply these models more quickly, so users get more intelligent results more rapidly.”
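The blog's point about tolerating "reduced computational precision" is the idea behind quantisation: storing and multiplying weights in a narrow integer format instead of full float32, which needs far less silicon per operation. A minimal NumPy sketch of the trade-off (an illustration of the general technique, not Google's actual hardware arithmetic):

```python
import numpy as np

def quantize_int8(w):
    """Map float32 values to int8 using a per-tensor scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 values from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 4)).astype(np.float32)
x = rng.standard_normal(4).astype(np.float32)

q, scale = quantize_int8(weights)

exact = weights @ x                # full float32 matrix-vector product
approx = dequantize(q, scale) @ x  # same product with int8-quantised weights

# The low-precision result tracks the exact one closely -- the small
# accuracy loss buys much cheaper arithmetic, as the blog describes.
print(np.max(np.abs(exact - approx)))
```

The int8 weights occupy a quarter of the memory of float32 and integer multipliers need far fewer transistors, which is the "more operations per second into the silicon" the blog refers to.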

The tiny TPU fits into a hard disk drive slot in a data centre rack and has already been powering RankBrain and Street View, the blog said.

What Google is not saying is what a TPU actually is, or whether it will replace the CPU or GPU. Word on the street is that the TPU could be a chip that runs machine learning models which are first trained on more power-hungry GPUs and CPUs.