Google has developed its own custom chips to make its machine-learning algorithms run faster. For years Google faced a dilemma: if all of its users started using its voice-recognition services, it would have to massively expand its data centers. That is why it developed the Tensor Processing Unit, or TPU.
What is a Tensor Processing Unit?
Google never really divulged much about the TPUs, but now it is sharing some details. The TPU is a chip designed to accelerate the inference stage of deep neural networks, the phase in which an already-trained network makes predictions. In a paper published on Wednesday, the company reports that TPUs are 15 to 30 times faster than a contemporary Intel Haswell CPU or Nvidia K80 GPU on its inference workloads. The company also found that the TPU delivers 25 to 80 times better performance per watt than those chips.
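To make "inference" concrete: it is a forward pass through fixed, pre-trained weights, and the TPU speeds it up in part by doing the matrix multiplies in low-precision 8-bit integer arithmetic rather than 32-bit floating point. The sketch below is illustrative only; the layer size, weights, and quantization scheme are made-up assumptions, not Google's design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pre-trained weights (fixed at inference time) and one input sample.
weights = rng.standard_normal((4, 3)).astype(np.float32)
x = rng.standard_normal(4).astype(np.float32)

# Float32 forward pass, as a CPU or GPU would typically compute it.
y_fp32 = x @ weights

def quantize(a):
    """Map values onto signed 8-bit integers with a single scale factor."""
    scale = np.abs(a).max() / 127.0
    return np.round(a / scale).astype(np.int8), scale

# Quantized forward pass: multiply in integers, then rescale the result.
xq, sx = quantize(x)
wq, sw = quantize(weights)
y_int8 = (xq.astype(np.int32) @ wq.astype(np.int32)) * (sx * sw)

# The quantized result closely tracks the float32 one.
print(np.max(np.abs(y_fp32 - y_int8)))
```

The approximation error stays small because the dynamic range of each tensor is captured in its scale factor, which is the basic trade that makes integer inference hardware attractive.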
It is FAST!
This performance should help improve the user experience across the machine-learning applications the company is building. TPUs are also notably energy efficient, so they can be both cheaper and faster to run.
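The efficiency claim is about performance per watt: throughput divided by power draw. The numbers below are purely hypothetical, chosen to show how the metric is computed; none of them come from Google's paper.

```python
# Hypothetical throughput (operations/second) and power figures for three
# chip types. These are illustrative values, not measurements.
chips = {
    "CPU": {"ops_per_sec": 1.0e12, "watts": 100.0},
    "GPU": {"ops_per_sec": 3.0e12, "watts": 300.0},
    "TPU": {"ops_per_sec": 20.0e12, "watts": 75.0},
}

# Performance per watt = throughput / power.
perf_per_watt = {name: c["ops_per_sec"] / c["watts"] for name, c in chips.items()}

for name, p in perf_per_watt.items():
    print(f"{name}: {p / 1e9:.1f} GOPS/W")
```

With these made-up figures, the TPU column would come out roughly 27 times better than the CPU per watt, which is how a chip can be a bigger win on efficiency than on raw speed alone.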
“The point is, the internet takes time, so if you’re using an Internet-based server, it takes time to get from your device to the cloud, it takes time to get back,” said Norm Jouppi, a hardware engineer at Google. “Networking and various things in the cloud — in the data center — they take some time. So that doesn’t leave a lot of [time] if you want near-instantaneous responses.”
More on the way
The company also tested the TPUs against newer hardware, and it is possible that as the hardware world develops, the performance gap will narrow further. One of the paper's benchmark workloads, CNN1, is a convolutional neural network: where ordinary neural networks loosely mimic how the brain's neurons pass information, convolutional networks are modeled on the way the brain takes in visual information.
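The "visual" analogy comes from how a convolutional layer works: a small filter is swept across an image, so each output value depends only on a local patch, loosely like a receptive field in the visual cortex. Here is a minimal sketch of that operation; the image and filter values are invented for illustration and have nothing to do with the CNN1 benchmark itself.

```python
import numpy as np

# A tiny 5x5 "image" with a smooth left-to-right gradient.
image = np.arange(25, dtype=float).reshape(5, 5)

# A crude 3x3 vertical-edge filter: left column minus right column.
edge_filter = np.array([[1.0, 0.0, -1.0]] * 3)

# Slide the filter over every 3x3 patch (valid convolution, stride 1).
h = image.shape[0] - 2
w = image.shape[1] - 2
out = np.empty((h, w))
for i in range(h):
    for j in range(w):
        patch = image[i:i + 3, j:j + 3]  # local receptive field
        out[i, j] = np.sum(patch * edge_filter)

print(out)
```

Because the gradient in this toy image is constant, every output entry is the same; on a real image, the filter would light up wherever a vertical edge occurs. The heavy lifting in such layers is exactly the kind of dense multiply-accumulate work the TPU is built for.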
“As CNN1 currently runs more than 70 times faster on the TPU than the CPU, the CNN1 developers are already very happy, so it’s not clear whether or when such optimizations would be performed,” they wrote.