{"componentChunkName":"component---src-templates-blog-post-js","path":"/blog/google-more-than-doubles-its-ai-chip-performance-with-tpu-v4/","result":{"data":{"site":{"siteMetadata":{"title":"No Frills News"}},"contentfulNfnPost":{"postTitle":"Google More Than Doubles Its AI Chip Performance with TPU V4","slug":"google-more-than-doubles-its-ai-chip-performance-with-tpu-v4","createdLocal":"2021-05-19 14:31:14.218640","publishDate":"2021-05-18 18:30:14+00:00","feedName":"Image Recognition","sourceUrl":{"sourceUrl":"https://www.datacenterknowledge.com/machine-learning/google-more-doubles-its-ai-chip-performance-tpu-v4"},"postSummary":{"childMarkdownRemark":{"html":"<p>TPU V4 (TPU stands for Tensor Processing Unit) reaches an entirely new height in computing performance for AI software running in Google data centers.\nA single TPU V4 pod, a cluster of interconnected servers combining about 500 of these processors, is capable of 1 exaFLOP of performance, Google CEO Sundar Pichai said in his livestreamed Google I/O keynote Tuesday morning.\nTPU V4 pods will be deployed at Google data centers “soon,” Pichai said.\nBesides using these systems for its own AI applications, such as search suggestions, language translation, and its voice assistant, Google rents TPU infrastructure, including entire TPU pods, to Google Cloud customers.\nTPU V4 instances will be available to Google Cloud customers later this year, Pichai said.</p>"}}}},"pageContext":{"slug":"google-more-than-doubles-its-ai-chip-performance-with-tpu-v4"}}}