Google unveiled its latest Tensor Processing Unit (TPU), named Ironwood, at the Google Cloud Next 2025 event. The seventh-generation TPU is designed specifically for inference, the computation that serves trained models to users in AI applications such as ChatGPT. Ironwood offers 192 GB of memory per chip, six times that of its predecessor, Trillium, allowing larger models and datasets to be processed on-chip and reducing the need for frequent data transfers. Google also announced the integration of Lyria into Vertex AI, which it says makes Vertex AI the only platform offering generative media models across video, image, speech, and music. In addition, the company released a 69-page prompt engineering masterclass and a 9-hour training course aimed at improving AI prompting skills. With these advancements, Google aims to reclaim its position in a competitive AI landscape that Microsoft has dominated in recent years.
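To see why per-chip memory matters for inference, here is a minimal back-of-the-envelope sketch. The 192 GB (Ironwood) and 32 GB (192 GB ÷ 6, implied for Trillium) figures come from the announcement above; the 70-billion-parameter model and 2-bytes-per-parameter precision are hypothetical examples, not Google's numbers.

```python
import math

def chips_needed(params_billions: float, bytes_per_param: int,
                 chip_memory_gb: float) -> int:
    """Minimum chips required just to hold the model weights.

    1 billion parameters at N bytes each is roughly N GB of weights,
    so chips = ceil(weight_gb / per-chip memory).
    """
    weight_gb = params_billions * bytes_per_param
    return max(1, math.ceil(weight_gb / chip_memory_gb))

# Hypothetical 70B-parameter model served at 16-bit (2 bytes/param) = ~140 GB:
print(chips_needed(70, 2, 192))  # Ironwood-class chip: fits on 1 chip
print(chips_needed(70, 2, 32))   # Trillium-class chip: needs 5 chips
```

Fewer chips per model means fewer cross-chip transfers of activations, which is the "reducing data transfer needs" benefit the summary describes.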
I took Google’s 9-hour prompt engineering course so you don’t have to: Here’s a 10-min summary. https://t.co/EF99328QWB
Google unveils new prompt engineering playbook: 10 key points on mastering Gemini, other AI tools. https://t.co/geTPsIQgU8
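A recurring pattern in prompt-engineering guides of this kind is structuring a prompt into labeled sections (role, context, task, output format) rather than writing one unstructured request. The sketch below illustrates that general pattern; the section names and wording are illustrative assumptions, not taken from Google's playbook itself.

```python
def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble a structured prompt from labeled sections.

    Separating role, context, task, and output format makes the request
    explicit and easy to revise one section at a time.
    """
    sections = [
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Output format: {output_format}",
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    role="You are a senior data analyst.",
    context="Quarterly sales figures are provided as CSV below.",
    task="Summarize the three largest revenue changes quarter over quarter.",
    output_format="A bulleted list, one line per change.",
)
print(prompt)
```

The resulting string can be sent to any chat model (Gemini included); the structure, not the API, is the point here.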
.@Google's new TPU Ironwood, built for the "age of inference". The performance numbers are unbelievable 🔥 - 192 GB per chip, 6x that of Trillium, which enables processing of larger models and datasets, reducing the need for frequent data transfers and improving https://t.co/EdcuWcSm8w https://t.co/ZOFSKtcr2B