OpenAI said it has no current plans to deploy Google’s in-house Tensor Processing Units at commercial scale, clarifying that the chips are only being used in early testing. A spokesperson told Reuters on 30 June that introducing new hardware broadly would require extensive architectural and software changes. The ChatGPT maker remains reliant on Nvidia graphics processors and AMD accelerators to meet growing demand and continues to rent capacity from cloud specialist CoreWeave. OpenAI is simultaneously designing its own silicon and expects to reach the “tape-out” design-completion milestone later this year. Reuters had reported earlier this month that OpenAI had signed up for Google Cloud services, fuelling speculation that the company might shift more workloads to its rival’s TPUs. Google declined to comment on the latest clarification.