Jensen Huang, CEO of Nvidia ($NVDA), announced during his GTC 2024 keynote that OpenAI's latest model, presumed to be GPT-4 or possibly GPT-5, has 1.8 trillion parameters, a notable jump in language-model scale. Huang also hinted that a 'ChatGPT moment' for bots may be just around the corner, suggesting a breakthrough in conversational AI could be imminent. He added that training the model consumed an enormous amount of compute: 30 billion quadrillion floating-point operations, i.e. roughly 3×10^25 FLOPs (a total operation count, not FLOPS, which denotes a rate). The token figures reported from the talk were muddled: one post quoted the model at "1.7 trillion tokens", apparently conflating the 1.8 trillion parameter count with tokens, while the training corpus itself was put at 2-3 trillion tokens.
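As a rough plausibility check (not part of the keynote), these figures can be reconciled with the widely used heuristic that dense-transformer training costs about 6 FLOPs per parameter per token. A minimal sketch, assuming that 6·N·D rule applies and ignoring any mixture-of-experts sparsity:

```python
# Sanity check on Huang's GTC 2024 figures using the common
# ~6 * N * D estimate of transformer training compute.
# The 6*N*D rule is an assumption here, not something Huang stated,
# and it ignores mixture-of-experts sparsity.

params = 1.8e12   # N: 1.8 trillion parameters, per the keynote
tokens = 3.0e12   # D: upper end of the reported 2-3 trillion tokens

estimated = 6 * params * tokens   # ~6 FLOPs per parameter per token
quoted = 30e9 * 1e15              # "30 billion quadrillion" FLOPs

print(f"estimated training compute: {estimated:.2e} FLOPs")  # ~3.24e+25
print(f"quoted training compute:    {quoted:.2e} FLOPs")     # 3.00e+25
```

The two values land within about 10% of each other, which supports reading Huang's number as total training FLOPs rather than a throughput figure.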
Jensen Huang: OpenAI's latest model has 1.8 trillion parameters and required 30 billion quadrillion FLOPS to train https://t.co/Sz5G3imFgr
Jensen Huang of @nvidia just said @OpenAI “latest model” is 1.7 trillion tokens and used 2-3 trillion tokens of information for training… is he referring to #chatgpt4 or #chatgpt5…? #ai #llm #NVDA #NVIDIAconference https://t.co/X4Wh9m4bhe
Jensen Huang @nvidia $NVDA during his GTC 2024 keynote just said the "chatGPT moment" for bots may be just around the corner...