Musk’s xAI plans supercomputer to enhance AI chatbot Grok
The Grok 2 model required 20,000 Nvidia H100 GPUs, while future versions may require up to 100,000 chips.
Elon Musk has revealed that his AI startup, xAI, plans to construct a supercomputer to enhance its AI chatbot Grok, aiming to have it running by fall 2025. According to reports, Musk suggested that xAI might collaborate with Oracle to develop this vast computational resource. When complete, the supercomputer would utilise Nvidia’s flagship H100 GPUs and be four times larger than the biggest GPU clusters currently in operation.
Training the Grok 2 model already required about 20,000 Nvidia H100 GPUs, and Musk anticipates that future models, such as Grok 3, will need around 100,000 of these chips. Nvidia’s H100 GPUs dominate the AI data centre chip market and are in such high demand that they remain difficult to procure.
Musk established xAI last year to compete with AI powerhouses such as Microsoft-backed OpenAI and Alphabet’s Google. His ambitious plan to build the supercomputer underscores his commitment to advancing AI technology and maintaining a competitive edge in a rapidly evolving industry.