Elon Musk and Larry Ellison Begged Nvidia CEO Jensen Huang for AI GPUs at Dinner
Larry Ellison and Elon Musk met Nvidia's CEO to request more AI GPUs. Oracle plans a Zettascale supercluster with 131,072 GPUs and aims to secure power with nuclear reactors.
During a recent earnings call, Oracle founder Larry Ellison revealed that he and Elon Musk had dinner with Nvidia CEO Jensen Huang, where they pressed their urgent need for Nvidia's AI GPUs. Ellison jokingly described the encounter as a plea for more GPUs: "Please take our money; no, take more of it."

The meeting appears to have paid off: Oracle announced plans for a Zettascale AI supercluster built from 131,072 Nvidia Blackwell GPUs (GB200 NVL72), which it says will deliver 2.4 ZettaFLOPS of AI performance, surpassing Musk's xAI Memphis Supercluster. To power its AI buildout, Oracle has also secured permits to build three small modular nuclear reactors, although those will take years to deploy; in the interim, the company may rely on mobile generators to meet energy demands.

Despite being smaller than competitors such as Amazon Web Services and Microsoft Azure, Oracle Cloud Infrastructure (OCI) is positioning itself as a flexible alternative that can meet specific customer needs, including offline servers for enhanced security. Ellison stressed how competitive the AI race has become, estimating that training frontier AI models could cost around $100 billion over the next three years and underscoring the pressure on companies to lead in AI processing.
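Those headline numbers invite a quick back-of-the-envelope check. The Python sketch below is a rough estimate, not anything stated in the article: it assumes the zettascale figure refers to very low-precision (sparse FP4-class) throughput and assumes roughly 120 kW per 72-GPU GB200 NVL72 rack, then works out the per-GPU throughput implied by 2.4 ZettaFLOPS across 131,072 GPUs and a ballpark power draw for that many racks.

```python
# Back-of-the-envelope check of Oracle's headline figures.
# The rack power figure and the precision interpretation are assumptions,
# not numbers taken from the article.

TOTAL_FLOPS = 2.4e21          # 2.4 ZettaFLOPS, as announced
GPU_COUNT = 131_072           # Blackwell GPUs in the planned supercluster

implied_per_gpu = TOTAL_FLOPS / GPU_COUNT
print(f"Implied per-GPU throughput: {implied_per_gpu / 1e15:.1f} PFLOPS")
# ≈ 18.3 PFLOPS per GPU, which only lines up with very low-precision
# (sparse FP4-class) figures, not FP16/BF16 training throughput.

GPUS_PER_RACK = 72            # GB200 NVL72 rack size
RACK_POWER_KW = 120           # assumed per-rack power draw (rough estimate)

racks = GPU_COUNT / GPUS_PER_RACK
print(f"Racks: {racks:.0f}, GPU power alone: {racks * RACK_POWER_KW / 1000:.0f} MW")
# ≈ 1,820 racks and on the order of 220 MW before cooling and networking,
# which is why dedicated power sources such as small modular reactors come up.
```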
- Larry Ellison and Elon Musk sought Nvidia GPUs during a dinner with Jensen Huang.
- Oracle plans to build a Zettascale AI supercluster with 131,072 Nvidia GPUs.
- The company has secured permits for three modular nuclear reactors to meet power needs.
- Oracle Cloud Infrastructure aims to provide flexibility and security for clients.
- Training frontier AI models is projected to cost $100 billion in the next three years.
Related
Nvidia and Microsoft leapfrogged Apple
Microsoft and Nvidia outpace Apple in market value through AI focus. Concerns about AI sustainability due to energy use. Investors demand real AI results; caution grows. Industry's success tied to energy efficiency and results delivery. Apple trails in AI competition.
xAI's Memphis Supercluster has gone live, with up to 100,000 Nvidia H100 GPUs
Elon Musk launches xAI's Memphis Supercluster with 100,000 Nvidia H100 GPUs for AI training, aiming for advancements by December. Online status unclear; SemiAnalysis estimates 32,000 GPUs operational. Plans for a 150 MW data center expansion are pending utility agreements. xAI partners with Dell and Supermicro, targeting full operation by fall 2025. Musk's humorous launch time noted.
Four companies are hoarding billions of dollars' worth of Nvidia GPU chips. Meta has 350K of them
Meta has launched Llama 3.1, a large language model outperforming GPT-4o on some benchmarks. The model's development involved significant investment in Nvidia GPUs, reflecting high demand for AI training resources.
So Who Is Building That 100k GPU Cluster for xAI?
Elon Musk's xAI seeks to build a 100,000 GPU cluster for AI projects, requiring a significant number of Nvidia GPUs. A "Gigafactory of Compute" is planned in Memphis, aiming for completion by late 2025.
Oracle designing data center powered by nuclear reactors
Oracle is developing a data center powered by three small modular nuclear reactors to meet AI-driven electricity demands. Commercial deployment of this technology in the U.S. is expected in the 2030s.