Meta founder and CEO Mark Zuckerberg speaks during the Meta Connect event at Meta headquarters in Menlo Park, California, on Sept. 27, 2023.
Meta is spending billions of dollars on Nvidia’s popular computer chips, which are at the heart of artificial intelligence research and projects.
In an Instagram Reels post on Thursday, Meta CEO Mark Zuckerberg said the company’s “future roadmap” for AI requires it to build a “massive compute infrastructure.” That infrastructure, Zuckerberg said, will include 350,000 H100 graphics cards from Nvidia by the end of 2024.
Zuckerberg didn’t say how many of the graphics processing units (GPUs) the company has already purchased, but the H100 didn’t hit the market until late 2022, and even then supply was limited. Analysts at Raymond James estimate Nvidia is selling the H100 for $25,000 to $30,000, and on eBay the chips can cost over $40,000. If Meta were paying at the low end of that price range, the order would amount to close to $9 billion.
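The rough math behind that figure is straightforward: 350,000 units at the low end of Raymond James’ estimated price range comes to $8.75 billion. A quick back-of-the-envelope sketch (all figures are from the estimates above, not Meta’s actual pricing):

```python
# Back-of-the-envelope estimate of Meta's reported H100 spend.
# Unit count and per-chip prices come from the article; Meta's
# actual negotiated pricing is not public.
H100_COUNT = 350_000
LOW_PRICE_USD = 25_000   # low end of Raymond James' estimate
HIGH_PRICE_USD = 30_000  # high end of Raymond James' estimate

low_total = H100_COUNT * LOW_PRICE_USD    # 8,750,000,000
high_total = H100_COUNT * HIGH_PRICE_USD  # 10,500,000,000

print(f"${low_total / 1e9:.2f}B to ${high_total / 1e9:.2f}B")
# → $8.75B to $10.50B
```

At the low end this rounds to the “close to $9 billion” cited above; at the high end it exceeds $10 billion.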
Additionally, Zuckerberg said Meta’s compute infrastructure will contain “almost 600k H100 equivalents of compute if you include other GPUs.” In December, tech companies including Meta, OpenAI and Microsoft said they would use AMD’s new Instinct MI300X AI chips.
Meta needs these heavy-duty computer chips as it pursues research in artificial general intelligence (AGI), which Zuckerberg said is a “long term vision” for the company. OpenAI and Google’s DeepMind unit are also researching AGI, a futuristic form of AI that’s comparable to human-level intelligence.
Meta’s chief scientist Yann LeCun stressed the importance of GPUs during a media event in San Francisco last month.
“[If] you think AGI is in, the more GPUs you have to buy,” LeCun said at the time. Regarding Nvidia CEO Jensen Huang, LeCun said, “There is an AI war, and he’s supplying the weapons.”
In Meta’s third-quarter earnings report, the company said that total expenses for 2024 will be in the range of $94…