- Meta will deploy millions of Nvidia processors over the next few years
- Meta plans to use Nvidia’s Grace CPUs in standalone servers for the first time
- The rollout includes Nvidia’s Blackwell and upcoming Vera Rubin AI accelerators
Meta Platforms Inc. has agreed to deploy "millions" of Nvidia Corp. processors over the next few years, tightening an already close relationship between two of the biggest companies in the artificial intelligence industry.
Meta, which accounts for about 9% of Nvidia's revenue, is committing to use more AI processors and networking equipment from the supplier, according to a statement Tuesday. For the first time, it also plans to rely on Nvidia's Grace central processing units, or CPUs, at the heart of standalone servers.
The rollout will include products based on Nvidia's current Blackwell generation and the forthcoming Vera Rubin design of AI accelerators.
"We're excited to expand our partnership with Nvidia to build leading-edge clusters using their Vera Rubin platform to deliver personal superintelligence to everyone in the world," Meta Chief Executive Officer Mark Zuckerberg said in the statement.
The pact reaffirms Meta's loyalty to Nvidia at a time when the AI landscape is shifting. Nvidia's systems are still considered the gold standard for artificial intelligence infrastructure — and generate hundreds of billions of dollars in revenue for the chipmaker. But rivals are now offering alternatives, and Meta is working on building its own in-house components.
Shares of Nvidia and Meta both rose about 1% in late trading after the agreement was announced. Advanced Micro Devices Inc., Nvidia's rival in AI processors, fell around 3%.
Nvidia's AI accelerators, the chips that help develop and run artificial intelligence models, fetch an average of $16,061 apiece, according to a recent IDC estimate. That means a million of the chips would cost more than $16 billion — and that doesn't account for the higher price of newer versions or the other Nvidia equipment that Meta is buying.
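As a rough illustration of that arithmetic, using the IDC average price cited above (the true total would be higher, since newer chips cost more and the figure excludes networking gear):

```python
# Back-of-the-envelope cost of one million accelerators at the
# IDC-estimated average price of $16,061 apiece.
avg_price_usd = 16_061
chips = 1_000_000

total_usd = avg_price_usd * chips
print(f"${total_usd / 1e9:.3f} billion")  # just over $16 billion
```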
But Meta was already the second-largest buyer of Nvidia products. Its purchases accounted for about $19 billion of Nvidia's revenue in the last fiscal year, according to data compiled by Bloomberg.
Ian Buck, Nvidia's vice president of accelerated computing, said the two companies aren't putting a dollar figure on the latest commitment or laying out a timeline.
Buck argues that only Nvidia can offer the breadth of components, systems and software that a company aspiring to lead in AI needs. Still, it's reasonable for Meta and others to test alternatives, he said.
Zuckerberg, meanwhile, has made AI the top priority at Meta, pledging to spend hundreds of billions of dollars to build the infrastructure needed to compete in this new era.
Meta has already projected record spending for 2026, with Zuckerberg saying last year that the company would put $600 billion toward US infrastructure projects over the next three years. Meta is building several gigawatt-sized data centers around the country, including in Louisiana, Ohio and Indiana. One gigawatt is roughly the amount of power needed to supply 750,000 homes.
Buck stressed that Meta will be the first large data center operator to use Nvidia's CPUs in standalone servers. Typically, Nvidia offers this technology in combination with its high-end AI accelerators — chips that owe their lineage to graphics processors.
This shift represents an encroachment into territory dominated by Intel Corp. and AMD. It also provides an alternative to some of the in-house chips that are designed by large data center operators, such as Amazon.com Inc.'s Amazon Web Services.
Buck said the uses for such chips are only growing. Meta, owner of Facebook and Instagram, will use the chips itself and also rely on Nvidia-based computing capacity offered by other companies.
Nvidia CPUs will be increasingly used for tasks such as data manipulation and machine learning, Buck said.
"There's many different kinds of workloads for CPUs," Buck said. "What we've found is Grace is an excellent back-end data center CPU," meaning it handles the behind-the-scenes computing tasks.
"It can actually deliver two times the performance per watt on those back-end workloads," he said.
(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)