Broadcom Inc became the latest chipmaker to forge a blockbuster data centre deal with OpenAI, triggering a rally that added more than US$150 billion to its market value.
In a pact announced Monday, OpenAI agreed to buy custom chips and networking components from Broadcom to help power its artificial intelligence services. OpenAI had already struck deals for data centres and chips that easily top US$1 trillion, and the company plans to spend tens of billions of dollars more on Broadcom chips, according to people familiar with the matter.
The agreement, which follows OpenAI deals with Nvidia Corp and Advanced Micro Devices Inc, is meant to add 10 gigawatts’ worth of AI data centre capacity — a level equivalent to the peak energy demand of New York City.
The wrinkle with the Broadcom pact is that it lets OpenAI tailor the chips to its specific needs. The startup said it would unlock “new levels of capability and intelligence” by applying lessons gleaned from developing AI models to hardware technology.
For Broadcom, the move provides deeper access to the booming AI market. Though the company has already seen its revenue from artificial intelligence computing climb, Broadcom has remained in the shadow of Nvidia, the dominant seller of AI processors.
Investors sent Broadcom shares up 9.9% to US$356.70 on Monday, betting that the OpenAI alliance will generate major new revenue for the chipmaker. But the details of how OpenAI will pay for the equipment aren’t spelled out. While the AI startup has shown it can easily raise funding from investors, it’s burning through wads of cash and doesn’t expect to be cash-flow positive until around the end of this decade.
The eye-popping deals inked by OpenAI this year could dramatically expand the startup’s computing power. Nvidia, whose chips handle the majority of AI work, said last month that it will invest as much as US$100 billion in OpenAI to support new infrastructure — with a goal of at least 10 GW of capacity. And just last week, OpenAI announced a pact to deploy 6 GW of AMD processors over multiple years.
As AI and cloud companies announce large projects every few days, it’s often not clear how the efforts are being financed. The interlocking deals have also stoked fears of a bubble in AI spending, particularly as many of these partnerships involve OpenAI, a fast-growing but unprofitable business.
While purchasing chips from others, OpenAI has also been working on designing its own semiconductors. The chips are mainly intended to handle the inference stage of running AI models, the phase in which a trained model generates responses.
OpenAI and Broadcom plan to begin deploying racks of data centre servers containing the new custom gear in the second half of 2026. The rollout should be completed by the end of 2029, according to the companies.
There’s no investment or stock component to the Broadcom deal, OpenAI said, making it different from the agreements with Nvidia and AMD. The companies didn’t provide financial terms of the transaction, and an OpenAI spokesperson declined to comment on how the company will finance the chips. But the underlying idea is that more computing power will let OpenAI sell more services.
A single gigawatt of AI computing capacity today costs roughly US$35 billion for the chips alone, putting 10 GW at upwards of US$350 billion. But a chief reason OpenAI is working to develop its own chip is to bring down its costs, and it’s unclear what price Broadcom’s chips will command under the deal.
OpenAI might be trying to emulate Alphabet Inc’s Google, which made its own chips using Broadcom’s technology and saw lower costs compared with other AI companies, such as Meta Platforms Inc, according to Bloomberg Intelligence analyst Mandeep Singh. Google’s success with Broadcom might have steered OpenAI to that chipmaker, rather than suppliers such as Marvell Technology Inc, Singh added.
In announcing the agreement, OpenAI Chief Executive Officer Sam Altman said that his company has been working with Broadcom for 18 months.
The startup is rethinking its technology from the transistors all the way up to what happens when someone asks ChatGPT a question, he said on a podcast his company released alongside Broadcom CEO Hock Tan. “By being able to optimise across that entire stack, we can get huge efficiency gains, and that will lead to much better performance, faster models, cheaper models.”
“If you do your own chips, you control your destiny,” Tan said in the podcast Monday.
Broadcom has increasingly been seen as a key beneficiary of AI spending, helping propel its share price this year. The stock had risen 40% this year through Friday’s close, outpacing a 29% gain by the benchmark Philadelphia Stock Exchange Semiconductor Index. OpenAI, meanwhile, has garnered a US$500 billion valuation, making it the world’s biggest startup by that measure.
By tapping Broadcom’s networking technology, OpenAI is hedging its bets. Broadcom’s Ethernet-based options compete with Nvidia’s proprietary technology. OpenAI also will be designing its own gear as part of its work on custom hardware, the startup said.
Broadcom won’t be providing the data centre capacity itself. Instead, it will deploy server racks with custom hardware to facilities run by either OpenAI or its cloud-computing partners.
A single gigawatt is about the capacity of a conventional nuclear power plant. Still, 10 GW of computing power alone isn’t enough to support OpenAI’s vision of achieving artificial general intelligence, said OpenAI co-founder and President Greg Brockman.
“That is a drop in the bucket compared to where we need to go,” he said.
Getting to the level under discussion isn’t going to happen quickly, said Charlie Kawwas, president of Broadcom’s semiconductor solutions group. “Take railroads — it took about a century to roll it out as critical infrastructure. If you take the internet, it took about 30 years,” he said. “This is not going to take five years.”