
‘Code red’ at OpenAI as the Google empire strikes back

Assif Shameen
Google’s Gemini gains ground, heating AI competition and reshaping the chip market.

Artificial intelligence (AI) chatbot ChatGPT officially turned three late last month. There was no uncorking of champagne bottles at its creator OpenAI’s headquarters in Mission Bay, San Francisco. Indeed, the mood was also sombre 72km away in Santa Clara, home of chip giant Nvidia, the world’s most valuable company, which supplies the accelerator chips that have helped power the ongoing AI boom. Blame it all on party poopers search giant Google and start-up Anthropic, once considered laggards in the AI marathon, which now suddenly find themselves in the driver’s seat with momentum.

Instead of unleashing a round of celebrations, OpenAI’s CEO, Sam Altman, in an internal memo declared a “code red” general state of crisis to marshal more resources towards improving ChatGPT as competitive pressure from Google and other AI rivals begins to intensify.

Ironically, three years ago, it was Google that was forced to sound the “code red” alert after OpenAI unveiled ChatGPT. CEO Sundar Pichai went on to warn Google staff that the AI chatbot could threaten the future of Search, which accounted for 56% of its revenue. Now, it is Altman’s turn to sound alarm bells over Google’s latest Gemini 3 model and an increasingly fierce AI model race with Google, Elon Musk’s xAI, Meta Platforms and Anthropic. The OpenAI CEO urged his staff “to stay focused through short-term competitive pressure … expect the vibes out there to be rough for a bit”, as a result of Google’s latest counterpunch.

Make no mistake, Gemini 3 is hurting everyone from OpenAI to xAI to Meta. Even before it unveiled its latest AI model, Google had begun gaining traction following its release of an image-generation tool called Nano Banana, with over five billion images created. The Gemini 3 model builds on that as it is significantly more skilled at coding, developing applications and generating images than the other AI models out there. Gemini 3’s strong benchmark results on multimodal reasoning, math and code are giving Google not just momentum but also credibility.

Gemini 3 outperforms GPT-5.1, the latest version of the model behind OpenAI’s chatbot, which has real-time access to the web, enhanced reasoning and coding skills, and image- and video-generation capabilities. OpenAI recently released its Sora video-generation app, which racked up a million users in five days. “Gemini 3’s biggest advantage is its sheer size and the vast amount of compute that went into creating it,” notes veteran tech strategist Ben Thompson. “OpenAI has had difficulty creating the next generation of models beyond the GPT-4 level of size and complexity,” he says. “What has carried the company is a genuine breakthrough in reasoning that produces better results.”

For now, however, OpenAI remains ahead, though Gemini is closing the gap. ChatGPT has 800 million monthly active users compared with Gemini’s 650 million. For its part, Gemini has been leveraging its integration with Google services, including Search, Gmail and YouTube, to attract users who value multimodal features and an agentic smart assistant that can perform multi-step tasks across different tools. Google is betting that Gemini’s superior technical benchmarks will help it triumph over ChatGPT. OpenAI, meanwhile, says it is on track to release a new reasoning model soon that it expects will be better than Gemini 3, even as it concedes that it needs to make major improvements to the ChatGPT experience.


Before OpenAI unleashed its chatbot three years ago, Google was seen as the natural leader in AI research. In 2014, it acquired the prestigious London-based AI research lab DeepMind, co-founded by Demis Hassabis and Syrian-British researcher Mustafa Suleyman, who went on to start Inflection AI and is now CEO of Microsoft’s consumer AI unit. Among other things, DeepMind’s AlphaGo programme beat the world champion at the ancient game of Go, taught itself chess and other complex games in hours, and cracked a 50-year-old scientific puzzle about how proteins fold. DeepMind has since merged with Google Brain, the firm’s other AI unit, to form a single team that could take on the OpenAI challenge.

ChatGPT had the distinction of the fastest mass adoption of any consumer product in history. Within five days of its release, one million users had already tried OpenAI’s chatbot. Within two months, ChatGPT had over 100 million users. It took Apple’s iPhone 10 weeks to reach one million users and three and a half years to reach 100 million global users. It took Instagram 10 weeks to reach a million users and two and a half years to reach 100 million. It took Netflix three years to reach a million users and 10 years to reach 100 million. It took more than nine years for the internet to reach 100 million users. And TV literally took decades to reach 100 million households around the world.

Three years after the release of ChatGPT and the constant ‘cry wolf’ of a bubble about to pop, AI is still alive, kicking and growing at warp speed. What’s changed, however, is the narrative around AI in recent months.


Gone are the days when Nvidia was recognised as the far-and-away leader of the AI pack, supplying 90% of the chips used by companies like OpenAI, Microsoft, Meta, Amazon.com, Google, Oracle and Anthropic to train their large language models and, increasingly, for inferencing and reasoning as well. Now there is a real race between AI chip firms Nvidia and Advanced Micro Devices, as well as between them and their customers Google, Amazon and Meta, which are getting customised AI chips designed by application-specific integrated circuit (ASIC) firms such as Broadcom and Marvell Technology.

Soaring valuations

One way to look at the changing narrative around AI is to compare the valuation of various AI players the day before ChatGPT was released and their current market value. Nvidia had a market value of around US$390 billion in November 2022. That has since ballooned 11.5 times to US$4.4 trillion ($5.7 trillion). Broadcom’s market value at the end of November 2022 was US$230 billion.

Last week, its market capitalisation touched US$1.91 trillion, up 8.3 times in three years. Marvell’s market cap before the advent of ChatGPT was just US$30 billion; it is now valued at US$86.4 billion. Another huge gainer: Microsoft, which invested US$14 billion in OpenAI after ChatGPT was released. Back then, Microsoft had a market value of US$1.86 trillion. That peaked at US$4.1 trillion in October, though the stock has since fallen nearly 14%, to US$3.5 trillion. Oh, by the way, OpenAI, which was then a non-profit, was valued at US$14 billion before it released ChatGPT. Last month, the for-profit, unlisted OpenAI raised funds at a valuation of US$500 billion, making it the most valuable private firm on earth.

Clearly, Wall Street now sees Google and its parent Alphabet as the de facto leader in the AI race, along with its chip partner Broadcom. Alphabet shares are up 125% from their April lows, while Broadcom has surged 164% in the same period. Nvidia is up 94% from its own April lows, while Microsoft, which owns 24% of OpenAI, is up 35% from its early April lows.

ChatGPT and the ongoing AI boom arguably saved the US economy from the brink of a recession. In November 2022, the US Federal Reserve and central banks around the world were raising rates in the fastest rate-hiking cycle in decades, and global markets were in free fall. The arrival of ChatGPT changed the narrative. The US benchmark S&P 500 index is up 70% and the tech-heavy Nasdaq is up 124% over the past three years, before counting reinvested dividends.

Despite all the babble about a looming AI bubble, the global AI market, expected to top US$400 billion this year, is projected to grow to US$5 trillion by 2033. America’s AI boom has unleashed an unprecedented surge in capital investment that is reshaping data centres, construction, electric utilities and chip manufacturing across the US, as well as AI start-ups. A recent Goldman Sachs research report forecasts that AI will boost global productivity and potentially raise global GDP by 7% over the next decade. By some estimates, AI-related infrastructure spending could account for nearly half of US GDP growth this year. And the wealth effect from soaring AI and other tech stocks is estimated to have boosted US consumer spending by about US$180 billion this past year.

Focus on chip designers


Right now, all eyes are on the chip designers and the race to build an AI chip as good as Nvidia’s Blackwell Ultra or its next big chip, Rubin. A single Nvidia GB200 Superchip, which includes both a GPU (graphics processing unit) and a CPU (central processing unit), can cost up to US$70,000. Of course, large customers buy chips as part of a package that includes Nvidia’s CUDA (Compute Unified Device Architecture) software and an array of services. Nvidia has gross margins of 73% on its AI chips. Amazon’s founder, Jeff Bezos, once said: “Your margin is my opportunity.” Large customers of Nvidia like Google, Amazon, Meta, xAI and OpenAI see a huge opportunity.

Last week, Amazon’s cloud unit released its latest AI chip, Trainium3, in a bid to sell hardware that can take on Nvidia’s and Google-designed chips head-on. Amazon’s chip push is an important part of its strategy to regain its leadership role in tech. Despite being the dominant seller of rented computing power and data storage, Amazon Web Services (AWS) has struggled to replicate that lead among top developers of AI tools, as some firms opt instead to rely on Google or Microsoft, which has close ties to OpenAI.

Amazon has said Trainium3 can power the intensive calculations behind AI models more cheaply and efficiently than Nvidia’s GPUs. While it hopes to entice companies with Trainium3’s performance, the tech still lacks the deep software libraries that help customers get Nvidia’s GPUs up and running fast. On Dec 2, AWS also unveiled a new Amazon AI model that can process text, speech, images and video: Amazon Nova 2 Omni.

Little wonder that Google is designing its own in-house tensor processing units (TPUs) with the help of Broadcom. TPUs are closely integrated with Google Cloud, the third-largest cloud provider behind AWS and Microsoft’s Azure. They are also cost-effective due to vertical integration and proprietary hardware. Essentially, Google is tailor-making the chips for its own use. So, how do the new TPUs compare with Nvidia’s top-of-the-line chips? While they may not have the same performance as Nvidia’s chips, they are customised for Google’s use. Buying an ASIC chip is like getting a shirt made by a tailor. At the department store, you might get a good designer-branded shirt, but only a custom tailor can make you one that perfectly fits.

Where does that leave Nvidia, which was recently allowed to export its high-end H200 AI chips to China in return for sharing 25% of the sales revenue with the US government? While ASIC chips will reduce Nvidia’s market share from the current 90% to around 75%, its revenue and profits are expected to continue growing, as the AI chip market is still expanding and the company also sells software and services alongside its chips. Slower growth has rightly shifted focus back to the stock’s valuation.

Assif Shameen is a technology and business writer based in North America

© 2025 The Edge Publishing Pte Ltd. All rights reserved.