Chip giant Qualcomm announced Monday that it will release two new artificial intelligence chips – setting the stage for a head-to-head race with billionaire Jensen Huang’s Nvidia.
It’s a major shift for Qualcomm, which has long focused on designing semiconductors for wireless connectivity and mobile phones – not for the power-hungry data centers driving the AI boom.
Shares in Qualcomm jumped 11% Monday to close at $187.68.
Qualcomm — headed by Brazilian-born CEO Cristiano Amon — said its new, super-powered chips will be available in liquid-cooled server racks, which help customers like cloud service providers save on operational costs.
Nvidia and AMD also sell their graphics processing units, or GPUs, in full-rack systems. Large language models like OpenAI’s ChatGPT rely on these powerful systems.
Still, Qualcomm is facing a steep uphill battle. Nvidia already dominates about 90% of the data center market and boasts a massive $4.6 trillion market cap.
Nvidia’s chips were used to train OpenAI’s ChatGPT, still widely viewed as the most popular AI chatbot.
But earlier this month, OpenAI broadened its supply chain. It announced plans to start buying chips from rival AMD, and even potentially take a stake in the company.
The AI race has shown no signs of slowing down. Nearly $6.7 trillion in capital expenditures will be spent on AI infrastructure through 2030, according to a McKinsey estimate.
Qualcomm’s new AI200 and AI250 chips will go on sale in 2026 and 2027, respectively, the company said in a press release.
A full rack will draw 160 kilowatts of power, comparable to rack solutions from Nvidia.
Qualcomm’s accelerator cards, built around the new chips, will support 768 gigabytes of memory – more than comparable options from Nvidia and AMD.
The company declined to comment on the price of its AI chips, cards or racks.
“Qualcomm AI200 and AI250 are designed for frictionless adoption and rapid innovation,” Durga Malladi, the firm’s senior vice president and general manager for technology planning, said in a statement Monday.
“Our rich software stack and open ecosystem support make it easier than ever for developers and enterprises to integrate, manage, and scale already trained AI models on our optimized AI inference solutions.”
Qualcomm’s new AI chips will be based on its Hexagon neural processing units, or NPUs, the same technology used in its smartphone chips.
The company aims to sell its new chips and other components separately, so customers can design their own racks if they’d like. Even Nvidia or AMD could become clients for some of its data center parts, Malladi told reporters last week.
“What we have tried to do is make sure that our customers are in a position to either take all of it or say, ‘I’m going to mix and match,’” he said.