
Qualcomm announced Monday that it will launch new artificial intelligence accelerator chips, marking new competition for Nvidia, which has so far dominated the market for AI semiconductors.
The stock soared 11% on the news.
The AI chips are a shift for Qualcomm, which has so far focused on semiconductors for wireless connectivity and mobile devices, not large data centers.
Qualcomm said that both the AI200, which will go on sale in 2026, and the AI250, planned for 2027, can come in a system that fills a full, liquid-cooled server rack.
Qualcomm is matching Nvidia and AMD, which offer their graphics processing units, or GPUs, in full-rack systems that allow as many as 72 chips to act as one computer. AI labs need that computing power to run the most advanced models.
Qualcomm’s data center chips are based on the AI parts of its smartphone chips, called Hexagon neural processing units, or NPUs.
“We first wanted to prove ourselves in other domains, and once we built our strength over there, it was pretty easy for us to go up a notch into the data center level,” Durga Malladi, Qualcomm’s general manager for data center and edge, said on a call with reporters last week.
Qualcomm’s entry into the data center world marks new competition in the fastest-growing market in technology: equipment for new AI-focused server farms.
Nearly $6.7 trillion in capital expenditures will be spent on data centers through 2030, with the majority going to systems built around AI chips, according to a McKinsey estimate.
The industry has been dominated by Nvidia, whose GPUs hold over 90% of the market so far and whose sales have pushed the company to a market cap of over $4.5 trillion. Nvidia’s chips were used to train OpenAI’s GPTs, the large language models behind ChatGPT.
But companies such as OpenAI have been looking for alternatives, and earlier this month the startup announced plans to buy chips from the second-place GPU maker, AMD, and potentially take a stake in the company. Other companies, such as Google, Amazon and Microsoft, are also developing their own AI accelerators for their cloud services.
Qualcomm said its chips are focused on inference, or running AI models, rather than training, which is how labs such as OpenAI create new AI capabilities by processing terabytes of data.
The chipmaker said its rack-scale systems would ultimately cost less to operate for customers such as cloud service providers, and that a rack uses 160 kilowatts, comparable to the high power draw of some Nvidia GPU racks.
Malladi said Qualcomm would also sell its AI chips and other parts separately, especially for clients such as hyperscalers that prefer to design their own racks. He said other AI chip companies, such as Nvidia or AMD, could even become clients for some of Qualcomm’s data center parts, such as its central processing unit, or CPU.
“What we have tried to do is make sure that our customers are in a position to either take all of it or say, ‘I’m going to mix and match,’” Malladi said.
The company declined to comment on the price of the chips, cards or racks, or on how many NPUs could be installed in a single rack. In May, Qualcomm announced a partnership with Saudi Arabia’s Humain to supply data centers in the region with AI inferencing chips; Humain will be a Qualcomm customer, committing to deploy as many systems as can run on 200 megawatts of power.
Qualcomm said its AI chips have advantages over other accelerators in power consumption, cost of ownership and a new approach to how memory is handled. It said its AI cards support 768 gigabytes of memory, more than offerings from Nvidia and AMD.
Qualcomm’s design for an AI server, called the AI200. (Image: Qualcomm)
