As expected, Computex became the epicenter of AI PCs with the conference’s first official keynote. AMD, which has been integrating NPUs into its mobile PC SoCs for several generations, came out strong with the Ryzen AI 300 mobile SoC. The new SoC sets a record of 50 NPU AI TOPS using INT8 data types, exceeding Microsoft’s NPU performance requirement of 40 TOPS. While multiple OEMs came on board to support AMD, questions remain about both the Ryzen AI 300 and the new PCs it will ship in.
While AMD didn’t provide full specifications for its Ryzen AI 300 series SoCs, it did provide a glimpse at some key features, including the new Zen 5 CPU cores with up to 12 cores / 24 threads. The first two devices in this product family are the Ryzen AI 9 HX 370 and Ryzen AI 9 365, which feature the same NPU capable of 50 AI TOPS, albeit with slightly fewer CPU cores, less cache, and less graphics capability on the latter.
According to AMD, the new XDNA 2 NPUs deliver up to five times the performance and up to two times the power efficiency compared to previous generation NPUs, and leverage a block floating-point data format that allows for 16-bit precision at 8-bit processing speeds.
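The idea behind block floating point is that a group of values shares a single exponent, so each value only needs to store a small integer mantissa, keeping near-16-bit dynamic range while the multiply-accumulate hardware runs at 8-bit integer speed. AMD has not published the details of the XDNA 2 format, so the sketch below is a generic, conceptual illustration of shared-exponent block quantization, with the block size and mantissa width chosen arbitrarily:

```python
import numpy as np

def block_fp_quantize(values, block_size=8, mantissa_bits=8):
    """Conceptual block floating-point round trip: quantize then dequantize.

    Each block of `block_size` values shares one exponent derived from the
    block's largest magnitude; individual values keep only a signed
    `mantissa_bits`-bit integer mantissa. This is NOT AMD's XDNA 2 format,
    just a generic illustration of the shared-exponent technique.
    """
    values = np.asarray(values, dtype=np.float64)
    pad = (-len(values)) % block_size            # pad so length divides evenly
    blocks = np.concatenate([values, np.zeros(pad)]).reshape(-1, block_size)

    qmax = 2 ** (mantissa_bits - 1) - 1          # e.g. 127 for 8-bit mantissas
    out = np.empty_like(blocks)
    for i, block in enumerate(blocks):
        max_mag = np.max(np.abs(block))
        if max_mag == 0.0:
            out[i] = 0.0
            continue
        # One shared exponent per block: scale so the largest magnitude
        # lands within the integer mantissa range.
        exp = np.floor(np.log2(max_mag)) - (mantissa_bits - 2)
        scale = 2.0 ** exp
        mantissas = np.clip(np.round(block / scale), -qmax, qmax)
        out[i] = mantissas * scale               # dequantized approximation
    return out.reshape(-1)[: len(values)]
```

The trade-off this illustrates: values within a block are quantized relative to the block’s largest element, so precision degrades when a block mixes very large and very small magnitudes, but storage and compute stay close to plain INT8.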
PCs with the new devices will be announced in July and will be able to support on-device AI applications such as Microsoft’s Copilot, Recall, Co-creator, live captioning and real-time translation. AMD is also working with more than 150 independent software developers (ISVs) to optimize AI applications for the platform.
Still unknown are details like the total AI TOPS (CPU + GPU + NPU) across the mobile SoC, the power consumption of the device and platform, and the pricing of the SoC and the AI PCs built on it. However, given the competitive nature of this segment, with Qualcomm and Intel SoCs in the mix, we expect AMD-based AI PCs to be price competitive.
Ryzen AI 300 SoC-based PCs won’t be available until a month or more after the launch of Qualcomm Snapdragon X Elite SoC-based PCs, but a month isn’t too long, and they’ll arrive before the back-to-school shopping season. Some Microsoft Copilot applications will be available soon, but it could be months or years before most applications can effectively leverage the NPU. Furthermore, software matures significantly over time. Tirias Research believes that as developers learn to leverage on-device NPUs, their applications will soon exceed the capabilities of current NPUs. As a result, we are entering a new performance race to increase the AI performance of PCs year after year. Some of this performance will come from the NPU, some from the other SoC processors, and some from discrete GPUs and dedicated AI accelerators. This is an exciting time for the PC segment, but we’re just getting started.
AMD also announced new desktop PC processors, embedded SoCs, and datacenter products, but we will cover those in separate articles.