Leaders in AI chip design are literally shaping the future in California’s Silicon Valley
Last week, Silicon Valley played host to a brainstorm of sorts on artificial intelligence and AI-driven chip design. Two giants of the industry, Nvidia and Synopsys, held conferences that brought together developers and technology innovators in very different but complementary ways. Nvidia, world-renowned for its AI acceleration technology and silicon platform dominance, and Synopsys, a long-time industry leader in semiconductor design tools, IP, and automation, are both capitalizing on the enormous machine learning and AI market opportunity in front of them. Both companies rolled out the proverbial arsenal of enabling technologies, with Nvidia leading the way with massive AI accelerator chips and Synopsys helping chip developers apply AI to many of the tedious steps in the chip design and verification process. In fact, we may soon reach not only a tipping point in AI, but perhaps even a kind of “starting point” of sorts. In other words, which came first, the chicken or the egg? The AI or the AI chip? I know that sounds like science fiction to many people, but it was the thought that stood out to me most at last week’s AI showcases. Let’s dig into some of the highlights.
Synopsys leverages AI in 3D for semiconductor EDA
There’s no doubt about it, the folks at Synopsys had to share the spotlight with Nvidia earlier in the week, as Nvidia’s GPU Technology Conference was dominated by announcements around the company’s AI accelerators, which are now the de facto standard in data centers. But as Nvidia CEO Jensen Huang pointed out on stage with Synopsys CEO Sassine Ghazi, there’s a good reason the two companies are so tightly linked: nearly every chip that Nvidia designs and sends to manufacturing is implemented with Synopsys EDA tools for design, verification, and hand-off to the chip fab. Ghazi’s keynote also touched on a new technology from Synopsys called 3DSO.ai that, to be honest, surprised me a bit.
Synopsys launched its DSO.ai design space optimization tool back in 2021, and it dramatically speeds up the place-and-route and floorplanning phases of chip design. Finding an optimal circuit layout and routing for a large semiconductor design is labor-intensive and complex, requiring trade-offs between performance, power efficiency, and silicon area cost. Synopsys DSO.ai turns machine learning loose on this iterative process, significantly reducing engineering effort and speeding time to market with better-optimized chip designs.
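To give a rough sense of the kind of loop a design space optimization tool automates, here is a minimal, hypothetical sketch in Python. It is not Synopsys’s algorithm, just a toy random search over a few made-up floorplan knobs that keeps the candidate with the lowest weighted power/performance/area cost; real tools drive this exploration with machine learning across vastly larger spaces.

```python
import random

# Toy "design space": each candidate floorplan is a set of knob settings.
# Real EDA flows explore far larger spaces with ML-guided search;
# this random-search loop is purely illustrative.
KNOBS = {
    "aspect_ratio": [0.5, 0.75, 1.0, 1.25, 1.5],
    "utilization":  [0.60, 0.70, 0.80, 0.90],
    "clock_margin": [0.00, 0.05, 0.10],
}

def sample_candidate():
    return {k: random.choice(v) for k, v in KNOBS.items()}

def evaluate_ppa(c):
    """Stand-in for a trial place-and-route run that reports power,
    performance (delay), and area for a candidate floorplan."""
    area  = 1.0 / c["utilization"] + abs(c["aspect_ratio"] - 1.0) * 0.2
    delay = 1.0 + c["clock_margin"] + (c["utilization"] - 0.6) * 0.5
    power = 0.8 + c["utilization"] * 0.3
    # Weighted cost: lower is better across all three axes.
    return 0.4 * delay + 0.3 * power + 0.3 * area

best, best_cost = None, float("inf")
for _ in range(200):  # each iteration ~ one trial implementation run
    cand = sample_candidate()
    cost = evaluate_ppa(cand)
    if cost < best_cost:
        best, best_cost = cand, cost

print(f"best floorplan knobs: {best}, cost={best_cost:.3f}")
```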
Synopsys 3DSO.ai takes that technology to the next level for the new era of chiplets, layering on additional design automation along with critical thermal analysis for modern 3D-stacked chiplet solutions. In essence, 3DSO.ai isn’t just playing a sort of Tetris to optimize your chip design; it plays 3D Tetris, optimizing placement and routing in three dimensions while providing thermal analysis to ensure the physical design is thermally stable and optimal. Yes, that’s right: AI-powered chip design in 3D. I was officially floored.
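To illustrate the “3D Tetris” idea, here is another hypothetical toy in the same vein: once blocks can be stacked across die layers, a thermal penalty joins the objective so high-power blocks don’t pile up far from the heat sink. Again, this is my own simplification with invented numbers, not the 3DSO.ai engine.

```python
# Hypothetical hot blocks with power (W); two stacked die layers (0 = base, 1 = top).
blocks = {"cpu": 5.0, "gpu": 8.0, "sram": 1.5, "io": 1.0}

def thermal_penalty(assignment):
    """Toy thermal cost: the top layer sits farther from the heat sink,
    and badly imbalanced stacking is penalized as well."""
    layer_power = [0.0, 0.0]
    for name, layer in assignment.items():
        layer_power[layer] += blocks[name]
    return layer_power[0] + 1.5 * layer_power[1] + 0.5 * abs(layer_power[0] - layer_power[1])

best, best_cost = None, float("inf")
# Exhaustively try every layer assignment (2^4 = 16 options for 4 blocks).
for bits in range(2 ** len(blocks)):
    assign = {name: (bits >> i) & 1 for i, name in enumerate(blocks)}
    cost = thermal_penalty(assign)
    if cost < best_cost:
        best, best_cost = assign, cost

print("lowest-penalty layer assignment:", best, f"cost={best_cost:.2f}")
```

In this toy model the hottest block naturally lands on the layer closest to the heat sink, which is the flavor of constraint a thermally aware 3D placement tool has to juggle alongside timing and routability.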
Nvidia Blackwell GPU AI accelerators and robotics took center stage
Nvidia pulled out all the stops again for its GPU Technology Conference, this time packing San Jose’s SAP Center with a crowd of developers, press, analysts, and even some of the biggest names in tech, like Michael Dell. My analyst business partner and long-time friend Marco Chiappetta covers the GTC highlights in detail here (also check out Nvidia NIM, it’s very interesting). For me, though, the stars of the Jensen Huang show were the company’s new Blackwell GPU architecture for AI, Project GR00T for building humanoid robots, and another AI-powered chip tool called cuLitho that is now being deployed in production by Nvidia and TSMC. The net of cuLitho is that the design of expensive chip mask sets, used to pattern these designs onto wafers during production, gets a much-needed shot in the arm from machine learning and AI. Nvidia claims that its GPUs, combined with its cuLitho models, can improve computational lithography performance by up to 40X while offering significant power savings versus traditional CPU-based servers. And the technology is now in full production, with Synopsys partnering on the design and verification side and TSMC on the manufacturing side.
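To get a feel for why this workload maps so well to GPUs, here is a bare-bones aerial-image simulation under a coherent-illumination approximation, a stand-in for the FFT-and-convolution math that computational lithography tools grind through across entire masks. It’s a toy model with made-up dimensions and a Gaussian optical kernel, not cuLitho’s actual pipeline.

```python
import numpy as np

# Toy mask: a few rectangular openings on a 512x512 grid (1 = transparent).
mask = np.zeros((512, 512))
mask[100:140, 80:400] = 1.0
mask[300:340, 80:400] = 1.0

# Gaussian point-spread function standing in for the scanner's optics.
y, x = np.mgrid[-256:256, -256:256]
psf = np.exp(-(x**2 + y**2) / (2 * 8.0**2))
psf /= psf.sum()

# Aerial image ~ |mask convolved with the optical kernel|^2 (coherent approximation).
# FFT-based convolutions like this are what get parallelized across GPU farms.
image = np.abs(np.fft.ifft2(np.fft.fft2(mask) * np.fft.fft2(np.fft.ifftshift(psf)))) ** 2

# Resist "prints" where intensity clears a threshold; compare to the drawn pattern.
printed = image > 0.25
print("printed pixels vs. drawn pixels:", int(printed.sum()), "vs.", int(mask.sum()))
```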
And then there’s Blackwell. If you thought Nvidia’s Hopper H100 and H200 GPUs were monster AI silicon engines, Nvidia’s Blackwell is like “unleashing the Kraken.” For reference, a dual-die Blackwell GPU is comprised of roughly 208 billion transistors, more than 2.5 times the count of Nvidia’s Hopper architecture. The two GPU dies act as one large GPU, communicating over Nvidia’s NV-HBI high-bandwidth fabric, which delivers an impressive 10TB/s of throughput. Combine that with 192GB of HBM3e memory offering over 8TB/s of peak bandwidth, and you’re looking at more than twice the memory and memory bandwidth of the H100. Nvidia is also pairing two Blackwell GPUs with a Grace CPU for a three-chip AI solution called the Grace Blackwell Superchip (also known as the GB200).
Dual GB200 servers can then be configured across a rack using Nvidia’s 5th-generation NVLink technology, which provides twice the throughput of the previous generation, resulting in the Nvidia GB200 NVL72 AI supercomputer. An NVL72 cluster packs up to 36 GB200 superchips into a single rack, connected via NVLink spines at the rear. It’s a pretty wild design that also incorporates Nvidia BlueField-3 data processing units, and the company claims it’s 30X faster than the previous-generation H100-based system at large language model inference with 1-trillion-parameter models. The GB200 NVL72 is also claimed to deliver 25X lower power consumption and 25X better TCO. The company is also configuring DGX SuperPODs made up of as many as eight racks of NVL72 supercomputers. Nvidia announced that numerous partners have already signed on for Blackwell, including Amazon Web Services, Dell, Google, Meta, Microsoft, OpenAI, and more, and it plans to bring these powerful new AI GPU solutions to market later this year.
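As a quick sanity check on those rack-level numbers, using only the figures above (36 superchips per rack, two Blackwell GPUs per GB200, up to eight racks in a SuperPOD):

```python
# Figures from the article: 36 GB200 superchips per NVL72 rack,
# 2 Blackwell GPUs + 1 Grace CPU per superchip, up to 8 racks per DGX SuperPOD.
superchips_per_rack = 36
gpus_per_superchip = 2

gpus_per_rack = superchips_per_rack * gpus_per_superchip
print(f"Blackwell GPUs per NVL72 rack: {gpus_per_rack}")  # 72 -> the "NVL72" name

racks_per_superpod = 8
print(f"Blackwell GPUs in a full DGX SuperPOD: {racks_per_superpod * gpus_per_rack}")  # 576
```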
So Nvidia isn’t just leading the charge as the 800-pound behemoth of AI processing; it looks like it’s just getting warmed up.
Another area where Nvidia continues to accelerate its execution is robotics, and Jensen Huang’s GTC 2024 robot showcase, highlighting the Project GR00T (yes, that’s spelled with two zeros) foundation model for humanoid robots, was another wild ride. GR00T, which stands for Generalist Robot 00 Technology, is intended to train robots that not only handle natural language input and conversation, but also mimic human movements and actions for dexterity, and learn how to navigate and adapt to the changing world around them. Like I said, it’s the stuff of science fiction, but it looks like Nvidia is ready to make it a reality sooner rather than later with GR00T.
And really, that’s what impressed me most about both Nvidia and Synopsys while I was in the Valley last week. Problems and workloads once considered intractable are now being solved and executed at an ever-increasing pace with machine learning. The effect compounds, delivering great progress year after year. In this fascinating age of technology, I feel lucky to be an observer and a guide of sorts, and that’s what gets me up in the morning.