September 9th, 2024

AMD announces unified UDNA GPU architecture – bringing RDNA and CDNA together

AMD has introduced UDNA, a unified GPU architecture merging RDNA and CDNA to simplify development and enhance competitiveness against Nvidia. Timelines for implementation remain unclear, with improved AI capabilities expected.

AMD has announced a new unified GPU architecture called UDNA, which merges its RDNA and CDNA architectures. This strategic move aims to enhance AMD's competitiveness against Nvidia's CUDA ecosystem, which has a significant developer base. Jack Huynh, AMD's senior vice president, emphasized that the unification will simplify development for users, allowing for a more streamlined approach to both consumer and data center applications.

The previous split between RDNA, focused on gaming, and CDNA, aimed at compute-centric tasks, created complexities for developers. UDNA is intended to address these issues by providing a single architecture that supports both markets, potentially improving backward and forward compatibility. However, Huynh did not provide a specific timeline for the rollout of UDNA, indicating that it may take several product generations before it fully materializes.

The architecture is expected to include enhanced AI capabilities, which are currently limited in RDNA. AMD's ongoing efforts to improve its ROCm software stack are also crucial for competing with Nvidia, which has established a stronghold in the AI and HPC sectors. The company faces challenges in gaining developer support and optimizing its software ecosystem to match Nvidia's success.

- AMD introduces UDNA, a unified GPU architecture combining RDNA and CDNA.

- The unification aims to simplify development and enhance competitiveness against Nvidia's CUDA.

- UDNA is expected to improve backward and forward compatibility for developers.

- Specific timelines for UDNA's implementation remain unclear, with several generations needed for full realization.

- Enhanced AI capabilities are anticipated as part of the UDNA architecture.
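To make the developer-facing split concrete, here is a minimal sketch (not from the article) of how a single HIP kernel is built for separate GPU ISA targets under today's ROCm toolchain. The architecture codes used, gfx90a (a CDNA2 part) and gfx1100 (an RDNA3 part), are just illustrative examples; UDNA's pitch is essentially that one architecture family would cover both markets, so source code and tuning would not have to diverge.

```cpp
// Hypothetical build line, targeting one CDNA and one RDNA GPU from the same source:
//   hipcc --offload-arch=gfx90a --offload-arch=gfx1100 saxpy_hip.cpp -o saxpy_hip
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// SAXPY: y = a * x + y, written once in HIP and compiled per GPU ISA target.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx = nullptr, *dy = nullptr;
    hipMalloc(&dx, n * sizeof(float));
    hipMalloc(&dy, n * sizeof(float));
    hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // 256 threads per block; enough blocks to cover n elements.
    dim3 block(256), grid((n + 255) / 256);
    hipLaunchKernelGGL(saxpy, grid, block, 0, 0, n, 2.0f, dx, dy);
    hipDeviceSynchronize();

    hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("y[0] = %.1f (expected 4.0)\n", hy[0]);

    hipFree(dx);
    hipFree(dy);
    return 0;
}
```

The kernel source itself is already portable; the friction the article points to lies in which gfx targets ROCm officially supports and optimizes for at any given time, which is where a unified target family would help most.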

3 comments
By @Jlagreen - 7 months
That's actually funny, because AMD was in exactly this position before with Vega and decided to go with RDNA and CDNA because separating consumer and data center made more sense.

At the same time, Nvidia went the other route, making CUDA support all GPUs and bringing all features, such as Tensor Cores, to consumer cards as well.

AMD going back to one architecture is basically an admission that the past separation was a mistake and that Nvidia went the right way.

But AMD is making the next mistake by not competing at the high end in gaming. Not because gaming itself is so important, but because the RTX 3090/4090 are among the most wanted AI accelerator cards. If you look at it, the RTX 2000 series had no 2090 or Titan. The RTX 2080 Ti, however, was more of a success than most could have imagined: thanks to its Tensor Cores, small servers were built back then and used a lot in academia and small enterprises for ML.

Nvidia reacted to that and released the RTX 3090. The RTX 3090 was way ahead in gaming, and an RTX 3080 Ti would probably have been enough, but the 3090 offered far more memory and huge compute for ML. The same applies to the RTX 4090. I read somewhere that the RTX 4090 has ~50-70% of an H100's compute for ML, and that at 5-10% of the price. Yes, it has no NVLink and much less memory capacity and bandwidth, but it is still THE card used in academia for students to enter the world of ML.

AMD shouldn't neglect high-end gaming, because Nvidia uses it to offer a product that is great for ML but marketed for gaming. AMD should release a $2000 gaming card that beats Nvidia's if they want their software to really spread. And they should start giving academia free consumer cards, as Nvidia has been doing for almost a decade. If you want the community to use and drive your software, give them incentives, for god's sake!

By @jdboyd - 7 months
As someone bummed by how second-class RDNA has been for ROCm, I find this tremendously exciting. While there are other reasons that ROCm is second class to CUDA, I think this has been a big one.
By @ksec - 7 months
Sometimes I don't get HN's algorithm. 50 points, zero comments, a high point-to-comment ratio, and still not on the front two pages.