AMD Instinct MI325X to Feature 256GB HBM3E Memory, CDNA4-Based MI355X with 288GB
AMD announced updates to its Instinct GPUs, introducing the MI325X with 256GB memory and 6 TB/s bandwidth, and the MI355X with 288GB memory and 8 TB/s bandwidth, launching in 2025.
AMD has announced updates to its Instinct series of GPUs, specifically the MI325X and MI355X models. The MI325X will feature 256GB of HBM3E memory, a reduction from the previously planned 288GB due to difficulties in securing the necessary memory stacks. This model will still provide a memory bandwidth of 6 TB/s and has a thermal design power (TDP) of up to 1000W. In contrast, the upcoming MI355X, set for release in the second half of 2025, will utilize the newer CDNA4 architecture and will include 288GB of HBM3E memory, with an increased bandwidth of 8 TB/s. The MI355X will also support new data types such as FP4 and FP6 and will be manufactured using a more advanced 3nm process. Additionally, AMD has confirmed plans for a future MI400 model based on the CDNA-Next architecture, expected to launch in 2026, although specific details about its specifications have not yet been disclosed.
- AMD's MI325X will have 256GB of HBM3E memory and a bandwidth of 6 TB/s.
- The MI355X will feature 288GB of HBM3E memory with an 8 TB/s bandwidth, targeting NVIDIA's Blackwell GPUs.
- The MI325X has a TDP of up to 1000W, while the MI355X will be built on a 3nm process.
- AMD plans to release the MI400 model in 2026, with no specifications available yet.
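A quick back-of-the-envelope sketch using only the capacity and bandwidth figures quoted above: at the stated peak bandwidths, one full sweep over each GPU's HBM3E takes tens of milliseconds. (These are theoretical peak numbers from the article, not measured throughput.)

```python
# Figures quoted in the article (peak, theoretical), decimal GB/TB.
specs = {
    "MI325X": {"capacity_gb": 256, "bandwidth_tbs": 6.0},
    "MI355X": {"capacity_gb": 288, "bandwidth_tbs": 8.0},
}

for name, s in specs.items():
    # Time to read the entire memory once at peak bandwidth.
    sweep_s = (s["capacity_gb"] / 1000) / s["bandwidth_tbs"]
    print(f"{name}: {sweep_s * 1e3:.1f} ms per full memory sweep")
```

This is why bandwidth grows alongside capacity: for bandwidth-bound workloads like LLM inference, the time per full pass over the weights is what bounds token latency.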
Related
Testing AMD's Giant MI300X
AMD introduces the Radeon Instinct MI300X to challenge NVIDIA in the GPU compute market. The MI300X features a chiplet setup, Infinity Cache, and the CDNA 3 architecture, delivers competitive performance against NVIDIA's H100, and excels in local memory bandwidth tests.
AMD MI300X performance compared with Nvidia H100
The AMD MI300X AI GPU outperforms Nvidia's H100 in cache, latency, and some inference benchmarks. It excels in caching performance and compute throughput, though AI inference performance varies by workload. Real-world performance and ecosystem support remain the deciding factors.
AMD's Long and Winding Road to the Hybrid CPU-GPU Instinct MI300A
AMD's journey from 2012 led to the development of the powerful Instinct MI300A compute engine, used in the "El Capitan" supercomputer. Key researchers detailed AMD's evolution, funding, and technology advancements, impacting future server offerings.
AMD Ryzen 9 9950X3D and 9900X3D to Feature 3D V-Cache on Both CCD Chiplets
AMD will release the Ryzen 9 9950X3D and 9900X3D with 3D V-cache technology, enhancing gaming performance. The Ryzen 7 9800X3D is expected in late October 2024, with Q1 2025 for the others.
AMD launches AI chip to rival Nvidia's Blackwell
AMD launched the Instinct MI325X AI chip to compete with Nvidia's GPUs, targeting a $500 billion AI market by 2028, while also introducing EPYC 5th Gen CPUs optimized for AI workloads.
For anyone who doesn't follow AMD at all (a good move; their consumer support for compute leaves scars), they appear to have a strategy of targeting the server market in hopes of scooping out the high-profit part of the GPGPU world. Hopefully that does well for them, but after years of regret as an AMD customer watching the AI revolution zoom by, I'd be hesitant about that translating to good compute experiences on consumer hardware. I assume the situation is much improved from what I was used to, but I don't trust them to see supporting small users as a priority.
Quick thing to show the sheer scale of these figures. These chips run on the order of 10^15 operations per second, and if you sit a foot from your screen, light takes about a nanosecond to cross that distance. That means that between the light leaving your screen and it hitting your eyeballs, one of these things can have done another million calculations.
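The commenter's arithmetic checks out; a minimal sketch, assuming the ~10^15 ops/s figure from the comment (an order-of-magnitude claim, not an official spec):

```python
# How many operations fit into the time light takes to travel one foot,
# assuming ~10^15 operations per second (the figure from the comment)?
SPEED_OF_LIGHT_M_PER_S = 299_792_458
FOOT_M = 0.3048
OPS_PER_S = 1e15  # assumed, per the comment

travel_time_s = FOOT_M / SPEED_OF_LIGHT_M_PER_S  # ~1.02e-9 s, about a nanosecond
ops_in_transit = OPS_PER_S * travel_time_s       # ~1e6, i.e. about a million

print(f"light travel time: {travel_time_s * 1e9:.2f} ns")
print(f"operations during transit: {ops_in_transit:.2e}")
```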
I know this isn't particularly constructive, but I'm hit with waves of nostalgia and older performance figures seeing this.