July 4th, 2024

The Snapdragon X Elite's Adreno iGPU

Qualcomm launches the Snapdragon X Elite with the Adreno X1 iGPU for laptops, promising improved performance. The Adreno X1 offers competitive compute throughput, wide execution units, and optimized memory access, challenging Intel and AMD.


Qualcomm introduces the Snapdragon X Elite with the Adreno X1 iGPU, targeting the laptop market with higher clock speeds and an improved memory subsystem that supports up to 64 GB of DRAM. Against Intel's Xe-LPG and AMD's RDNA 3 iGPUs, the Adreno X1 offers competitive compute throughput and cache capacity. The architecture is built from Shader Processors, each containing two micro Shader Processor Texture Processors (uSPTPs), giving the GPU wide execution units and generous register file capacity. Plain INT32 adds are a weak point, but the Adreno X1 performs well on more complex operations.

On the memory side, the Adreno X1's first-level cache is a dedicated texture cache, with cluster caches added to take compute-access pressure off the GPU-wide L2. Despite lower cache bandwidth than its competitors, Qualcomm leans on fast LPDDR5X memory for efficient data access. Overall, the Snapdragon X Elite aims to deliver high performance in demanding PC games, competing closely with Intel and AMD in the laptop GPU market.
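For a sense of how numbers like the INT32 add rate are typically measured, here is a rough microbenchmark sketch, assuming an OpenCL stack. The kernel, sizes, and iteration counts are illustrative, not the article's actual test harness, and error handling is omitted for brevity.

    // Hypothetical INT32 throughput microbenchmark (illustrative, not the
    // article's harness). Assumes an OpenCL 1.2+ runtime and a GPU device.
    #include <CL/cl.h>
    #include <chrono>
    #include <cstdio>

    // Many independent work items each run a long chain of 32-bit adds,
    // so the aggregate rate approximates the GPU's INT32 add throughput.
    static const char* kSrc = R"(
    __kernel void int32_add_chain(__global int* out, int iters) {
        int a = get_global_id(0);
        int b = a ^ 0x5bd1e995;
        for (int i = 0; i < iters; ++i) {
            // 8 adds per loop iteration to amortize loop overhead
            a += b; b += a; a += b; b += a;
            a += b; b += a; a += b; b += a;
        }
        out[get_global_id(0)] = a + b;   // keep the result live
    }
    )";

    int main() {
        cl_platform_id plat; clGetPlatformIDs(1, &plat, nullptr);
        cl_device_id dev; clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
        cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, nullptr);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
        clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
        cl_kernel k = clCreateKernel(prog, "int32_add_chain", nullptr);

        const size_t global = 1 << 20;        // enough work items to fill the GPU
        const int iters = 4096;               // 8 adds per iteration
        cl_mem out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                    global * sizeof(int), nullptr, nullptr);
        clSetKernelArg(k, 0, sizeof(out), &out);
        clSetKernelArg(k, 1, sizeof(iters), &iters);

        clEnqueueNDRangeKernel(q, k, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
        clFinish(q);                          // warm-up run

        auto t0 = std::chrono::steady_clock::now();
        clEnqueueNDRangeKernel(q, k, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
        clFinish(q);
        auto t1 = std::chrono::steady_clock::now();

        double sec = std::chrono::duration<double>(t1 - t0).count();
        double ops = double(global) * iters * 8.0;   // total INT32 adds
        std::printf("INT32 add throughput: %.1f GOPS\n", ops / sec / 1e9);
        return 0;
    }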

Related

Unisoc and Xiaomi's 4nm Chips Said to Challenge Qualcomm and MediaTek

UNISOC and Xiaomi have each developed 4nm chips that challenge Qualcomm and MediaTek. UNISOC's chip pairs an X1 big core, A78 middle cores, and A55 small cores with a Mali G715 MC7 GPU, offering competitive performance at lower power consumption. Xiaomi's Xuanjie chip combines an X3 big core, A715 middle cores, and A510 small cores with an IMG CXT 48-1536 GPU, potentially integrating a MediaTek baseband. Xiaomi plans a separate mid-range phone line built on Xuanjie chips, aiming to strengthen its market presence. The successful development of these 4nm chips marks progress for domestically produced mobile chips and enhances their competitiveness.

More ARM Linux Laptops Are on the Way

More ARM Linux laptops are emerging, including Tuxedo Computers' "Drako" with Qualcomm's Snapdragon X Elite chipset to rival Apple's M2. This signals progress in ARM-based Linux devices, supported by Qualcomm's collaboration with Linaro for smoother integration. Challenges persist in ensuring compatibility and driver support, akin to Windows ARM laptops and Apple silicon MacBooks.

Testing AMD's Giant MI300X

AMD introduces the Instinct MI300X to challenge Nvidia in the GPU compute market. The MI300X features a chiplet setup, Infinity Cache, and the CDNA 3 architecture, delivers competitive performance against Nvidia's H100, and excels in local memory bandwidth tests.

AMD MI300X performance compared with Nvidia H100

The AMD MI300X AI GPU outperforms Nvidia's H100 in cache, latency, and some inference benchmarks. It excels in cache performance and compute throughput, though AI inference results vary by workload; real-world performance and ecosystem support remain the deciding factors.

Examining the Nintendo Switch (Tegra X1) Video Engine

The article analyzes Nintendo Switch's Tegra X1 SoC video engine, comparing its performance with desktop Maxwell's. Tegra X1 excels in HEVC support and efficiency, showcasing advancements in video quality and compression.

11 comments
By @dagmx - 5 months
It’s been interesting seeing the difference in architecture play out in benchmarks.

For context, there was a lot of hullabaloo a while ago when the Adreno 730 was posting super impressive benchmarks, outpacing Apple’s GPU and putting up a good fight against AMD and NVIDIA’s lower/mid range cards.

Since then, with the Snapdragon X, there’s been a bit of a deflation which has shown the lead flip dramatically when targeting more modern graphics loads. The Adreno now ranks behind the others when it comes to benchmarks that reflect desktop gaming, including being behind Apple’s GPU.

It’ll be interesting to see how Qualcomm moves forward with newer GPU architectures. Whether they’ll sacrifice their mobile lead in the pursuit of gaining ground for higher end gaming.

By @benreesman - 5 months
“In Adreno tradition, Adreno X1’s first level cache is a dedicated texture cache. Compute accesses bypass the L1 and go to the next level in the cache hierarchy. It’s quite different from current AMD, Nvidia, and Intel GPU architectures, which have a general purpose first level cache with significant capacity. On prior Adreno generations, the GPU-wide L2 cache would have to absorb all compute accesses. Adreno X1 takes some pressure off the L2 by adding 128 KB cluster caches.”

People have been tinkering with L1 cache conditionality since the L1i and L1d split in 1976, but the Qualcomm people are going hard on this and the jury seems out on how it's going to play.

The line between the L1 and the register file has been getting blurrier every year for over a decade, and I increasingly lean on a heuristic of paying the most attention to L2 behavior until the profiles are in, but I'm admittedly engaging in alchemy.
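For reference, those profiles usually come from a pointer-chasing probe along these lines. A minimal sketch, assuming OpenCL; the host boilerplate (platform, context, queue, program build) is the same as in any harness and is omitted here, and all names are illustrative:

    // Pointer-chasing latency probe sketch: sweep the footprint across cache
    // sizes and watch latency step up at each level (texture L1, cluster
    // cache, L2, DRAM). Host boilerplate omitted; names are illustrative.
    #include <algorithm>
    #include <numeric>
    #include <random>
    #include <vector>

    // Host side: a random cyclic permutation over n slots, so prefetchers
    // can't guess the next load. idx[0] stays 0 so the chase starting at
    // slot 0 visits the whole cycle.
    std::vector<int> make_chase_pattern(size_t n) {
        std::vector<int> idx(n);
        std::iota(idx.begin(), idx.end(), 0);
        std::shuffle(idx.begin() + 1, idx.end(), std::mt19937{42});
        std::vector<int> next(n);
        for (size_t i = 0; i + 1 < n; ++i) next[idx[i]] = idx[i + 1];
        next[idx[n - 1]] = idx[0];            // close the cycle
        return next;
    }

    // Device side: one work item chases the chain; every load depends on
    // the previous one, so total time / hops = load-to-use latency at that
    // footprint.
    static const char* kChaseSrc = R"(
    __kernel void chase(__global const int* next, __global int* out, int hops) {
        int p = 0;
        for (int i = 0; i < hops; ++i)
            p = next[p];                      // serialized, latency-bound loads
        *out = p;                             // keep the chain live
    }
    )";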

Can any serious chip people, as opposed to an enthusiastic novice like myself, weigh in on how the thinking is shaping up WRT this?

By @pjmlp - 5 months
> DirectX 12 Ultimate: Disabled

That right there is already a reason not to buy this in 2024.

DirectX 12 Ultimate is 4 years old by now, and with plain DirectX 12 the best it can do is a 10-year-old 3D API.
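For reference, DirectX 12 Ultimate corresponds to feature level 12_2 (raytracing 1.1, mesh shaders, variable rate shading, sampler feedback), and an application can query it directly. A minimal sketch, assuming a recent Windows SDK, with error handling trimmed:

    // Check DirectX 12 Ultimate support by asking the device for feature
    // level 12_2. Assumes a Windows SDK recent enough to define it.
    #include <windows.h>
    #include <d3d12.h>
    #include <cstdio>
    #pragma comment(lib, "d3d12.lib")

    int main() {
        ID3D12Device* dev = nullptr;
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&dev)))) {
            std::puts("No D3D12 device");
            return 1;
        }
        D3D_FEATURE_LEVEL levels[] = { D3D_FEATURE_LEVEL_12_2 };
        D3D12_FEATURE_DATA_FEATURE_LEVELS q = {};
        q.NumFeatureLevels = 1;
        q.pFeatureLevelsRequested = levels;
        // If 12_2 is unsupported the call fails and MaxSupportedFeatureLevel
        // stays zero-initialized, so the comparison below is safe.
        dev->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &q, sizeof(q));
        std::puts(q.MaxSupportedFeatureLevel >= D3D_FEATURE_LEVEL_12_2
                      ? "DirectX 12 Ultimate: supported"
                      : "DirectX 12 Ultimate: not supported");
        dev->Release();
        return 0;
    }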

This is basically a GPU for Office work.

By @gary_0 - 5 months
Re: the manual driver updates. Recently I put a clean Win11 install on an ASUS Meteor Lake laptop for someone, and Windows downloaded and installed all the latest drivers automatically (along with a bunch of fresh bloatware, natch). Maybe Qualcomm is working with Microsoft so their drivers will get updated the same way?
By @jeroenhd - 5 months
I wonder if there's performance being left on the table because of the way programs and games are designed. It's no secret Qualcomm's mobile chips will run like shit when you try to use desktop code on them, because they're designed differently. I wonder if we're seeing aspects of that here. It would explain why Qualcomm convinced their press team of impressive numbers that nobody in the real world has been able to replicate.

There was a whole comic about design differences when porting desktop-style games and shaders to mobile (I can't find it for the life of me) which was a pretty good beginner's guide to porting that stuck with me.

By @mirsadm - 5 months
With my own use case I've noticed very poor compute shader performance on the Snapdragon GPUs. Even worse, the drivers are completely unpredictable. The same shader will sometimes run 2x slower for seemingly no good reason at all. I didn't realise games these days relied so much on compute shaders. It's no surprise it doesn't perform as well as it should.
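One way to quantify that kind of run-to-run swing is to time the same dispatch repeatedly with OpenCL profiling events, which use the GPU's own timestamps and so exclude host-side queueing jitter. A minimal sketch, assuming a queue created with CL_QUEUE_PROFILING_ENABLE; the function names here are illustrative:

    // Time a single kernel dispatch with GPU-side timestamps. Assumes the
    // command queue was created with CL_QUEUE_PROFILING_ENABLE; other
    // setup (context, kernel, args) as in a normal OpenCL harness.
    #include <CL/cl.h>
    #include <cstdio>

    double time_dispatch_ms(cl_command_queue q, cl_kernel k, size_t global) {
        cl_event ev;
        clEnqueueNDRangeKernel(q, k, 1, nullptr, &global, nullptr, 0, nullptr, &ev);
        clWaitForEvents(1, &ev);
        cl_ulong t0 = 0, t1 = 0;
        clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START, sizeof(t0), &t0, nullptr);
        clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END,   sizeof(t1), &t1, nullptr);
        clReleaseEvent(ev);
        return (t1 - t0) * 1e-6;              // nanoseconds -> milliseconds
    }

    // Run the same dispatch many times: a 2x min-to-max spread on identical
    // inputs points at the driver (clock ramping, recompiles), not the shader.
    void report_variance(cl_command_queue q, cl_kernel k, size_t global, int runs) {
        double mn = 1e9, mx = 0;
        for (int i = 0; i < runs; ++i) {
            double ms = time_dispatch_ms(q, k, global);
            if (ms < mn) mn = ms;
            if (ms > mx) mx = ms;
        }
        std::printf("dispatch time: min %.3f ms, max %.3f ms (%.2fx)\n",
                    mn, mx, mx / mn);
    }
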
By @bhouston - 5 months
Nice! What are the comparisons with Apple's GPU in the latest M-series chips?
By @rubymamis - 5 months
Why is there no comparison with Apple's iGPU?
By @perdomon - 5 months
How soon can I buy a handheld console with one of these inside, and can it run God of War?
By @smusamashah - 5 months
That's a mouthful of a name
By @jauntywundrkind - 5 months
ARM remains a shitty backwater of unsupportable crap ass nonsense being thrown over the wall.

Qualcomm bought Imageon from AMD in 2009. Sure, they've done some work, made some things somewhat better. But hearing that the graphics architecture is woefully out of date, with terrible compute performance, is ghastly but unsurprising. Trying to watch this thing run games is going to be a sad sad sad story. And that's only 50% down to the translation layers (which would be amazing if this were Linux and not a Windows or Android device).