March 16th, 2025

Raytracing on Intel's Arc B580 – By Chester Lam

Intel's Arc B580 GPU shows improved raytracing capability, sustaining 467.9 million rays per second in a Cyberpunk 2077 path tracing test, but the low frame rate (12 FPS) and long memory latency show how demanding the workload remains.

Intel's Arc B580 GPU has been evaluated for its raytracing capabilities, particularly in the context of rendering Cyberpunk 2077 with path tracing enabled. The B580's architecture includes dedicated raytracing accelerators (RTAs) whose traversal pipeline count has increased from two to three, allowing more rays to be traversed in parallel. During testing, the B580 sustained 467.9 million rays per second, with each ray requiring an average of 39.5 traversal steps. The RTA's BVH cache has also been doubled from 8 KB to 16 KB, improving latency and reducing pressure on the L1 cache. Despite these advancements, the GPU ran at a low frame rate (12 FPS) with significant stalls in shader thread processing, indicating that the raytracing workload is demanding and that memory latency remains a critical issue. The architecture was able to keep a high thread count in flight, but execution unit utilization was low, suggesting limited instruction-level parallelism within those threads. Overall, while the B580 shows promise in raytracing, memory access and shader execution efficiency remain the main obstacles to strong performance in demanding applications.

- Intel's Arc B580 GPU features enhanced raytracing capabilities with increased traversal pipelines.

- The GPU processed 467.9 million rays per second during testing, but frame rates were low at 12 FPS.

- The BVH cache size was doubled to 16 KB, improving latency and cache performance.

- Significant stalls in shader thread processing indicate memory latency issues.

- Low execution unit utilization suggests inefficiencies in handling complex raytracing workloads.
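
As a rough sanity check on the reported figures, 467.9 million rays per second at an average of 39.5 traversal steps per ray works out to roughly 18.5 billion BVH node visits every second, which is why the larger BVH cache and memory latency dominate the discussion. The sketch below is a minimal software model of stack-based BVH traversal that counts node visits per ray; the node layout, slab test, and tiny example tree are simplified stand-ins for illustration, not a description of Intel's RTA hardware.

```python
# Minimal software model of stack-based BVH traversal.
# The node layout and intersection test are simplified stand-ins,
# not a description of Intel's raytracing accelerator (RTA).

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Node:
    bounds: Tuple[Tuple[float, float], ...]   # (min, max) per axis
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    primitive: Optional[int] = None           # leaf payload (e.g. triangle index)

def hits(bounds, origin, inv_dir) -> bool:
    """Slab test: does the ray intersect this axis-aligned box?"""
    tmin, tmax = 0.0, float("inf")
    for (lo, hi), o, inv in zip(bounds, origin, inv_dir):
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
    return tmin <= tmax

def traverse(root: Node, origin, direction):
    """Walk the BVH with an explicit stack; return (hit primitives, traversal steps)."""
    inv_dir = tuple(1.0 / d if d != 0 else float("inf") for d in direction)
    stack: List[Node] = [root]
    steps, found = 0, []
    while stack:
        node = stack.pop()
        steps += 1                      # one "traversal step" = one node visit
        if not hits(node.bounds, origin, inv_dir):
            continue
        if node.primitive is not None:  # leaf: record the primitive it holds
            found.append(node.primitive)
        else:                           # interior node: descend into children
            if node.left:
                stack.append(node.left)
            if node.right:
                stack.append(node.right)
    return found, steps

# Tiny usage example: a root box with two leaf children.
leaf_a = Node(bounds=((0, 1), (0, 1), (0, 1)), primitive=0)
leaf_b = Node(bounds=((2, 3), (0, 1), (0, 1)), primitive=1)
root = Node(bounds=((0, 3), (0, 1), (0, 1)), left=leaf_a, right=leaf_b)
print(traverse(root, origin=(-1, 0.5, 0.5), direction=(1, 0, 0)))

# Back-of-the-envelope check on the reported figures:
rays_per_second = 467.9e6
steps_per_ray = 39.5
print(f"~{rays_per_second * steps_per_ray / 1e9:.1f} billion node visits/s")
# => ~18.5 billion, which is why BVH cache size and memory latency matter.
```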

7 comments
By @achierius - 27 days
It feels like just yesterday that Chips and Cheese started publishing (I checked, and they started up in 2020 -- so not that long ago after all!), and now they've really become a mainstay in my silicon newsletter stack, up there with Semianalysis/Semiengineering/etc.

> Intel uses a software-managed scoreboard to handle dependencies for long latency instructions.

Interesting! I've seen this in compute accelerators before, but both AMD and Nvidia manage their long-latency dependency tracking in hardware so it's interesting to see a major GPU vendor taking this approach. Looking more into it, it looks like the interface their `send`/`sendc` instruction exposes is basically the same interface that the PE would use to talk to the NOC: rather than having some high-level e.g. load instruction that hardware then translates to "send a read-request to the dcache, and when it comes back increment this scoreboard slot", the ISA lets/makes the compiler state that all directly. Good for fine control of the hardware, bad if the compiler isn't able to make inferences that the hardware would (e.g. based on runtime data), but then good again if you really want to minimize area and so wouldn't have that fancy logic in the pipeline anyways.
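
For readers unfamiliar with the distinction, here is a minimal sketch of the contrast being described. The instruction names and the `sbid` field below are invented for illustration only; this is not the Xe ISA, its `send`/`sendc` encoding, or its actual scoreboard syntax. The point is just that the compiler, not the hardware, must tag each long-latency load with a slot and insert an explicit wait before the first use, and that scheduling independent work between the two hides latency.

```python
# Toy model of compiler-managed scoreboarding for long-latency loads.
# Instruction names and the 'sbid' field are invented for illustration;
# they are not the Xe ISA or its software-scoreboard syntax.

LOAD_LATENCY = 300  # cycles a memory load is assumed to take in this toy model

class Scoreboard:
    """Tracks the cycle at which each scoreboard slot's load result lands."""
    def __init__(self, slots=16):
        self.ready_at = [0] * slots

    def issue_load(self, sbid, now):
        self.ready_at[sbid] = now + LOAD_LATENCY

    def wait(self, sbid, now):
        # The explicit sync the compiler must emit before the first use.
        return max(now, self.ready_at[sbid])

def run(program):
    """Execute a tiny instruction list; return total cycles consumed."""
    sb, regs, cycle = Scoreboard(), {}, 0
    for op, *args in program:
        cycle += 1                      # every instruction takes an issue cycle
        if op == "load":                # load DST <- memory, tagged with sbid
            dst, sbid = args
            sb.issue_load(sbid, cycle)
            regs[dst] = None            # value not available yet (placeholder)
        elif op == "sync":              # stall until the tagged load completes
            (sbid,) = args
            cycle = sb.wait(sbid, cycle)
        elif op == "add":               # consume values (placeholders count as 0)
            dst, a, b = args
            regs[dst] = (regs.get(a) or 0) + (regs.get(b) or 0)
    return cycle

# Naive schedule: sync right after each load, so nothing hides the latency.
naive = [
    ("load", "r1", 0), ("sync", 0),
    ("load", "r2", 1), ("sync", 1),
    ("add",  "r3", "r3", "r3"),
    ("add",  "r4", "r1", "r2"),
]
# Compiler-scheduled: issue both loads, do independent work, sync late.
scheduled = [
    ("load", "r1", 0),
    ("load", "r2", 1),
    ("add",  "r3", "r3", "r3"),
    ("sync", 0), ("sync", 1),
    ("add",  "r4", "r1", "r2"),
]
print("naive:    ", run(naive), "cycles")
print("scheduled:", run(scheduled), "cycles")
```

In the hardware-managed scheme the comment attributes to AMD and Nvidia, the explicit sync entries would disappear and the pipeline itself would stall the consuming add; the trade-off described above is exactly that bookkeeping moving between compiler and silicon.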

By @im_down_w_otp - 27 days
I love these breakdown writeups so much.

I'm also hoping that Intel puts out an Arc A770 class upgrade in their B-series line-up.

My workstation and my kids' playroom gaming computer both have A770s, and they've been really amazing for the price I paid, $269 and $190. My triple-screen racing sim has an RX 7900 GRE ($499), and of the three the GRE has surprisingly been the least consistently stable (e.g. driver timeouts, crashes).

Granted, I came into the new Intel GPU game after they'd gone through 2 solid years of driver quality hell, but I've been really pleased with Intel's uncharacteristic focus and pace of improvement in both the hardware and especially the software. I really hope they keep it up.

By @sergiotapia - 27 days
Was raytracing a psyop by Nvidia to lock out AMD? Games today don't look that much nicer than they did 10 years ago, yet they demand crazy hardware. Is raytracing a solution looking for a problem?

https://x.com/NikTekOfficial/status/1837628834528522586

By @rayiner - 27 days
This is so cool! I think this is a video of Cyberpunk 2077 with path tracing on versus off: https://www.youtube.com/watch?v=89-RgetbUi0. It seems like a real, next-generation advance in graphics quality that we haven't seen in a while.

By @KronisLV - 26 days
I actually have a B580 that I replaced my old A580 with.

I didn't manage to get it for MSRP (because living in Europe does tend to increase the price quite a bit, a regular RTX 3060 is over 300 EUR here), but I have to say that it's a pretty nice card, when most others seem quite overpriced or outside of my budget.

When paired with a 5800X the performance is good: XeSS upscaling looks prettier than FSR and pretty close to DLSS, the framegen also seems higher quality than FSR (but with more latency, from what I've seen), the hardware AV1 encoder is lovely, and the other QSV encoders are great. I do wish I could get a case big enough and a new PSU to have both the A580 and B580 in the same computer, using the B580 for games and the A580 for everything else (not quite sure how well that combination would work, if at all).

Either way, I'm happy that I got the card, especially with a decent CPU (even the A series with my previous Ryzen 5 4500 was an absolute mess: no software showed the CPU as maxed out, but it very much was a bottleneck). I hope the kind of performance I get in War Thunder, GTA V Enhanced Edition (yes, the raytracing works there as well), and more recent games like Kingdom Come: Deliverance 2 holds up for years to come.

If upscaling/framegen support were even better across game engines and games, the card could be stretched further, or at least used as a band-aid for the likes of Delta Force or Forever Winter - games that come out poorly optimized and tax the hardware, with no good way to turn off subjectively unnecessary effects or graphical features, despite the underlying engines themselves being able to scale way down.

At the end of the day, even if Intel Arc won't displace any of the big players in the market, it should improve market competitiveness, which is good for consumers.

By @christkv - 27 days
Arc will be successful because it will be in all mobile chips. The discrete GPU market is smaller by a big factor, and they are targeting the biggest part of that market with low-cost cards.

By @api - 27 days
Intel Arc could be Intel's comeback if they play it right. AMD's got the hardware to disrupt Nvidia, but their software sucks and they have a bad reputation for that. Apple's high-end M chips are good but also expensive like Nvidia (and sold only with a high-end Mac), and don't quite have the RAM bandwidth.