Raytracing on Intel's Arc B580 – By Chester Lam
Intel's Arc B580 GPU shows improved raytracing capabilities, sustaining 467.9 million rays per second in a path-traced Cyberpunk 2077 workload, but the scene still runs at roughly 12 FPS, with memory latency as the main bottleneck.
Intel's Arc B580 GPU has been evaluated for its raytracing capabilities, particularly while rendering Cyberpunk 2077 with path tracing enabled. The B580's architecture includes dedicated raytracing accelerators (RTAs), whose traversal pipeline count has grown from two to three, allowing more rays to be processed in parallel. During testing, the B580 sustained 467.9 million rays per second, with each ray requiring an average of 39.5 traversal steps. The RTA's BVH cache has also been doubled from 8 KB to 16 KB, improving latency and reducing pressure on the L1 cache. Despite these advancements, the GPU ran at only 12 FPS, and shader threads spent much of their time stalled, indicating that the raytracing workload is demanding and that memory latency remains a critical issue. The architecture handled high thread counts well, but execution unit utilization was low, pointing to limited instruction-level parallelism. Overall, the B580 shows promise in raytracing performance, but memory access latency and shader execution efficiency need to improve for demanding workloads like this one.
- Intel's Arc B580 GPU features enhanced raytracing capabilities with increased traversal pipelines.
- The GPU processed 467.9 million rays per second during testing, but frame rates were low at 12 FPS.
- The BVH cache size was doubled to 16 KB, improving latency and cache performance.
- Significant stalls in shader thread processing indicate memory latency issues.
- Low execution unit utilization suggests inefficiencies in handling complex raytracing workloads.
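To put the quoted figures in perspective, here is a rough back-of-envelope calculation in Python. It is purely illustrative and assumes the rays-per-second, traversal-step, and FPS numbers all describe the same capture, which the summary implies but does not state outright.

```python
# Back-of-envelope math from the figures quoted above (467.9 Mrays/s,
# ~39.5 traversal steps per ray, ~12 FPS). Illustrative only.
rays_per_second = 467.9e6
steps_per_ray = 39.5
frames_per_second = 12

traversal_steps_per_second = rays_per_second * steps_per_ray  # ~1.85e10 steps/s
rays_per_frame = rays_per_second / frames_per_second          # ~39 million rays/frame

print(f"traversal steps per second: {traversal_steps_per_second:.2e}")
print(f"rays per frame:             {rays_per_frame:.2e}")
```

In other words, the RTAs are grinding through tens of billions of traversal steps per second and tens of millions of rays per frame, which is consistent with the article's point that the workload is memory-latency bound rather than starved for ray throughput.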
Related
Intel announces Arc B-series "Battlemage" discrete graphics with Linux support
Intel announced its Arc B-Series "Battlemage" graphics cards, featuring the B580 and B570 models with improved performance. Prices are $249 and $219, shipping next week and January 16, respectively.
Intel Arc B580 trades blows with the RTX 4060 and RX 7600 in early benchmarks
Intel's Arc B580 GPU shows up to 30% better performance than the A580, is 10% faster than the RTX 4060, and competes well against AMD's RX 7600, pending driver stability.
Intel Arc B580 Delivers Promising Linux GPU Compute Potential for Battlemage
The Intel Arc B580 graphics card demonstrates strong GPU compute potential on Linux, requiring Linux 6.12+ and Mesa 24.3+. Initial benchmarks show competitive performance, though some bugs are present.
Intel built the best budget GPU for 1440p gaming
The Intel Arc B580, priced at $249, targets budget gamers with 12GB memory and improved performance, claiming 10% better framerates than Nvidia's RTX 4060, despite efficiency challenges and upcoming AMD competition.
Intel Arc B570 Graphics Performance on Linux Review
Intel launched the Arc B570 graphics card at $219+, featuring 18 Xe cores, 10GB VRAM, and open-source driver support for Linux, making it a strong budget option for users.
> Intel uses a software-managed scoreboard to handle dependencies for long latency instructions.
Interesting! I've seen this in compute accelerators before, but both AMD and Nvidia manage their long-latency dependency tracking in hardware, so it's interesting to see a major GPU vendor taking this approach. Looking into it more, the interface their `send`/`sendc` instructions expose is basically the same interface the PE would use to talk to the NoC: rather than having some high-level load instruction that hardware then translates to "send a read request to the dcache, and when it comes back increment this scoreboard slot", the ISA lets (or makes) the compiler state all of that directly. That's good for fine control of the hardware, bad if the compiler can't make the inferences the hardware would (e.g. based on runtime data), but then good again if you really want to minimize area and so wouldn't have that fancy logic in the pipeline anyway.
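For anyone who hasn't run into this pattern before, here is a minimal Python sketch of the contract being described: the compiler, not the hardware, decides which scoreboard slot tracks each long-latency request and where the wait goes. This is a toy model for illustration, not Intel's actual `send`/`sendc` encoding or scoreboard syntax.

```python
# Toy model of a software-managed scoreboard. The compiler picks a slot for
# each long-latency request and inserts the wait before the first consumer;
# hardware only has to stall until the slot clears.

class Scoreboard:
    def __init__(self, slots: int = 16):
        self.pending = [False] * slots  # one flag per outstanding request

    def send(self, slot: int, request: str) -> None:
        # "send": fire a message at the memory subsystem / NoC and note which
        # slot the eventual response will clear.
        print(f"issue: {request} (tracked by slot {slot})")
        self.pending[slot] = True

    def response_arrived(self, slot: int) -> None:
        # The memory subsystem clears the slot when the data comes back.
        self.pending[slot] = False

    def wait(self, slot: int) -> None:
        # Compiler-inserted wait before the first instruction that reads
        # the result.
        if self.pending[slot]:
            print(f"thread stalls here until slot {slot} clears")


sb = Scoreboard()
sb.send(slot=3, request="load r10 <- [addr]")  # long-latency load
# ...compiler schedules independent work here to hide the latency...
sb.wait(slot=3)                                # would stall: data not back yet
sb.response_arrived(slot=3)                    # response returns from the NoC
sb.wait(slot=3)                                # now proceeds without stalling
```

The trade-off the comment describes falls out directly: the compiler gets full control over slot assignment and wait placement, but it has to get that scheduling right statically, without the runtime information a hardware scoreboard would see.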
I'm also hoping that Intel puts out an Arc A770-class upgrade in their B-series line-up.
My workstation and my kids' playroom gaming computer both have A770s, and they've been really amazing for the price I paid, $269 and $190. My triple-screen racing sim has an RX 7900 GRE ($499), and of the three the GRE has surprisingly been the least consistently stable (e.g. driver timeouts, crashes).
Granted, I came into the new Intel GPU game after they'd gone through 2 solid years of driver quality hell, but I've been really pleased with Intel's uncharacteristic focus and pace of improvement in both the hardware and especially the software. I really hope they keep it up.
I didn't manage to get the B580 for MSRP (living in Europe tends to increase the price quite a bit; a regular RTX 3060 is over 300 EUR here), but I have to say it's a pretty nice card, especially when most others seem either overpriced or outside of my budget.
When paired with a 5800X the performance is good, XeSS upscaling looks prettier than FSR and pretty close to DLSS, and the frame generation also seems higher quality than FSR's (though with more latency, from what I've seen). The hardware AV1 encoder is lovely and the other QSV encoders are great, though I do wish I could get a big enough case and a new PSU to run both the A580 and B580 in the same computer, using the B580 for games and the A580 for everything else (not quite sure how well that combination would work, if at all).
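As a concrete example of putting that AV1 encoder to work, here is a rough sketch of driving ffmpeg's QSV AV1 encoder from Python. It assumes an ffmpeg build with QSV (oneVPL) support and an Arc GPU present; the file names and bitrate are placeholders, not recommended settings.

```python
import subprocess

# Sketch: transcode a clip with ffmpeg's QSV AV1 encoder (av1_qsv).
# Assumes ffmpeg was built with QSV/oneVPL support and an Arc GPU is present.
cmd = [
    "ffmpeg",
    "-hwaccel", "qsv",        # use QSV for decode where possible
    "-i", "input.mp4",        # placeholder input
    "-c:v", "av1_qsv",        # hardware AV1 encode on the Arc card
    "-b:v", "6M",             # target bitrate (placeholder)
    "-c:a", "copy",           # pass audio through untouched
    "output.mkv",
]
subprocess.run(cmd, check=True)
```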
Either way, I'm happy that I got the card, especially paired with a decent CPU (even the A-series card with my previous Ryzen 5 4500 was an absolute mess: no software showed the CPU being maxed out, but it very much was the bottleneck). I hope it keeps delivering the kind of performance I get in War Thunder or GTA V Enhanced Edition (yes, the raytracing works there as well) for years to come, and even in more recent games like Kingdom Come: Deliverance 2.
If upscaling/framegen support were better in most game engines and games, it could be stretched further, or at least used as a band-aid for the likes of Delta Force or Forever Winter: games that ship with pretty bad optimization and are taxing on the hardware, with no good way to turn subjectively unnecessary effects or graphical features off, despite the underlying engines being able to scale way down.
At the end of the day, even if Intel Arc won't displace any of the big players, it should make the market more competitive, which is good for consumers.