Intel announces Arc B-series "Battlemage" discrete graphics with Linux support
Intel announced its Arc B-Series "Battlemage" graphics cards, featuring the B580 and B570 models with improved performance. Prices are $249 and $219, shipping next week and January 16, respectively.
Intel has announced its next-generation Arc B-Series "Battlemage" discrete graphics cards, succeeding the two-year-old Alchemist series. The new Battlemage cards, which include the B580 and B570 models, feature significant improvements in performance and efficiency. The B580 is equipped with 20 Xe cores, a 2670MHz graphics clock, and 12GB of GDDR6 memory, while the B570 has 18 Xe cores, a 2500MHz clock, and 10GB of memory. Intel claims up to a 70% performance increase per Xe core and a 50% improvement in performance per watt compared to the previous generation. Both models support open-source graphics drivers on Linux, with the B580 priced at $249 and the B570 at $219, set to ship next week and on January 16, respectively. The cards are designed for mid-range gaming, particularly at 1440p resolution. They utilize a PCIe 4.0 x8 interface and require an 8-pin power connector. While initial benchmarks show the B580 outperforming the Arc A750 by 24% and competing favorably against NVIDIA's RTX 4060, detailed performance metrics and Linux support specifics will be available after the review embargo lifts.
- Intel's Arc B-Series "Battlemage" graphics cards have been announced, succeeding the Alchemist series.
- The B580 and B570 models feature significant performance and efficiency improvements.
- Both models support open-source graphics drivers on Linux.
- The B580 is priced at $249 and the B570 at $219, with shipping dates set for next week and January 16, respectively.
- Initial benchmarks indicate the B580 outperforms the Arc A750 and competes well against NVIDIA's RTX 4060.
Related
Intel's big plan to take on Qualcomm; promises that x86 is here to stay
Intel has launched its Lunar Lake chips, the Core Ultra 200V, enhancing power efficiency and performance in mobile computing, with models available for preorder and general availability on September 24, 2024.
With Granite Rapids, Intel is back to trading blows with AMD
Intel launched Granite Rapids Xeon processors, featuring up to 128 cores and improved memory bandwidth, enhancing performance in HPC and AI applications, with more Xeon 6 models expected soon.
2 years after entering the graphics card game, Intel has nothing to show for it
Intel has failed to capture market share in the graphics card sector since 2022, facing challenges like driver issues and lack of new products, while Nvidia and AMD dominate the market.
Lunar Lake's iGPU: Debut of Intel's Xe2 Architecture
Intel launched its Lunar Lake mobile chips featuring the new Xe2 iGPU architecture, enhancing efficiency and performance with improved cache capacity, but still trailing behind AMD in compute throughput.
AMD Instinct MI325X to Feature 256GB HBM3E Memory, CDNA4-Based MI355X with 288GB
AMD announced updates to its Instinct GPUs, introducing the MI325X with 256GB memory and 6 TB/s bandwidth, and the MI355X with 288GB memory and 8 TB/s bandwidth, launching in 2025.
- Many users express disappointment over the limited VRAM (12GB), suggesting it is insufficient for modern gaming and machine learning applications.
- There is a strong interest in the cards' performance, particularly in relation to transcoding and Linux support, with some users eager to test their capabilities.
- Concerns about Intel's driver support and overall reliability compared to Nvidia persist, with some users sharing past experiences with Intel's graphics cards.
- Commenters are curious about the target audience for these GPUs, questioning whether they can compete effectively in the current market.
- Several users highlight the aggressive pricing strategy of Intel, hoping it will lead to better options in the budget segment.
At the very least, it's nice to have some decent BUDGET cards now. The ~$200 segment has been totally dead for years. I have a feeling Intel is losing a fair chunk of $ on each card though, just to enter the market.
I feel like _anyone_ who can pump out GPUs with 24GB+ of memory that are usable for py-stuff would benefit greatly.
Even if it's not as performant as the NVIDIA options - just to be able to get the models to run, at whatever speed.
They would fly off the shelves.
Well-informed gamers know Intel's discrete GPU is hanging by a thread, so they're not hopping on that bandwagon.
Too small for ML.
The only people really happy seem to be the ones buying it for transcoding and I can't imagine there is a huge market of people going "I need to go buy a card for AV1 encoding".
I am hoping these are open in such a manner that they can be used in OpenBSD. Right now I avoid all hardware with a Nvidia GPU. That makes for somewhat slim pickings.
If the firmware is acceptable to the OpenBSD folks, then I will happily use these.
For power, it's 190W compared to the 4060's 115W.
EDIT: from [1]: the B580 has 21.7 billion transistors at 406 mm² die area, compared to the 4060's 18.9 billion and 146 mm². That's a big die.
It's a real shame; the single-slot A380 is a great performance-for-price card for light gaming and general use in small machines.
I haven't regretted the purchase at all.
Fortunately, having their Linux drivers be (mostly?) open source makes a purchase seem less risky.
Presumably graphics cards optimised for hairdressers and telephone sanitisers?
Other exciting tests will include things like fan control, since that’s still an issue with Arc GPUs.
Should make for a fun blog post.
Very happy with my A770. Godsend for people like me who want plenty of VRAM to play with neural nets, but don't have the money for workstation GPUs or massively overpriced Nvidia flagships. Works painlessly with Linux, gaming performance is fine, and the price was the first time I haven't felt fleeced buying a GPU in many years. Not having CUDA does lead to some friction, but I think Nvidia's CUDA moat is a temporary situation.
Prolly sit this one out unless they release another SKU with 16GB or more RAM. But if Intel survives long enough to release Celestial, I'll happily buy one.
My experience on WH40K DT has taught me that upscaling is absolutely vital for a reasonable experience on some games.
But surely it's easy enough to compete on video RAM - why not load their GPUs to the max with it?
And also video encoder cores - Intel has a great video encoder core, and these vary little from high-end to low-end GPUs - so they could make it a standout feature to have, for example, 8 video encoder cores instead of 2.
It's no wonder Nvidia is the king because AMD and Intel just don't seem willing to fight.
However, this is going to go on clearance within 6 months. Good for consumers, bad for Intel.
Also keep in mind for any ML task Nvidia has the best ecosystem around. AMD and Intel are both like 5 years behind to be charitable...
How was driver support for their A-series?
Not a huge fan of the numbering system they've used. B > A doesn't parse as easily as 5xxx > 4xxx to me.
IIRC that was one of the original goals of geohot's tinybox project, though I'm not sure exactly how that evolved
Their dedication to Linux Support, combined with their good pricing makes this a potential buy for me in future versions. To be frank, I won't be replacing my 7900 XTX with this. Intel needs to provide more raw power in their cards and third parties need to improve their software support before this captures my business.
ah well. pretty sure it'll do for my needs.
Based on scaling by XMX/engine clock napkin math, the B580 should have 230 FP16 TFLOPS and 456 GB/s MBW theoretical. At similar efficiency to LNL Xe2, that should be about pp512 ~4700 t/s and tg128 ~77 t/s for a 7B class model. This would be about 75% of a 3090 for pp and 50% for tg (and of course, 50% of memory). For $250, that's not too bad.
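As a rough illustration of that napkin math, here is a minimal Python sketch of the scaling. The Lunar Lake (LNL) baseline throughput numbers are placeholder assumptions, not measurements; only the B580's core count and clock come from the announcement, and the 456 GB/s bandwidth figure is the commenter's estimate.

```python
# Sketch of the scaling napkin math: project B580 LLM throughput from Lunar Lake's
# Xe2 iGPU by the ratio of Xe cores (and thus XMX engines) and clock speed for
# compute-bound prompt processing, and by memory bandwidth for token generation.
# The LNL baseline figures are illustrative assumptions, not measured numbers.

lnl_xe_cores, lnl_clock_ghz, lnl_mbw_gbs = 8, 2.05, 136.5        # assumed LNL Xe2 config
b580_xe_cores, b580_clock_ghz, b580_mbw_gbs = 20, 2.67, 456.0    # announced / estimated B580

compute_scale = (b580_xe_cores / lnl_xe_cores) * (b580_clock_ghz / lnl_clock_ghz)
bandwidth_scale = b580_mbw_gbs / lnl_mbw_gbs

# pp512 (prompt processing) is compute-bound; tg128 (token generation) is bandwidth-bound.
lnl_pp512_tps = 1450.0   # hypothetical 7B-class pp512 rate on LNL Xe2
lnl_tg128_tps = 23.0     # hypothetical 7B-class tg128 rate on LNL Xe2

print(f"compute x{compute_scale:.2f}, bandwidth x{bandwidth_scale:.2f}")
print(f"pp512 ~{lnl_pp512_tps * compute_scale:.0f} t/s, tg128 ~{lnl_tg128_tps * bandwidth_scale:.0f} t/s")
```

With those assumed baselines the projection lands in the same ballpark as the figures quoted above (~4700 t/s prompt processing, ~77 t/s generation); real numbers will depend on drivers and kernels once reviews land.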
I do want to note a couple things from my poking around. The IPEX-LLM [1] team was very responsive, and was able to address an issue I had w/ llama.cpp within days. They are doing weekly update releases, so that's great. IPEX stands for Intel Extension for PyTorch [2], and it is mostly a drop-in for PyTorch: "Intel® Extension for PyTorch* extends PyTorch* with up-to-date features optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs through the PyTorch* xpu device."
All of this depends on the Intel oneAPI Base Kit [3], which has easy Linux (and presumably Windows) support. I am normally an AUR guy on my Arch Linux workstation, but those packages are basically broken, and I had much more success installing the oneAPI Base Kit directly in Arch Linux (w/o issues). Sadly, this is also where there are issues: some of the code is either dependent on older versions of the oneAPI Base Kit that are no longer available (vLLM requires oneAPI Base Toolkit 2024.1 - this is not available for download from the Intel site anymore) or in dependency hell (GPU whisper simply will not work, ipex-llm[xpu] has internal conflicts from the get-go), so it's not all sunshine. On average, ROCm w/ RDNA3 is much more mature (while not always the fastest, most basic things do just work now).
[1] https://github.com/intel-analytics/ipex-llm
[2] https://github.com/intel/intel-extension-for-pytorch
[3] https://www.intel.com/content/www/us/en/developer/tools/onea...
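For reference, a minimal sketch of what the "xpu device" path described above looks like in practice, assuming intel-extension-for-pytorch and the oneAPI runtime are installed; the toy model is just a placeholder, and real workloads would load a model via ipex-llm or Hugging Face transformers instead.

```python
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device with PyTorch

assert torch.xpu.is_available(), "no Intel GPU / oneAPI runtime detected"

# Placeholder model; swap in a real network for actual benchmarking.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 10),
).to("xpu").eval()

# ipex.optimize applies Intel-specific operator fusion and weight-layout changes.
model = ipex.optimize(model, dtype=torch.float16)

with torch.no_grad(), torch.autocast("xpu", dtype=torch.float16):
    x = torch.randn(8, 1024, device="xpu")
    print(model(x).shape)  # torch.Size([8, 10])
```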
I'm guessing their marketing department isn't known as the "A-team".
The lack of quantified stats on the marketing pages tells me Intel is way behind.