July 4th, 2024

Nvidia uses raytracing to model 5G network design

NVIDIA's cuMAC is a CUDA-based platform that accelerates 5G/6G MAC-layer scheduler functions on GPUs. It supports UE selection, PRB allocation, and beamforming, and aims to integrate AI/ML enhancements for more efficient scheduling.

Read original article

Aerial cuMAC is NVIDIA's CUDA-based platform for accelerating 5G/6G MAC-layer scheduler functions on GPUs. It covers UE selection, PRB allocation, layer selection, MCS selection, and dynamic beamforming for coordinated multi-cell scheduling, and exposes a C/C++ API that lets the L2 stack in a DU offload these scheduler functions to the GPU. In the future, cuMAC aims to pair GPU acceleration with AI/ML-based scheduler enhancements.

The system is built from the Aerial Scheduler Acceleration API, cuMAC-CP, a cell-group-based cuMAC API, and the cuMAC multi-cell scheduler modules. The scheduling algorithms themselves are implemented as CUDA kernels that operate on an entire cell group at once, so resource allocation can be optimized globally across cells rather than per cell. cuMAC also supports HARQ re-transmissions, ships CPU reference code for verification, handles different CSI types, and offers both FP32 and FP16 implementations, with FP16 available to reduce scheduler latency. Overall, cuMAC aims to improve scheduling efficiency and performance in 5G/6G networks through GPU acceleration and advanced algorithms.
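The summary above is prose-only, so here is a minimal, hypothetical CUDA sketch of what a multi-cell scheduler kernel in this style could look like: one launch covers a whole cell group, each thread block handles one cell, each thread handles one PRB, and every PRB is granted to the UE with the best proportional-fair metric. The kernel name, array layouts, and dimensions are illustrative assumptions, not the actual cuMAC API or its algorithms.

// Hypothetical sketch only: cuMAC's real API and kernels are not shown in the
// article, so every identifier here (pf_prb_alloc_kernel, array layouts, sizes)
// is illustrative, not NVIDIA's.
#include <cuda_runtime.h>
#include <cstdio>

constexpr int kCells = 4;    // cells scheduled jointly as one cell group
constexpr int kUEs   = 16;   // candidate UEs per cell
constexpr int kPRBs  = 64;   // physical resource blocks per cell

// One thread block per cell, one thread per PRB: each PRB goes to the UE with
// the highest proportional-fair metric (instantaneous rate / average throughput).
__global__ void pf_prb_alloc_kernel(const float* inst_rate,  // [cell][ue][prb]
                                     const float* avg_tput,   // [cell][ue]
                                     int* alloc)              // [cell][prb] -> ue index
{
    int cell = blockIdx.x;
    int prb  = threadIdx.x;
    if (cell >= kCells || prb >= kPRBs) return;

    float best_metric = -1.0f;
    int   best_ue     = -1;
    for (int ue = 0; ue < kUEs; ++ue) {
        float rate   = inst_rate[(cell * kUEs + ue) * kPRBs + prb];
        float metric = rate / (avg_tput[cell * kUEs + ue] + 1e-6f);
        if (metric > best_metric) { best_metric = metric; best_ue = ue; }
    }
    alloc[cell * kPRBs + prb] = best_ue;
}

int main() {
    const int nRates = kCells * kUEs * kPRBs;
    const int nTputs = kCells * kUEs;
    const int nAlloc = kCells * kPRBs;

    float *d_rate, *d_tput;
    int   *d_alloc;
    cudaMalloc(&d_rate,  nRates * sizeof(float));
    cudaMalloc(&d_tput,  nTputs * sizeof(float));
    cudaMalloc(&d_alloc, nAlloc * sizeof(int));

    // Toy inputs: pseudo-random instantaneous rates, uniform average throughput.
    float *h_rate = new float[nRates];
    float *h_tput = new float[nTputs];
    for (int i = 0; i < nRates; ++i) h_rate[i] = (i * 2654435761u % 1000) / 1000.0f;
    for (int i = 0; i < nTputs; ++i) h_tput[i] = 1.0f;
    cudaMemcpy(d_rate, h_rate, nRates * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_tput, h_tput, nTputs * sizeof(float), cudaMemcpyHostToDevice);

    // A single launch spans the whole cell group, which is what allows a
    // globally informed allocation across cells rather than per-cell decisions.
    pf_prb_alloc_kernel<<<kCells, kPRBs>>>(d_rate, d_tput, d_alloc);

    int *h_alloc = new int[nAlloc];
    cudaMemcpy(h_alloc, d_alloc, nAlloc * sizeof(int), cudaMemcpyDeviceToHost);
    printf("cell 0, PRB 0 -> UE %d\n", h_alloc[0]);

    cudaFree(d_rate); cudaFree(d_tput); cudaFree(d_alloc);
    delete[] h_rate; delete[] h_tput; delete[] h_alloc;
    return 0;
}

In a real deployment the equivalent decision would sit behind cuMAC's C/C++ offload API and run alongside HARQ handling and CSI processing; a CPU reference implementation of the same logic, as the article describes, makes it straightforward to verify the GPU results.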

Related

Testing AMD's Giant MI300X

AMD introduces the Instinct MI300X to challenge NVIDIA in the GPU compute market. The MI300X features a chiplet design, Infinity Cache, and the CDNA 3 architecture, performs competitively against NVIDIA's H100, and excels in local memory bandwidth tests.

AMD MI300x GPUs with GEMM tuning improves throughput and latency by up to 7.2x

Nscale explores AI model optimization through GEMM tuning, leveraging rocBLAS and hipBLASLt on AMD MI300X GPUs. Results show up to a 7.2x throughput increase and reduced latency, benefiting large models and overall processing efficiency.

The XAES-256-GCM extended-nonce AEAD

XAES-256-GCM is a secure AEAD algorithm with 256-bit keys and 192-bit nonces, aiming for safety, compliance, and ease of use. It complements other AEAD implementations and receives support from various clients.

GPUs can now use PCIe-attached memory or SSDs to boost VRAM capacity

Major companies like AMD, Intel, and Nvidia could adopt Panmnesia's CXL IP to expand GPU memory using PCIe-attached memory or SSDs. Panmnesia's low-latency solution outperforms traditional approaches and shows promise for AI/HPC workloads, though adoption by these key players remains uncertain.

0 comments