July 10th, 2024

3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes

The research paper presents a new method for rendering particle scenes with GPU ray tracing, enabling advanced lighting effects and supporting complex camera models. Experimental results demonstrate the method's speed and accuracy across a range of applications.

The research paper titled "3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes" introduces a novel approach to rendering particle-based radiance fields using ray tracing instead of rasterization. By leveraging GPU ray tracing hardware and bounding volume hierarchies, the method efficiently handles semi-transparent particles, enabling advanced lighting effects like shadows and reflections. The technique also supports complex camera models and time-dependent effects, such as rolling shutter distortions. Experimental results demonstrate the speed and accuracy of the approach, showcasing applications in computer graphics and vision, including novel view synthesis and scene reconstruction for autonomous vehicles and robotics. By eliminating the need for rectification steps and supporting training with distorted camera models, the method achieves high-quality outputs compared to traditional methods like 3D Gaussian Splatting. Overall, the research highlights the benefits of ray tracing for rendering particle scenes and its potential for advancing graphics and vision applications.
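The core operation in rendering semi-transparent particles with ray tracing is front-to-back alpha compositing over the particles a ray intersects. The sketch below illustrates that accumulation only; the data layout is hypothetical, and the actual method gathers and sorts hits on the GPU via a bounding volume hierarchy rather than taking a pre-sorted Python list.

```python
import numpy as np

def composite_ray(hits):
    """Accumulate color along one ray, front to back.

    `hits` is a list of (color, alpha) pairs for the semi-transparent
    particles the ray intersects, assumed already sorted by hit distance.
    (Illustrative layout only, not the paper's GPU data structures.)
    """
    color = np.zeros(3)
    transmittance = 1.0  # fraction of light still passing through
    for c, a in hits:
        color += transmittance * a * np.asarray(c, dtype=float)
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:  # early termination: ray is saturated
            break
    return color, transmittance

# Two particles: a mostly opaque red one in front, green behind it
rgb, T = composite_ray([((1, 0, 0), 0.7), ((0, 1, 0), 0.5)])
```

Because transmittance only shrinks, a ray can stop early once it is effectively opaque, which is what makes accumulating many soft particles per pixel tractable.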

4 comments
By @vessenes - 3 months
This looks pretty interesting, for a few reasons: it can fit into existing ray tracing rendering pipelines, and you get most of the ray tracing benefits (reflection, shadows from geometry, refraction, depth of field, camera geometry) along with it. These are both pretty big.

Render quality is high / equivalent to MipNeRF (or however it’s capitalized). PSNR is equivalent or better, and the rendered output can be denoised with, say, OptiX.
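For context, PSNR (peak signal-to-noise ratio) is the standard image-quality metric behind these comparisons: higher is better, computed from the mean squared error against a reference image. A minimal sketch of the computation:

```python
import numpy as np

def psnr(img, ref, peak=1.0):
    """PSNR in decibels between two images with values in [0, peak]."""
    img = np.asarray(img, dtype=float)
    ref = np.asarray(ref, dtype=float)
    mse = np.mean((img - ref) ** 2)  # mean squared error
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 0.1 on a [0, 1] image gives MSE = 0.01, i.e. 20 dB
value = psnr(np.full((4, 4), 0.1), np.zeros((4, 4)))
```

Novel-view-synthesis papers typically report PSNR in the 25–35 dB range, so "equivalent or better" is measured on this scale.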

Some downsides/caveats: it works best if you retrain a little, so you won’t get the best quality if you’re pulling over mipnerf-trained Gaussians; it’s slower to render than a straight rasterizer, like 50% slower; and of course these splats still don’t have geometry to them, as is much discussed elsewhere.

They put a lot of work into optimizing this for Nvidia’s RTX series, and the raytracing task is a little different from the typical one, which is to say it’s rare in ‘normal’ raytracing that you’re adding up the contributions of 100s of transmissive, semi-transparent colors/radiances to get a single pixel; usually the bulk of the color from a raytraced scene comes from a smaller number of rays. If this method becomes popular, then NVIDIA could no doubt optimize the raytracing architectures further in the future and you’d get back some of that speed.

All this to say: I hope this gets rolled into existing engines. It’s practical engineering that would add a lot of options to workflows, and pretty neat!

By @robinhouston - 3 months
I’m intrigued by the anonymity of the author(s). I’m sure they have reasons for wanting to remain anonymous, but I can’t imagine what they might be.
By @billconan - 3 months
can this relight the existing scene?
By @echelon - 3 months
"Anonymous authors"?