August 12th, 2024

Gaussian Splatting SLAM [CVPR 2024]

The "Gaussian Splatting SLAM" paper presents a real-time 3D reconstruction method using 3D Gaussians, achieving high-quality results for small and transparent objects with support from Dyson Technology Ltd.

The paper "Gaussian Splatting SLAM," presented at CVPR 2024, introduces a novel approach to incremental 3D reconstruction using a single moving monocular or RGB-D camera. The method operates in real-time at 3 frames per second and employs 3D Gaussians as the sole representation for tracking, mapping, and rendering. Key innovations include a new camera tracking formulation that optimizes directly against 3D Gaussians, enhancing tracking speed and robustness. Additionally, the method incorporates geometric verification and regularization to address ambiguities in dense reconstruction. The full SLAM system demonstrates state-of-the-art performance in novel view synthesis and trajectory estimation, successfully reconstructing small and transparent objects. The authors, Hidenobu Matsuki, Riku Murai, Paul H. J. Kelly, and Andrew J. Davison, acknowledge support from Dyson Technology Ltd. and express gratitude to various contributors for their insights. The research showcases results from self-captured sequences, highlighting the method's capability to reconstruct scenes in real-time using RGB images from an Intel Realsense d455 camera.

- The Gaussian Splatting SLAM method runs in real time at 3 fps.

- It utilizes 3D Gaussians for efficient tracking and mapping.

- The system achieves high-quality reconstruction of small and transparent objects.

- The research was supported by Dyson Technology Ltd.

- The first two authors, Hidenobu Matsuki and Riku Murai, contributed equally to the work.
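
As a rough illustration of the tracking formulation summarized above, here is a schematic sketch (not the authors' code) of direct pose optimization against a frozen Gaussian map: render the map from the current pose estimate and descend a photometric loss with respect to the camera pose. `render` and `se3_exp` are hypothetical stand-ins for a differentiable Gaussian rasterizer and an se(3) exponential map.

```python
import torch

def track_frame(gaussians, frame, pose_init, render, se3_exp, iters=100):
    """Estimate the camera pose of `frame` against a frozen Gaussian map."""
    twist = torch.zeros(6, requires_grad=True)      # pose update in se(3)
    opt = torch.optim.Adam([twist], lr=1e-3)
    for _ in range(iters):
        opt.zero_grad()
        pose = se3_exp(twist) @ pose_init           # current 4x4 pose guess
        rendered = render(gaussians, pose)          # differentiable render
        loss = (rendered - frame).abs().mean()      # photometric residual
        loss.backward()                             # gradients reach `twist`
        opt.step()
    return se3_exp(twist.detach()) @ pose_init      # refined camera pose
```

Because rendering is differentiable all the way back to the camera pose, tracking and mapping can share one 3D Gaussian representation instead of maintaining a separate feature-based tracker.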

4 comments
By @dwrodri - 8 months
Tangentially related to the post: I have what I think is a related computer vision problem I would like to solve and need some pointers on how you would go about doing it.

My desk is currently set up with a large monitor in the middle. I'd like to look at the center of the screen when taking calls, but have it appear as though I'm looking straight into a camera pointed at my face. Obviously, I can't physically place the camera right in front of the monitor, as that would be seriously inconvenient. Some laptops solve this, but I don't think their methods apply here, since the top of my monitor ends up quite a bit higher than what would look "good" for simple eye correction.

I have multiple webcams that I can place around the monitor to my liking. I would like something similar to what you see when you open this webpage, but for video, and hopefully at higher quality since I'm not constrained to a monocular source.

I've dabbled a bit with OpenCV in the past, but the most I've done is a little camera calibration for de-warping fisheye lenses. Any ideas on what work I should look into to get started with this?

In my head, I'm picturing two camera sources: one above and one below the monitor. The "synthetic" projected perspective would be in the middle of the two.

Is capturing a point cloud from a stereo source and then reprojecting with splats the most "straightforward" way to do this? Any and all papers/advice are welcome. I'm a little rusty on the math side but I figure a healthy mix of Szeliski's Computer Vision, Wolfram Alpha, a chatbot, and of course perseverance will get me there.
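
One minimal sketch of that point-cloud-and-reproject idea, using OpenCV (illustrative only; the function and parameter names are assumptions, not a vetted pipeline): compute disparity between a rectified pair, back-project to 3D, shift the cloud by half the baseline, and forward-project into a virtual camera midway between the two. Note that SGBM matches along image rows, so for an above/below rig you would rotate the rectified frames 90° first.

```python
import cv2
import numpy as np

def virtual_middle_view(rect_a, rect_b, Q, K_virtual, baseline):
    """Naive view synthesis from a rectified stereo pair.

    rect_a / rect_b: rectified frames (rotated so the baseline is horizontal)
    Q: 4x4 disparity-to-depth matrix from cv2.stereoRectify
    """
    gray_a = cv2.cvtColor(rect_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(rect_b, cv2.COLOR_BGR2GRAY)

    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                 blockSize=5)
    disp = sgbm.compute(gray_a, gray_b).astype(np.float32) / 16.0

    pts = cv2.reprojectImageTo3D(disp, Q)           # H x W x 3 point cloud
    valid = disp > 0
    xyz = pts[valid]                                # N x 3 points
    rgb = rect_a[valid]                             # matching colors

    # Slide the cloud half a baseline toward the other camera, giving the
    # view from a camera midway between the two real ones.
    xyz[:, 0] -= baseline / 2.0

    uv, _ = cv2.projectPoints(xyz, np.zeros(3), np.zeros(3), K_virtual, None)
    uv = uv.reshape(-1, 2).astype(int)

    h, w = rect_a.shape[:2]
    out = np.zeros_like(rect_a)
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    out[uv[ok, 1], uv[ok, 0]] = rgb[ok]             # point splat, leaves holes
    return out
```

The holes in this naive output are exactly where splat-style rendering improves on point projection, since each Gaussian covers an extended footprint rather than a single pixel.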

By @totalview - 8 months
I love the “3D Gaussian Visualisation” section that illustrates the difference between photos of the mono data and the splat data. Under the hood, the splats are like a giant point cloud, except that where point-cloud points render at a uniform size, each splat has its own size and orientation.

This is all well and good when you're just using it for a pretty visualization, but it appears Gaussians share a weakness with point clouds processed via structure from motion: you need lots of camera angles to get accurate surface reconstruction.
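
For concreteness, here is roughly what each primitive carries in a 3DGS-style representation (field names are illustrative); the per-splat scale and orientation are what distinguish it from a fixed-size point:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Splat:
    mean: np.ndarray      # (3,) center, like a point-cloud point
    scale: np.ndarray     # (3,) per-axis extent: every splat has its own size
    rotation: np.ndarray  # (4,) quaternion orienting the ellipsoid
    opacity: float        # alpha used when blending overlapping splats
    color: np.ndarray     # (3,) RGB (full systems store spherical harmonics)
```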

By @andybak - 8 months
This claims to work with monocular or RGB-D input, but the only live demo is for an Intel RealSense D455 RGB-D camera. That seems a shame, as it significantly raises the bar for people to try it out themselves. (Can you even still buy the D455?)
By @Dig1t - 8 months
I would love to use something like this to make a video game.

Are there any algorithms that can turn these splats into 3D objects usable in a video game? Any examples of someone doing that?