July 29th, 2024

SAM 2: The next generation of Meta Segment Anything Model

Meta has launched SAM 2, a real-time object segmentation model for images and videos, improving accuracy and requiring fewer user interactions than its predecessor. It supports diverse applications and is available under an Apache 2.0 license.


Meta has introduced SAM 2, the next generation of its Segment Anything Model, which now supports real-time object segmentation in both images and videos. This unified model is designed to achieve state-of-the-art performance and can segment any object, even those it has not previously encountered, without requiring custom adaptation. SAM 2 is being released under an Apache 2.0 license, allowing developers to utilize the model freely. Alongside the model, Meta is sharing the SA-V dataset, which contains approximately 51,000 real-world videos and over 600,000 spatio-temporal masks, significantly expanding the resources available for video segmentation tasks.

The model builds on the capabilities of its predecessor, SAM, improving segmentation accuracy while requiring roughly three times fewer user interactions. SAM 2's architecture incorporates a memory mechanism that allows it to maintain context across video frames, addressing challenges such as object motion and occlusion. This advancement enables applications in various fields, including video editing, scientific research, and autonomous vehicles.
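To build intuition for how a streaming memory mechanism can keep track of an object across frames, here is a minimal toy sketch. This is illustrative only and is not Meta's actual SAM 2 architecture: the `MemoryBank` class, its capacity, and the plain dot-product attention are all simplifying assumptions. The idea it demonstrates is that a bounded bank of past-frame features lets the model recall an object even in a frame where it is occluded.

```python
# Toy sketch of a streaming memory mechanism (illustrative; NOT Meta's
# actual SAM 2 implementation). Per-frame "features" are written into a
# bounded memory bank; a new frame reads from memory via softmax attention,
# so an occluded frame can still recover the object's representation.
from collections import deque
import math

class MemoryBank:
    def __init__(self, capacity=6):
        # keep only the most recent frames, mimicking a bounded memory
        self.bank = deque(maxlen=capacity)

    def write(self, feature):
        """Store the feature vector for the current frame."""
        self.bank.append(list(feature))

    def read(self, query):
        """Blend stored features, weighted by similarity to the query."""
        if not self.bank:
            return list(query)
        # dot-product similarity against each stored frame feature
        scores = [sum(q * k for q, k in zip(query, m)) for m in self.bank]
        mx = max(scores)
        weights = [math.exp(s - mx) for s in scores]  # stable softmax
        z = sum(weights)
        return [
            sum(w * m[i] for w, m in zip(weights, self.bank)) / z
            for i in range(len(query))
        ]
```

For example, after writing features for two frames where the object was visible, reading with an all-zero query (an "occluded" frame with no signal) returns an even blend of the remembered features rather than nothing, which is the property that keeps predictions consistent through occlusion.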

Meta emphasizes its commitment to open science by providing the SAM 2 code, weights, and evaluation tools, encouraging the AI community to explore new use cases. The model's potential applications range from creative video effects to aiding in medical procedures and environmental research. With SAM 2, Meta aims to further revolutionize the field of computer vision and inspire innovative solutions across multiple industries.

1 comment
By @gnabgib - 7 months
Discussion (59 points, 57 minutes ago) https://news.ycombinator.com/item?id=41104523