July 2nd, 2024

Meta 3D Gen

Meta introduces Meta 3D Gen (3DGen), a fast text-to-3D asset tool with high prompt fidelity and PBR support. It integrates AssetGen and TextureGen components, outperforming industry baselines in speed and quality.

Meta has introduced Meta 3D Gen (3DGen), a fast pipeline for text-to-3D asset generation. The tool produces high-quality 3D shapes and textures with strong prompt fidelity in under a minute and supports physically-based rendering (PBR) for real-world applications. 3DGen can also generatively retexture existing 3D shapes from textual inputs. It integrates two components, Meta 3D AssetGen for text-to-mesh generation and Meta 3D TextureGen for texture generation on 3D objects, and this two-stage design achieves a 68% win rate over single-stage models. On complex textual prompts, the tool outperforms industry baselines in prompt fidelity and visual quality while being faster. Authors of the related publications include Raphael Bensadoun, Tom Monnier, Yanir Kleiman, and others. These advancements showcase Meta's commitment to innovation in graphics and computer vision research.
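
For intuition, here is a minimal sketch of the two-stage design described above: a stage-1 model produces geometry from text, and a stage-2 model produces PBR texture maps conditioned on both the prompt and the geometry, which also makes retexturing a natural byproduct. Every name below is a hypothetical stand-in, not Meta's actual API:

    # Conceptual sketch of a two-stage text-to-3D pipeline like 3DGen's.
    # All names are hypothetical stand-ins, not Meta's actual API.
    from dataclasses import dataclass, field

    @dataclass
    class TexturedMesh:
        vertices: list = field(default_factory=list)   # 3D positions
        faces: list = field(default_factory=list)      # triangle indices
        pbr_maps: dict = field(default_factory=dict)   # albedo/roughness/metallic

    def text_to_mesh(prompt: str) -> TexturedMesh:
        """Stage 1 (AssetGen-like): text -> shape with an initial texture."""
        return TexturedMesh()  # placeholder for the real generative model

    def text_to_texture(prompt: str, mesh: TexturedMesh) -> dict:
        """Stage 2 (TextureGen-like): PBR maps from prompt + geometry."""
        return {"albedo": None, "roughness": None, "metallic": None}  # placeholder

    def generate_asset(prompt: str) -> TexturedMesh:
        mesh = text_to_mesh(prompt)
        mesh.pbr_maps = text_to_texture(prompt, mesh)
        return mesh

    def retexture(mesh: TexturedMesh, new_prompt: str) -> TexturedMesh:
        # Generative retexturing reuses stage 2 alone: geometry stays fixed.
        mesh.pbr_maps = text_to_texture(new_prompt, mesh)
        return mesh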

Related

MeshAnything – Converts 3D representations into efficient 3D meshes

MeshAnything efficiently generates high-quality Artist-Created Meshes with optimized topology, fewer faces, and precise shapes. Its innovative approach enhances 3D industry applications by improving storage and rendering efficiencies.
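
MeshAnything's own method is a learned mesh generator, but the face-count reduction it targets can be illustrated with classical quadric decimation, e.g. via Open3D. A sketch, assuming the library is installed and "asset.obj" is a hypothetical input file:

    # Classical face reduction via quadric decimation (Open3D). This is not
    # MeshAnything's learned approach, just a baseline illustration of
    # trading face count for fidelity.
    import open3d as o3d

    mesh = o3d.io.read_triangle_mesh("asset.obj")   # hypothetical input
    print(f"before: {len(mesh.triangles)} faces")

    simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=2000)
    simplified.compute_vertex_normals()
    print(f"after: {len(simplified.triangles)} faces")

    o3d.io.write_triangle_mesh("asset_lowpoly.obj", simplified)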

HybridNeRF: Efficient Neural Rendering

HybridNeRF combines surface and volumetric representations for efficient neural rendering, achieving 15-30% error rate improvement over baselines. It enables real-time framerates of 36 FPS at 2K×2K resolutions, outperforming VR-NeRF in quality and speed on various datasets.
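
As a toy illustration of the hybrid idea (not the paper's actual implementation): render most rays as a hard surface hit with a single shading sample, and fall back to full volumetric compositing only where the scene is flagged as fuzzy. The query function is a stand-in for the learned model:

    # Toy sketch of surface/volume hybrid rendering, not HybridNeRF's code.
    # query(p) -> (density, rgb, surfaceness) at a 3D point p (stand-in model).
    import numpy as np

    def render_ray(o, d, query, n_volume_samples=64, t_max=8.0):
        # Cheap pass: march until density first crosses a surface threshold.
        for t in np.linspace(0.0, t_max, 128):
            density, rgb, surfaceness = query(o + t * d)
            if density > 10.0:
                if surfaceness > 0.5:
                    return np.asarray(rgb)   # fast path: one shading sample
                break                        # fuzzy region: fall through
        # Expensive fallback: standard volumetric compositing along the ray.
        acc, trans = np.zeros(3), 1.0
        dt = t_max / n_volume_samples
        for t in np.linspace(0.0, t_max, n_volume_samples):
            density, rgb, _ = query(o + t * d)
            alpha = 1.0 - np.exp(-density * dt)
            acc += trans * alpha * np.asarray(rgb)
            trans *= 1.0 - alpha
        return acc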

Figma AI: Empowering designers with intelligent tools

Figma AI enhances designers' workflow with AI-powered features like Visual Search, Asset Search, text tools, and content generation. It aims to streamline tasks, boost efficiency, and spark creativity while prioritizing data privacy.

Declaratively build your APIs in TypeScript and Python

Metatype is a declarative API development platform focusing on creating reliable, modular components. It offers Typegraphs for composing APIs, Typegate for GraphQL/REST, and Meta CLI for live reloading. Users can access documentation and a quick start guide on the Metatype GitHub repository.
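
As a rough illustration of the declarative style Metatype describes: the API surface is declared as data (types plus exposed operations) and a gateway like Typegate serves it over GraphQL/REST. The sketch below is a toy in Python, not Metatype's actual SDK; see the Metatype docs for real typegraph syntax:

    # Toy illustration of a declarative API: the API is data, not handler
    # code. Names are hypothetical, not Metatype's SDK.
    user = {"id": "integer", "name": "string"}       # a declared type

    typegraph = {
        "name": "demo",
        "types": {"User": user},
        "exposed": {
            # operation -> (input type, output type, backing runtime)
            "get_user": ({"id": "integer"}, "User", "database"),
            "list_users": ({}, ["User"], "database"),
        },
    }

    # A Typegate-like gateway would read this declaration, serve it over
    # GraphQL/REST, and enforce the declared types at the boundary.
    for op, (inp, out, runtime) in typegraph["exposed"].items():
        print(f"{op}: {inp} -> {out} via {runtime}")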

Our guidance on using LLMs (for technical writing)

The Ritza Handbook advises on using GPT and GenAI models for writing, highlighting benefits like code samples and overcoming writer's block. However, caution is urged against using GPT-generated text in published articles.

24 comments
By @wkat4242 - 5 months
I can't wait for this to become usable. I love VR but the content generation is just sooooo labour intensive. Help creating 3D models would help so much and be the #1 enabler for the metaverse IMO.
By @mintone - 5 months
I've been bullish[1] on this as a major aspect of generative AI for a while now, so it's great to see this paper published.

3D has an extremely steep learning curve once you try to do anything non-trivial, especially in terms of asset creation for VR etc., but my real interest is where this leads in terms of real-world items. One of the major hurdles is that in the real world we aren't as forgiving as we are in VR/games. I'm not entirely surprised to see that most of the outputs are "artistic" ones, but I'm really interested to see where this ends up when we can give AI combined inputs from text/photos/LIDAR etc. and have it make the model for a physical item that can be 3D printed.

[1] https://www.technicalchops.com/articles/ai-inputs-and-output...

By @iamleppert - 5 months
I tried all the recent wave of text/image to 3D model services, some touting 100 MM+ valuations and tens of millions raised and found them all to produce unusable garbage.
By @LarsDu88 - 5 months
This is crazy impressive, and the fact they have the whole thing running with a PBR texturing pipeline is really cool.

That being said, I wonder if the use of signed distance fields (SDFs) results in bad topology.

I saw a recently released paper earlier this week that seems to build "game-ready" topology --- stuff that might actually be riggable for animation. https://github.com/buaacyw/MeshAnything
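
For context on the topology concern: meshes are usually extracted from an SDF with marching cubes, which tiles the surface with uniform small triangles and places edge loops nowhere in particular. A quick demonstration, assuming numpy and scikit-image are installed:

    # Why SDF outputs tend toward "bad" topology: marching cubes tiles the
    # zero level set with dense uniform triangles, with no edge loops where
    # a rigger would want them.
    import numpy as np
    from skimage import measure

    # Sample an SDF of a sphere (radius 0.4) on a 64^3 grid.
    ax = np.linspace(-1, 1, 64)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    sdf = np.sqrt(x**2 + y**2 + z**2) - 0.4

    verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
    print(f"{len(verts)} vertices, {len(faces)} triangles for a plain sphere")
    # Even this trivial shape yields thousands of uniform triangles; character
    # meshes need retopology before they are riggable for animation.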

By @explaininjs - 5 months
Looks fine, but you can tell the topology isn’t good based on the lack of wireframes.
By @w_for_wumbo - 5 months
I think this is another precursor step in recreating our reality digitally. As long as you're able to react to the person's state, with enough metrics you can recreate environments and scenarios within a 'safe environment' for people to push through and learn to cope with the scenarios they don't feel safe to address in the 'real' world.

When the person then emerges from this virtual world, it'll be like an egg hatching into a new birth, having learned the lessons in their virtual cocoon.

If you don't like this idea, it's an interesting thought experiment regardless, as we can't verify we're not already in a form of this.

By @floppiplopp - 5 months
Interesting, but what are the practical uses of 3D assets beyond gaming? Where do they create a real advantage over what we already use as visual information and user interfaces? I cannot see VR replacing the interactions we have. It requires cumbersome, expensive hardware, it floods users with additional, mostly useless information (image, sound, 3D itself) they have to process, and it's slow and expensive to create and maintain. In short: it's inefficient compared to established tech, which will always run circles around the lame attempt to imitate real-world interactions in a 3D virtual space. The potential availability of very expensively (in terms of computing power) generated assets doesn't change that. It's still hard to do right, and even if done right, it seems like only a gimmick hardly anyone can stomach for more than a couple of hours at best. It's information overload to most people, and they have better alternatives.
By @vletal - 5 months
Seems like simple enough 3D-to-3D will be possible soon!

I'll use it to upscale all meshes and textures in the original Mafia and Unreal Tournament 8x, write a goodbye letter to my family, and disappear.

I think the kids will understand when they grow up.

By @GaggiX - 5 months
In the comparison between the models, only Rodin seems to produce clean topology. Hopefully in the future we will see a model with the strengths of both, ideally from Meta, as Rodin is a commercial model.
By @999900000999 - 5 months
Would love for an artist to provide some input, but I imagine this could be really good if it generates models that you can edit or start from later.

Or, just throw a PS1 filter on top and make some retro games

By @anditherobot - 5 months
Can this potentially support:

- Image input to 3D model output

- 3D model (format) as input

Question: what is the current state-of-the-art commercially available product in this niche?

By @localfirst - 5 months
Can somebody please please integrate SAM with 3D primitive RAGging? This is the golden-chalice solution for a 3D modeler; the "blobs" generated by Luma and the like aren't very useful.
By @rebuilder - 5 months
I'm puzzled by the poor texture quality in these. The colours are just bad - it looks like the textures are blown out (the detail at the bright end clips into white) and much too contrasty (the turkey does that transition from red to white via a band of yellow). I wonder why that is - was the training data just done on the cheap?
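
The clipping described above is easy to quantify: measure the fraction of texture pixels at or near pure white. A quick check with Pillow and NumPy ("texture.png" is a hypothetical file name):

    # Estimate highlight clipping in a texture: the share of pixels whose
    # luminance sits at (or within rounding of) pure white.
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("texture.png").convert("RGB"), dtype=np.float32) / 255.0
    luma = img @ np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 luminance

    clipped = (luma >= 0.99).mean()
    print(f"{clipped:.1%} of pixels are clipped to white")
    # A well-exposed albedo map keeps detail below 1.0; more than a few
    # percent of fully white pixels usually means the bright end is blown out.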
By @tiborsaas - 5 months
Probably this is the best way to build the Metaverse. Publish all the research, let people build products on top of it, and soon we'll be in need of a place and platform to make use of all the instant assets in virtual spaces.

Well played, Meta.

By @kgraves - 5 months
Can this be used for image-to-3D generation? What is the SOTA in this area these days?
By @f0e4c2f7 - 5 months
Is there a way to try this yet?
By @polterguy1000 - 5 months
Meta 3D Gen represents a significant step forward in the realm of 3D content generation, particularly for VR applications. The ability to generate detailed 3D models from text inputs could drastically reduce the labor-intensive process of content creation, making it more accessible and scalable. However, as some commenters have pointed out, the current technology still faces challenges, especially in producing high-quality, detailed geometry that holds up under the scrutiny of VR’s stereoscopic depth perception. The integration of PBR texturing is a promising feature, but the real test will be in how well these models can be refined and utilized in practical applications. It’s an exciting development, but there’s still a long way to go before it can fully meet the needs of VR developers and artists.
By @carbocation - 5 months
For starters, I'd love to just see a rock-solid neural network replacement for screened poisson surface reconstruction. (I have seen MeshAnything and I don't think that's the end-game.)
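
For reference, the classical baseline in question is available off the shelf in Open3D. A minimal sketch, assuming "scan.ply" (hypothetical) holds a point cloud; screened Poisson needs oriented normals:

    # Screened Poisson surface reconstruction via Open3D, the classical
    # baseline a neural replacement would have to beat.
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("scan.ply")
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
    )

    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9
    )
    o3d.io.write_triangle_mesh("reconstructed.ply", mesh)
    # Typical failure modes (bulging in sparse regions, sensitivity to bad
    # normals) are exactly what a learned replacement would need to fix.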
By @Simon_ORourke - 5 months
Are those guys still banging on about that Metaverse? That's taken a decided back seat to all the AI innovation in the past 18 months.
By @timeon - 5 months
Why do these pages want to bother visitors with popups? Just use "only essential" as the default.
By @ziofill - 5 months
Is this going toward 3D games entirely "hallucinated"? That would be amazing.
By @nightowl_games - 5 months
Is this just a paper or can I run the program and generate some stuff?
By @surfingdino - 5 months
Not sure how adding GenAI is going to make VR any better. I wanted to type "it's like throwing good money after bad", but that's not quite right. Both are black holes where VC money is turned into papers and demos.
By @antman - 5 months
Work like this is the only way to revive the now-defunct Metaverse. I was wondering whether Meta would fund research such as this, which could lower the financial barrier to entry for Metaverse participants.