June 21st, 2024

MeshAnything – Converts 3D representations into efficient 3D meshes

MeshAnything efficiently generates high-quality Artist-Created Meshes with optimized topology, fewer faces, and precise shapes. Its innovative approach enhances 3D industry applications by improving storage and rendering efficiencies.

MeshAnything is a model designed to extract meshes from 3D representations, mimicking human artists' work. It can be integrated into 3D asset production pipelines to create Artist-Created Meshes efficiently. Compared to existing methods, MeshAnything generates meshes with significantly fewer faces, improving storage and rendering efficiencies while maintaining precision. The model consists of a VQ-VAE and a shape-conditioned decoder-only transformer, enabling shape-conditioned autoregressive mesh generation. By focusing on optimized topology rather than complex 3D shape distributions, MeshAnything reduces training complexity and enhances scalability. The approach produces meshes aligned with specified shapes, demonstrating better topology and fewer faces compared to ground truth. This indicates the model's ability to construct meshes efficiently without overfitting. MeshAnything's potential lies in enhancing 3D industry applications by providing high-quality, controllable Artist-Created Mesh generation.
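
For a concrete picture of that two-stage design, here is a minimal PyTorch sketch of a shape-conditioned, decoder-only autoregressive token generator. Everything in it (class names, sizes, the pooled shape feature) is a hypothetical simplification for illustration, not the paper's actual implementation:

```python
# Illustrative only: a toy shape-conditioned autoregressive token
# generator in the spirit of MeshAnything's second stage. Names and
# sizes are hypothetical, not the paper's actual code.
import torch
import torch.nn as nn

VOCAB = 1024       # hypothetical VQ-VAE codebook size (stage one, not shown)
COND_DIM = 256     # dimension of a pooled shape feature (e.g. from points)
D_MODEL, N_HEAD, N_LAYERS, MAX_LEN = 256, 8, 4, 512

class MeshTokenDecoder(nn.Module):
    """Decoder-only transformer: predicts the next mesh token from the
    previous tokens plus a fixed shape-condition embedding."""
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(VOCAB + 1, D_MODEL)    # +1 for a BOS token
        self.pos = nn.Embedding(MAX_LEN, D_MODEL)
        self.cond = nn.Linear(COND_DIM, D_MODEL)       # inject the condition
        layer = nn.TransformerEncoderLayer(D_MODEL, N_HEAD, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, N_LAYERS)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, tokens, shape_feat):
        T = tokens.shape[1]
        x = self.tok(tokens) + self.pos(torch.arange(T, device=tokens.device))
        x = x + self.cond(shape_feat).unsqueeze(1)     # condition every step
        causal = torch.triu(torch.full((T, T), float("-inf"),
                                       device=tokens.device), diagonal=1)
        return self.head(self.blocks(x, mask=causal))  # (B, T, VOCAB)

@torch.no_grad()
def generate(model, shape_feat, steps):
    """Sample tokens left to right; the stage-one VQ-VAE decoder (not
    shown) would map the finished sequence back to mesh faces."""
    tokens = torch.full((1, 1), VOCAB)                 # start with BOS
    for _ in range(steps):
        logits = model(tokens, shape_feat)[:, -1]
        nxt = torch.multinomial(logits.softmax(dim=-1), 1)
        tokens = torch.cat([tokens, nxt], dim=1)
    return tokens[:, 1:]

model = MeshTokenDecoder()
print(generate(model, torch.randn(1, COND_DIM), steps=8).shape)  # -> (1, 8)
```

The point of conditioning on a pooled shape feature, rather than learning the full 3D shape distribution, is that the transformer only has to learn topology given a shape, which is the training-complexity reduction the summary describes.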

Related

20x Faster Background Removal in the Browser Using ONNX Runtime with WebGPU

Using ONNX Runtime with WebGPU and WebAssembly in browsers achieves 20x speedup for background removal, reducing server load, enhancing scalability, and improving data security. ONNX models run efficiently with WebGPU support, offering near real-time performance. Leveraging modern technology, IMG.LY aims to enhance design tools' accessibility and efficiency.

Lessons About the Human Mind from Artificial Intelligence

In 2022, a Google engineer claimed AI chatbot LaMDA was self-aware, but further scrutiny revealed it mimicked human-like responses without true understanding. This incident underscores AI limitations in comprehension and originality.

Mip-Splatting: Alias-Free 3D Gaussian Splatting

The paper introduces Mip-Splatting, enhancing 3D Gaussian Splatting by addressing artifacts with a 3D smoothing filter and a 2D Mip filter, achieving alias-free renderings and improved image fidelity in 3D rendering applications.

I am using AI to drop hats outside my window onto New Yorkers

dropofahat.zone showcases a project by a New York City resident using AI to drop hats onto passersby from their window. Challenges include hat selection, window opening, and AI implementation. The goal is to redefine "Window Shopping."

Homegrown Rendering with Rust

Embark Studios develops a creative platform for user-generated content, emphasizing gameplay over graphics. They leverage Rust for 3D rendering, introducing the experimental "kajiya" renderer for learning purposes. The team aims to simplify rendering for user-generated content, utilizing Vulkan API and Rust's versatility for GPU programming. They seek to enhance Rust's ecosystem for GPU programming.

19 comments
By @modeless - 5 months
Nice looking results, hopefully not too cherry-picked. Every 3D model generation paper posted on HN has people complaining that the meshes are bad, so this kind of research is welcome and necessary for generated 3D assets to be used in actual games.

Weird custom non-commercial license unfortunately. Notes from the GitHub readme:

> It takes about 7GB and 30s to generate a mesh on an A6000 GPU

> trained on meshes with fewer than 800 faces and cannot generate meshes with more than 800 faces

By @jgord - 5 months
Certainly a lot of scope for this kind of thing. People who do lidar scans or photogrammetry of buildings tend to end up with very large meshes or very large point clouds, which means they need souped-up PCs and expensive software to wrangle them into some usable CAD format.

It's an area where things can be improved a lot, imho. I did some work a while back fitting flat planes to point clouds, and ended up with a mesh model anywhere from 40x to 100x smaller than the point-cloud dataset. See quato.xyz for samples where you can compare the cloud and the mesh produced, and view the 3D model in recent browsers.

My approach had some similarity to Gaussian splats, but using only planar regions: great for buildings made of flat slabs, less so for smooth curves and foliage.
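
To make the plane-fitting idea concrete, here is a minimal numpy sketch using RANSAC. The comment above does not name its fitting method, so RANSAC, the function name, and the thresholds are illustrative assumptions:

```python
# A minimal sketch of plane fitting on a point cloud, using RANSAC.
# The method, names, and thresholds are illustrative assumptions,
# not the commenter's actual pipeline.
import numpy as np

def fit_plane_ransac(points, n_iter=200, inlier_dist=0.02, rng=None):
    """Return (normal, d, inlier_mask) for the best plane n.x + d = 0
    found by sampling 3-point candidate planes."""
    rng = rng or np.random.default_rng(0)
    best_mask, best_plane = None, None
    for _ in range(n_iter):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                    # degenerate (collinear) sample
            continue
        n /= norm
        d = -n.dot(a)
        mask = np.abs(points @ n + d) < inlier_dist
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (n, d)
    return best_plane[0], best_plane[1], best_mask

# Toy cloud: a noisy z=0 slab plus uniform outliers.
rng = np.random.default_rng(1)
slab = np.c_[rng.uniform(-1, 1, (500, 2)), rng.normal(0, 0.005, 500)]
cloud = np.vstack([slab, rng.uniform(-1, 1, (50, 3))])
n, d, inliers = fit_plane_ransac(cloud)
print(n.round(2), inliers.sum())  # normal close to [0, 0, +/-1], ~500 inliers
```

Repeating the fit on the points left after removing each plane's inliers segments a slab-built structure into a handful of planar patches, which is where reductions on the order of 40x to 100x over the raw cloud can come from.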

Applying their MeshAnything algo to fine meshes from photogrammetry scans of buildings would be of great benefit - probably getting those meshes down to a size where they can be shared as 3D WebGL/three.js pages.

Even deciding on triangle points to efficiently tessellate or cover a planar region with holes is basically a knapsack problem, which heuristics, Monte Carlo methods, and ML can improve upon.

By @bhouston - 5 months
Definitely the best result for low polygon creation I've seen. Great job!

Still triangles rather than polygons, but we are getting closer.

The end goal should be:

1) Polygons, mostly 4-sided, rather than triangles.

2) Edge smoothness/creases to separate hard corners from soft corners. (Which, when combined with polygons, enables SubD support: https://graphics.pixar.com/opensubdiv/docs/subdivision_surfa...)

3) UVs for textures that are aligned with the natural flow of textures on those components.

4) Repeating textures (although sometimes not) that work with the UVs and combine to create PBR textures. (Getting closer all the time: https://gvecchio.com/stablematerials/)

After the above works, I think people should move on to inferring proper CAD models from an image. Basically infer all the constraints and the various construction steps.
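
To make that wish list concrete, here is a hedged sketch of roughly what an asset satisfying points 1-4 would need to carry beyond a plain triangle list. The field names are invented for illustration and do not correspond to any particular format's schema:

```python
# Hypothetical sketch of the data an "artist-grade" mesh asset would
# carry, per the wish list above. Field names are invented; no real
# format's schema is implied.
from dataclasses import dataclass, field

@dataclass
class ArtistMesh:
    vertices: list[tuple[float, float, float]]
    faces: list[list[int]]                  # (1) mostly quads, not triangles
    crease_edges: dict[tuple[int, int], float] = field(default_factory=dict)
    #   (2) per-edge crease sharpness, 0.0 soft .. 1.0 hard (SubD-ready)
    uvs: list[tuple[float, float]] = field(default_factory=list)
    #   (3) UVs laid out along the surface's natural flow
    material: str = ""                      # (4) reference to a PBR material
    #   whose (possibly tiling) textures match the UV layout

quad = ArtistMesh(
    vertices=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    faces=[[0, 1, 2, 3]],
    crease_edges={(0, 1): 1.0},             # one hard edge
    uvs=[(0, 0), (1, 0), (1, 1), (0, 1)],
    material="brick_pbr",
)
print(len(quad.faces[0]))                   # a 4-sided face
```

A quad-dominant face list plus per-edge crease sharpness is essentially what SubD pipelines such as OpenSubdiv consume, which is why points 1 and 2 go together.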

By @ramshanker - 5 months
I am all in for any development in this domain. Just to give some sense of scale: we recently processed (manually) a point-cloud scan covering less than 1% of a working oil refinery complex. The total volume of the point cloud was 450 GB. Our previous project, of slightly larger scope, was 2.1 TB.

So the scale shown in this paper feels like toys! Not undermining the effort at all; we need to start somewhere anyway.

For the same reason, I feel puzzled looking at industrial scenes in video games. They are something like three orders of magnitude simpler than a real plant.

By @wildpeaks - 5 months
Calling AI-generated meshes "Artist-Created" just because they aim to look similar to human-made ones is misleading.
By @obsoletehippo - 5 months
I like how the Social Impact paragraph notes reduced labor costs, yay! Not, e.g., a reduced need for artists, so you're all out of a job.
By @flockonus - 5 months
MeshAnything generates meshes with hundreds of times fewer faces, significantly improving storage, rendering, and simulation efficiencies, while achieving precision comparable to previous methods.
By @42lux - 5 months
The converted meshes are not efficient. They are also full of n-gons, so you need to retopo no matter what...
By @Paul_S - 5 months
Very good, hope they realise that you need tessellation for shading. Some of those models look a bit too optimised.
By @dagmx - 5 months
The topology is decent, but no artist is creating meshes like this; the name feels mismatched. I saw some better topology-generation papers at SIGGRAPH last year which addressed quads better, though I'd need to dig through my archive to find them.

The triangle topologies this paper produces don't follow the logical loops that an artist would use. Generally it's rare for an artist to work directly in triangles rather than quads, but that aside, you'd place the loops in more logical places along the surface.

The face and toilet really stand out to me as examples of meshes that look really off.

Anyway, I think this is a good attempt at reasonable topology generation, but the tagline is a miss.

By @iTokio - 5 months
Words have meanings; you can't call AI-generated meshes "Artist-Created Meshes", no matter how good you think your results are.

Besides, good topology is dependent on the use case; it's very different if you are doing animation, a 3D print, a game, or just a render.

By @Animats - 5 months
Hm. I tried the online demo,

https://huggingface.co/spaces/Yiwen-ntu/MeshAnything

on the provided sample "hat". I tried with and without checking "Preprocess with marching cubes" and "Random Sample". Both outputs had holes in the output mesh where the original did not.

Am I doing this wrong, or is the algorithm buggy?

By @emilk - 5 months
By @debugnik - 5 months
Calling these meshes "Artist-Created Meshes" is disgusting. I know researchers in this field want the word "artist" to follow the same fate as "computer" thanks to their work, but it's too soon, to say the least. Can we get AI researchers? I bet RLHF could make their writing more humble than the current researchers'.

Sentiments aside, that's an impressive approach.

By @tamimio - 5 months
Looks interesting. I do have a few complicated models; I will test it out and see.
By @RobotToaster - 5 months
Why do people keep making their own special licenses?

https://github.com/buaacyw/MeshAnything/blob/main/LICENSE.tx...

By @demondemidi - 5 months
Hugues Hoppe is rolling in his grave.
By @jahewson - 5 months
Stunning!
By @75viysoFET8228 - 5 months
The service needs to be better; please improve it and fix the errors in the configuration of the website.