June 19th, 2024

Unique3D: Image-to-3D Generation from a Single Image

The GitHub repository hosts Unique3D, offering efficient 3D mesh generation from a single image. It includes author details, project specifics, setup guides for Linux and Windows, an interactive demo, ComfyUI support, tips, acknowledgements, collaborations, and citations.

The GitHub repository contains the official implementation of Unique3D, a project dedicated to generating high-quality 3D meshes efficiently from a single image. It covers details about the authors, project specifics, features, preparation for inference, setup guidelines for Linux and Windows, an interactive Gradio demo for local inference, ComfyUI support, tips for improved outcomes, acknowledgements, collaborations, and citation details. For additional information or support regarding this project, users are encouraged to refer to the GitHub repository.
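
As a rough illustration of the single-image workflow the README describes, here is a minimal Python sketch. The `unique3d` module, `Unique3DPipeline` class, and checkpoint path are hypothetical stand-ins, not the repo's confirmed API; the project's actual documented entry point is the Gradio demo covered in the setup guides.

```python
# Hypothetical sketch of what local single-image inference could look like.
# The `unique3d` module, `Unique3DPipeline` class, and checkpoint layout are
# illustrative assumptions -- see the repo's README for the real entry points
# (the project ships an interactive Gradio demo for local inference).
from PIL import Image

from unique3d import Unique3DPipeline  # hypothetical import

# The method takes a single RGB image as its only input.
image = Image.open("input.png").convert("RGB")

# Load pretrained weights and run the full image-to-mesh pipeline.
pipeline = Unique3DPipeline.from_pretrained("ckpt/")  # hypothetical path
mesh = pipeline(image)

# Export the generated mesh for use in a DCC tool.
mesh.export("output.obj")
```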

Related

MeshAnything – Converts 3D representations into efficient 3D meshes

MeshAnything efficiently generates high-quality Artist-Created Meshes with optimized topology, fewer faces, and precise shapes. By producing meshes closer to hand-modeled topology, it improves storage and rendering efficiency for 3D industry applications.

Show HN: Feedback on Sketch Colourisation

The GitHub repository contains SketchDeco, a project for colorizing black and white sketches without training. It includes setup instructions, usage guidelines, acknowledgments, and future plans. Users can seek support if needed.

HybridNeRF: Efficient Neural Rendering

HybridNeRF combines surface and volumetric representations for efficient neural rendering, achieving a 15-30% lower error rate than baselines. It renders in real time at 36 FPS at 2K×2K resolution, outperforming VR-NeRF in both quality and speed across various datasets.
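
To make the surface-plus-volume idea concrete, here is a toy Python sketch under stated assumptions: an analytic sphere SDF stands in for a learned surface field, and a constant color stands in for a radiance network. It only illustrates why skipping volumetric work away from the surface saves time; it is not HybridNeRF's actual algorithm.

```python
import numpy as np

def sdf_sphere(p, radius=1.0):
    # Signed distance to a sphere at the origin; stands in for a learned surface field.
    return np.linalg.norm(p) - radius

def render_ray(ray_o, ray_d, t_near=0.0, t_far=4.0, n_samples=64, band=0.05):
    """Toy hybrid render: treat the scene as a hard surface everywhere except
    a thin band around the zero crossing, where samples are alpha-composited
    volumetrically. Illustrative only -- not HybridNeRF's actual method."""
    color, transmittance = np.zeros(3), 1.0
    for t in np.linspace(t_near, t_far, n_samples):
        d = sdf_sphere(ray_o + t * ray_d)
        if d > band:
            continue  # far from the surface: no volumetric work needed
        # Near the surface: map distance to a density-like opacity.
        alpha = float(np.clip(1.0 - d / band, 0.0, 1.0))
        sample_color = np.array([1.0, 0.5, 0.2])  # stand-in for a radiance MLP
        color += transmittance * alpha * sample_color
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:
            break  # ray is effectively opaque; early exit is the speed win
    return color

print(render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))
```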

Neko: Portable framework for high-order spectral element flow simulations

Neko is a portable framework for high-order spectral element flow simulations written in modern Fortran. It is object-oriented, supports various hardware backends, and provides detailed documentation, cloning guidelines, publications, and development acknowledgments. Additional support is available for inquiries.

Homegrown Rendering with Rust

Embark Studios is developing a creative platform for user-generated content that emphasizes gameplay over graphics. The team leverages Rust for 3D rendering, introducing the experimental "kajiya" renderer as a learning project built on the Vulkan API, and aims both to simplify rendering for user-generated content and to grow Rust's ecosystem for GPU programming.

7 comments
By @jsheard - 4 months
It's getting tiring seeing 3D model generation papers throwing around "high quality" to describe their output, then glossing over nearly all of the qualities of a high-quality 3D model in actual production contexts. Have they figured out how to produce usable topology yet? They don't talk about that, so probably not.

3D artists are begging for AI tools which automate specific tedious but necessary tasks like retopo and UV unwrapping, but tools like the OP do the opposite, skipping over those details to produce a poorly executed "final" result and leaving the user to reverse engineer the model in an attempt to salvage the mess it made.

If gen3D is going to be a thing then they need to listen to the people actually doing 3D work, not just chase benchmarks invented by other gen3D researchers. Some commentary on a similar paper about how they are trying to solve the wrong problems: https://x.com/rms80/status/1801362145600254211

By @Daub - 4 months
As someone who teaches 3D, a 'high quality' model would need to have clean topology: all quads which flow around the form in a predictable and rational manner. From this, I would expect a clean texture map. I am fairly certain that current technology is not up to this.

I have seen a few of these papers, and (from my limited experience) very rarely is the 3D model available for review.
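
The "all quads" criterion in the comment above is easy to quantify. Below is a small illustrative Python helper (not tied to Unique3D or any of the papers) that reports the polygon-degree histogram of a face list; generated meshes typically come out as dense triangle soups, which is exactly the complaint.

```python
from collections import Counter

def topology_report(faces):
    """Report the polygon-degree histogram and quad ratio of a mesh given as
    OBJ-style faces (lists of vertex indices). Artist-made meshes are
    overwhelmingly quads; generated meshes are usually triangle soups."""
    degrees = Counter(len(face) for face in faces)
    total = sum(degrees.values())
    return degrees, degrees.get(4, 0) / total if total else 0.0

# A cube modeled as six quads vs. the same cube naively triangulated.
quad_cube = [[0, 1, 2, 3], [4, 5, 6, 7], [0, 1, 5, 4],
             [2, 3, 7, 6], [0, 3, 7, 4], [1, 2, 6, 5]]
tri_cube = [f[:3] for f in quad_cube] + [[f[0], f[2], f[3]] for f in quad_cube]

print(topology_report(quad_cube))  # Counter({4: 6}), quad ratio 1.0
print(topology_report(tri_cube))   # Counter({3: 12}), quad ratio 0.0
```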

By @ninetyninenine - 4 months
Really good. This is just geometric analysis though. Geometric in the sense that the model likely doesn't understand what it's rendering. All it sees is some shape.

The next step is geometry with organized contours that make sense, meaning that the model needs to cohesively understand the picture and not just the geometry. For example, if a person in the picture is wearing armor, the model should generate two separate meshes overlaid on one another: the armor and the body.

By @Geee - 4 months
Great to see these getting better and better. This might actually be usable for geometry generation if it's possible to increase the resolution. It seems that a simple super-resolution pass could help with this. For now, using this mesh as a reference model would help a lot in a typical 3D modeling process.

Those textures are completely useless, because they have all the light and view-dependency baked in. It's not really possible to extract a diffuse texture from this. There has been some work on generating material BRDFs [0], but I've not seen great results yet.

[0] for example, https://sheldontsui.github.io/projects/Matlaber
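
The baked-lighting point can be made precise with a toy Lambertian model: if the stored texel is albedo × shading, then any rescaling of the two factors reproduces the same texture, so the diffuse component cannot be recovered without a lighting or material prior. A minimal numeric sketch, with illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Lambertian model: each baked texel stores albedo * shading, where the
# shading term folds in the (unknown) lighting at capture time.
albedo = rng.uniform(0.1, 0.9, size=(4, 3))   # the diffuse map we actually want
shading = rng.uniform(0.2, 1.0, size=(4, 1))  # unknown per-texel illumination
baked = albedo * shading                      # what the generated texture stores

# Any rescaled pair (albedo / s, shading * s) bakes to the identical texture,
# so the decomposition is ambiguous without extra priors -- which is what
# BRDF-estimation work like MatLaber tries to supply.
s = 0.5
assert np.allclose((albedo / s) * (shading * s), baked)
print("ambiguous decompositions reproduce the same baked texels")
```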

By @theendisney - 4 months
The demo page has demo images, but the results are not cached. While I'm probably not an interesting customer, I got bored waiting. Not something worth spending CPU cycles on.

By @nicman23 - 4 months
finally my 2d waifus are going to be 3d