July 20th, 2024

Diffusion Texture Painting

Researchers introduce Diffusion Texture Painting, a method that uses generative models for interactive texture painting on 3D meshes. Artists can paint with complex textures and transition seamlessly between them. The approach aims to inspire further exploration of generative models in artist-driven tools.

Researchers from NVIDIA, the University of Toronto, and the Vector Institute have introduced a novel technique called Diffusion Texture Painting. The method uses 2D generative diffusion models to enable interactive texture painting on 3D meshes. Unlike existing systems, it lets artists paint with complex image textures and transition seamlessly between different textures in real time. The approach is stamp-based: a pre-trained model inpaints patches along the stroke, giving control over brush stroke shape and texture orientation. By adapting the inference of generative models, the system keeps the texture brush's identity stable while allowing infinite variations of the source texture. The method marks the first use of diffusion models for interactive texture painting and aims to inspire further exploration of generative models in artist-driven workflows. Application scenarios include creating bold garment prints, detailing photogrammetry assets with forest textures, replicating physical toy textures on 3D models, and prototyping fantasy environments like gingerbread houses. The research paper will be presented at SIGGRAPH 2024.
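To picture the stamp-based approach, the sketch below uses an off-the-shelf inpainting diffusion model from Hugging Face diffusers to fill one brush "stamp" so that it blends with previously painted texels. This is a loose approximation, not the paper's implementation: the model id, the text prompt standing in for the paper's image-conditioned texture brush, and the fixed per-brush seed are all assumptions.

```python
# Minimal sketch (not the paper's code): inpaint one brush stamp with a
# pre-trained diffusion inpainting model so the new patch blends with the
# texels already painted in the overlapping region.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed stand-in checkpoint
    torch_dtype=torch.float16,
).to("cuda")

def paint_stamp(canvas_patch: Image.Image, mask: Image.Image,
                texture_prompt: str, seed: int = 0) -> Image.Image:
    """Fill the masked (white) region of a 512x512 canvas patch.

    Reusing one seed per brush is a crude way to approximate a stable
    "texture brush identity" across successive stamps.
    """
    generator = torch.Generator(device="cuda").manual_seed(seed)
    result = pipe(
        prompt=texture_prompt,
        image=canvas_patch,      # already-painted texels in the overlap
        mask_image=mask,         # white = area the new stamp should fill
        generator=generator,
        num_inference_steps=20,  # fewer steps for interactive latency
    ).images[0]
    return result

# Example: extend a moss texture into the next stamp along a brush stroke.
# patch = Image.open("canvas_patch.png").convert("RGB").resize((512, 512))
# stamp_mask = Image.open("stamp_mask.png").convert("RGB").resize((512, 512))
# new_patch = paint_stamp(patch, stamp_mask, "seamless moss texture, top-down photo")
```

A real painting loop would presumably slide such a stamp along the stroke and reproject each inpainted patch back onto the mesh's texture map.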

Related

Eight million pixels and counting: improving texture atlas allocation in Firefox (2021)

Improving texture atlas allocation in WebRender with the guillotiere crate reduces texture memory usage. The guillotine algorithm was replaced due to fragmentation issues, leading to a more efficient allocator. Visualizing the atlas as SVG aids debugging. Rust's simplicity and cargo-fuzz testing are credited for ease of development and robustness. Further work on draw-call batching and texture uploads aims to boost performance on low-end Intel GPUs by optimizing how texture atlases are used.
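For a concrete sense of what guillotine-style allocation does, here is a deliberately naive Python sketch. It is illustrative only: the real guillotiere crate is written in Rust, has its own API, and also supports deallocation and merging of free rectangles, which is how it fights the fragmentation this simple version accumulates.

```python
# Naive guillotine atlas allocation: each allocation cuts the chosen free
# rectangle into the allocated region plus two leftover free rectangles.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

class GuillotineAtlas:
    def __init__(self, width: int, height: int):
        self.free: List[Rect] = [Rect(0, 0, width, height)]

    def allocate(self, w: int, h: int) -> Optional[Rect]:
        # First-fit: take any free rectangle large enough.
        for i, r in enumerate(self.free):
            if r.w >= w and r.h >= h:
                self.free.pop(i)
                alloc = Rect(r.x, r.y, w, h)
                # "Guillotine" split: the leftover L-shape becomes two
                # axis-aligned free rectangles (the strip below keeps the
                # full width; other split policies are possible).
                right = Rect(r.x + w, r.y, r.w - w, h)
                below = Rect(r.x, r.y + h, r.w, r.h - h)
                for leftover in (right, below):
                    if leftover.w > 0 and leftover.h > 0:
                        self.free.append(leftover)
                return alloc
        return None  # atlas is full or too fragmented

atlas = GuillotineAtlas(1024, 1024)
glyph = atlas.allocate(32, 48)
tile = atlas.allocate(256, 256)
print(glyph, tile, len(atlas.free))
```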

How to generate realistic people in Stable Diffusion

The tutorial focuses on creating lifelike portrait images with Stable Diffusion. It covers prompts, lighting, facial details, blending faces, poses, and models such as F222 and Hassan Blend 1.4 for realistic results, and it also emphasizes clothing-related keywords and model licensing considerations.
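As a rough sketch of that recipe (a detailed portrait prompt, explicit clothing terms, and a negative prompt), the snippet below runs the stock Stable Diffusion 1.5 checkpoint through diffusers; the tutorial's F222 or Hassan Blend 1.4 checkpoints would be loaded the same way from their own weights. The model id, prompts, and sampler settings here are illustrative assumptions, not the tutorial's exact values.

```python
# Portrait generation with a negative prompt, using a stand-in checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = (
    "photo of a young woman, highly detailed face, soft studio lighting, "
    "85mm lens, wearing a denim jacket"  # explicit clothing terms, as the tutorial advises
)
negative_prompt = "disfigured, deformed, extra fingers, blurry, cartoon, painting"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=30,
    guidance_scale=7.0,
    generator=torch.Generator("cuda").manual_seed(42),  # fixed seed for reproducibility
).images[0]
image.save("portrait.png")
```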

GPU-Friendly Stroke Expansion

The paper introduces a GPU-friendly technique for stroke expansion in vector graphics: converting stroked paths into fill geometry using parallel algorithms and minimal preprocessing, which improves rendering performance for stroked content.
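"Stroke expansion" here means turning a path plus a stroke width into geometry a GPU can fill. The Python sketch below shows only the flat, per-segment core of that idea on the CPU; it is an assumption-laden simplification, whereas the paper's contribution is handling curves, joins, and caps with GPU-parallel algorithms.

```python
# Simplified stroke expansion: offset each polyline segment by half the stroke
# width along its normal and emit two triangles per segment (no joins or caps).
import math
from typing import List, Tuple

Point = Tuple[float, float]

def expand_stroke(points: List[Point], width: float) -> List[Tuple[Point, Point, Point]]:
    half = width / 2.0
    triangles = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)
        if length == 0.0:
            continue
        # Unit normal of the segment, used to offset the left/right edges.
        nx, ny = -dy / length, dx / length
        a = (x0 + nx * half, y0 + ny * half)  # left edge, start
        b = (x0 - nx * half, y0 - ny * half)  # right edge, start
        c = (x1 + nx * half, y1 + ny * half)  # left edge, end
        d = (x1 - nx * half, y1 - ny * half)  # right edge, end
        triangles.append((a, b, c))
        triangles.append((b, d, c))
    return triangles

# A 3-point polyline expanded into a 4-unit-wide stroke (segments only).
print(expand_stroke([(0, 0), (10, 0), (10, 10)], 4.0))
```

Each segment is processed independently, which is what makes this kind of expansion easy to parallelize on a GPU.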

Meta 3D Gen

Meta introduces Meta 3D Gen (3DGen), a fast text-to-3D asset tool with high prompt fidelity and PBR support. It integrates AssetGen and TextureGen components, outperforming industry baselines in speed and quality.

Magic Insert: Style-Aware Drag-and-Drop

Google researchers have introduced Magic Insert, a method for realistic drag-and-drop subject transfers between images with different styles. It outperforms traditional methods, offers flexibility, and enhances creativity in image manipulation.