Fuse3D: Generating 3D Assets Controlled by Multi-Image Fusion

Proc. SIGGRAPH Asia 2025

Xuancheng Jin1, Rengan Xie1, Wenting Zheng1, Rui Wang1, Hujun Bao1, Yuchi Huo1,2
1State Key Laboratory of CAD&CG, Zhejiang University, 2Zhejiang Lab

Fuse3D generates 3D assets controlled by multi-image fusion.

Abstract

Recently, generating 3D assets under the control of condition images has achieved impressive quality. However, existing 3D generation methods handle only a single control objective and cannot use multiple images to independently control different regions of a 3D asset, which limits their flexibility in applications. We propose Fuse3D, a novel method that generates 3D assets under the control of multiple images, allowing the seamless fusion of multi-level regional controls, from global views to intricate local details. First, we introduce a Multi-Condition Fusion Module that integrates visual features from multiple image regions. Then, we propose a method that automatically aligns user-selected 2D image regions with their associated 3D regions based on semantic cues. Finally, to resolve control conflicts and enhance local control features from the multiple condition images, we introduce a Local Attention Enhancement Strategy that flexibly balances region-specific feature fusion. Overall, Fuse3D is the first method capable of controllable 3D asset generation from multiple condition images. Experimental results show that Fuse3D flexibly fuses multiple 2D image regions into coherent 3D structures, producing high-quality 3D assets.
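For intuition, below is a minimal PyTorch sketch of what such a multi-condition fusion step could look like, assuming a masked cross-attention formulation; the class name, tensor shapes, and mask semantics are our illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class MultiConditionFusion(nn.Module):
    """Illustrative sketch (not the official implementation): fuse
    features from several condition-image regions into the 3D latent
    tokens via masked cross-attention."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, latent_tokens, region_feats, region_mask=None):
        # latent_tokens: (B, N, D) latent tokens of the 3D asset
        # region_feats:  (B, M, D) visual features of all 2D condition
        #                regions, concatenated along the token axis
        # region_mask:   optional (B, N, M) bool, True where latent token n
        #                may attend to region token m (the 2D-3D alignment);
        #                each row is assumed to allow at least one key,
        #                e.g. tokens of a global condition image
        attn_mask = None
        if region_mask is not None:
            # MultiheadAttention expects (B * num_heads, N, M); True = blocked.
            attn_mask = ~region_mask.repeat_interleave(self.attn.num_heads, dim=0)
        fused, _ = self.attn(latent_tokens, region_feats, region_feats,
                             attn_mask=attn_mask)
        return self.norm(latent_tokens + fused)
```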

Interactive Generation

Our approach supports interactive 3D generation: by adjusting the Enhancement Factor, users control how features from the multiple condition images are blended in the final 3D asset.
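As a rough sketch of how such a factor could act inside cross-attention, the snippet below reweights the attention logits of locally conditioned key tokens; adding log(factor) to a logit multiplies that key's unnormalized softmax weight by the factor. The function, tensor layout, and masking scheme are illustrative assumptions, not Fuse3D's exact formulation.

```python
import math
import torch

def enhance_local_attention(attn_logits, local_mask, factor=1.0):
    """Illustrative sketch (not the paper's exact formulation): scale
    the attention paid to locally conditioned key tokens by `factor`.

    attn_logits: (B, H, N, M) pre-softmax attention scores
    local_mask:  (B, 1, N, M) 1.0 where key m lies in a user-selected
                 local region for query n, else 0.0 (broadcast over heads)
    factor:      > 0; 1.0 leaves attention unchanged, > 1.0 emphasizes
                 local details, < 1.0 favors the global condition
    """
    # log(factor) added to a logit multiplies the corresponding
    # unnormalized softmax weight by `factor` before renormalization.
    boosted = attn_logits + math.log(factor) * local_mask
    return boosted.softmax(dim=-1)
```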

Control Generation with the Enhancement Factor

Comparisons

Generated assets (higher enhancement factor).
Generated assets (lower enhancement factor).

Applications

Texture Generation

Given an untextured mesh, our method generates a coherent texture by fusing multiple local condition images while preserving their diverse features.
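A hypothetical driver for this application could look as follows; the `fuse3d` module and every name in it (`load_mesh`, `generate_texture`, `RegionCondition`) are illustrative, since no public API is specified here.

```python
# Hypothetical usage sketch: all names below are illustrative,
# not a released Fuse3D API.
from fuse3d import load_mesh, generate_texture, RegionCondition

mesh = load_mesh("chair.obj")  # untextured input mesh

# Assign one condition image per user-selected region; the 2D-3D
# alignment is resolved automatically from semantic cues.
conditions = [
    RegionCondition(image="global_view.png", region="global"),
    RegionCondition(image="fabric_detail.png", region="seat"),
    RegionCondition(image="wood_grain.png", region="legs"),
]

textured = generate_texture(mesh, conditions, enhancement_factor=1.5)
textured.save("chair_textured.glb")
```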

Mesh Editing

Our approach enables localized mesh editing by transferring visual features from selected 2D regions onto the target mesh.
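In the same hypothetical API as above, a localized edit could be expressed as:

```python
# Hypothetical usage sketch (same illustrative, non-released API as above).
from fuse3d import load_mesh, edit_mesh, RegionCondition

mesh = load_mesh("robot.glb")  # existing textured mesh

# Transfer the appearance of a selected 2D image region onto one mesh
# region only; the rest of the mesh is left untouched.
edit = RegionCondition(image="metal_plating.png", region="torso")
edited = edit_mesh(mesh, conditions=[edit], enhancement_factor=2.0)
edited.save("robot_edited.glb")
```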