TriTex: Learning Texture from a Single Mesh via Triplane Semantic Features

¹Tel-Aviv University  ²NVIDIA

CVPR 2025

TriTex is a method for semantic-aware texture transfer between meshes. It leverages a pretrained feature extractor to train on a single example and generalizes at test time to novel target meshes.

[Teaser figure]

Abstract

As 3D content creation continues to grow, transferring semantic textures between 3D meshes remains a significant challenge in computer graphics. While recent methods leverage text-to-image diffusion models for texturing, they often struggle to preserve the appearance of the source texture during texture transfer. We present TriTex, a novel approach that learns a volumetric texture field from a single textured mesh by mapping semantic features to surface colors. Using an efficient triplane-based architecture, our method enables semantic-aware texture transfer to a novel target mesh. Despite training on just one example, it generalizes effectively to diverse shapes within the same category. Extensive evaluation on our newly created benchmark dataset shows that TriTex achieves superior texture transfer quality and fast inference times compared to existing methods. Our approach advances single-example texture transfer, providing a practical solution for maintaining visual coherence across related 3D models in applications like game development and simulation.

Results

[Interactive result viewers: each pair shows a source shape and the corresponding textured target shape]

Method

[Method overview figure]

Training Pipeline

  • Given an input textured mesh and its pre-extracted Diff3F features, we project the features from six orthographic views to create an initial triplane.
  • This triplane is processed by triplane-aware convolutional blocks, which, together with the texture MLP, define a coloring neural field (see the sketch after this list). This field, combined with the input geometry, is used to render the colored mesh.
  • Appearance losses are applied between the ground-truth mesh appearance and the rendered appearance.
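Below is a minimal PyTorch sketch of how such a triplane-based coloring field can be queried: a 3D surface point is projected onto the three axis-aligned planes, features are bilinearly sampled and combined, and a small MLP maps them to RGB. The class name, feature sizes, random plane initialization, and the feature-summing scheme are illustrative assumptions, not the paper's exact architecture (in TriTex, the planes are built from projected Diff3F features and refined by triplane-aware convolutions).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriplaneColorField(nn.Module):
    """Illustrative triplane coloring field: three axis-aligned feature
    planes plus a small MLP mapping sampled features to RGB. Sizes and
    the feature-summing scheme are assumptions, not the paper's exact
    architecture."""
    def __init__(self, feat_dim=32, res=128):
        super().__init__()
        # Learnable XY / XZ / YZ feature planes. Randomly initialized here;
        # in TriTex they come from projected Diff3F features and are then
        # refined by triplane-aware convolutional blocks.
        self.planes = nn.Parameter(torch.randn(3, feat_dim, res, res) * 0.01)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, xyz):
        # xyz: (N, 3) surface points normalized to [-1, 1]^3.
        coords = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]  # XY, XZ, YZ
        feats = 0.0
        for plane, uv in zip(self.planes, coords):
            grid = uv.view(1, -1, 1, 2)  # (1, N, 1, 2) layout for grid_sample
            sampled = F.grid_sample(plane[None], grid, align_corners=True)
            feats = feats + sampled.view(plane.shape[0], -1).t()  # (N, feat_dim)
        return self.mlp(feats)  # (N, 3) colors

# Usage: query colors at 1024 random surface points.
field = TriplaneColorField()
pts = torch.rand(1024, 3) * 2 - 1
colors = field(pts)  # (1024, 3)
```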

Inference Pipeline

  • Given a new mesh (left), our pretrained model maps its semantic features to colors (right), transferring the texture learned from the source mesh during training (see the sketch after this list).
  • The output preserves the style and appearance characteristics learned during training while adapting them to the new geometry.
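A hedged sketch of what this inference step could look like in code. Here `extract_diff3f`, `project_to_triplane`, the `model(triplane, vertices)` call, and the `target_mesh.vertices` attribute are hypothetical stand-ins for the paper's feature extraction, orthographic projection, and trained network; their interfaces are assumptions for illustration only.

```python
import torch

@torch.no_grad()
def transfer_texture(model, target_mesh, extract_diff3f, project_to_triplane):
    """Hypothetical inference helper sketching the TriTex transfer flow;
    all callables and interfaces here are assumed, not the paper's API."""
    # 1. Pre-extract semantic (Diff3F-style) features for the target mesh.
    vert_feats = extract_diff3f(target_mesh)                 # (V, F)
    # 2. Project the features from six orthographic views into a triplane.
    triplane = project_to_triplane(target_mesh, vert_feats)  # (3, C, H, W)
    # 3. Run the trained triplane-aware conv blocks + texture MLP to predict
    #    a color per vertex, ready to be baked into a texture.
    colors = model(triplane, target_mesh.vertices)           # (V, 3)
    return colors
```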

BibTeX

@misc{cohenbar2025tritexlearningtexturesingle,
      title={TriTex: Learning Texture from a Single Mesh via Triplane Semantic Features},
      author={Dana Cohen-Bar and Daniel Cohen-Or and Gal Chechik and Yoni Kasten},
      year={2025},
      eprint={2503.16630},
      archivePrefix={arXiv},
      primaryClass={cs.GR},
      url={https://arxiv.org/abs/2503.16630},
}