Neural image style transfer enables everyone to create artistic images. There are, however, no equally successful approaches for style transfer in the 3D domain.
A recent paper addresses this gap by introducing a method for stylizing 3D objects based on a reference textured 3D shape.
The model imitates the overall geometric style of the target shape by predicting a part-aware affine transformation field that warps the source shape. To transfer texture style along with geometric style, the geometric style network is jointly optimized with a pre-trained image style transfer network using losses defined over multi-view renderings produced by a differentiable renderer.
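To make the warping step concrete, below is a minimal PyTorch sketch of applying a part-aware affine transformation field to a shape: each vertex is softly assigned to a set of parts, transformed by every part's affine, and the results are blended by the assignment weights. The function name, tensor shapes, and soft-assignment scheme are illustrative assumptions, not the authors' implementation (in the paper, a network predicts these quantities from the source and target shapes).

```python
import torch

def part_aware_affine_warp(vertices, part_weights, affines, translations):
    # vertices:     (V, 3) source shape vertices
    # part_weights: (V, P) soft assignment of each vertex to P parts (rows sum to 1)
    # affines:      (P, 3, 3) per-part linear transforms
    # translations: (P, 3) per-part translations
    per_part = torch.einsum('pij,vj->pvi', affines, vertices)  # (P, V, 3)
    per_part = per_part + translations[:, None, :]             # add per-part offsets
    return torch.einsum('vp,pvi->vi', part_weights, per_part)  # blend by weights

# Sanity check: identity transforms leave the shape unchanged.
V, P = 1000, 8
verts = torch.rand(V, 3)
weights = torch.softmax(torch.rand(V, P), dim=-1)
warped = part_aware_affine_warp(verts, weights,
                                torch.eye(3).expand(P, 3, 3),
                                torch.zeros(P, 3))  # equals verts up to float error
```

Because each part gets its own affine transform, the warp can, for example, stretch the legs of a chair while leaving its seat intact, which is what makes the deformation part-aware rather than a single global transform.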
A user study confirmed that the proposed approach produces better results than a strong baseline. The work thus yields a shape creation tool that novice users can apply to 3D content creation.
Paper abstract: We propose a method to create plausible geometric and texture style variations of 3D objects in the quest to democratize 3D content creation. Given a pair of textured source and target objects, our method predicts a part-aware affine transformation field that naturally warps the source shape to imitate the overall geometric style of the target. In addition, the texture style of the target is transferred to the warped source object with the help of a multi-view differentiable renderer. Our model, 3DStyleNet, is composed of two sub-networks trained in two stages. First, the geometric style network is trained on a large set of untextured 3D shapes. Second, we jointly optimize our geometric style network and a pre-trained image style transfer network with losses defined over both the geometry and the rendering of the result. Given a small set of high-quality textured objects, our method can create many novel stylized shapes, resulting in effortless 3D content creation and style-aware data augmentation. We showcase our approach qualitatively on 3D content stylization, and provide user studies to validate the quality of our results. In addition, our method can serve as a valuable tool to create 3D data augmentations for computer vision tasks. Extensive quantitative analysis shows that 3DStyleNet outperforms alternative data augmentation techniques for the downstream task of single-image 3D reconstruction.
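The two-stage recipe above can be pictured as a joint optimization loop in which gradients from both a geometric term and an image-space term flow back through a differentiable renderer into the warp. The sketch below is a toy illustration under loose assumptions: geo_net, style_net, and render_views are hypothetical stand-ins (a real pipeline would use the paper's trained networks and a differentiable renderer such as nvdiffrast or PyTorch3D), and the plain MSE terms merely stand in for the paper's geometric and rendering losses.

```python
import torch
import torch.nn.functional as F

geo_net = torch.nn.Linear(3, 3)                   # stand-in for the geometric style network
style_net = torch.nn.Conv2d(3, 3, kernel_size=1)  # stand-in for the image style network

def render_views(verts, n_views=4, res=64):
    # Toy differentiable "renderer": keeps gradients flowing from images
    # back to the vertices; a real setup would rasterize the textured mesh.
    return verts.mean() * torch.ones(n_views, 3, res, res)

src_verts = torch.rand(1000, 3)        # source shape (as points, for illustration)
tgt_verts = torch.rand(1000, 3)        # target shape
tgt_views = torch.rand(4, 3, 64, 64)   # renderings of the textured target

opt = torch.optim.Adam(list(geo_net.parameters()) + list(style_net.parameters()), lr=1e-3)
for step in range(200):
    warped = geo_net(src_verts)                 # geometric network warps the source
    stylized = style_net(render_views(warped))  # image network restyles the renders
    geo_loss = F.mse_loss(warped, tgt_verts)    # geometry term (illustrative)
    img_loss = F.mse_loss(stylized, tgt_views)  # rendering term (illustrative)
    opt.zero_grad()
    (geo_loss + img_loss).backward()
    opt.step()
```

The key design point this illustrates is that the renderer is differentiable, so the image-space style loss can shape not only the texture network but also the geometric warp itself.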
Research paper: Yin, K., Gao, J., Shugrina, M., Khamis, S., and Fidler, S., “3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations”, 2021. Link to the article: https://arxiv.org/abs/2108.12958
Link to the project site: https://nv-tlabs.github.io/3DStyleNet/