Animatable clothed human avatars are needed in many 3D content generation applications. However, creating them currently requires manual artist work or expensive 4D scans.
A recent paper therefore seeks a model that can produce realistic pose-dependent clothing deformation. The researchers propose to use point clouds, a representation more commonly applied to rigid objects.
The paper proposes a new shape representation based on dense point clouds: smooth local point features defined on a 2D manifold enable arbitrarily dense up-sampling at inference time. To enable cross-garment modeling and generalization to unseen outfits, a novel geometry feature tensor is introduced.
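To make the up-sampling idea concrete, here is a minimal sketch (not the authors' implementation) of why a continuous feature field on a 2D manifold permits arbitrary point density: features stored on a grid are interpolated at any (u, v) query and decoded per point, so the same model can be queried sparsely or densely. The feature grid, decoder weights, and shapes below are all hypothetical stand-ins.

```python
import numpy as np

def bilinear_interp(grid, uv):
    """Interpolate a (H, W, C) feature grid at continuous (u, v) in [0, 1]^2."""
    H, W, C = grid.shape
    x = uv[:, 0] * (W - 1)
    y = uv[:, 1] * (H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.clip(x0 + 1, 0, W - 1), np.clip(y0 + 1, 0, H - 1)
    wx, wy = x - x0, y - y0
    return (grid[y0, x0] * ((1 - wx) * (1 - wy))[:, None]
            + grid[y0, x1] * (wx * (1 - wy))[:, None]
            + grid[y1, x0] * ((1 - wx) * wy)[:, None]
            + grid[y1, x1] * (wx * wy)[:, None])

rng = np.random.default_rng(0)
feature_grid = rng.normal(size=(16, 16, 8))  # stand-in for learned local features
W1 = rng.normal(size=(8, 3)) * 0.1           # stand-in for a trained per-point decoder

def sample_points(n):
    uv = rng.uniform(size=(n, 2))            # query density is chosen at inference time
    feats = bilinear_interp(feature_grid, uv)
    return np.tanh(feats) @ W1               # per-point 3D displacement

sparse = sample_points(1_000)
dense = sample_points(100_000)               # same model, 100x denser output
```

Because the features vary smoothly over the manifold, neighboring queries decode to nearby displacements, which is what makes dense sampling produce a coherent surface rather than noise.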
The model can also animate a single human scan of an unseen subject in unseen clothing. Evaluations on both captured and synthetic datasets confirm state-of-the-art performance and the ability to produce expressive local garment details.
Currently it requires an artist to create 3D human avatars with realistic clothing that can move naturally. Despite progress on 3D scanning and modeling of human bodies, there is still no technology that can easily turn a static scan into an animatable avatar. Automating the creation of such avatars would enable many applications in games, social networking, animation, and AR/VR to name a few. The key problem is one of representation. Standard 3D meshes are widely used in modeling the minimally-clothed body but do not readily capture the complex topology of clothing. Recent interest has shifted to implicit surface models for this task but they are computationally heavy and lack compatibility with existing 3D tools. What is needed is a 3D representation that can capture varied topology at high resolution and that can be learned from data. We argue that this representation has been with us all along — the point cloud. Point clouds have properties of both implicit and explicit representations that we exploit to model 3D garment geometry on a human body. We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits. The network is trained from 3D point clouds of many types of clothing, on many bodies, in many poses, and learns to model pose-dependent clothing deformations. The geometry feature can be optimized to fit a previously unseen scan of a person in clothing, enabling the scan to be reposed realistically. Our model demonstrates superior quantitative and qualitative results in both multi-outfit modeling and unseen outfit animation. The code is available for research purposes.
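The abstract notes that the geometry feature can be optimized to fit a previously unseen scan while the network stays fixed. As a toy illustration of that fitting pattern (not the paper's actual optimization, which uses a neural decoder and point-cloud losses), the sketch below freezes a hypothetical linear "decoder" and runs gradient descent on the feature vector alone until the decoded points match a target scan.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pts, feat_dim = 500, 32
# Hypothetical frozen decoder: maps a feature vector to n_pts 3D points.
B = rng.normal(size=(n_pts * 3, feat_dim)) / np.sqrt(feat_dim)

def decode(z):
    return (B @ z).reshape(n_pts, 3)

# Stand-in for an unseen scan (here, generated by the decoder itself).
target = decode(rng.normal(size=feat_dim))

z = np.zeros(feat_dim)  # only the feature is optimized; the decoder stays frozen
lr = 0.5
for _ in range(300):
    residual = (decode(z) - target).ravel()
    z -= lr * (B.T @ residual) / n_pts  # gradient step on mean squared point error
```

After fitting, `decode(z)` reproduces the target points, and in the paper's setting the recovered feature is what allows the fitted scan to be reposed by the trained network.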
Research paper: Ma, Q., Yang, J., Tang, S., and Black, M. J., "The Power of Points for Modeling Humans in Clothing", 2021. Link to the article: https://arxiv.org/abs/2109.01137
Project page: https://qianlim.github.io/POP