FreeCloth: Free-form Generation Enhances Challenging Clothed Human Modeling

arXiv 2024

TL;DR: we propose to model clothed humans with a novel hybrid framework that combines the strengths of LBS-based deformation and free-form generation. Our method effectively captures the intricate geometric details of loose clothing, achieving superior visual fidelity and realism, particularly in the most challenging cases.

Abstract

Achieving realistic animated human avatars requires accurate modeling of pose-dependent clothing deformations. Existing learning-based methods heavily rely on the Linear Blend Skinning (LBS) of minimally-clothed human models like SMPL to model deformation. However, these methods struggle to handle loose clothing, such as long dresses, where the canonicalization process becomes ill-defined when the clothing is far from the body, leading to disjointed and fragmented results. To overcome this limitation, we propose a novel hybrid framework to model challenging clothed humans. Our core idea is to use dedicated strategies to model different regions, depending on whether they are close to or distant from the body. Specifically, we segment the human body into three categories: unclothed, deformed, and generated. We simply replicate unclothed regions that require no deformation. For deformed regions close to the body, we leverage LBS to handle the deformation. As for the generated regions, which correspond to loose clothing areas, we introduce a novel free-form, part-aware generator to model them, as they are less affected by movements. This free-form generation paradigm brings enhanced flexibility and expressiveness to our hybrid framework, enabling it to capture the intricate geometric details of challenging loose clothing, such as skirts and dresses. Experimental results on the benchmark dataset featuring loose clothing demonstrate that our method achieves state-of-the-art performance with superior visual fidelity and realism, particularly in the most challenging cases.

Method Overview

Given an unclothed, posed body and a specific garment type, our goal is to create a realistic clothed human. We first segment the body into three regions: unclothed parts (yellow), which require no deformation; deformed parts (blue); and generated parts (green). The hybrid framework comprises two essential modules: (1) an LBS-based local deformation network that produces pose-dependent deformed points close to the human body, and (2) a free-form generator that focuses on the looser clothing regions. By merging the unclothed, deformed, and generated points, we obtain the complete point cloud of a clothed human.
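The hybrid pipeline above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the integer region labels, and the `generator` callable are all hypothetical placeholders, and the LBS step is the standard linear-blend-skinning formula (a per-point weighted blend of per-joint rigid transforms) rather than the paper's learned deformation network.

```python
import numpy as np

# Hypothetical region labels for illustration:
#   0 = unclothed (replicated as-is)
#   1 = deformed  (close to the body; handled by LBS)
#   2 = generated (loose clothing; handled by the free-form generator)

def lbs_deform(points, weights, joint_transforms):
    """Standard Linear Blend Skinning.

    points:           (N, 3) canonical points
    weights:          (N, J) skinning weights, rows summing to 1
    joint_transforms: (J, 4, 4) rigid transform per joint
    """
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)   # (N, 4)
    blended = np.einsum('nj,jab->nab', weights, joint_transforms)        # (N, 4, 4)
    return np.einsum('nab,nb->na', blended, homo)[:, :3]                 # (N, 3)

def hybrid_clothed_human(points, labels, weights, joint_transforms, generator):
    """Merge the three region-specific strategies into one point cloud."""
    out_unclothed = points[labels == 0]                       # simply replicated
    out_deformed = lbs_deform(points[labels == 1],
                              weights[labels == 1],
                              joint_transforms)               # LBS-based deformation
    out_generated = generator(points[labels == 2])            # free-form generation
    return np.concatenate([out_unclothed, out_deformed, out_generated], axis=0)
```

With identity joint transforms and an identity generator, the output is simply a reordering of the input points, which makes the merge logic easy to sanity-check before plugging in a real deformation network and generator.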

Comparison with State-of-the-art

Qualitative Results

More Results

BibTeX