Synthesizing Long-Term 3D Human Motion and Interaction in 3D Scenes
UC San Diego
UC Berkeley
Shanghai Jiao Tong University
NVIDIA
Abstract:

Synthesizing 3D human motion plays an important role in many graphics applications as well as in understanding human activity. While many efforts have been devoted to generating realistic and natural human motion, most approaches neglect the importance of modeling human-scene interactions and affordances. On the other hand, affordance reasoning (e.g., standing on the floor or sitting in a chair) has mainly been studied with static human poses and gestures, and it has rarely been addressed with human motion. In this paper, we propose to bridge human motion synthesis and scene affordance reasoning. We present a hierarchical generative framework that synthesizes long-term 3D human motion conditioned on the 3D scene structure. We further enforce multiple geometry constraints between the human mesh and the scene point cloud via optimization to improve the realism of the synthesized motion. Our experiments show significant improvements over previous approaches in generating natural and physically plausible human motion in a scene.
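For illustration, below is a minimal sketch (in Python/PyTorch) of how geometry constraints between human mesh vertices and a scene point cloud could be enforced via optimization. The specific loss terms, weights, the choice of a ground plane as a stand-in for full scene geometry, and all function names here are assumptions made for illustration, not the paper's actual formulation.

    import torch

    def contact_loss(contact_vertices, scene_points):
        """Pull chosen contact vertices (e.g., feet) toward their nearest scene points."""
        # contact_vertices: (M, 3), scene_points: (N, 3)
        dists = torch.cdist(contact_vertices, scene_points)  # (M, N) pairwise distances
        return dists.min(dim=1).values.mean()

    def penetration_loss(body_vertices, floor_height=0.0):
        """Penalize vertices below a ground plane (a toy stand-in for full scene geometry)."""
        return torch.relu(floor_height - body_vertices[:, 2]).mean()

    # Toy usage: jointly minimize contact and penetration terms over a body translation.
    scene_points = torch.rand(1000, 3)
    body_vertices = torch.rand(500, 3)
    translation = torch.zeros(3, requires_grad=True)
    optimizer = torch.optim.Adam([translation], lr=0.01)
    for _ in range(100):
        verts = body_vertices + translation
        loss = contact_loss(verts[:10], scene_points) + 0.5 * penetration_loss(verts)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()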


Video:

Methods:
We first generate sub-goal bodies (shown in gray) at the sub-goal positions. We then divide the sequence into several short-term start/end pairs and synthesize a short-term motion for each pair. Finally, we use an optimization process to connect these short-term motions into one long-term motion.
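The following is a minimal, hypothetical sketch of this hierarchical pipeline in Python. The functions generate_subgoal_body, synthesize_short_term_motion, and optimize_transitions are placeholders for the paper's learned models and optimization stage; their names, signatures, and the linear-blend stand-ins are assumptions, not the authors' actual implementation.

    import numpy as np

    def generate_subgoal_body(scene_points, subgoal_position):
        """Place a static sub-goal body at a given position in the scene (stub)."""
        # The real system uses a generative model conditioned on scene geometry;
        # here we return the position alone as a stand-in pose.
        return {"root": np.asarray(subgoal_position, dtype=float)}

    def synthesize_short_term_motion(start_body, end_body, num_frames=30):
        """Synthesize a short clip between two sub-goal bodies (stub: linear blend)."""
        alphas = np.linspace(0.0, 1.0, num_frames)[:, None]
        roots = (1 - alphas) * start_body["root"] + alphas * end_body["root"]
        return [{"root": r} for r in roots]

    def optimize_transitions(clips):
        """Connect short-term clips into one long-term motion (stub: concatenation).

        The paper uses an optimization step here; this stub simply concatenates
        clips and drops duplicated boundary frames.
        """
        motion = []
        for i, clip in enumerate(clips):
            motion.extend(clip if i == 0 else clip[1:])
        return motion

    def synthesize_long_term_motion(scene_points, subgoal_positions):
        bodies = [generate_subgoal_body(scene_points, p) for p in subgoal_positions]
        clips = [synthesize_short_term_motion(a, b)
                 for a, b in zip(bodies[:-1], bodies[1:])]
        return optimize_transitions(clips)

    if __name__ == "__main__":
        scene = np.zeros((1024, 3))  # placeholder scene point cloud
        subgoals = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.5), (2.0, 0.0, 1.0)]
        motion = synthesize_long_term_motion(scene, subgoals)
        print(f"Synthesized {len(motion)} frames")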
Qualitative Results:

Paper:

Bibtex:
    @misc{wang2020synthesizing,
      title={Synthesizing Long-Term 3D Human Motion and Interaction in 3D Scenes}, 
      author={Jiashun Wang and Huazhe Xu and Jingwei Xu and Sifei Liu and Xiaolong Wang},
      year={2020},
      eprint={2012.05522},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
    }