😴NAP: Neural 3D Articulation Prior

Jiahui Lei¹        Congyue Deng²        Bokui Shen²        Leonidas Guibas²        Kostas Daniilidis¹

Abstract

We propose Neural 3D Articulation Prior (NAP), the first 3D deep generative model to synthesize 3D articulated object models. Despite extensive research on generating 3D objects, compositions, or scenes, there remains a lack of focus on capturing the distribution of articulated objects, a common object category for human and robot interaction. To generate articulated objects, we first design a novel articulation tree/graph parameterization and then apply a diffusion-denoising probabilistic model over this representation, so that articulated objects can be generated via denoising from random complete graphs. To capture both the geometry and the motion structure, whose distributions affect each other, we design a graph-attention denoising network for learning the reverse diffusion process. To evaluate generation quality, we propose a novel distance that adapts widely used 3D generation metrics to this new task, and experiments demonstrate the high performance of our method in articulated object generation. We also demonstrate several conditioned generation applications, including Part2Motion, PartNet-Imagination, Motion2Part, and GAPart2Object.
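To make the abstract's pipeline concrete, below is a minimal sketch of generating an articulated object by denoising a random complete graph. The node/edge feature layout (existence flags, bounding boxes, shape latents, joint axes and limits), the dimensions, and the toy denoiser are illustrative assumptions, not NAP's released parameterization or network.

```python
# Minimal sketch, assuming a hypothetical graph parameterization: nodes carry
# part attributes, edges carry joint attributes, and a generic DDPM reverse
# loop denoises both from Gaussian noise.
import numpy as np

K = 8                   # max number of parts (graph nodes), assumed
D_NODE = 1 + 6 + 128    # per-node: existence flag, bbox (center + size), shape latent
D_EDGE = 1 + 6 + 2      # per-edge: joint existence, screw axis, joint limits
T = 1000                # number of diffusion steps

betas = np.linspace(1e-4, 2e-2, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(v, e, t):
    """Stand-in for the graph-attention denoising network: predicts the noise
    added to node features v [K, D_NODE] and edge features e [K, K, D_EDGE]."""
    return np.zeros_like(v), np.zeros_like(e)

# Reverse diffusion: start from a random complete graph and denoise step by step.
v = np.random.randn(K, D_NODE)
e = np.random.randn(K, K, D_EDGE)
for t in reversed(range(T)):
    eps_v, eps_e = toy_denoiser(v, e, t)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    v = (v - coef * eps_v) / np.sqrt(alphas[t])
    e = (e - coef * eps_e) / np.sqrt(alphas[t])
    if t > 0:  # add stochasticity except at the final step
        v += np.sqrt(betas[t]) * np.random.randn(*v.shape)
        e += np.sqrt(betas[t]) * np.random.randn(*e.shape)

# Thresholding the existence flags would select the surviving parts and joints;
# extracting a spanning tree over surviving edges would give the kinematic structure.
```

For conditional applications such as Part2Motion or Motion2Part, one would keep the observed node or edge attributes fixed during the reverse loop and only denoise the unknown ones.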

Introduction

[Figure: Method overview. The graph-attention denoising network supports unconditional generation and several conditioned applications: Part2Motion (part-conditioned completion), Motion2Part (motion-conditioned completion), PartNet-Imagination (rigid-group motion imagination for static objects), and GAPart2Object (GAPart-conditioned completion).]