Building Rearticulable Models for Arbitrary 3D Objects from 4D Point Clouds

Shaowei Liu, Saurabh Gupta*, Shenlong Wang*
University of Illinois Urbana-Champaign
* Equal Advising
CVPR 2023

Given a short point cloud sequence of an arbitrary articulated object, our method outputs an animatable 3D model that can be retargeted to novel poses.

Abstract

We build rearticulable models for arbitrary everyday man-made objects containing an arbitrary number of parts, connected to one another in arbitrary ways via 1-degree-of-freedom joints. Given point cloud videos of such everyday objects, our method identifies the distinct object parts, which parts are connected to which other parts, and the properties of the joints connecting each part pair. We do this by jointly optimizing the part segmentation, transformation, and kinematics using a novel energy minimization framework. Our inferred animatable models enable retargeting to novel poses with sparse point correspondence guidance. We test our method on a new articulating robot dataset and the Sapien dataset of common daily objects, as well as on real-world scans. Experiments show that our method outperforms two leading prior works on various metrics.

Video

Problem formulation

The articulation model comprises n parts that form a kinematic tree. A coordinate-based semantic field f parameterizes the part segmentation, and part motions are parameterized with a screw representation. The objective is to estimate these parameters in an analysis-by-synthesis manner via energy minimization, where the energy measures the geometric and motion compatibility between the inferred model and the observed point cloud sequence.

(Figure: problem setting)
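
As an unofficial illustration of the screw parameterization for 1-DoF joints (a minimal sketch written for this page, not the paper's code; all names are hypothetical), the snippet below maps a joint axis, a point on the axis, and a joint state to a 4x4 rigid transform: revolute joints rotate about the axis through that point, and prismatic joints translate along the axis.

import numpy as np

def skew(w):
    # Cross-product matrix of a 3-vector.
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def screw_to_se3(axis, point, theta, joint_type="revolute"):
    # Map screw parameters (axis direction, point on axis, joint state theta)
    # to a 4x4 rigid transform.
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    point = np.asarray(point, dtype=float)
    T = np.eye(4)
    if joint_type == "revolute":
        K = skew(axis)
        # Rodrigues' formula: rotation by theta about the (unit) axis.
        R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K
        T[:3, :3] = R
        # Rotating about an axis through `point` adds the translation (I - R) point.
        T[:3, 3] = point - R @ point
    else:  # prismatic joint: slide along the axis by theta.
        T[:3, 3] = theta * axis
    return T

# Example: rotate a part by 90 degrees about the z-axis passing through (1, 0, 0).
print(screw_to_se3([0, 0, 1], [1, 0, 0], np.pi / 2))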

Framework

Since the energy is challenging to optimize directly, we propose a relaxation-projection approach. In the relaxation stage, we estimate the model parameters without kinematic constraints; in the projection stage, we project this solution onto a valid kinematic tree and take it as the initialization for re-optimization.
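
As a rough, self-contained sketch (ours, not the released implementation) of how the projection stage can be posed as a spanning-tree problem: assume each part pair has a scalar cost measuring how poorly a single 1-DoF joint explains its relative motion; projecting onto a valid kinematic tree then amounts to keeping the minimum spanning tree of that cost graph. The cost values below are made up for illustration.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def project_to_kinematic_tree(pair_cost):
    # Keep the minimum-cost spanning tree over the part-pair cost graph and
    # return its edges as (i, j) part-index pairs.
    mst = minimum_spanning_tree(pair_cost)
    rows, cols = mst.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))

# Example with 4 parts and a symmetric, purely illustrative cost matrix.
cost = np.array([
    [0.0, 0.2, 0.9, 0.8],
    [0.2, 0.0, 0.3, 0.7],
    [0.9, 0.3, 0.0, 0.4],
    [0.8, 0.7, 0.4, 0.0],
])
print(project_to_kinematic_tree(cost))  # a chain: [(0, 1), (1, 2), (2, 3)]

The re-optimization stage then minimizes the same energy again, initialized from this projected model and constrained to the recovered tree.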

Robot Results

(Result videos: Input | Part Seg | Skeleton | Reanimate)


Daily Object (Sapien) Results

(Result videos: Input | Model | Reanimate)


Real Scan Results

(Result videos: Input | Model | Reanimate)

BibTeX

@inproceedings{liu2023reart,
  title={Building Rearticulable Models for Arbitrary 3D Objects from 4D Point Clouds},
  author={Liu, Shaowei and Gupta, Saurabh and Wang, Shenlong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}