Joint Hand Motion and Interaction Hotspots Prediction from Egocentric Videos

1University of Illinois Urbana-Champaign, 2Intel Labs, 3UC San Diego
(* Work done during an internship at Intel Labs)
CVPR 2022

Input: observation | Output: future prediction


Given past observation frames, we predict future hand trajectories (green and red lines) and object interaction hotspots (heatmaps).

Abstract

We propose to forecast future hand-object interactions given an egocentric video. Instead of predicting action labels or pixels, we directly predict the hand motion trajectory and the future contact points on the next active object (i.e., interaction hotspots). This relatively low-dimensional representation provides a concrete description of future interactions. To tackle this task, we first provide an automatic way to collect trajectory and hotspot labels on large-scale data. We then use this data to train an Object-Centric Transformer (OCT) model for prediction. Our model performs hand and object interaction reasoning via the self-attention mechanism in Transformers. OCT also provides a probabilistic framework to sample future trajectories and hotspots to handle uncertainty in prediction. We perform experiments on the EPIC-KITCHENS-55, EPIC-KITCHENS-100, and EGTEA Gaze+ datasets, and show that OCT outperforms state-of-the-art approaches by a large margin.

Video

Automatic training data generation

We use off-the-shelf tools to automatically collect future hand trajectory and interaction hotspot labels.
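The page does not spell out this pipeline, so the sketch below is a simplified, hypothetical version of such a labeling step: an off-the-shelf hand-object detector supplies per-frame hand centers (the trajectory) and contact points on the active object, which are rendered as Gaussian hotspot heatmaps. The detector wrapper and its output keys are assumptions, and camera-motion alignment across frames is omitted.

import numpy as np

def gaussian_heatmap(h, w, cx, cy, sigma=12.0):
    # Render a 2D Gaussian centered at (cx, cy) as a hotspot heatmap.
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    return g / (g.max() + 1e-8)

def make_labels(future_frames, detector, heat_hw=(256, 456)):
    # `detector(frame)` is a hypothetical wrapper around an off-the-shelf
    # hand-object detector; it is assumed to return the hand box center and
    # the contact point on the next active object in pixel coordinates.
    trajectory = []                       # one (x, y) hand center per future frame
    heatmap = np.zeros(heat_hw)
    for frame in future_frames:
        det = detector(frame)
        if det.get("hand_center") is not None:
            trajectory.append(det["hand_center"])
        if det.get("contact_point") is not None:
            cx, cy = det["contact_point"]
            heatmap = np.maximum(heatmap, gaussian_heatmap(*heat_hw, cx, cy))
    return np.asarray(trajectory), heatmap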

Automatically generated EPIC-KITCHENS training labels



Automatically generated EGTEA Gaze+ training labels



Object-Centric Transformer (OCT)

The OCT has an encoder-decoder architecture. We extract per-frame features and concatenate them with detected bounding boxes to form the input to the Transformer encoder. The encoder output, together with previously predicted hand locations, is fed to the decoder. The decoder output is sent to the hand-CVAE and object-CVAE heads to obtain the final trajectory and hotspot predictions.
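A minimal PyTorch sketch of this architecture follows. It assumes a single hand query stream and point-valued hotspot outputs; the dimensions, layer counts, the posterior/KL training path of the CVAEs, and heatmap rendering are placeholders rather than the paper's exact configuration.

import torch
import torch.nn as nn

class CVAEHead(nn.Module):
    # Minimal conditional-VAE head: samples a latent code from a learned prior
    # conditioned on the context and decodes a prediction. The posterior
    # encoder and KL term used during training are omitted in this sketch.
    def __init__(self, ctx_dim, out_dim, z_dim=32):
        super().__init__()
        self.prior = nn.Linear(ctx_dim, 2 * z_dim)           # p(z | context)
        self.decoder = nn.Sequential(
            nn.Linear(ctx_dim + z_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

    def forward(self, ctx):
        mu, logvar = self.prior(ctx).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.decoder(torch.cat([ctx, z], dim=-1))

class OCTSketch(nn.Module):
    # Schematic Object-Centric Transformer: per-frame features concatenated
    # with box coordinates form encoder tokens; past hand locations are the
    # decoder queries; CVAE heads produce trajectory and hotspot outputs.
    def __init__(self, feat_dim=512, box_dim=4, d_model=256):
        super().__init__()
        self.embed_obs = nn.Linear(feat_dim + box_dim, d_model)
        self.embed_hand = nn.Linear(2, d_model)                # (x, y) hand locations
        self.transformer = nn.Transformer(d_model, nhead=8,
                                           num_encoder_layers=4,
                                           num_decoder_layers=4,
                                           batch_first=True)
        self.hand_cvae = CVAEHead(d_model, out_dim=2)          # future hand point
        self.obj_cvae = CVAEHead(d_model, out_dim=2)           # contact point

    def forward(self, frame_feats, boxes, past_hands):
        # frame_feats: (B, T, feat_dim), boxes: (B, T, 4), past_hands: (B, T, 2)
        src = self.embed_obs(torch.cat([frame_feats, boxes], dim=-1))
        tgt = self.embed_hand(past_hands)
        dec = self.transformer(src, tgt)                       # (B, T, d_model)
        return self.hand_cvae(dec), self.obj_cvae(dec)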

Trajectory estimation comparison

We evaluate future hand trajectory estimation performance on the EPIC-KITCHENS-100 dataset.
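The page does not name the metric, so purely as an illustration, a common choice for trajectory forecasting is displacement error between predicted and ground-truth future hand points (average and final):

import numpy as np

def displacement_errors(pred, gt):
    # pred, gt: (T, 2) arrays of future hand positions in the same coordinates.
    # Returns (ADE, FDE): mean and final L2 displacement error.
    d = np.linalg.norm(pred - gt, axis=-1)
    return d.mean(), d[-1]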

Interaction hotspots comparison

We evaluate interaction hotspot prediction performance on the EPIC-KITCHENS-100 dataset.
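Again as an illustration only (the page does not name the metric), one standard saliency-style score for comparing a predicted hotspot heatmap against the ground truth is the similarity metric (SIM), i.e., histogram intersection of the normalized maps:

import numpy as np

def similarity(pred_map, gt_map, eps=1e-8):
    # Normalize each heatmap to sum to 1, then sum the element-wise minimum.
    p = pred_map / (pred_map.sum() + eps)
    g = gt_map / (gt_map.sum() + eps)
    return np.minimum(p, g).sum()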

Diverse future prediction

We visualize diverse future predictions on the EPIC-KITCHENS-100 dataset.
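Diversity comes from the stochastic latent codes in the CVAE heads: drawing several codes for the same observation yields several plausible futures. A usage sketch reusing the hypothetical OCTSketch module above, with dummy inputs standing in for real features:

import torch

model = OCTSketch()
frame_feats = torch.randn(1, 10, 512)   # dummy per-frame observation features
boxes = torch.randn(1, 10, 4)           # dummy detected object boxes
past_hands = torch.randn(1, 10, 2)      # dummy past hand locations

model.eval()
with torch.no_grad():
    # Each forward pass samples a different latent code, hence a different future.
    futures = [model(frame_feats, boxes, past_hands) for _ in range(5)]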

Cross-dataset generalization

We train on the EPIC-KITCHENS-100 dataset and test on the EGTEA Gaze+ dataset.

BibTeX

@inproceedings{liu2022joint,
  title={Joint Hand Motion and Interaction Hotspots Prediction from Egocentric Videos},
  author={Liu, Shaowei and Tripathi, Subarna and Majumdar, Somdeb and Wang, Xiaolong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}