Abstract

Temporal segmentation of long videos is an important problem that has largely been tackled through supervised learning, often requiring large amounts of annotated training data. In this paper, we tackle the problem of self-supervised temporal segmentation, which alleviates the need for any supervision in the form of labels (full supervision) or temporal ordering (weak supervision). We introduce a self-supervised, predictive learning framework that draws inspiration from cognitive psychology to segment long, visually complex videos into constituent events. Learning involves only a single pass through the training data. We also introduce a new adaptive learning paradigm that helps reduce the effect of catastrophic forgetting in recurrent neural networks. Extensive experiments on three publicly available datasets (Breakfast Actions, 50 Salads, and INRIA Instructional Videos) show the efficacy of the proposed approach. We show that the proposed approach outperforms weakly supervised and unsupervised baselines by up to 24% and achieves segmentation results competitive with fully supervised baselines, using only a single pass through the training data. Finally, we show that the proposed self-supervised learning paradigm learns highly discriminative features that improve action recognition.
To appear in CVPR 2019
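
As a rough illustration of the perceptual prediction idea described in the abstract, the sketch below runs a recurrent predictor over per-frame features in a single online pass and treats spikes in the prediction error as event boundaries. This is a minimal sketch in PyTorch, not the released implementation: the feature dimension, LSTM size, optimizer, and boundary threshold are illustrative assumptions.

# Minimal sketch of prediction-error-based event segmentation (not the
# authors' released code). Sizes, optimizer, and threshold are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerceptualPredictor(nn.Module):
    """Predicts the next frame's features from the current frame's features."""

    def __init__(self, feat_dim=2048, hidden_dim=512):
        super().__init__()
        self.rnn = nn.LSTMCell(feat_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, feat_dim)

    def forward(self, feat, state=None):
        h, c = self.rnn(feat, state)
        return self.decoder(h), (h, c)

def segment_video(features, threshold=1.5, lr=1e-4):
    """features: (T, feat_dim) tensor of per-frame CNN features.

    Returns frame indices where the prediction error spikes, interpreted
    as event boundaries. The predictor is trained online in a single pass.
    """
    model = PerceptualPredictor(feat_dim=features.size(1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    pred, state, errors, boundaries = None, None, [], []

    for t in range(features.size(0)):
        feat = features[t].unsqueeze(0)
        if pred is not None:
            # Perceptual prediction error between predicted and observed features.
            err = F.mse_loss(pred, feat)
            errors.append(err.item())
            # A spike relative to the running mean error marks an event boundary.
            if err.item() > threshold * (sum(errors) / len(errors)):
                boundaries.append(t)
            # Single-pass online update of the predictor.
            opt.zero_grad()
            err.backward()
            opt.step()
        pred, (h, c) = model(feat, state)
        state = (h.detach(), c.detach())  # truncate backprop to one step
    return boundaries

In this reading, the only learning signal comes from the video itself: how well the next frame's features were predicted.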

Overall Approach


Visualization of the Proposed Approach

Visualization of the approach on an example video. The input frame is shown at the top, the gradient of the prediction error in the middle, and a key frame from the segmented event at the bottom.
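
The middle row of this visualization can be reproduced, at least schematically, with a saliency-style computation: backpropagate the prediction error to the input frame and display the per-pixel gradient magnitude. The sketch below assumes a frozen VGG-16 feature extractor and a predicted feature vector of matching shape; both are illustrative choices rather than the paper's exact setup.

# Sketch of the error-gradient visualization: backpropagate the perceptual
# prediction error to the input frame and keep the per-pixel gradient
# magnitude as a heat map. Encoder and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

encoder = models.vgg16(pretrained=True).features.eval()  # frozen feature extractor
for p in encoder.parameters():
    p.requires_grad_(False)

def error_gradient_map(frame, predicted_feat):
    """frame: (1, 3, H, W) image tensor; predicted_feat: features the
    predictor produced for this frame, flattened to match the encoder output."""
    frame = frame.clone().requires_grad_(True)
    feat = encoder(frame).flatten(1)
    err = F.mse_loss(feat, predicted_feat.detach())
    err.backward()
    # Max over color channels gives a single-channel saliency map over the frame.
    return frame.grad.abs().max(dim=1)[0]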

Code, Paper and Extras

  • Find the training and evaluation code on GitHub.
  • Find the paper here.

BibTeX

@InProceedings{Aakur_2019_CVPR,
author = {Aakur, Sathyanarayanan N. and Sarkar, Sudeep},
title = {A Perceptual Prediction Framework for Self Supervised Event Segmentation},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}

Example Results

Example predictions from the model.