Activities as time series of human postures

Title: Activities as time series of human postures
Publication Type: Conference Paper
Year of Publication: 2010
Authors: Brendel, W., and S. Todorovic
Conference Name: Proceedings of the 11th European Conference on Computer Vision: Part II
Pagination: 721–734
Date Published: 09/2010
Publisher: Springer-Verlag
Conference Location: Hersonissos, Greece
ISBN Number: 3-642-15551-0, 978-3-642-15551-2
Abstract

This paper presents an exemplar-based approach to detecting and localizing human actions, such as running, cycling, and swinging, in realistic videos with dynamic backgrounds. We show that such activities can be compactly represented as time series of a few snapshots of human-body parts in their most discriminative postures, relative to other activity classes. This enables our approach to efficiently store multiple diverse exemplars per activity class, and to quickly retrieve exemplars that best match the query by aligning their short time-series representations. Given a set of example videos of all activity classes, we extract multiscale regions from all their frames, and then learn a sparse dictionary of the most discriminative regions. The Viterbi algorithm is then used to track detections of the learned codewords across the frames of each video, resulting in their compact time-series representations. Dictionary learning is cast within the large-margin framework, wherein we study the effects of l1 and l2 regularization on the sparseness of the resulting dictionaries. Our experiments demonstrate the robustness and scalability of our approach on challenging YouTube videos.
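To make the codeword-tracking step concrete, below is a minimal sketch (not the authors' implementation) of Viterbi decoding over per-frame codeword detection scores. The score matrix and the transition matrix are assumed inputs: scores would come from detectors for the learned dictionary codewords, and the transition term stands in for whatever smoothness prior the method uses between consecutive frames.

    import numpy as np

    def viterbi_codeword_track(scores, transition):
        """Trace the highest-scoring codeword sequence through a video.

        scores:     (T, K) array of per-frame log detection scores for K codewords
                    (hypothetical detector outputs).
        transition: (K, K) array of log transition scores between codewords in
                    consecutive frames (an assumed smoothness prior).
        Returns an array of T codeword indices, one per frame.
        """
        T, K = scores.shape
        best = np.zeros((T, K))              # best cumulative score ending in codeword k at frame t
        back = np.zeros((T, K), dtype=int)   # backpointers for path recovery

        best[0] = scores[0]
        for t in range(1, T):
            # cand[i, j] = score of being in codeword i at t-1 and moving to j at t
            cand = best[t - 1][:, None] + transition
            back[t] = cand.argmax(axis=0)
            best[t] = cand.max(axis=0) + scores[t]

        # backtrack the optimal path from the best final state
        path = np.empty(T, dtype=int)
        path[-1] = best[-1].argmax()
        for t in range(T - 2, -1, -1):
            path[t] = back[t + 1, path[t + 1]]
        return path

Collapsing consecutive repeats in the returned path would give a short sequence of posture codewords per video, which is the kind of compact time-series representation the abstract describes.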

URL: http://dl.acm.org/citation.cfm?id=1888028.1888083