Note: This is the final schedule
Program
08:30-08:40 Welcome and opening
08:40-09:10 Invited Talk: Category-agnostic and Category-specific approaches to video segmentation
Jitendra Malik (University of California - Berkeley)
Improved Image Boundaries for Better Video Segmentation
Anna Khoreva, Rodrigo Benenson, Fabio Galasso, Matthias Hein, Bernt Schiele (MPI Informatics, OSRAM, University of Saarland)
Point-wise mutual information-based video segmentation with high temporal consistency
Margret Keuper, Thomas Brox (University of Freiburg)
09:35-10:05 Invited Talk: Towards semantic optical flow: Jointly estimating image motion, object segmentation, and scene structure
Michael Black (MPI Intelligent Systems)
Causal Motion Segmentation in Moving Camera Videos
Pia Bideau, Erik Learned-Miller (University of Massachusetts, Amherst)
Can Ground Truth Label Propagation from Video Help Semantic Segmentation?
Siva Karthik Mustikovela, Michael Ying Yang, Carsten Rother (Technical University Dresden, University of Twente)
10:30-11:05 Coffee Break
11:05-11:20 Breaking Talk:
FlowNet 2.0: Learning Fast and Accurate Optical Flow Estimation
Thomas Brox (University of Freiburg)
A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation
Federico Perazzi, Jordi Pont-Tuset, Brian McWilliams, Luc Van Gool, Markus Gross, Alexander Sorkine-Hornung (Walt Disney Research, ETH - Zurich)
3D Point Cloud Video Segmentation based on Interaction Analysis
Xiao Lin, Josep R. Casas, Montse Pardas (Technical University of Catalonia)
11:45-12:15 Invited Talk: Video Segmentation and Inference by Composition
Michal Irani (Weizmann Institute)
Deep learning based fence segmentation and removal from an image using a video sequence
Sankaraganesh Jonna, Krishna K. Nakka, and Rajiv R. Sahay (Indian Institute of Technology Kharagpur)
Clockwork Convnets for Video Semantic Segmentation
Evan Shelhamer, Kate Rakelly, Judy Hoffman, Trevor Darrell (University of California - Berkeley)
12:40-13:10 Invited Talk: Learning articulated object classes from video
(University of Edinburgh)
13:10-14:00 Panel Discussions
Paper submission deadline: August 6th, 2016 (extended to August 9th, 2016)
Acceptance notification: August 31st, 2016
Camera-ready Paper deadline: September 5th, 2016
Workshop date: October 10th, 2016
Authors should use the ECCV style files to prepare their submissions. We welcome submissions for:
Regular papers: no more than 14 pages plus references. Regular papers will be lightly reviewed by the workshop organizers and should include author names and affiliations (single-blind review).
For regular papers, there will be two options:
(1) The paper can be included in the conference proceedings with Springer. For this option, a camera-ready paper must be submitted by September 5th. Such papers count as a full publication and cannot subsequently be submitted to another venue.
(2) Authors can decide, before the conference proceedings deadline, not to include their paper in the official proceedings. Such papers remain unpublished work and can be submitted to other venues.
Submissions should be in PDF format and made by email to firstname.lastname@example.org.
Call for papers
Interest in the video segmentation problem has grown dramatically in recent years, resulting in a significant body of work along with advances in both methods and datasets. Video segmentation can be of crucial importance for building 3D object models from video, understanding dynamic scenes, robot-object interaction, and a number of high-level vision tasks.
This second workshop focuses on how video segmentation can support learning. We plan to bring together a broad and representative group of video segmentation researchers. As in our first workshop at ECCV'14, we expect a lively and impactful discussion on the main theme and on a wide range of topics: Can video segmentation provide weak supervision over large datasets? What role can segmentation and perceptual organization play in learning for recognition? Should video segmentation be used to pre-process input for recognition and reconstruction models, or to provide feedback to them? What are the commonalities and differences among the many definitions of the video segmentation problem, and what terminology should we use to distinguish them? What is the best way to measure performance? What is the state of existing standard datasets, and how can they be improved?
The workshop will consist of a combination of invited talks and oral presentations of submitted works, as well as a
panel discussion. Paper submissions will be subject to review, based on the quality of the work and its relevance to the
workshop topic. Authors of accepted papers will have the option to submit a full paper for inclusion in the workshop
proceedings if they choose.
Topics of interest include:
- Video segmentation in support of learning, incl. weak supervision over large datasets
- Semantic video segmentation and semantic labels
- The role of segmentation and perceptual organization in learning for recognition
- Video segmentation algorithms and their performance analysis, including novel optimization techniques
- Comparisons between image and video segmentation techniques
- Proposals of taxonomies and terminologies for video segmentation, including the discussion of annotations
- Novel datasets for performance evaluation and/or empirical analyses of existing methods
- Supervised and unsupervised methods, as well as techniques for interactive segmentation
- Motion segmentation
- Segmentation of videos with depth information, such as in RGB-D videos
- Other applications of video segmentation, incl. activity detection and recognition, activity and saliency maps,
compression, representation, intrinsic image decomposition and material segmentation