Video understanding is at the forefront of modern computer vision. The lay and technical communities alike are drowning in video---e.g., YouTube reports 72 hours of video uploaded each minute. Transforming these videos into a usable form, such as video-to-text, is paramount to making good use of this rich data. However, the computer vision community has struggled to find an appropriate representation on which to base video analysis methods. Most modern techniques rely on low-level features, which have little or no semantic interpretation and depend on large annotated data sets to perform well. In contrast, video segmentation as an early processing step offers a complementary, more semantically rich, low-level representation on which to base further processing. Yet the adoption of segmentation in video has lagged behind that in images, likely due to the lack of critical analysis of video segmentation methods and the absence of methods that perform well on long video streams. In this talk, I will present recent work in my group that addresses these limitations of video segmentation as an early step in video understanding. The first part of the talk will discuss an approximation framework for streaming hierarchical video segmentation, which bounds the memory required for processing to a small constant while retaining the high-quality performance of the full-video segmentation method. Second, I will discuss a new approach for flattening the video segmentation hierarchy based on the notion of uniform entropy, which makes use of the whole hierarchy for later processing while avoiding a representational explosion. Time permitting, I will discuss the results of human perception experiments in which participants watched hierarchical segmentation videos and tried to decipher their contents, to better understand how much information is retained at the various hierarchical levels.
· Cordelia Aitkin and John Wilder. “Information Gathering and Saccadic Decision Making”.
· Amr Bakry and Ahmed Elgammal. “New Framework for Lipreading and Speaker Identification”.
· Mark Dilsizian and Dimitris Metaxas. “Pose Reconstruction for Activity Recognition”.
· Joshua J. Dobias, Geetika Baghel, Daniel P. Moritz, Mark F. Theiler and Thomas V. Papathomas. “Estimating Depth Magnitude for Flat, Forced and Reverse Perspective Stimuli”.
· Tarek El-Gaaly, Haopeng Zhang, Ahmed Elgammal and Zhiguo Jiang. “Joint Object and Pose Recognition Using Homeomorphic Manifold Analysis”.
· Vicky Froyen, Jacob Feldman and Manish Singh. “Anomalous 3D Structure-from-Motion Arises from Accretion-Deletion and Figure-Ground Cues”.
· Edinah K. Gnang and Ahmed Elgammal. “On Representing Shapes”.
· Ruoyuan Gao, James MacGlashan, Monica Babes-Vroman, Kevin Winner, Marie desJardins, Michael Littman and Smaranda Muresan. “Learning to Follow Natural Language Instructions”.
· Nora Isacoff and Karen Froud. “An ERP Study of Taxonomic and Thematic Categorization in Preschoolers”.
· Marley D. Kass, Andrew H. Moberly and John P. McGann. “Olfactory Sensory Neuron Physiology and Exposure-Induced Plasticity are Altered in Olfactory Marker Protein Knockout Mice”.
· Parneet Kaur, Prateek Prasanna and Kristin Dana. “Computer Vision for Automated Bridge Deck Inspection”.
· Brian Keane, Jamie Joseph and Steven Silverstein. “Perceptual and Conceptual Disorganization in Schizophrenia: Two Sides of the Same Coin?”.
· Gaurav Kharkwal and Karin Stromswold. “Syntactic Position Effect on Nonword Processing”.
· Seha Kim, Manish Singh and Jacob Feldman. “T-Junction Interpretation and Propagation in Line Drawing”.
· Nicholas Kleene and Melchi Michel. “Estimating Transsaccadic Memory Capacity Using Visual Search”.
· Lara Martin and Matthew Stone. “Using Prosodic Cues to Improve Voice Recognition”.
· Brian McMahan and Matthew Stone. “Modeling Pragmatic Effects on Perceptual Content”.
· Jillian Nguyen, Rob Isenhower, Polina Yanovich, Jay Ravaliya, Thomas V. Papathomas and Elizabeth Torres. “Quantifying Changes in the Kinesthetic Percept Under a 3D Perspective Visual Illusion”.
· Peter Pantelis, Steven Cholewiak, Tim Gerstner, Gaurav Kharkwal, Kevin Sanik, Ari Weinstein, Chia-Chien Wu and Jacob Feldman. “Evolving Virtual Autonomous Agents for Experiments in Intentional Reasoning”.
· Kimele Persaud, Pernille Hemmer and Josue Reyes. “The Influence of Knowledge and Expectations for Colors on Episodic Memory”.
· Nick Ross, Elio Santos and Cordelia Aitkin. “Predictive Saccadic and Smooth Pursuit Eye Movements”.
· Babak Saleh, Ali Farhadi and Ahmed Elgammal. “Object-based Abnormality Detection in Images”.
· Kevin Sanik and Manish Singh. “Changes in Shape: How Well Do Line Drawings Depict Them?”.
· Daglar Tanrikulu, Vicky Froyen, Jacob Feldman and Manish Singh. “Geometric Figure-Ground Cues Override Standard Depth from Accretion-Deletion”.
· Ari Weinstein and Michael Littman. “Open-Loop Planning in Large-Scale Stochastic Domains”.
· John Wilder, Manish Singh, and Jacob Feldman. “Detecting Shapes in Noise: The Role of Contour-Based and Region-Based Representations”.
· Polina Yanovich, Rob Isenhower and Elizabeth Torres. “Real-time Adaptation of External Media and Sensory-Motor Control in Closed-Loop as Gateway into the Hidden Potential of the Non-Verbal Autistic Child”.
· Min Zhao, Andre G. Marquez, Pernille Hemmer and Eileen Kowler. “Inferring Strategies of Maze Navigation from Movements of Eye and Arm”.