University of Minnesota
Perceiving Object Size
Whenever we navigate a scene, grasp an object, or even assess threat, accurate size information is critical to appropriate actions. Yet the computational and neural solutions to size extraction are in large part unknown. This claim might seem surprising. Shouldn't it be easy to extract information about size from the retinal image? There are two general problems. The first has been appreciated for centuries--a big far object and a small near object can project to the same size retinal image--thus, knowledge of depth is required to resolve ambiguity about physical size. But extracting depth information from scenes is complex, involving the integration of multiple sources of information. A second problem is determining the 2D extent of the retinal image of an object. This requires estimating an object's boundaries--a computational problem requiring grouping and selection of features, and whose difficulty was not fully appreciated until the advent of computer vision. These two problems suggest that one might see evidence of informational coupling between lower-level cortical brain areas representing 2D retinotopic spatial information and higher-level regions associated with scene context and depth. I'll describe neuroimaging results showing how the 2D spatial extent of activity in human primary visual cortex (V1) is modulated by 3D depth information from a scene.
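The size-distance ambiguity described above can be made concrete with a small numeric sketch. The geometry is standard (visual angle of an object of size s at distance d is 2*arctan(s/2d)); the `visual_angle` helper name is illustrative, not from the talk:

```python
import math

def visual_angle(object_size_m, distance_m):
    """Visual angle (degrees) subtended at the eye by an object of
    a given physical size viewed head-on at a given distance."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

# A 1 m object at 10 m and a 2 m object at 20 m subtend the same
# visual angle, so they project to retinal images of the same size.
# Without depth information, physical size is therefore ambiguous.
near_small = visual_angle(1.0, 10.0)
far_big = visual_angle(2.0, 20.0)
print(near_small, far_big)  # identical angles
```

Any pair of objects whose size-to-distance ratio is equal produces the same retinal image size, which is why the visual system must recover depth to resolve physical size.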