Abstract
This paper introduces a new neuro-cognitive Visual Attention Model (VAM), a model of how visual attention controls segmentation, object recognition, and space-based motor action. VAM is concerned with two main functions of visual attention, namely “selection-for-object-recognition” and “selection-for-space-based-motor-action”. The attentional control processes that perform these two functions restructure the results of stimulus-driven, local perceptual grouping and segregation processes (the “visual chunks”) in such a way that one visual chunk is globally segmented and implemented as an “object token”. This attentional segmentation solves the “inter- and intra-object binding problem”. It can be controlled by higher-level visual modules of the what-pathway (e.g. V4/IT) and/or the where-pathway (e.g. PPC) that contain relatively invariant “type-level” information (e.g. an alphabet of shape primitives, colors with constancy, locations for space-based motor actions). What-based attentional control succeeds if there is only one object in the visual scene whose type-level features match the intended target object description. If this is not the case, where-based attention is required, which can serially scan one object location after another.