Animated agents for procedural training in virtual reality: Perception, cognition, and motor control

Abstract
This paper describes Steve, an animated agent that helps students learn to perform physical, procedural tasks. The student and Steve cohabit a three-dimensional, simulated mock-up of the student's work environment. Steve can demonstrate how to perform tasks and can also monitor students while they practice tasks, providing assistance when needed. This paper describes Steve's architecture in detail, including perception, cognition, and motor control. The perception module monitors the state of the virtual world, maintains a coherent representation of it, and provides this information to the cognition and motor control modules. The cognition module interprets its perceptual input, chooses appropriate goals, constructs and executes plans to achieve those goals, and sends out motor commands. The motor control module implements these motor commands, controlling Steve's voice, locomotion, gaze, and gestures, allowing Steve to manipulate objects in the virtual world.
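
The abstract describes a three-module pipeline in which perception maintains a world representation, cognition turns that representation into goals and motor commands, and motor control carries those commands out. The following minimal sketch illustrates that data flow only; it is not the paper's implementation, and every class and method name (Perception, Cognition, MotorControl, observe, decide, execute) is hypothetical.

```python
# Illustrative sketch of a perception -> cognition -> motor control loop.
# All names are assumptions for illustration, not Steve's actual code.

from dataclasses import dataclass, field


@dataclass
class WorldState:
    """Coherent snapshot of the simulated world, maintained by perception."""
    objects: dict = field(default_factory=dict)  # object name -> attributes


class Perception:
    def __init__(self):
        self.state = WorldState()

    def observe(self, event):
        """Fold a simulator event into the persistent world representation."""
        name, attributes = event
        self.state.objects.setdefault(name, {}).update(attributes)
        return self.state


class Cognition:
    def decide(self, state: WorldState):
        """Interpret perceptual input, choose goals, and emit motor commands."""
        commands = []
        for name, attrs in state.objects.items():
            if attrs.get("status") == "needs_attention":
                commands.append(("gaze", name))        # look at the object
                commands.append(("manipulate", name))  # then act on it
        return commands


class MotorControl:
    def execute(self, command):
        """Turn an abstract motor command into a concrete body action."""
        action, target = command
        print(f"agent performs {action} on {target}")


# One pass through the loop: perceive an event, decide, act.
perception, cognition, motor = Perception(), Cognition(), MotorControl()
state = perception.observe(("power_switch", {"status": "needs_attention"}))
for cmd in cognition.decide(state):
    motor.execute(cmd)
```

In this sketch the modules communicate only through the world-state snapshot and a list of abstract commands, mirroring the separation of perception, cognition, and motor control described above.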