Training Agents to Perform Sequential Behavior

Abstract
This article is concerned with training an agent to perform sequential behavior. In previous work, we applied reinforcement learning techniques to control a reactive agent. A purely reactive system is obviously limited in the kinds of interactions it can learn. In particular, it can learn what we call pseudosequences, that is, sequences of actions in which each action is selected on the basis of the current sensory stimuli. It cannot learn proper sequences, in which actions must also be selected on the basis of some internal state. Moreover, our research shows that effective learning of proper sequences is improved by letting the agent and the trainer communicate. First, we consider trainer-to-agent communication, introducing the concept of a reinforcement sensor, which lets the learning robot know explicitly whether the last reinforcement was a reward or a punishment. We also show how the use of this sensor causes error-recovery rules to emerge. Then we introduce agent-to-trainer communication, which is used to disambiguate ambiguous training situations, that is, situations in which observing the agent's behavior does not provide the trainer with enough information to decide whether the agent's move is right or wrong. We also show an alternative solution to the problem of ambiguous situations: learning to coordinate behavior in a simpler, unambiguous setting and then transferring what has been learned to a more complex situation. All the design choices we make are discussed and compared by means of experiments in a simulated world.
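The abstract does not include code, but the two central ideas (a reinforcement sensor fed back into the agent's observation, and an internal state that makes proper sequences learnable) can be illustrated with a minimal tabular Q-learning sketch. Everything below is an assumption for illustration: the class name SeqAgent, the parameters, and the specific rule for updating the internal memory are not from the paper.

```python
import random
from collections import defaultdict

class SeqAgent:
    """Sketch of a trainable agent whose state includes a reinforcement
    sensor and an internal memory, so it can learn proper sequences
    rather than only pseudosequences. Illustrative, not the paper's system."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, action)] -> estimated value
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.last_reinf = 0           # reinforcement sensor: +1 reward, -1 punishment
        self.memory = 0               # internal state distinguishing sequence steps

    def state(self, stimulus):
        # The effective state combines the external stimulus, the
        # reinforcement sensor, and the internal memory. With the stimulus
        # alone, only pseudosequences would be learnable.
        return (stimulus, self.last_reinf, self.memory)

    def act(self, stimulus):
        # Epsilon-greedy action selection over the augmented state.
        s = self.state(stimulus)
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[(s, a)])

    def learn(self, stimulus, action, reinf, next_stimulus):
        s = self.state(stimulus)
        # Update the reinforcement sensor before forming the next state.
        self.last_reinf = 1 if reinf > 0 else -1
        # Advance the internal memory on reward, reset it on punishment:
        # a crude (assumed) rule under which error-recovery behavior,
        # conditioned on the punishment signal, can emerge.
        self.memory = min(self.memory + 1, 3) if reinf > 0 else 0
        s2 = self.state(next_stimulus)
        best_next = max(self.q[(s2, a)] for a in range(self.n_actions))
        # Standard one-step Q-learning update.
        self.q[(s, action)] += self.alpha * (
            reinf + self.gamma * best_next - self.q[(s, action)]
        )
```

In this sketch the trainer's reward/punishment plays a double role, exactly as described in the abstract: it drives the Q-learning update, and its sign is re-read by the agent as a sensory input on the next step.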