Abstract
Several problems in the optimal control of dynamic systems are considered. When observed, the system is classified into one of a finite number of states and is controlled by making one of a finite number of decisions. The sequence of observed states is a stochastic process dependent upon the sequence of decisions, in that the decisions determine the probability laws that operate on the system. Costs are associated with the sequence of states and decisions. It is shown that, for the problems considered, the optimal rules for controlling the system belong to a subclass of all possible rules, and that, within this subclass, the optimal rules can be derived by solving linear programming problems.
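For orientation, a minimal sketch of the kind of linear program involved may be given under the assumption of an average-cost criterion; the symbols $x_{ia}$, $c_{ia}$, and $p_{ij}(a)$ are introduced here only for illustration. Let $x_{ia}$ denote the long-run joint frequency with which the system is in state $i$ and decision $a$ is made, $c_{ia}$ the expected one-stage cost of decision $a$ in state $i$, and $p_{ij}(a)$ the probability of a transition from state $i$ to state $j$ under decision $a$. A linear program of this type is

\[
\begin{aligned}
\text{minimize}\quad & \sum_{i}\sum_{a} c_{ia}\, x_{ia} \\
\text{subject to}\quad & \sum_{a} x_{ja} = \sum_{i}\sum_{a} p_{ij}(a)\, x_{ia} \quad \text{for every state } j, \\
& \sum_{i}\sum_{a} x_{ia} = 1, \qquad x_{ia} \ge 0 .
\end{aligned}
\]

From an optimal solution, a stationary rule is obtained by choosing decision $a$ in state $i$ with probability $x_{ia} / \sum_{a'} x_{ia'}$ whenever the denominator is positive.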