Abstract
The purpose of this paper is to describe in detail a computational procedure for solving optimal control problems called state increment dynamic programming. This procedure has computational requirements that are considerably less than those of conventional dynamic programming, but it retains the general applicability and other desirable features of the conventional method. In one example the reduction in high-speed memory requirement is from 10^6 to about 100 locations. Similar reductions in computing time can also be obtained in some cases. The new procedure, like the conventional method, is based on iterative application of Bellman's principle of optimality. It differs from the conventional method in the choice of the time interval over which a given control is applied. Instead of using a fixed interval, the new procedure determines the time interval as the minimum time required for at least one of the state variables to change by one increment. As a result of this choice of interval, the next state for any given control is known to lie within a small neighborhood of the point at which the control is applied. Using this result, it is possible to compute optimal control in units, called "blocks," that cover a relatively long time interval but a small distance along each state variable. By using only one or two high-speed memory locations per state in the block (an almost arbitrarily small number of locations), it is possible to compute optimal control throughout the block. Specialized computations near the boundaries of the block allow the optimal trajectories any desired degree of freedom in passing from block to block. This technique has been applied in a computer program that calculates minimum-fuel trajectories for supersonic (Mach 3) aircraft under a variety of conditions and constraints. The basic program can be used both for a detailed evaluation of an aircraft design and for real-time control of an aircraft.
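The interval-selection rule is the heart of the method: for dynamics xdot = f(x, u) and grid increments dx_i, the step is dt = min_i (dx_i / |f_i(x, u)|), so that exactly one state variable moves by one increment. The following is a minimal sketch of that rule only; the function name, the example dynamics, and the numerical values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def state_increment_dt(x, u, f, dx):
    """Time step of state increment dynamic programming:
    dt = min_i (dx_i / |f_i(x, u)|), the minimum time for at least
    one state variable to change by one grid increment.

    x  : current state vector
    u  : control held constant over the interval
    f  : system dynamics, xdot = f(x, u)
    dx : grid increment for each state variable
    """
    rates = np.abs(np.asarray(f(x, u), dtype=float))
    active = rates > 1e-12  # ignore components that are not moving
    if not np.any(active):
        raise ValueError("state is stationary under this control")
    return np.min(dx[active] / rates[active])

# Illustrative use (hypothetical system): a point mass with state
# (position, velocity), control = acceleration, increments (0.1, 0.05).
f = lambda x, u: np.array([x[1], u])
dt = state_increment_dt(np.array([0.0, 2.0]), 0.5, f,
                        np.array([0.1, 0.05]))
# dt = min(0.1 / 2.0, 0.05 / 0.5) = 0.05: position is the first
# state variable to cross one increment.
```

Because no component can move more than one increment in dt, the successor state x + dt * f(x, u) always lies in the immediate neighborhood of the current grid point; this locality is what allows the block-by-block computation described above to keep only a few memory locations per state in high-speed storage.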