Abstract
A penalty-type decision-theoretic approach to Nonlinear Programming Problems with stochastic constraints is introduced. The Stochastic Program SP is replaced by a Deterministic Program DP by adding to the objective function a term that penalizes solutions which are not “feasible in the mean.” The special feature of our approach is the choice of the penalty function PE, which is given in terms of the relative entropy functional and is accordingly called the entropic penalty. It is shown that PE has properties which make it well suited to treating stochastic programs. Some of these properties are derived via a dual representation of the entropic penalty, which also enables one to compute PE more easily, in particular when the constraints in SP are stochastically independent. The dual representation is also used to express the Deterministic Program DP as a saddle function problem. For problems in which the randomness occurs in the right-hand side of the constraints, it is shown that the dual problem of DP is equivalent to Expected Utility Maximization of the classical Lagrangian dual function of SP, with the utility being of the constant-risk-aversion type. Finally, mean-variance approximations of PE and the induced Approximating Deterministic Program are considered.