Sharp Bounds on the Value of Perfect Information

Abstract
We present sharp bounds on the value of perfect information for static and dynamic simple recourse stochastic programming problems. The bounds are sharper than the available bounds based on Jensen's inequality. The new bounds use recent extensions of Jensen's upper bound and of the Edmundson-Madansky lower bound on the expectation of a concave function of several random variables. Bounds are obtained for nonlinear return functions and for linear as well as strictly increasing concave utility functions, in both static and dynamic problems. When the random variables are jointly dependent, the Edmundson-Madansky-type bound must be replaced by a less sharp “feasible point” bound. Bounds that use constructs from mean-variance analysis are also presented. With independent random variables, the calculation of the bounds generally involves several simple univariate numerical integrations and the solution of several similar nonlinear programs. These bounds may be made as sharp as desired with increasing computational effort. The bounds are illustrated on a well-known problem from the literature and on a portfolio selection problem.
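To make the two classical bounds referenced above concrete, the following sketch (not from the paper; the function, interval, and distribution are illustrative choices) evaluates Jensen's upper bound and the Edmundson-Madansky lower bound for the expectation of a concave function of a single bounded random variable. For concave f on [a, b], Jensen gives E[f(X)] ≤ f(E[X]), while Edmundson-Madansky bounds the expectation from below by the chord through (a, f(a)) and (b, f(b)) evaluated at the mean.

```python
import math

def jensen_upper(f, mean):
    # Jensen's inequality: for concave f, E[f(X)] <= f(E[X]).
    return f(mean)

def edmundson_madansky_lower(f, a, b, mean):
    # Edmundson-Madansky chord bound: for concave f on [a, b],
    # E[f(X)] >= ((b - E[X]) f(a) + (E[X] - a) f(b)) / (b - a).
    return ((b - mean) * f(a) + (mean - a) * f(b)) / (b - a)

# Illustrative choice: f = sqrt (concave), X ~ Uniform(0, 1),
# so E[X] = 1/2 and the exact value is E[sqrt(X)] = 2/3.
f = math.sqrt
a, b, mean = 0.0, 1.0, 0.5
exact = 2.0 / 3.0

lo = edmundson_madansky_lower(f, a, b, mean)  # 0.5
hi = jensen_upper(f, mean)                    # ~0.7071
assert lo <= exact <= hi
```

Refining either bound by partitioning [a, b] and applying the same inequalities conditionally on each subinterval tightens the gap, which is the sense in which such bounds can be made as sharp as desired at increasing computational cost.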