Bounds on the Expectation of a Convex Function of a Random Variable: With Applications to Stochastic Programming

Abstract
This paper is concerned with the determination of tight lower and upper bounds on the expectation of a convex function of a random variable. The classic bounds are those of Jensen and Edmundson-Madansky, and they were recently generalized by Ben-Tal and Hochman. This paper indicates how still sharper bounds may be generated based on the simple idea of sequentially applying the classic bounds to smaller and smaller subintervals of the range of the random variable. The bounds are applicable in the multivariate case if the random variables are independent. In the dependent case, bounds based on the Edmundson-Madansky inequality are not available; however, bounds may be developed using the conditional form of Jensen's inequality. We give some examples to illustrate the geometrical interpretation and the calculations involved in the numerical determination of the new bounds. Special attention is given to the problem of maximizing a nonlinear program that has a stochastic objective function.
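The refinement idea described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's algorithm: it assumes a univariate X uniform on [a, b] and a convex f, applies Jensen's inequality (lower bound, f evaluated at the conditional mean) and the Edmundson-Madansky inequality (upper bound, linear interpolation between endpoint values) on each of n equal subintervals, and sums the probability-weighted pieces. The function name `refined_bounds` and the uniform-distribution choice are illustrative assumptions.

```python
import math

def refined_bounds(f, a, b, n):
    """Jensen lower / Edmundson-Madansky upper bounds on E[f(X)]
    for X uniform on [a, b], refined over n equal subintervals.
    (Illustrative sketch; the uniform law keeps the conditional
    means and probabilities trivial to compute.)"""
    lower = upper = 0.0
    h = (b - a) / n
    for i in range(n):
        lo, hi = a + i * h, a + (i + 1) * h
        p = h / (b - a)           # probability mass of the subinterval
        mu = (lo + hi) / 2        # conditional mean (uniform case)
        lower += p * f(mu)        # Jensen: f(E[X | subinterval])
        lam = (hi - mu) / (hi - lo)   # Edmundson-Madansky weights
        upper += p * (lam * f(lo) + (1 - lam) * f(hi))
    return lower, upper

if __name__ == "__main__":
    f = math.exp                  # convex test function
    exact = math.e - 1            # E[exp(X)], X uniform on [0, 1]
    for n in (1, 2, 8):
        lo, hi = refined_bounds(f, 0.0, 1.0, n)
        print(f"n={n}: {lo:.6f} <= {exact:.6f} <= {hi:.6f}")
```

With n = 1 these reduce to the classic Jensen and Edmundson-Madansky bounds; increasing n tightens both toward the true expectation, which is the sequential-refinement idea the paper develops.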