Abstract
The paradox of almost certain rejection of the null hypothesis in the chi-square test of fit when many observations are used was pointed out by Cochran [4] and largely removed by Lehmann and Hodges [7]. The same paradox arises in most tests of goodness of fit. In this paper the Kolmogorov-Smirnov tests are modified to remove this difficulty, and some properties of the modification are investigated; in particular, a rigorous method for choosing the sample size (Theorem 3.2 and its corollaries) is presented.

Given independent random variables $X_1, \cdots, X_n$ with common distribution function $F$, suppose we wish to determine whether or not $F$ belongs to some class $\mathscr{H}_0$. If we are interested only in whether $F$ is close, in some sense, to a distribution function in $\mathscr{H}_0$, we may enlarge $\mathscr{H}_0$ to a class $\mathscr{H}^\ast_0 \supset \mathscr{H}_0$ of distribution functions "close" to those in $\mathscr{H}_0$, and test the more reasonable hypothesis that $F \in \mathscr{H}^\ast_0$. In what follows we consider tests based on the uniform metric $d_1$, given by $d_1(F, H) = \sup_x |F(x) - H(x)|$, in the case where $\mathscr{H}_0$ consists of a single distribution function $F_0$.
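The idea can be illustrated with a short computational sketch. The code below computes the statistic $d_1(F_n, F_0)$, where $F_n$ is the empirical distribution function, and rejects the enlarged hypothesis only when the statistic exceeds a tolerance $\delta$ by more than a critical amount; the function names, the tolerance `delta`, and the critical constant `c` are hypothetical choices for illustration, not the paper's exact procedure.

```python
import math

def d1_statistic(sample, F0):
    """Empirical KS distance d1(F_n, F0) = sup_x |F_n(x) - F0(x)|.

    The supremum is attained at the jump points of the empirical
    distribution function F_n, so it suffices to check those points,
    comparing F0 against F_n just before and just after each jump."""
    x = sorted(sample)
    n = len(x)
    d = 0.0
    for i, xi in enumerate(x):
        fx = F0(xi)
        d = max(d, (i + 1) / n - fx, fx - i / n)
    return d

def modified_ks_test(sample, F0, delta, c):
    """Illustrative test of H0*: d1(F, F0) <= delta.

    Reject when d1(F_n, F0) > delta + c / sqrt(n); taking delta = 0
    recovers the form of the classical one-sample test."""
    n = len(sample)
    return d1_statistic(sample, F0) > delta + c / math.sqrt(n)
```

With `delta = 0` the rule rejects for any fixed $F \ne F_0$ once $n$ is large enough, which is the paradox; a positive `delta` leaves distributions within $d_1$-distance `delta` of $F_0$ inside the hypothesis, so a negligible discrepancy no longer forces rejection at large $n$.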