Abstract
Although the literature on contraceptive failure is vast and is expanding rapidly, our understanding of the relative efficacy of methods is quite limited because of defects in the research design and in the analytical tools used by investigators. Errors in the literature range from simple arithmetical mistakes to outright fraud. In many studies the proportion of the original sample lost to follow‐up is so large that the published results have little meaning. Investigators do not routinely use life table techniques to control for duration of exposure; many employ the Pearl index, which suffers from the same problem as does the crude death rate as a measure of mortality. Investigators routinely calculate ‘method’ failure rates by eliminating ‘user’ failures from the numerator (pregnancies) but fail to eliminate ‘imperfect’ use from the denominator (exposure); as a consequence, these ‘method’ rates are biased downward. This paper explores these and other common biases that snare investigators and establishes methodological guidelines for future research.
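The abstract's point about the Pearl index can be made concrete with a small numeric sketch. Under a constant-hazard (geometric) model, a homogeneous cohort yields a flat Pearl index at any follow-up length, but a heterogeneous cohort does not: failure-prone users conceive early and exit the denominator, so the index drifts downward as exposure lengthens. The failure probabilities and cohort mix below are hypothetical, chosen only to illustrate the duration-of-exposure problem the abstract describes.

```python
# Why the Pearl index depends on duration of exposure.
# All numbers (monthly failure probabilities, cohort mix) are hypothetical.

def pearl_index(months, groups):
    """Expected Pearl index (pregnancies per 100 woman-years) for a cohort
    followed for `months`, where `groups` is a list of (share, monthly_prob)
    pairs under a constant-hazard (geometric) model."""
    pregnancies = exposure = 0.0
    for share, p in groups:
        survive = (1.0 - p) ** months            # fraction never pregnant
        pregnancies += share * (1.0 - survive)   # expected pregnancies per woman
        exposure += share * (1.0 - survive) / p  # expected woman-months at risk
    return 1200.0 * pregnancies / exposure       # convert to per 100 woman-years

# Homogeneous cohort: the index is flat (1200 * p) at every duration.
homog = [(1.0, 0.02)]

# Heterogeneous cohort: the index falls as follow-up lengthens, because
# the most failure-prone users become pregnant early and stop contributing
# exposure -- the same selection effect that makes the crude death rate a
# poor summary of mortality.
mixed = [(0.5, 0.05), (0.5, 0.005)]

for t in (6, 12, 24):
    print(t, round(pearl_index(t, homog), 2), round(pearl_index(t, mixed), 2))
```

A life-table approach avoids this by estimating duration-specific failure probabilities rather than a single pooled ratio.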
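The numerator/denominator mismatch in conventional 'method' rates can likewise be sketched in a few lines. If user failures are dropped from the pregnancies counted but imperfect-use months are left in the exposure counted, the resulting rate is mechanically too low. All counts below (pregnancies, woman-months, share of perfect use) are hypothetical, for illustration only.

```python
# The downward bias in conventional 'method' failure rates.
# All counts are hypothetical, for illustration only.

def method_rates(method_pregs, total_months, perfect_months):
    """Return (biased, corrected) 'method' failure rates per 100 woman-years.

    biased:    method-failure pregnancies over ALL exposure, including
               imperfect-use months (the common practice the abstract faults).
    corrected: method-failure pregnancies over perfect-use exposure only,
               so numerator and denominator refer to the same experience.
    """
    biased = 1200.0 * method_pregs / total_months
    corrected = 1200.0 * method_pregs / perfect_months
    return biased, corrected

biased, corrected = method_rates(
    method_pregs=2,       # pregnancies during perfect use
    total_months=1200,    # all woman-months of exposure
    perfect_months=840,   # woman-months of perfect use (70% of the total)
)
print(biased, corrected)  # the conventional rate understates the method rate
```

Because imperfect-use months inflate the denominator but contribute nothing to the numerator, the conventional rate is always at or below the corrected one.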