A proposal for interpreting and reporting negative studies
- 1 May 1986
- journal article
- clinical trial
- Published by Wiley in Statistics in Medicine
- Vol. 5 (3), 203-209
- https://doi.org/10.1002/sim.4780050302
Abstract
An issue of continuing interest is the interpretation and reporting of ‘negative’ studies, namely studies that do not find statistically significant differences. The most common approach is the design-power method, which determines, irrespective of the observed difference, what differences the study could have been expected to detect. We propose an alternative approach, the application of equivalence testing methods, where we define equivalence to mean that the actual difference lies within some specified limits. This approach, in contrast to the design-power approach, provides a way of quantifying (with p-values) what was actually determined from the study, instead of saying what the study may or may not have accomplished with some degree of certainty (power). For example, a possible outcome of the equivalence testing approach is the conclusion at the 5 per cent level that two means (or proportions) do not differ by more than some specified amount. The equivalence testing approach applies to any study design. We illustrate the method with a cancer clinical trial and an epidemiologic case-control study. In addition, for those studies in which one cannot specify limits a priori, we propose the use of equivalence curves to summarize and present the study results.
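The kind of conclusion the abstract describes can be illustrated with a small sketch. The Python code below is not the paper's exact procedure; it is a minimal two-one-sided-tests style illustration under an assumed two-sample, pooled-variance t setting, with a hypothetical helper `equivalence_p_value` and simulated data. Sweeping the limit delta over a range of values, as in the loop at the end, traces out an equivalence curve of the sort the authors propose for studies where no limit can be specified a priori.

```python
import numpy as np
from scipy import stats

def equivalence_p_value(x, y, delta):
    """p-value for H0: |mean(x) - mean(y)| >= delta
    vs. H1: |mean(x) - mean(y)| < delta, via two one-sided t tests.
    The combined p-value is the larger of the two one-sided p-values."""
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    # pooled-variance standard error of the difference in means
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
    df = nx + ny - 2
    p_lower = stats.t.sf((diff + delta) / se, df)   # tests diff > -delta
    p_upper = stats.t.cdf((diff - delta) / se, df)  # tests diff < +delta
    return max(p_lower, p_upper)

# Tracing the p-value over a range of limits gives an "equivalence curve":
# the smallest delta at which p < 0.05 is the difference the data rule out
# at the 5 per cent level.
rng = np.random.default_rng(0)
x = rng.normal(10.0, 2.0, size=40)  # simulated group 1
y = rng.normal(10.3, 2.0, size=40)  # simulated group 2
for delta in (0.5, 1.0, 1.5, 2.0):
    print(f"delta = {delta:.1f}: p = {equivalence_p_value(x, y, delta):.3f}")
```

A p-value below 0.05 at a given delta supports the conclusion that the two means do not differ by more than that amount, which is the positive statement about a ‘negative’ study that the design-power approach cannot provide.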