Abstract
Scholars in the last half of the 20th century forged our modern commitment to evidence in evaluating clinical practices. They were courageous people, iconoclasts for their time, insisting that the scientific method was a necessary and plausible tool for judging the value of what we did for and to patients. Scientific evaluation of clinical practice was necessary, they argued, because unguided human observers are frail meters of truth—too prone to see what they expect to see, too likely to confuse effort with results or to attribute outcomes to visible causes rather than hidden ones, too trusting in small numbers and local opinion. Only formal scientific designs and strong statistical methods, they claimed, can protect the human mind from its own biases and adjust for hidden, uncontrolled influences, sorting signal from noise. Scientific evaluation of practice is plausible, they argued, because the hypothetico-deductive method and proper statistical theory can be applied, with only modest adjustments, to the world of clinical process, just as they can be in a laboratory. And they taught us how to do that.