Signal versus noise in the evidence base for medicine: an alternative to hierarchies of evidence?

Abstract
Clinical practice frequently generates questions that are not easily answered by randomized trials. Study designs ranked as 'weaker' on conventional hierarchies of evidence are often more feasible, and much research is not well designed; yet we still need to make the best use of the available evidence. A systematic review must therefore address the danger of underestimating the available evidence if it includes only literature of a certain methodological quality, which risks missing or distorting the true message that the review is trying to identify. We propose a classification of research which does not reject studies on the basis of design alone, but recognizes the importance of assessing the message or 'signal' within each piece of research. It explicitly introduces judgement into the appraisal and synthesis of evidence, and affords more flexibility in attaching weight to evidence that might otherwise be lost. It includes an assessment of methodological quality, but balances this against the weight of the study's message rather than rejecting studies that fall below a certain quality threshold. Fundamentally flawed research will still be rejected, but more often papers can be used, with the importance attached to their signal tempered by the amount of 'noise' around that signal. The balance of these two elements may be termed the 'signal-to-noise ratio'.