The Effect of Panel Membership and Feedback on Ratings in a Two-Round Delphi Survey

Abstract
Background. Past observational studies of the RAND/UCLA Appropriateness Method have shown that the composition of panels affects the ratings obtained. Panels of mixed physician specialties make different judgments from single-specialty panels, and physicians who use a procedure are likely to rate it more highly than those who do not.

Objectives. To determine the effect of including both physicians and health care managers within a panel designed to assess quality indicators for primary care, and to test the effect of different types of feedback within the panel process.

Methods. In a two-round postal Delphi survey, health care managers and family physicians rated 240 potential indicators of the quality of primary care in the United Kingdom to determine their face validity. After round one, equal numbers of managers and physicians were randomly allocated to receive either collective (whole-sample) or group-only (own professional group only) feedback, creating four subgroups: two single-specialty panels and two mixed panels.

Results. Overall, managers rated the indicators significantly higher than physicians did. Second-round scores were moderated by the type of feedback received: participants given collective feedback were influenced by the ratings of the other professional group.

Conclusions. This study provides further experimental evidence that consensus panel judgments are influenced both by panel composition and by the type of feedback given to participants during the panel process. Careful attention must be paid to the methods used to conduct consensus panel studies, and those methods need to be described in detail when such studies are reported.