A set of 15 self-administered case histories was developed, each consisting of a short case followed by a standard format on which desired tests were checked. After pilot testing the case histories with a group of doctors, the authors selected the ten cases with the highest item-total correlations that also provided a broad clinical spectrum. In a different group of 19 doctors, test-ordering on the questionnaire was compared with actual test-ordering in clinical practice. Questionnaire test-ordering did not reflect practice behavior; in fact, the relationship tended to be inverse (r = -0.43; P < 0.10). Adjusting for case-mix variation by including only those practice cases with diagnoses similar to the questionnaire cases did not improve the questionnaire's performance (r = -0.50; P < 0.05). These findings suggest that test-ordering on case-history questionnaires may not reflect actual practice behavior. Conclusions about test-ordering behavior, and management strategies to alter it, should not be based on results from questionnaires that have not been validated against actual practice.