Abstract
Three verbal item types employed in standardized aptitude tests were administered in four formats—a conventional multiple-choice format and three formats requiring the examinee to produce rather than simply recognize correct answers. For two item types—Sentence Completion and Antonyms—the response format made no difference in the pattern of correlations among the tests. Only for a multiple-answer open-ended Analogies test were any systematic differences found, and even the interpretation of these is uncertain, since they may result from the speededness of the test rather than from its response requirements. In contrast to several kinds of problem-solving tasks that have been studied, discrete verbal item types appear to measure essentially the same abilities regardless of the format in which the test is administered.