The Forgotten Dimension: The Information Content of Objective Questions
Alan Dugdale
Griffith University, Gold Coast, Australia.
DOI: 10.4236/ce.2015.616178


Objective examination questions are used for assessment in many disciplines and at many levels. They allow extensive coverage, reliability, re-usability and administrative ease, but the amount of information they gather about student knowledge is not widely known. In this paper, I show that small changes in question format make large differences to the information gained, and therefore to the efficiency of the assessment process. The basic objective question (BOQ) can be extended beyond the true/false response to test probabilities in complex situations. When objective questions are used to assess students’ abilities, the information gained should be sufficient in quantity and coverage, and at the appropriate Depth of Knowledge (DOK), to give a robust and valid assessment. Objective questions take time and expertise to set and validate, so we should seek methods that meet these criteria with the least administrative load. The basic objective question is a statement followed by two or more responses; one response must be marked as correct. The Type 1 MCQ (Multiple Choice Question), multiple true/false question (MTFQ), extended matched question (EMQ) and others are all variants of the BOQ, but they differ widely in the amount of information they gain. For a given input of statements and responses, the Type 1 MCQ delivers the least information about the student’s knowledge, the MTFQ and the BOQ deliver almost four times as much, and the EMQ about six times as much. The expected level of student knowledge also affects the information gained. If questions are set so that most (say 90%) of students give the correct answers, then the information gained and the discriminatory power of the test are much less than when the test is set for lower expected levels. The format used for objective questions and the expected level of correct responses govern their efficiency in collecting pass/fail information. The quality of that information, such as its DOK, depends on the wording and the situations modeled in the questions, and is independent of their format.
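The information arithmetic behind these comparisons follows Shannon's entropy measure. A minimal sketch of the idea (the function name and the choice of n = 5 options are illustrative assumptions, not taken from the paper):

```python
from math import log2

def binary_entropy(p: float) -> float:
    """Shannon information (in bits) gained from one true/false response
    when the expected probability of a correct answer is p."""
    if p in (0.0, 1.0):
        return 0.0  # a fully predictable response carries no information
    return -p * log2(p) - (1 - p) * log2(1 - p)

# A Type 1 MCQ over n options yields at most one choice out of n,
# i.e. log2(n) bits; an MTFQ over the same n statements yields up to
# n independent true/false judgments, i.e. n bits at maximum uncertainty.
n = 5
mcq_bits = log2(n)                   # about 2.32 bits
mtfq_bits = n * binary_entropy(0.5)  # 5 bits when each item is maximally uncertain

# Setting a test so that ~90% of students answer an item correctly
# sharply reduces the information each response carries:
easy_bits = binary_entropy(0.9)      # about 0.47 bits, versus 1 bit at p = 0.5
```

The exact ratios between formats depend on how statements and responses are counted, so this sketch should be read only as an illustration of why format and expected difficulty both change the information yield.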

Share and Cite:

Dugdale, A. (2015) The Forgotten Dimension: The Information Content of Objective Questions. Creative Education, 6, 1760-1767. doi: 10.4236/ce.2015.616178.

Conflicts of Interest

The author declares no conflicts of interest.



Copyright © 2022 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.