10 rules for writing efficient multiple-choice questions

By Jean-Philippe Bradette
5 min

These 10 golden rules did not come about by accident. Although multiple-choice questions (MCQs) are the most commonly used question type, they are also the most often poorly written: research shows that 60% of multiple-choice questions contain flaws that compromise the accuracy of the assessment data. Ouch!

With the rise of artificial intelligence, we can only wonder about the future of automatically generated multiple-choice questions. We certainly gain in efficiency, but does the learner benefit?

Unfortunately, poor wording can distort the results of your assessments and fail to reflect your learners' true knowledge. To avoid wasting time and to obtain useful data, your MCQs need to be clearly aligned with the performance context, tied to learning objectives, and written according to sound guidelines.

To this end, I've put together 10 rules to improve the accuracy of your questions. They will ensure that your questions are correctly understood by your learners, and that the answer options are clear and free of clues. Before we go any further, a quick reminder about the structure of an MCQ.

[Figure: Anatomy of a question]

1. Align MCQs with learning objectives

Writing MCQs that map to your pedagogical objectives is essential for obtaining valid assessments. Your MCQs should reflect what participants should be capable of doing.

2. Use MCQs to assess comprehension and critical thinking skills

MCQs are handy for assessing memorization, but you need to go further by asking learners to interpret facts, evaluate situations, explain cause and effect, deduce and predict outcomes. Compose questions that ask people to reach decisions and solve problems as they would in real life.

3. Create realistic questions aligned with the working context

Learning is associative: whenever we learn something, we unconsciously associate that learning with the stimuli of the situation in which it took place. When the learning and performance contexts share similar stimuli, they can be described as "aligned". Questions that reproduce the stimuli of the job context make it more likely that learning will transfer to performance.

4. Avoid trying to trip up your learners

Your aim is to help your learners! Never use questions or answer options that could mislead a learner. If a question or its options can be interpreted in two different ways, or if the difference between the options is too subtle, find a way to rewrite it.

5. Distractors must always be plausible

Distractors are the most difficult part of an MCQ to write. A distractor is an answer option that is unquestionably false. Each distractor must be plausible to learners who have not yet acquired the knowledge the test is designed to measure; to those who have, distractors are clearly wrong answers.

6. Ensure that answer options are of the same length

This can be difficult to achieve, but test-wise learners can use answer length as a clue to the right answer: often, the longest answer is the correct one. When I can't get four answers of the same length, I use two short and two long.
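The length clue is easy to screen for mechanically. Below is a minimal sketch of such a check; `has_length_clue` is a hypothetical helper, not part of any authoring tool, and the strictly-longest rule is an illustrative assumption.

```python
def has_length_clue(options, answer_index):
    """Return True when the correct option is strictly the longest.

    Hypothetical helper for illustration: `options` is a list of answer
    strings and `answer_index` is the position of the correct one.
    """
    lengths = [len(option) for option in options]
    longest = max(lengths)
    # A length clue exists only when the key is longer than every distractor.
    return lengths[answer_index] == longest and lengths.count(longest) == 1

# The key is noticeably longer than the distractors, so it is flagged.
print(has_length_clue(
    ["Paris", "Lyon", "The capital and largest city of France", "Nice"], 2))
```

Running such a check over a question bank before publishing is a cheap way to catch the two-long/two-short balance this rule asks for.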

7. Avoid all-or-none answer choices

This is probably the most common mistake. "All of the above" or "none of the above" is an easy option to write when we run out of ideas for distractors, but most of the time it confuses the learner. It also encourages guesswork.

8. Avoid negations

Avoid using negations such as "don't". Learners often find it difficult to understand questions formulated in a negative way.

Example: "In this work situation, what feedback is not relevant?"

Instead, reverse the question and write it in a positive way: "What feedback is relevant in this work situation?"

Occasionally, the use of a negation is unavoidable. In that case, highlight it in bold, capitals, italics or underlining, so that the learner doesn't misread the intention of your question.

9. Avoid clues in your answer options

Avoid writing answer options that give the learner clues: for example, implausible distractors, words repeated from the question stem, or options of noticeably different lengths.

10. Encourage the use of three answer options

A substantial body of research supports the use of three answer choices (two distractors and one correct answer). Three-option questions are just as reliable and valid as four- or five-option questions, so they save designers a lot of time. Learners also answer three-option questions more quickly than four- or five-option ones.

Remember, there will always be mistakes in a first version. Flawed questions can cause learners to under-perform and score lower than they would have otherwise. You can easily validate the performance of your questions by having your colleagues review them, or by looking at the success rate. The B12 application displays the success rate for each question, enabling the designer to assess how well the question is understood and to calibrate its level of complexity.


The rule of thumb for success rates is as follows:

● A very low success rate per question means that the level of difficulty is too high.

● A very high success rate per question means that the level of difficulty is too low.

● The overall success rate can also reveal a knowledge gap that needs to be addressed.
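The rule of thumb above can be sketched as a simple triage function. The 0.30 and 0.90 cut-offs below are illustrative assumptions, not values taken from the B12 application; pick thresholds that suit your own audience.

```python
def review_question(success_rate, low=0.30, high=0.90):
    """Classify a question's difficulty from its success rate (0.0 to 1.0).

    The low/high cut-offs are illustrative assumptions for this sketch.
    """
    if success_rate < low:
        return "too difficult"  # very low success rate: difficulty too high
    if success_rate > high:
        return "too easy"       # very high success rate: difficulty too low
    return "acceptable"

# A question that only 15% of learners get right is flagged for review.
print(review_question(0.15))
```

Applied across a whole assessment, the same function also surfaces the overall pattern: many "too difficult" flags on one topic point to a knowledge gap rather than to bad questions.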

In short, it is easy to create a question, but it is not easy to create a good one. This is even truer in a world where content generation is easier than ever. We need to recognize and, above all, create quality items, and these 10 writing rules are a great place to start.


References

Abedi, J. (2006). Language issues in item development. In S. M. Downing and T. M. Haladyna (Eds.), Handbook of Test Development. Routledge.

Downing, S. M. (2005). The effects of violating standard item writing principles on tests and students: The consequences of using flawed test items on achievement examinations in medical education. Advances in Health Sciences Education, 10(2), 133-143.

Haladyna, T. M. (2004). Developing and validating multiple-choice test items (3rd ed.). Lawrence Erlbaum Associates Publishers.

Marsh, E. J. & Cantor, A. D. (2014). Learning from the test: Dos and don’ts for using multiple-choice tests. In McDaniel, M. A., Frey, R. F., Fitzpatrick, S. M., & Roediger, H. L. (Eds.), Integrating Cognitive Science with Innovative Teaching in STEM Disciplines. Washington University, Saint Louis, Missouri.

Nedeau-Cayo, R., Laughlin, D., Rus, L., & Hall, J. (2013). Assessment of item-writing flaws in multiple-choice questions. Journal for Nurses in Professional Development, 29(2), 52-57.

Rodriguez, M. C. (2005). Three options are optimal for multiple-choice items: A meta-analysis of 80 years of research. Educational Measurement: Issues and Practice, 24, 3-13. https://doi.org/10.1111/j.1745-3992.2005.00006.x

Join the Apprentx community!