
Lessons from Writing Multiple-Choice Test InfoSec Questions

John G. Laskey
April 15, 2016

Writing multiple-choice test questions for InfoSec certification exams is challenging. By multiple-choice, I mean those questions that ask students to identify the only correct answer from a list of four, sometimes more, options.

I came to this work through my background in InfoSec, but I soon found that my acquired knowledge alone was insufficient to write effective questions. Though the work has not changed my core beliefs, it has certainly prompted me to seek out better justifications for them and has underlined the importance of linking opinion to referenced fact. Crafting InfoSec test questions has helped me sharpen my understanding of InfoSec by requiring me to thoroughly research every right (and wrong) choice.

The tester tested

In creating multiple-choice test questions, the question-setter is very much tested themselves.

This should reassure anyone who has sat an InfoSec multiple-choice test. You will probably not have stopped to think about the trials and tribulations of the anonymous folk who draft those fiendish-looking choices, upon which your future career could depend. But considering how these sorts of questions are created can help us all better understand the need for clarity and reliable referencing when called on to explain the complex, and sometimes even basic, principles of InfoSec.

No tricks

Ambiguity is, as a strict rule, never allowed in a multiple-choice question. Nor are questions that could be seen as setting out to trick or deceive the student. Here's an example from the non-InfoSec world: which month has 28 days? You may well answer February, but the correct answer, once you've had enough time to really think it through, is (of course) that they all do. A question could also rightly be seen as unfair to students if it contains complex expressions or unfamiliar terms that were not used or explained in the supporting syllabus.

Straight to the point

There must always be a point to a question. It can never be drafted as if the exam were a general-knowledge InfoSec quiz: instead, each question must have a solid grounding in the course syllabus covered by the exam. Even a general question about InfoSec, one you might expect everyone taking the exam to know, would be excluded if it did not directly reference the learning material.

Making both 'right' and 'wrong' credible

I've written elsewhere about Generally Accepted System Security Principles (GASSP), which (as all InfoSec practitioners should note) can prove very elusive when challenged. This can be awkward when the challenger is a non-InfoSec person who wants absolute answers. As InfoSec professionals, we may hold as self-evident a lot of first principles that work well enough when we apply them, for instance in the definition and management of risk. However, all definitions are subject to change, to interpretation by multiple schools of thought and, in some cases, to ongoing debate. Consider the differing views on the definitions 'Information Security', 'computer security' and 'cybersecurity' (or is it 'Cyber Security'?) and how they generate long, and quite pointless, arguments. Yet the requirement to justify a right (or wrong) answer in a multiple-choice question requires its writer to seek out universally accepted principles, in spite of this naturally changing landscape. The effort required to construct a good question can therefore serve as a template for checking all of our InfoSec opinions.

Traps are banned

When attending in-house reviews of exam questions, I've found my fellow question-setters very good at challenging material that could in any way deceive students into making a wrong choice. They are also very keen on (and impressively good at) trimming questions down to a minimum number of words, so students are not penalized by having to puzzle over wordy or, worse, nonsensical constructions. This is a skill that requires a good command of language, something I've always felt is key when explaining complex terminology to those less familiar with InfoSec. Question-setters also have to develop a conscious guard against posing questions that might be fine within a formal presentation, perhaps to help develop an argument (e.g. "what is the point of InfoSec?"). We also have to adjust to concise phrasing that avoids ambiguity and minimizes the possibility of argument about the right (or indeed the wrong) choices. This all requires adjusted thinking, and it takes time to grasp.

The art of distraction

Plausible (but always wrong) choices in a multiple-choice question are called distractors. Though wrong, they have to be credible enough to appear likely to someone who has completed the syllabus, yet not so obviously false that a student could immediately rule them out. Also, distractors cannot be worded in such a way that they might be seen as tricking students into picking them. Nor can they contain any words that might hint at the right answer. More difficult still, as the pool of questions increases, is the need to ensure questions don't include any information that could help a student correctly answer another question in the same paper.

When writing questions, it is tempting to justify a distractor as wrong simply because there are no references that positively say it is wrong. But this is not enough: a distractor that might be right even in rare cases is not a good distractor, and it will be rejected in review. A fellow question-setter has termed this the "it depends" test, where a question-setter's apparently well-crafted right/wrong answers are seen by another reviewer as not being right (or wrong) in every case. Applying this test requires deep knowledge of a range of InfoSec issues.

Sometimes, making it clear to students that the answer sought is the most likely one (e.g. "which is the BEST type of encryption...") can solve the question-setter's problem. But the 'wrong' answers must always be credible. So the choices for the question "what is the best way to protect data from being intercepted?" cannot, for instance, include such 'wrong' answers as "don't transmit data" or "put it in a heavy locked box": when placed alongside the best correct answer (e.g. "encryption"), these are quite obviously wrong. That would enable the student to increase the chances of picking the right answer by applying logic rather than knowledge of the syllabus, and the credibility of the whole exam would be broken.

By now, you should be getting the idea that a question-setter's task is something of a challenge!

Conclusions

The exercise of researching good multiple-choice questions helps ensure our opinions are grounded in fact, while also turning up new facts that strengthen those opinions. Even if you never have the opportunity to craft multiple-choice questions yourself, knowing how they are made gives you some insight into the thinking behind them. I hope you will find, as I have, that however challenging they are, they will never be unfair, use ambiguous wording, include 'trick' questions or contain confusing language, and that the right answers will always be right and the wrong answers always wrong.

John G. Laskey

John Laskey is a US-based security consultant who previously worked in the British government, where he was responsible for securing systems and advising senior managers about major programs. In the US, John has taught the ISO 27001 standard and is now helping develop and market new InfoSec products and services. He is a member of ISSA (New England Chapter).