
Maritime Logistics Professional

Improving the Utility of Multiple Choice Questions in Maritime Training

Posted to Maritime Training Issues with Murray Goldberg on April 15, 2013

Multiple Choice Question (MCQ) tests are one of the oldest and most widely used assessment techniques in existence. Yet they are also one of the most highly maligned. However, written carefully, and used appropriately as one component of a multidimensional assessment program, MCQs can be a real asset to maritime assessment. This third and final article in the series provides some practical tips on how to write effective and useful MCQs.

Blog Notifications: For the latest maritime training articles, visit our company blog here. You can receive notifications of new articles on our company blog by following the blog.

Maritime Mentoring: International Maritime Mentoring Community - Find a Mentor, Be a Mentor

In the first two articles of this series (here and here) I began a discussion on the use of multiple choice questions (MCQs) in maritime assessment. MCQ tests are one of the oldest and most widely used assessment techniques in existence. Yet they are also one of the most highly maligned. Indeed, there are countless examples of poorly created and improperly used MCQs. Part of their “bad rap” comes from the fact that, in some ways, it is easier to get MCQs wrong than it is to get other assessment types wrong. However, MCQs are actually a very useful assessment technique, with benefits not found in other techniques. Therefore, written carefully and used appropriately as one component of a multidimensional assessment program, MCQs can be a real asset to maritime assessment.

My first article covered some of the strengths of MCQs. The second article in this series looked at their use as one component of a comprehensive assessment program and also provided some words of caution on cultural and gender differences in the use of MCQs. This third and final article in the series attempts to provide some practical tips on how to write effective and useful MCQs.

Rules for Writing “Good” Multiple Choice Questions

I have had the misfortune of seeing more poor implementations and uses of multiple choice questions than I have good ones. That is unfortunate, as I wrote above, because this tends to give all MCQ assessments a very bad reputation. This does not have to be the case, and in fact should not be the case because, used properly, MCQs have benefits that other assessment techniques do not possess. As I wrote in the previous articles in this series, each assessment technique has strengths and limitations. The fortunate part is that these strengths and limitations generally do not overlap very much. Thus by combining assessment techniques, a comprehensive assessment program can take advantage of the diversity of strengths and accommodate the limitations of individual techniques. If we leave MCQs out of the equation, we are missing their strengths. So - how do we write and use them well?

Rule Number 0: Never use MCQs as the sole assessment technique

This point was already covered in depth in the previous article, so I will not cover it here. However, this is such an important point that I could not write an article on MCQs without making this statement. Therefore, if you have not read the previous article, please do so now to understand this important point.

Rule Number 1: Randomize and supervise

I was recently in a conversation with a training manager at a medium-sized OSV operator who was describing a problem they had with training. The issue was that their employees were using a CBT (computer-based training) program on board, but did not seem to be learning the knowledge required. On examination, what was happening was that during a “training session” employees would sit down at the computer, call up the CBT, proceed straight to the MCQ assessment, and then pull out a piece of paper with a series of A’s, B’s, C’s and D’s on it. Then, as you might guess, they would answer the questions presented in the CBT using the answers on their piece of paper.

The cause was that the CBT had been in use on board for a number of years and presented the exact same series of questions each time an assessment was delivered. There are two problems here.

First, the trainees were (in the past) allowed to complete the assessment without a trainer or supervisor present. This gave them the opportunity to copy down the questions and answers, determine the correct choices, and then share them with the rest of the crew. Trainees should generally not be allowed to complete or review their assessment without some authority present. This prevents them from making a copy of the questions for future use. No matter what kind of assessment technique you use, supervision is always a requirement.

The second problem is that unless trainees are taking an assessment concurrently in a group (and cannot talk with one another), the same assessment should never be used more than once. This way, it is impossible for crew members to share correct answers with one another since future trainees will be subject to different assessments. Fortunately, most modern learning management systems provide randomized examinations. As an example, the LMS made by the company I work for works as follows. There is a database of MCQs, with each question tagged by category. So, there may be (for example) 50 questions in each of 10 categories covering the knowledge underlying 10 basic competencies required by trainees. In addition, there may be another 50 vessel-specific questions in categories for each of the 20 vessels in the fleet. Now, each time a trainee sits down to write an exam, some number of questions (let’s say 5) are randomly selected from each of the 10 competency categories, and another 10 are selected from the vessel-specific category currently being tested. In this way, each trainee receives a 60-question exam which is guaranteed to cover the knowledge from all 10 competencies as well as vessel-specific details. Some pairs of tests will contain overlapping questions, but the tests will all have a different mix of questions presented in a different order. This greatly reduces the possibility of answer sharing.
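
The selection scheme described above is straightforward to picture in code. Here is a minimal sketch of that kind of randomized exam assembly (this is not MarineLMS code; the category names, question IDs, and function are purely illustrative):

```python
import random

# Hypothetical question bank keyed by category; each entry is a list of
# question IDs standing in for full question records.
question_bank = {
    "navigation":   [f"nav-{i}" for i in range(50)],
    "firefighting": [f"fire-{i}" for i in range(50)],
    "vessel-abc":   [f"abc-{i}" for i in range(50)],   # vessel-specific category
}

def build_exam(bank, competency_categories, vessel_category,
               per_competency=5, vessel_count=10):
    """Draw a fresh exam: a few questions from every competency category,
    plus a larger block from the vessel-specific category, in random order."""
    exam = []
    for category in competency_categories:
        exam.extend(random.sample(bank[category], per_competency))
    exam.extend(random.sample(bank[vessel_category], vessel_count))
    random.shuffle(exam)  # a different presentation order for every sitting
    return exam

exam = build_exam(question_bank, ["navigation", "firefighting"], "vessel-abc")
print(len(exam), exam[:3])
```

Because each sitting draws a fresh sample from every category, two trainees may share some questions but will almost never see the same exam.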

For the best security, a policy of supervision and exam randomization should be adopted. This way each exam is different, and trainees are only able to view exam questions (either during the exam or afterward during review) in the presence of an authority.

In the image below, you see some of the many question categories in the BC Ferries question database for the various positions at the company. Each time an exam is given, questions are drawn randomly from the appropriate categories, thus ensuring equally difficult exams covering all the needed information - but never repeating the same exam twice.

Rule Number 2: Gather feedback, review, and then update. Repeat.

BC Ferries has approximately 15,000 questions in their LMS database. All of these questions were written by subject matter experts. Yet even so, when a new question is first used, there is a pretty good probability that it is not yet perfect. We have not kept good statistics on this, but my guess is that 1 in 10 questions needs some revision after it is first written. Once a question is accepted as “good”, it is a valuable asset and should be treated as such (see rule 1, above). But how do we get to the point where questions are “good” (well written, unambiguous, etc)?

At BC Ferries, we have found that the most effective way of doing this is to engage the entire BC Ferries community in vetting questions. In practice, this means that trainers and even trainees are encouraged to provide feedback on every question they encounter. To help them do so, simple mechanisms allow them to submit feedback and allow training managers to update the questions based on that feedback.

To this end, the LMS ensures that each exam contains a feedback icon next to every question in the exam. If the trainer or trainee feels that there is a problem with the question, they simply click the feedback icon (after the exam, not during), and indicate the problem. Their contact information and the question identifier are forwarded, along with the comment, to the training manager via e-mail. A link in the e-mail takes the manager directly to the editing interface for that question where, if necessary, the problem can be addressed.
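
As a rough illustration of that workflow (again hypothetical - the class, field names, and URL scheme are assumptions, not the actual LMS interface), the notification might be assembled along these lines:

```python
from dataclasses import dataclass

@dataclass
class QuestionFeedback:
    question_id: str      # identifier of the flagged question
    reporter: str         # name of the trainer or trainee
    reporter_email: str
    comment: str          # what they think is wrong with the question

def feedback_notification(fb: QuestionFeedback, lms_base_url: str) -> str:
    """Build the e-mail body sent to the training manager, including a link
    that opens the editing interface for the flagged question."""
    edit_link = f"{lms_base_url}/questions/{fb.question_id}/edit"
    return (
        f"Question {fb.question_id} was flagged by {fb.reporter} ({fb.reporter_email}).\n"
        f"Comment: {fb.comment}\n"
        f"Edit the question here: {edit_link}\n"
    )

print(feedback_notification(
    QuestionFeedback("nav-17", "A. Deckhand", "deckhand@example.com",
                     "Choices B and C both look correct."),
    "https://lms.example.com"))
```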

This simple technique has proven to be one of the most effective and valuable means of ensuring that all of the questions in the BCF question database are unambiguous, current, and test an important fact.

Rule Number 3: Test one item of knowledge, and decide what that item is before writing the question

This may seem like a strange bit of advice, but it is critical and often overlooked. When writing an MCQ, first decide what specific bit of knowledge that MCQ will test. Decide what common misunderstandings exist and use them as distractors (answer choices which are false). Most importantly, remember that the question is not there to test reading ability, the ability to decipher and avoid “tricks”, nor aptitude with the English language. It is there to test one bit of knowledge. If you write a question which requires several abilities or knowledge of several facts to answer correctly, then getting it wrong does not tell you where the gap in knowledge is, nor even if there IS a gap in knowledge. So before sitting down to write your question, decide specifically what single fact will be tested, and vet your question to ensure only that fact is tested.

Rule Number 4: Use simple, straightforward language in the question and all the answers (and hey - no tricks!)

This follows from rule number 3, but there are additional reasons for this rule. As follows from above, if you use complex language, you are testing the candidate’s ability to read and understand complex language. If you want to test this, then do so - but in its own question.

More importantly, by using anything other than very straightforward language, you are going to create a significant impediment to any candidate whose first language is not English. Many times, apparent cultural differences in exam performance can actually be attributed to language differences. These can be largely avoided by using the most direct and simple language to ask the question and present the answer choices.

Further to this point, avoid the common desire to create “trick” questions - ones whose answer choices can very easily be mistaken for correct or incorrect when they are not. The only time to use such a question is when being able to decipher subtle differences in correctness is important for safe or efficient performance. But otherwise, they unnecessarily convolute the question and, as above, test something other than the desired knowledge.

The bottom line: ask the question directly, simply, and unambiguously. List your answer choices with the same clarity.

Rule Number 5: Use a variety of questions of varying difficulty for the same topic

For any complex knowledge, it is a good idea to present a set of questions which each test that knowledge at different levels of depth. For example, some questions should test the knowledge at a high level, others at a more detailed level, and still others should test the knowledge at a level of high expertise. Doing so ensures that your exam results will provide not only a yes/no assessment of whether the knowledge is understood, but will also give you a good idea as to what depth the knowledge is understood (and thus the size of the “gap” in knowledge).

The alternative, which is often mistakenly used, is to rely only on questions which require full understanding to be answered correctly. Doing so yields bimodal exam distributions - that is, trainees tend to do either very well or very poorly on the exam. This will still answer the question of whether the candidate has the required knowledge, but will not tell you how “close” those who do not have it are to having it.

If you use an LMS which randomizes, be sure to create question categories for each level of question difficulty. This way your exams will always contain a balanced mix of easier and more difficult questions.
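
Continuing the earlier sketch (still hypothetical - the pool names and the mix are illustrative), one way to achieve this is to keep separate difficulty pools for each topic and draw a fixed mix from each:

```python
import random

# Hypothetical: one topic's questions split into difficulty pools.
stability_pools = {
    "easy":   [f"stab-easy-{i}" for i in range(20)],
    "medium": [f"stab-med-{i}" for i in range(20)],
    "hard":   [f"stab-hard-{i}" for i in range(10)],
}

def draw_balanced(pools, mix=(("easy", 2), ("medium", 2), ("hard", 1))):
    """Pick a fixed number of questions from each difficulty pool so every
    generated exam carries the same easy/medium/hard balance for the topic."""
    picked = []
    for difficulty, count in mix:
        picked.extend(random.sample(pools[difficulty], count))
    return picked

print(draw_balanced(stability_pools))
```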

Rule Number 6: Construct your answer choices carefully

Here is another oft-abused rule. There are several pieces of advice here.

First - some basics. Most experts agree that 3-5 answer choices are best. Fewer and it is easy to perform well on the exam by guessing. Too many and it unnecessarily complicates the question.

Second - never use silly answers as distractors (incorrect answer choices). There is no valid reason to do so other than humor. And remember that a nonsensical answer still takes up the trainee’s reading time during a time-limited assessment.

Third - distractors should always consist of common misunderstandings that you want to ensure are not mistaken for correct answers. There should be a reason for every distractor that is part of the question. For example, if you want to ensure that the trainee knows the overall length of the vessel within a certain tolerance, then the distractors should be plausible choices, but fall outside that tolerance. Likewise, if it is a common mistake of trainees to cite vessel length in incorrect units, it is reasonable to provide distractors with the correct number, but incorrect units (feet instead of meters). Test what they need to know, but only what they need to know.

And finally - be absolutely sure that all of the answer choices (the correct answer and all the distractors) are of roughly the same length, offer the same level of detail, and use the same language patterns. It is very common to see questions where the correct answer can be picked out without reading the question at all, simply because it is longer, or uses different phrasing, than the incorrect answers.
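
This last point is easy to check automatically. Below is a small, hypothetical lint pass (not a feature of any particular LMS) that flags questions whose correct answer is noticeably longer or shorter than its distractors:

```python
def flag_answer_length(question, tolerance=0.5):
    """Return a warning if the correct answer's length differs from the average
    distractor length by more than `tolerance` (as a fraction of that average)."""
    distractor_lengths = [len(d) for d in question["distractors"]]
    avg = sum(distractor_lengths) / len(distractor_lengths)
    correct_len = len(question["correct"])
    if abs(correct_len - avg) > tolerance * avg:
        return (f"Correct answer is {correct_len} characters vs. an average "
                f"distractor length of {avg:.0f} - consider rebalancing.")
    return None

question = {
    "stem": "What is the overall length of the vessel?",
    "correct": "160 meters, measured from the tip of the bow to the stern ramp",
    "distractors": ["140 meters", "160 feet", "180 meters"],
}
print(flag_answer_length(question))
```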

Conclusion

Writing good MCQ tests is not rocket science, but it does take a little time, knowledge, and feedback to get it right. Once you have a bank of carefully constructed, vetted and guarded questions, you’ll be able to use them long into the future - and can build on them incrementally. They are a valuable resource which should be carefully cultivated.

# # #

About The Author:

Murray Goldberg is the founder and President of Marine Learning Systems (www.marinels.com), the creator of MarineLMS - the learning management system designed specifically for maritime industry training. Murray began research in eLearning in 1995 as a faculty member of Computer Science at the University of British Columbia. He went on to create WebCT, the world’s first commercially successful LMS for higher education, serving 14 million students in 80 countries. Murray has won over a dozen university, national, and international awards for teaching excellence and his pioneering contributions to the field of educational technology. Now, with Marine Learning Systems, Murray hopes to play a part in advancing the art and science of learning in the maritime industry.

