
Understanding Testing in Maritime Training, and its Important Implications for How We Train (and Retrain)

Posted to Maritime Training Issues with Murray Goldberg on March 10, 2014

Most maritime trainers would argue that they understand how to assess candidates and the point of doing so. However, the truth is more subtle than most realize, and even small misconceptions can lead to very poor assessment and remedial training practices. Fortunately, it is not complicated - but it is important.





This article continues the discussion about the “point” of assessments (tests, exams, etc.) for the measurement of competency and knowledge in the maritime industry. Although most would argue that they already know the point, it is more subtle than most realize. And it turns out that even small misconceptions can lead to very poor assessment practices, and very poor remedial training for those who fail assessments. Yet few people have the required understanding. Fortunately, it is not complicated.

In the first article, we asked and answered the question, “What is the point of assessments?” In this article, we will provide a brief summary of the answer and then continue by discussing the implications for maritime training.

So, let’s get to it.

WHAT IS THE POINT OF TESTING? - A SUMMARY

A quick summary of the first article follows - though I encourage you to read that article if you have not already done so.

When asked why we assess, most will provide an answer along the following lines:

 “to see if the candidate knows the required knowledge or can perform the desired skill”.

This is a misleading answer, because it leads us to conclude that if someone passes a test, they are fully competent - possessing all the required knowledge and skills. This is a false conclusion - one which leads to poor assessment and remedial training practices.

The issue is that testing can never comprehensively assess all required knowledge or the ability to perform a skill under all expected conditions. It can test whether some knowledge is known, or if some skill can be performed under specific and limited test conditions, but it can never test the complete breadth of knowledge or competency required by a worker in the maritime industry - it would take a lifetime to conduct such a test. So, what does testing actually achieve?

Testing does not evaluate complete understanding. Instead, each test is an audit process of sorts: rather than evaluating all knowledge (or all skills under all circumstances), we pick and choose, sampling a subset of the knowledge and skills the trainee possesses. But if the test is constructed and delivered well, we actually learn more from it than whether the candidate is competent with respect to the items specifically covered by the test. We also gain (or don’t gain, depending on the outcome) a “reasonable assurance” about the trainee’s breadth of knowledge and skills for the items we have not tested explicitly. How can this be?

The assumption we are making is that people, in general, are uniformly competent. That is, if they know the answer to the set of questions on the test, then they will likely know the answers to other questions that were not on the test. This is a reasonable assumption, but only under the condition that they did not know what was going to be on the test, and therefore were incentivised to study all knowledge/skills equally. This is so important that if we cannot rely on it being true, testing becomes far less useful as a measure of knowledge or competence. As was said in the previous article:

“I have always believed that the real value of testing is the incentive it creates for the learner to learn as well as they can and as much as they can. If they want to pass the test that they know is coming, and they have no idea what parts of the ‘course’ are going to be on that test (i.e. what items will be “sampled”), then they have no choice but to study everything equally, as well as they can.”
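
To make the sampling logic concrete, here is a small simulation in Python - a sketch with purely hypothetical numbers, not drawn from any real assessment system. It compares a candidate who studied all 100 trained topics equally against one who was told which 20 topics to expect and crammed only those. A randomly sampled test estimates the uniform studier’s competence well, while the crammer’s average score across random tests reveals how thin their whole-bank competence really is - something a predictable test would hide entirely.

import random

# Hypothetical numbers throughout: 100 trained topics, 10 sampled per test.
BANK_SIZE = 100
TEST_SIZE = 10
TRIALS = 10_000

# Candidate A studied everything equally: 85% chance of passing any item.
uniform = [0.85] * BANK_SIZE
# Candidate B crammed 20 predicted topics and neglected the remaining 80.
crammed = [1.0] * 20 + [0.30] * (BANK_SIZE - 20)

def average_score(mastery):
    # Average score over many randomly sampled tests.
    total = 0.0
    for _ in range(TRIALS):
        sampled = random.sample(range(BANK_SIZE), TEST_SIZE)
        total += sum(1 for i in sampled if random.random() < mastery[i])
    return total / (TRIALS * TEST_SIZE)

print(f"Uniform studier on a random test: {average_score(uniform):.0%}")  # ~85%
print(f"Crammer on a random test:         {average_score(crammed):.0%}")  # ~44%
# Had the crammer known the 20 test items in advance, they would have
# scored 100% while being competent on less than half of the material.

The point of the sketch: the random sample is only a trustworthy proxy for overall competence when the candidate had no reason to study unevenly.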

So - with the knowledge that testing is an audit process, what are the implications?

THE IMPLICATIONS OF ASSESSMENTS BEING “AUDIT” PROCESSES

As indicated above, we want to be able to assume that a candidate is roughly equally competent when it comes to all knowledge or skills that were part of training, regardless of which subset we actually chose to test. All of the implications below flow from the need to rely on that assumption. Let’s look at some examples.

First, the obvious one: don’t tell them what is on the test

The main implication is to ensure that a candidate is never privy to the contents of the assessment ahead of time. In the previous article we spoke about the resulting need to randomize tests. But another implication comes in the form of the answer to the question asked by almost every candidate: “What is going to be on the exam?” As a university faculty member for 10 years, I long ago lost count of the number of times I was asked that question. When, as a trainer, you are asked that question, avoid the temptation to tell them what is going to be emphasized on the exam. It is natural to want to be helpful by answering, and in my experience, most trainers will answer the question. But don’t! It causes the trainee to skew their studying, leaving the other materials less studied - or not studied at all. Instead, the correct answer is: “you are responsible for all topics covered during training”. My students didn’t love it when I said that, but I was looking for well-prepared students, not love.
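
Randomizing tests is also easy to operationalize. Here is a minimal sketch, assuming a hypothetical question bank - the names are illustrative, not from any particular LMS - that draws a fresh random subset of items for each candidate, so that neither candidates nor trainers can predict a test’s contents.

import random

# Hypothetical bank of test items; in practice this would come from the LMS.
question_bank = [f"Question {i}" for i in range(1, 101)]

def build_test(bank, num_questions=10, seed=None):
    # Draw a fresh random subset of the bank for one candidate.
    rng = random.Random(seed)  # pass a seed only when a sitting must be reproducible
    return rng.sample(bank, num_questions)

for candidate in ("Candidate A", "Candidate B"):
    print(candidate, "->", build_test(question_bank, num_questions=3))

Because every sitting draws a different sample, the only winning study strategy is to prepare everything equally well.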

Avoid your “tried and true” assessment scenarios - shake things up!

Similarly, if part of your assessment practice includes competency checks of sorts (which, of course, it does), make sure you vary the scenarios you provide and the demonstrations of competency you require. It is human nature to settle on a few “favorite” questions or scenarios to present to your candidates and then use those “tried and true” assessments year after year. But if you do this, you will be training your candidates to your questions - not training them to prepare broadly and deeply. Again, within the scope of the knowledge they need and the competencies they must be proficient at, make sure they understand that anything is fair game for testing. That is the only way to ensure that a candidate is as prepared as they can possibly be.

Separate the responsibilities of training and assessment

As instructors, we take pride in our knowledge and in our ability to prepare our trainees well. We want to see them succeed. If we, as trainers, are also responsible for assessment, we are in a position of conflict. Consciously or subconsciously, we may feel that a poor assessment result reflects badly on the training we have provided. It becomes easy to confuse preparing trainees for their job with preparing them to pass the assessment. We must *never* confuse these two. In the best training and assessment systems, we make sure we can never fall into this trap of “teaching to the test” by separating the responsibilities of training and assessment.

What does this mean? Logically it means that the trainer should be the “supplier” of trained candidates. The assessor should be the “consumer” of trained candidates. Like all savvy consumers, the assessor should take nothing for granted, never rely on claims of quality by the supplier (trainer), and always apply their own critical eye to the quality of the “goods”. In practice, this can only be fully achieved if the assessor is not the trainer. This way, not only does the candidate not know what is going to be on the exam, but neither does the trainer! Now we are assured that the trainee is doing their best to be fully prepared, the trainer is doing their best to fully prepare the trainee (and not supply faulty “goods”), and the assessor will do their best to ensure that any substandard candidates are placed on a remedial path.

Though I realize this is sometimes difficult in practice, it should be done when possible. When not possible, the trainer must be aware of the potential for conflict - which will go some way to reducing the negative effect.

Look at your remedial training practices

Much is invested in our trainees. Thus when a trainee fails a test, it is in everyone’s best interests to put that trainee on a remedial training program that will bring them to where they need to be. The question is, what remedial process should we follow with this person?

The incorrect (but very common) procedure is to list the areas of the test that the trainee did poorly on, review those topics, and then re-test those areas. This is problematic because (from the above) doing poorly on some parts of the test almost certainly means that the trainee would have done equally poorly on a similar proportion of the untested knowledge and skills - material which may be just as important.

Therefore, the correct follow-up procedure is to ask the trainee to re-learn all of the trained knowledge and competencies to a higher level (ideally with some expert assistance), and then to re-test the trainee with a completely new test (covering a different sampling of items). And, of course, make sure the trainee knows that the “make up” exam will be completely different. Sometimes this practice can be assisted with the application of a much more comprehensive follow-up test. This can be useful, as it can provide some additional information as to the locations of the largest learning gaps. But avoid the temptation to only retrain and retest those areas of knowledge and competency that the candidate tested poorly on initially. By doing so, you are masking hidden problems that will almost certainly create safety or performance issues down the road.
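
As a concrete illustration of that last point, here is a short sketch (again with hypothetical item IDs, in the spirit of the earlier examples) that builds the make-up exam from items the first test did not cover, so a failed candidate cannot prepare by re-studying only the questions they missed.

import random

# Hypothetical item IDs for everything covered in training.
bank = set(range(1, 101))
# The items sampled on the failed first attempt.
first_attempt = set(random.sample(sorted(bank), 10))

def build_retest(bank, used, num_questions=10):
    # Sample the retest only from items the first test did not cover.
    unused = sorted(bank - used)
    return random.sample(unused, num_questions)

retest = build_retest(bank, first_attempt)
assert not set(retest) & first_attempt  # guaranteed: no overlap with the first test
print("Retest items:", retest)

A disjoint sample is the strictest version of “a completely new test”; at minimum, the retest should not be dominated by the items the candidate already knows were asked.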

Conclusion

Remember - the goal here is to encourage complete and effective training, practice and study - NOT to enable candidates to pass the test. It is easy to miss the distinction. If your training and assessment practices incentivise and support comprehensive and effective learning, the test results will take care of themselves.

# # #

About The Author:

Murray Goldberg is the founder and President of Marine Learning Systems (www.marinels.com), the creator of MarineLMS - the learning management system designed specifically for maritime industry training. Murray began research in eLearning in 1995 as a faculty member of Computer Science at the University of British Columbia. He went on to create WebCT, the world’s first commercially successful LMS for higher education, serving 14 million students in 80 countries. Murray has won over a dozen university, national and international awards for teaching excellence and his pioneering contributions to the field of educational technology. Now, through Marine Learning Systems, Murray hopes to play a part in advancing the art and science of learning in the maritime industry.

Maritime Training: The full library of maritime training articles can be found here.

Blog Notifications: For the latest maritime training articles, visit our company blog here. You can receive notifications of new articles on our blog by following the blog.

Maritime Mentoring: International Maritime Mentoring Community - Find a Mentor, Be a Mentor