Converting Traditional Multiple Choice Questions to Scenario-Based Questions

The traditional multiple choice questions we use in assessment are often abstract and measure only whether people can recall facts they heard in the last five minutes. Converting these questions to scenario-based questions can increase the level of difficulty, measure higher-level thinking, and provide relevant context.

Converting to Scenario-Based Questions

Example Question

Let’s say you’re creating training for managers on how to provide reasonable accommodations for employees. You drafted a set of traditional multiple choice questions as a quiz for the end of the course, but they’re all very low level. You want to improve the quality of your assessment with some scenarios.

This is a question from your current quiz that measures recall of a fact from the training. The rest of the assessment is similar.

Example 1 (Original)

What reasonable accommodation is recommended for a temporary disability or medical issue affecting work?

  1. None; reasonable accommodations are only used for permanent or long-term disabilities.
  2. Unpaid time off can be offered as an accommodation for temporary issues.
  3. Paid time off should be offered, even if it exceeds the amount of paid time off other employees receive.

Align to Objectives

What are your objectives? Does your assessment align to them? If not, rewrite it.

In this example, the objective is “The learner will follow the procedure for providing reasonable accommodations.” That’s an application-level objective; learners need to apply the procedure. (You could argue for analysis or evaluation here too, but let’s assume it’s application.)

The question assesses recall; the objective requires application. Therefore, this question should be rewritten at a higher level.

When would people use this?

The first step in shifting from traditional to scenario-based assessment is asking when people would use the information. When would managers need to know about handling temporary disabilities? Common situations include illness and surgery. Maybe an employee needs a reduced schedule due to fatigue from chemotherapy. Maybe an employee needs time off to recover from back surgery.

For each multiple choice question, ask yourself how learners would use that information on the job. When would they need to differentiate between those options?

If you can’t come up with any situation in which people would need the information on the job, why are you asking that question? If a question covers only irrelevant information, skip down to the Complete Replacement section below.

Scenario as Introduction

One method to revise the question is to add a scenario to introduce the choices. This provides context. It shifts the question from just recalling information to using that information to make a decision.

Let’s see how this works with the previous example. The scenario introduces the question. The choices are essentially the same as before, but now it’s a decision about how to work with an employee you manage. Instead of measuring recall, this question measures whether learners can apply the reasonable accommodations policy.

Example 1 (Revised)

Simon, a graphic designer on the team you manage, is having surgery. He has requested two weeks off to recover after the surgery. How should you respond?

  1. Let Simon know he can use his accrued vacation time. Reasonable accommodations are only used for permanent or long-term disabilities.
  2. Provide two weeks unpaid time off.
  3. Provide two weeks paid time off.

Notice that this scenario isn’t long; it’s only two sentences longer than the original question.

Complete Replacement

Sometimes adding a scenario at the beginning won’t work, and you need a complete rewrite of the question. If a question is unrelated to your objectives, or asks about something people will never use on the job, start over and replace it.

Look at this example. Would a manager ever need to know this history on the job? Will they be more effective at offering accommodations if they can memorize this date?

Example 2 (Original)

In what year was the Americans with Disabilities Act (ADA) passed by Congress?

  1. 1985
  2. 1990
  3. 1995
  4. 2000

We have all seen questions like this on quizzes before. They’re easy to write, but they don’t assess anything meaningful. Replacing it with a scenario-based question would give you a more accurate assessment.

Example 2 (Replacement)

One of your employees, Miranda, brought documentation from her ophthalmologist about her vision and how it affects her driving. Her night vision is deteriorating. Miranda has requested a change in her work schedule. She wants to start and end her work day later to avoid driving in the early morning when it’s still dark. How do you respond?

  1. Agree to adjust Miranda’s schedule.
  2. Tell Miranda to contact HR to start the official accommodation process.
  3. Tell Miranda that the schedule change is not possible since it creates too much burden on the rest of the team.

What Do You Want to Learn?

What else would you like to learn about writing these kinds of assessment questions? Do you have questions I could answer in a future post? Let me know in the comments.

Mini-Scenarios for Assessment

Scenario-based learning often means complex branching or simulations, but it doesn’t always have to be that way. You can use mini-scenarios to make your assessments more relevant and valuable. One of the big advantages of using mini-scenarios is that they’re fast and easy to build. You don’t need any special tools; any tool that can create a multiple choice question can be used for mini-scenarios.

Traditional Assessment or Mini-Scenario

Imagine you’re creating a course for managers about motivation and using rewards effectively. You could ask a fairly typical comprehension question like this:

What is the best strategy for encouraging long-term behavior change in your employees?

  1. Threaten punishment for anyone not changing their behavior
  2. Offer a small reward for changing behavior
  3. Offer a large reward for changing behavior

If you have introduced this concept already, this question probably aligns to that content and to your learning objective. However, it’s very abstract. Compare that question to this one:

Andrew is a sales manager who has been struggling to motivate his team to perform better. He sent his team to a conference where they learned about sharing stories of previous happy customers to improve sales. A few salespeople really like using this technique, but he wants everyone to start using it more. In the long term, he wants to change their attitudes about the technique.

What should Andrew do to encourage his team?

  1. Threaten punishment for anyone not using storytelling
  2. Offer a small reward for using storytelling
  3. Offer a large reward for using storytelling

In the second example, a mini-scenario sets up the question. This provides context and makes it a concrete situation with a problem to solve rather than an abstract comprehension question. Now this is about applying the concept in a relevant situation rather than just remembering what you read.

Using a mini-scenario added a total of four sentences to the question. That’s actually longer than many of my mini-scenarios; one or two sentences are often enough. You could use this assessment question in any tool, even the built-in quizzing in many LMSs. It doesn’t take much more time to write than a traditional multiple choice question.
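If it helps to see that structurally, here’s a minimal sketch of the Andrew question stored as an ordinary single-answer multiple choice item. This assumes you were keeping quiz items in your own data rather than an authoring tool, and the field names are invented for illustration; they don’t come from any particular LMS.

    # A mini-scenario is structurally just a multiple choice item with a short
    # scenario in front of the stem. Field names here are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class MiniScenarioItem:
        scenario: str        # one to four sentences of context
        question: str        # the decision the learner has to make
        options: list[str]   # the same kind of choices a traditional item uses
        correct_index: int   # index of the best decision

    andrew_item = MiniScenarioItem(
        scenario=("Andrew is a sales manager who has been struggling to motivate "
                  "his team to perform better. He wants everyone to use "
                  "storytelling with customers and, in the long term, to change "
                  "their attitudes about the technique."),
        question="What should Andrew do to encourage his team?",
        options=[
            "Threaten punishment for anyone not using storytelling",
            "Offer a small reward for using storytelling",
            "Offer a large reward for using storytelling",
        ],
        correct_index=1,  # assumed here; use whichever option your content supports
    )

Anything that can store those four fields, which is any quiz or LMS tool, can deliver a mini-scenario.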

I have often found mini-scenarios useful for helping clients try out scenario-based learning without having to commit to something more complex and expensive. This is a way they can dip their toe in the water without having to do a completely scenario-based course. Even in a fairly traditional linear e-learning course, using mini-scenarios can make your knowledge checks more engaging and effective.

Intrinsic and Instructional Feedback in Learning Scenarios

A few years ago, I was a judge for a competition on scenario-based learning. While there were a few terrific submissions, I thought many of the courses missed the whole point of scenario-based learning. They started out fine: they provided some sort of realistic context and asked learners to make a decision. Then, instead of showing them the consequences of their decision, they just provided feedback as if it were any other multiple choice assessment. “Correct, that is the best decision.” Blah. Boring. And ineffective.

In her book Scenario-based e-Learning: Evidence-Based Guidelines for Online Workforce Learning, Ruth Clark labels the two types of feedback “intrinsic” and “instructional.” Instructional feedback is what we see all the time in e-learning; it’s feedback that tells you what was right or wrong and possibly guides or coaches you about how to improve.

With intrinsic feedback, the learning environment responds to decisions and action choices in ways that mirror the real world. For example, if a learner responds rudely to a customer, he will see and hear an unhappy customer response. Intrinsic feedback gives the learner an opportunity to try, fail, and experience the results of errors in a safe environment.

Intrinsic feedback is one of the features of scenario-based learning that sets it apart from traditional e-learning. When you show learners the consequences of their actions, they can immediately see why it matters. The principle or process you’re teaching isn’t just abstract content anymore; it has real-world implications, and it matters if learners get it wrong. Receiving intrinsic feedback is more engaging, and learners are more likely to remember the content because they’ve already seen what could happen if they don’t make the right choices.

Intrinsic feedback can take a number of forms. Customer reactions (verbal and nonverbal), patient health outcomes improving, sales figures dropping, a machine starting to work again, and other environmental responses can be intrinsic feedback. The example below contains three pieces of intrinsic feedback, all on the left side: a facial expression, a conversation response, and a motivation meter at the bottom.

[Screenshot: a branching scenario with intrinsic and instructional feedback]

In this example, learners are trying to convince someone to make healthier eating choices using motivational interviewing. Motivation level is an “invisible” factor, so I made it visible with a motivation indicator in the lower left corner. As learners make better choices and the patient feels more motivated to change, the motivation meter shows their progress.

Scenarios can also use instructional feedback. In the above example, a coach at the top provides instructional feedback and guidance on learners’ choices. Clark recommends using both intrinsic and instructional feedback in most situations.

One issue with instructional feedback is that it can break the realism of a scenario. Using a coach can help alleviate that problem, as can having learners ask for advice from people inside a scenario (a manager, an HR rep, another worker, etc.). Using a conversational tone for the instructional feedback also helps keep it within the scenario. Instructional feedback in a scenario often doesn’t need to explicitly say that a choice was correct or incorrect; that’s clear enough from the intrinsic feedback. Focus your instructional feedback on explaining why a choice was effective or how it could have been better.

Feedback can also be delayed rather than happening immediately. Clark recommends more immediate feedback for novices but delayed feedback for experts or more advanced learners. Depending on the audience, in some branching scenarios I provide immediate intrinsic feedback for each choice learners make. When learners make bad choices that cause them to fail and they need to restart the scenario, they receive instructional feedback with guidance on how to improve on their next attempt. They might be able to make two or three bad choices in a row before they hit a dead end in the scenario, so the instructional feedback is delayed. This keeps the momentum of the scenario moving forward but still provides support to help learners improve.
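To make those mechanics concrete, here is a minimal sketch of one way a branching scenario engine might model the two kinds of feedback. The structure and names are my own assumptions for illustration, not from Clark’s book or any specific authoring tool: each choice carries intrinsic feedback and a change to the motivation meter, coaching appears immediately for novices, and a dead end triggers delayed coaching plus a restart.

    # Illustrative sketch only: one way to model intrinsic and instructional
    # feedback in a branching scenario. Names and structure are assumptions.
    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class Choice:
        text: str                        # what the learner says or does
        intrinsic_feedback: str          # how the world responds (e.g., the patient's reaction)
        motivation_delta: int            # change to the "invisible" motivation factor
        coach_feedback: str = ""         # instructional feedback, conversational tone
        next_node: Optional[str] = None  # None marks a dead end (restart required)

    def apply_choice(motivation: int, choice: Choice, novice: bool) -> Tuple[int, str]:
        """Apply one decision: show intrinsic feedback, update the motivation
        meter, and decide when instructional feedback appears."""
        print(choice.intrinsic_feedback)        # intrinsic: everyone sees it immediately
        motivation += choice.motivation_delta   # drives the visible motivation meter
        if novice and choice.coach_feedback:
            print("Coach:", choice.coach_feedback)  # immediate coaching for novices
        if choice.next_node is None:
            # Dead end: delayed instructional feedback, then restart the scenario.
            print("Coach:", choice.coach_feedback or
                  "Let's try that conversation again with a different approach.")
            return 0, "start"
        return motivation, choice.next_node

In a real course the motivation value would feed the on-screen meter and next_node would load the next scene; the point is simply that intrinsic feedback is data about the world’s response, while instructional feedback is data about the learner’s choice.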

If you’re building scenario-based learning, don’t leave out the intrinsic feedback! Your learners will thank you.

CCK08: Connectivism, Equity, and Equality

In Groups Vs Networks: The Class Struggle Continues, Stephen Downes makes this statement about assessment:

I want to change the system of assessment in schools because right now we have tests and things like that that are scrupulously fair, particularly distance learning where we outline the objectives, the performance metrics, and the outcomes and all of that. I want to scrap that system. I want testing to be done at random by comments from your peers and other people and strangers based on no criteria whatsoever and applied unequally and unfairly.

I found this a little jarring at first. Don’t we want things to be fair, to apply the same rules to everyone?

But applying the rules uniformly to everyone isn’t fair. The rules of baseball require that people run between the bases. Would you ask someone in a wheelchair to get up and run, just because the rules say so? No, of course not. That’s absurd, not fair.

Most of the time, our educational system holds up equality as the ideal. Everyone should be treated equally; we should hold everyone to the same standards, and no exceptions should be made for individuals to bend the rules. In the US, No Child Left Behind (NCLB) is a prime example: every child is expected to meet grade-level goals, regardless of learning or other disabilities. We start from the assumption that everyone will learn and be assessed equally.

A better ideal for the system would be equity. We can move the emphasis away from applying the rules consistently across the board to giving people what they need as individuals to be successful. We should recognize that people do have obstacles to overcome and provide support for them to get around those obstacles. Being in a wheelchair means someone won’t run, but it certainly doesn’t mean they can’t participate in any sports.

The ALA article Equality and Equity of Access: What’s the Difference? describes equality as “fairness as uniform distribution” and equity as “fairness as justice.”

It occurred to me as I read Stephen’s ideas about assessment that connectivism may be a better way to get to the ideal of equity. It’s better for equity and accessibility when you don’t start from the assumption that everyone will learn and be assessed in the same way. If we start with the assumption that individuals will find their own path in learning, and that our job is to give them lots of opportunities and ways to participate, we’re more likely to help people get past their obstacles.

The CCK08 class is modeling that approach of letting people find their own path and giving them a chance for equity. Everything Stephen talks about with valuing diversity over uniformity reinforces that idea. The 2,000 participants can figure out what works best for them: lots of time in the Moodle forums or none, multiple blog posts or just reading and lurking, concept maps or word clouds, live sessions or only asynchronous participation. That flexibility is what allows me to still be a participant in this class even though I knew I’d be out for a few weeks while I moved. I could take a break when I needed it and step back in now.

I don’t know whether anyone in the course is visually or hearing impaired, but I can’t see any reason why they couldn’t find ways to actively participate and learn. Not everything is accessible to everyone, but you don’t need to see every image or hear the audio presentations to find value in the course.

I do wonder, though: with the course so open and flexible, and with so many people participating, how much diversity is actually represented among the CCK08 participants?