
ID and eLearning Links (12/3/17)

  • This is the link I send people to debunk the blanket claims about “people forget X% after Y time.” The reality is that how much people forget depends on who your audience is, what they’re learning, and how you train them.


    • The amount a learner will forget varies depending on many things. We as learning professionals will be more effective if we make decisions based on a deep understanding of how to minimize forgetting and enhance remembering.

    • To be specific, when we hear statements like, “People will forget 60% of what they learned within 7 days,” we should ignore such advice and instead reflect on our own superiority and good looks until we are decidedly pleased with ourselves.

    • Many of the experiments reviewed in this report showed clearly that learning methods matter. For example, in the Bahrick 1979 study, the best learning methods produced an average forgetting score of −29% (learners actually gained ground), whereas the worst learning methods produced 47% forgetting, a swing of 76 percentage points.

  • Mini-scenarios and branching scenarios provide better assessment than traditional multiple choice, but this article offers some other options for deeper assessment that can still be scored by a computer.


Posted from Diigo. The rest of my favorite links are here.


Converting Traditional Multiple Choice Questions to Scenario-Based Questions

The traditional multiple choice questions we use in assessment are often abstract and measure only whether people recall facts they heard in the last 5 minutes. Converting these questions to scenario-based questions can increase the level of difficulty, measure higher level thought, and provide relevant context.

Converting to Scenario-Based Questions

Example Question

Let’s say you’re creating training for managers on how to provide reasonable accommodations for employees. You drafted a set of traditional multiple choice questions as a quiz for the end of the course, but they’re all very low level. You want to improve the quality of your assessment with some scenarios.

This is a question from your current quiz that measures recall of a fact from the training. The rest of the assessment is similar.

Example 1 (Original)

What reasonable accommodation is recommended for a temporary disability or medical issue affecting work?

  1. None; reasonable accommodations are only used for permanent or long-term disabilities.
  2. Unpaid time off can be offered as an accommodation for temporary issues.
  3. Paid time off should be offered, even if it exceeds the amount of paid time off other employees receive.

Align to Objectives

What are your objectives? Does your assessment align to them? If not, rewrite it.

In this example, the objective is “The learner will follow the procedure for providing reasonable accommodations.” The objective is application level; you need to apply this procedure. (You could argue for analysis or evaluation here too, but let’s assume it’s application.)

The question assesses recall; the objective requires application. Therefore, this question should be rewritten at a higher level.

When would people use this?

The first step in shifting from traditional to scenario-based assessment is asking when people would use the information. When would managers need to know about handling temporary disabilities? A common situation would be an illness or surgery. Maybe an employee needs a reduced schedule due to fatigue from chemo. Maybe an employee needs time off to recover from back surgery.

For each multiple choice question, ask yourself how learners would use that information on the job. When would they need to differentiate between those options?

If you can’t come up with any situation in which people would need this information on the job, why are you asking that question? If you have a question with just irrelevant information, skip down to the section on complete rewrites below.

Scenario as Introduction

One method to revise the question is to add a scenario to introduce the choices. This provides context. It shifts the question from just recalling information to using that information to make a decision.

Let’s see how this works with the previous example. The scenario introduces the question. The choices are essentially the same as before, but now it’s a decision about how to work with an employee you manage. Instead of measuring recall, this question measures if learners can apply the reasonable accommodations policy.

Example 1 (Revised)

Simon, a graphic designer on the team you manage, is having surgery. He has requested two weeks off to recover after his surgery. How should you respond?

  1. Let Simon know he can use his accrued vacation time. Reasonable accommodations are only used for permanent or long-term disabilities.
  2. Provide two weeks unpaid time off.
  3. Provide two weeks paid time off.

Notice that this scenario isn’t long; it’s only two sentences longer than the original question.

Complete Replacement

Sometimes adding a scenario at the beginning won’t work, and you need a complete rewrite of the question. If the question covers something unrelated to your objectives, or something people will never use on the job, you have to start over and replace the question.

Look at this example. Would a manager ever need to know this history on the job? Will they be more effective at offering accommodations if they can memorize this date?

Example 2 (Original)

In what year was the Americans with Disabilities Act or ADA passed by Congress?

  1. 1985
  2. 1990
  3. 1995
  4. 2000

We have all seen questions like this on quizzes before. They’re easy to write, but they don’t assess anything meaningful. Replacing it with a scenario-based question would give you a more accurate assessment.

Example 2 (Replacement)

One of your employees, Miranda, brought documentation from her ophthalmologist about her vision and how it affects her driving. Her night vision is deteriorating. Miranda has requested a change in her work schedule. She wants to start and end her work day later to avoid driving in the early morning when it’s still dark. How do you respond?

  1. Agree to adjust Miranda’s schedule.
  2. Tell Miranda to contact HR to start the official accommodation process.
  3. Tell Miranda that the schedule change is not possible since it creates too much burden on the rest of the team.

What Do You Want to Learn?

What else would you like to learn about writing these kinds of assessment questions? Do you have questions I could answer in a future post? Let me know in the comments.

Mini-Scenarios for Assessment

Scenario-based learning often means complex branching or simulations, but it doesn’t always have to be that way. You can use mini-scenarios to make your assessments more relevant and valuable. One of the big advantages of using mini-scenarios is that they’re fast and easy to build. You don’t need any special tools; any tool that can create a multiple choice question can be used for mini-scenarios.

Traditional Assessment or Mini-Scenario

Imagine you’re creating a course for managers about motivation and using rewards effectively. You could ask a fairly typical comprehension question like this:

What is the best strategy for encouraging long-term behavior change in your employees?

  1. Threaten punishment for anyone not changing their behavior
  2. Offer a small reward for changing behavior
  3. Offer a large reward for changing behavior

If you have introduced this concept already, this question probably aligns to that content and to your learning objective. However, it’s very abstract. Compare that question to this one:

Andrew is a sales manager who has been struggling to motivate his team to better performance. He sent his team to a conference where they learned about sharing stories about previous happy customers to improve sales. A few salespeople really like using this technique, but he wants everyone to start using it more. In the long term, he wants to change their attitudes about the technique.

What should Andrew do to encourage his team?

  1. Threaten punishment for anyone not using storytelling
  2. Offer a small reward for using storytelling
  3. Offer a large reward for using storytelling

In the second example, a mini-scenario sets up the question. This provides context and makes it a concrete situation with a problem to solve rather than an abstract comprehension question. Now this is about applying the concept in a relevant situation rather than just remembering what you read.

Using a mini-scenario added a total of four sentences to the question. This is actually longer than many of my mini-scenarios; one or two sentences are often enough. You could use this assessment question in any tool, even using the built-in quizzing for many LMSs. It doesn’t take much more time to write than a traditional multiple choice question.
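Since a mini-scenario is structurally just a multiple choice question, any quiz tool’s question format can hold one. As a sketch of that idea, the Andrew question above could be stored like this (the field names are illustrative assumptions, not any particular LMS’s schema):

```python
# Illustrative sketch: a mini-scenario stored in the same shape as any
# multiple choice question. Field names are assumptions for this example.
mini_scenario = {
    "stem": (
        "Andrew is a sales manager who has been struggling to motivate his "
        "team to better performance. He sent his team to a conference where "
        "they learned about sharing stories about previous happy customers "
        "to improve sales. A few salespeople really like using this "
        "technique, but he wants everyone to start using it more. In the "
        "long term, he wants to change their attitudes about the technique. "
        "What should Andrew do to encourage his team?"
    ),
    "choices": [
        "Threaten punishment for anyone not using storytelling",
        "Offer a small reward for using storytelling",
        "Offer a large reward for using storytelling",
    ],
}
```

The only difference from the traditional version is the context added to the stem; the answer choices themselves are unchanged, which is why no special authoring tool is required.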

I have often found mini-scenarios useful for helping clients try out scenario-based learning without having to commit to something more complex and expensive. This is a way they can dip their toe in the water without having to do a completely scenario-based course. Even in a fairly traditional linear e-learning course, using mini-scenarios can make your knowledge checks more engaging and effective.

Image Credit: Storyblocks

ID and E-Learning Links (8/23/15)


Intrinsic and Instructional Feedback in Learning Scenarios

A few years ago, I was a judge for a competition on scenario-based learning. While there were a few terrific submissions, I thought many of the courses missed the whole point of scenario-based learning. They started out fine: they provided some sort of realistic context and asked learners to make a decision. Then, instead of showing them the consequences of their decision, they just provided feedback as if it was any other multiple choice assessment. “Correct, that is the best decision.” Blah. Boring. And ineffective.

In her book Scenario-based e-Learning: Evidence-Based Guidelines for Online Workforce Learning, Ruth Clark labels the two types of feedback “intrinsic” and “instructional.” Instructional feedback is what we see all the time in e-learning; it’s feedback that tells you what was right or wrong and possibly guides or coaches you about how to improve.

With intrinsic feedback, the learning environment responds to decisions and action choices in ways that mirror the real world. For example, if a learner responds rudely to a customer, he will see and hear an unhappy customer response. Intrinsic feedback gives the learner an opportunity to try, fail, and experience the results of errors in a safe environment.

Intrinsic feedback is one of the features of scenario-based learning that sets it apart from traditional e-learning. When you show learners the consequences of their actions, they can immediately see why it matters. The principles or process that you’re teaching isn’t just abstract content anymore; it’s something with real world implications and it matters if they get it wrong. It’s more engaging to receive intrinsic feedback. Learners are also more likely to remember the content because they’ve already seen what could happen if they don’t make the right choices.

Intrinsic feedback can take a number of forms. Customer reactions (verbal and nonverbal), patient health outcomes improving, sales figures dropping, a machine starting to work again, and other environmental responses can be intrinsic feedback. The example below contains three pieces of intrinsic feedback, all on the left side: a facial expression, a conversation response, and a motivation meter at the bottom.

[Screenshot of a branching scenario with intrinsic and instructional feedback]

In this example, learners are trying to convince someone to make healthier eating choices using motivational interviewing. Motivation level is an “invisible” factor, so I made it visible with a motivation indicator in the lower left corner. As learners make better choices and the patient feels more motivated to change, the motivation meter shows their progress.
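Mechanically, a motivation meter like this boils down to a hidden variable that each choice nudges up or down. Here is a minimal sketch of that pattern; the choice names, point values, and 0–100 range are assumptions for illustration, not details from the actual course:

```python
# Sketch of making an "invisible" factor visible: a motivation score that
# each learner choice adjusts, which an on-screen meter could then display.
# All names and values here are illustrative assumptions.

choice_effects = {
    "ask_open_question": 10,     # reflective techniques raise motivation
    "affirm_small_win": 5,
    "lecture_about_risks": -10,  # unsolicited advice lowers it
}

def apply_choice(motivation, choice):
    """Return the updated motivation score, clamped to the 0-100 meter range."""
    return max(0, min(100, motivation + choice_effects[choice]))

motivation = 50  # start the patient at a neutral level
motivation = apply_choice(motivation, "ask_open_question")    # 60
motivation = apply_choice(motivation, "lecture_about_risks")  # 50
```

Because the score is just a number the scenario updates at each decision, most authoring tools with variables can implement it.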

Scenarios can also use instructional feedback. In the above example, a coach at the top provides instructional feedback and guidance on learners’ choices. Clark recommends using both intrinsic and instructional feedback in most situations.

One issue with instructional feedback is that it can break the realism of a scenario. Using a coach can help alleviate that problem, as can having learners ask for advice from people inside a scenario (a manager, an HR rep, another worker, etc.). Using a conversational tone for the instructional feedback also helps keep it within the scenario. Instructional feedback in a scenario often doesn’t need to explicitly say that a choice was correct or incorrect; that’s clear enough from the intrinsic feedback. Focus your instructional feedback on explaining why a choice was effective or how it could have been better.

Feedback can also be delayed rather than happening immediately. Clark recommends more immediate feedback for novices but delayed feedback for experts or more advanced learners. Depending on the audience, in some branching scenarios I provide immediate intrinsic feedback for each choice learners make. When learners make bad choices that cause them to fail and they need to restart the scenario, they receive instructional feedback with guidance on how to improve on their next attempt. They might be able to make two or three bad choices in a row before they hit a dead end in the scenario, so the instructional feedback is delayed. This keeps the momentum of the scenario moving forward but still provides support to help learners improve.
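The delayed-feedback flow above can be sketched as a small loop: intrinsic feedback would render immediately at each choice, while instructional coaching is queued and only delivered once the learner hits a dead end. The threshold and all the names below are assumptions for illustration, not the author’s actual build:

```python
# Sketch of delayed instructional feedback in a branching scenario.
# Assumed logic: coaching accumulates silently until a dead end forces
# a restart, then it is delivered all at once.

DEAD_END_THRESHOLD = 3  # assumed number of bad choices before a dead end

def run_choices(choices, bad_choices, coaching_tips):
    """Walk through learner choices; return (hit_dead_end, delayed_coaching)."""
    queued = []
    bad_count = 0
    for choice in choices:
        # Intrinsic feedback (e.g., the character's reaction) would show
        # here immediately; instructional coaching is deferred instead.
        if choice in bad_choices:
            bad_count += 1
            queued.append(coaching_tips.get(choice, ""))
            if bad_count >= DEAD_END_THRESHOLD:
                # Dead end: restart, delivering all the coaching at once.
                return True, queued
    return False, []
```

With this structure, one or two bad choices keep the scenario moving forward; only the third triggers the restart screen with the accumulated guidance.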

If you’re building scenario-based learning, don’t leave out the intrinsic feedback! Your learners will thank you.