Immediate and Delayed Consequences in Branching Scenarios

In branching scenarios, we can use a combination of immediate and delayed consequences and feedback. Consequences are what happens as a result of decisions; feedback is what we tell learners after decisions.

Immediate & Delayed Consequences

Use Immediate Consequences Often

Immediate consequences are the intrinsic effects of decisions. A customer who responds angrily, software that doesn’t produce the desired result, or time lost on a project could all be immediate consequences. These consequences don’t directly tell the learner, “Sorry, that was incorrect.” Learners have to perceive and understand the cues in the scenario. They have to draw conclusions based on those cues.

If your learners will need to pick up on cues when they apply what they're learning, it's helpful to provide realistic consequences in your scenario. Interpreting those cues is valuable practice.

Immediate consequences that simulate real-world cues can also be more engaging than an omniscient narrator dictating what you did right or wrong. They keep learners in the mindset of the story without hitting them over the head with a reminder that they're learning something.

Use Immediate Feedback with Novices

Immediate feedback is different from intrinsic consequences. This is the instructional feedback or coaching that directly tells learners why their decisions are right or wrong. While this can pull people out of the “flow” of a story, immediate feedback can be helpful in some situations.

First, novice learners who are still building mental models of a topic may benefit more from immediate feedback. Novices may not have the expertise to sort through real-world cues and draw accurate conclusions from them. Therefore, it may be more important to provide immediate feedback after each decision in a branching scenario if your audience is new to the topic.

In his research report “Providing Learners with Feedback,” Will Thalheimer explains the benefits of immediate feedback for novices.

“On the surface of it, it just doesn’t make sense that when a learner is piecing together arrays of building blocks into a fully-formed complex concept, they wouldn’t need some sort of feedback as they build up from prerequisite concepts. If the conceptual foundation they build for themselves is wrong, adding to that faulty foundation is problematic. Feedback provided before these prerequisite mental modelettes are built should keep learners from flailing around too much. For this reason, I will tentatively recommend immediate feedback as learners build understanding.”

Provide Instructional Feedback Before a Retry

I always provide feedback before learners restart a scenario. If a learner has reached an unsatisfactory ending, it's beneficial to do a short debrief of their decisions and what went wrong. Especially for more experienced learners, some of that feedback may be delayed from when they made the decision. You can summarize the feedback for several previous decisions on the path that led to the final decision.

This feedback should happen before they are faced with the same scenario decisions again. Otherwise, they could make the same mistakes again (reinforcing those mistakes) or simply guess without gaining understanding.

Thalheimer’s research also supports this.

“When learners get an answer wrong or practice a skill inappropriately, we ought to give them feedback before they attempt to re-answer the question or re-attempt the skill. This doesn’t necessarily mean that we should give them immediate feedback, but it does mean that we don’t want to delay feedback until after they are faced with additional retrieval opportunities.”

Use Delayed Feedback with Experienced Learners

Thalheimer notes that delayed feedback may be more effective for retention (i.e., how much learners remember). That effect might be due to the spacing effect (that is, reviewing content multiple times, spaced out over time, is better for learning than cramming everything into a single event). The delay doesn't have to be long; one study mentioned in Thalheimer's report showed that delaying feedback by 10 seconds improved outcomes.

Delayed feedback may also be more appropriate for experienced learners who are improving existing skills rather than novices building new skills. Experienced learners already have mental models in place, so they don't have the same need for immediate correction as novices. They can get the benefit of delayed feedback.

Use Delayed Feedback with Immediate Consequences

In branching scenarios, we can use a combination of immediate intrinsic consequences (e.g., an angry customer response) and delayed instructional feedback (e.g., you didn’t acknowledge the customer’s feelings). Feedback before a retry or restart could count as delayed if it includes feedback for multiple decisions. If you let learners make 2 or 3 wrong choices before a restart, the combined feedback will effectively be delayed.
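As a rough illustration of how this combination can be structured, here's a minimal sketch in TypeScript. The pattern (show the intrinsic consequence immediately, bank the instructional note until the retry screen) follows the approach described above, but the data structure and all of the names (Choice, deferredNotes, showRetryDebrief) are hypothetical, not taken from any particular authoring tool.

    // Minimal sketch: pair each choice with an immediate intrinsic consequence
    // and an instructional note that is held back until the retry screen.
    // All names here are hypothetical.

    interface Choice {
      label: string;        // what the learner clicks
      consequence: string;  // intrinsic result shown right away (e.g., the customer's reply)
      coachNote?: string;   // instructional feedback, deferred until the debrief
      next: string;         // id of the next decision point, or "deadEnd"
    }

    const deferredNotes: string[] = [];

    function choose(choice: Choice): string {
      // Bank the instructional note instead of interrupting the story.
      if (choice.coachNote) {
        deferredNotes.push(choice.coachNote);
      }
      // The intrinsic consequence is what the learner sees immediately.
      return choice.consequence;
    }

    function showRetryDebrief(): string {
      // At an unsatisfactory ending, present the banked notes together,
      // so the instructional feedback is delayed and covers the whole path.
      return ["Before you try again, review what went wrong:", ...deferredNotes].join("\n- ");
    }

    // Example: the angry customer reply is shown immediately; the coaching note waits.
    console.log(choose({
      label: "Tell the customer it's not your department",
      consequence: "Customer: \"Are you serious? I've been transferred three times already!\"",
      coachNote: "You didn't acknowledge the customer's feelings before redirecting them.",
      next: "deadEnd",
    }));
    console.log(showRetryDebrief());

With two or three wrong choices banked before the dead end, that debrief becomes exactly the kind of delayed, combined feedback described above.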

Use Delayed Consequences When Realistic

We don't always immediately know we've made a mistake in real life. Sometimes the consequence isn't obvious right away. Sometimes a decision seems like a gain in the short run but causes problems in the long run. If that's the kind of situation you're training for, letting people continue on the wrong path for a little while makes sense. Neither limited branching nor immediate failure allows you to show delayed consequences.

Providing these delayed consequences gives you the learning benefits of delayed feedback, plus a more realistic and engaging story. Delayed consequences shouldn't be forced into a scenario where they're not realistic, but they are a good way to show the long-term effects of actions.

Think about how delayed consequences could be shown in these examples:

  • A bartender gives away many free drinks. The immediate consequence is that the customers are happy, but the delayed consequence is a loss of profit for the bar.
  • A sales associate sells a customer a product that is less expensive but meets the customer’s needs. The immediate consequence is that the sales associate makes less commission that day, but the delayed consequence is that the customer is loyal and refers 2 friends. In this case, the total commission earned is higher even though the immediate sale was lower.
  • A doctor could skip a screening question with a patient. The immediate consequence is finding something that looks like the problem, but the delayed consequence is that the actual underlying problem goes unaddressed.
  • A manager asks an ID to create training. The ID gets started building it right away, trusting that the team requesting the training knows their needs. The immediate consequence is a happy manager, but the delayed consequence is ineffective training that doesn’t actually solve the business problem.
  • If you're teaching ethics, a small ethical lapse early in the scenario might not seem like a big deal. The immediate consequence might be meeting a deadline or increased recognition. In the long run, that small lapse leads to a continued need to cover up your actions. The Lab: Avoiding Research Misconduct is an example with delayed consequences in some paths.

Show, Don’t Tell For Scenario Feedback

One of the most common mistakes I see in scenario-based learning is using feedback to tell learners what was right or wrong instead of showing them.

Take the following example of a branching scenario to practice counseling someone on dietary choices. One mistake learners can make in the scenario is setting a goal that is too difficult. If learners recommend a goal of cutting out all added sugar and soda, you could simply tell them they're wrong and explain why it's a bad choice, like this:

“Sorry, that’s incorrect. If a goal is too difficult, it can reduce motivation. A smaller interim goal may have a better chance of success.”

In scenarios, it's better to avoid explicitly stating that a choice is right or wrong. That breaks the realism of the scenario and makes it an academic exercise rather than a practice simulation. Instead of just telling learners that it's a bad choice, you can show them the consequences of their decision. In this example, I used both the dialog of the person being counseled and his facial expression to show the response.

Frustrated college student saying, "No soda or added sugar at all? Wow, that sounds too hard. I don't think I could do that."

I selected a character from the eLearning Brothers library and picked five poses with a range of expressions from upset to happy. This is one place where it's critical to have photos showing more than the standard stock-photo happy expression. For each response in the branching scenario, I determined the motivation level on a five-point scale and matched the corresponding photo to the response.

1 person, 5 expressions

For many scenarios, the dialog and expression of the person would be enough to show whether the choice was right, wrong, or somewhere in between. Sometimes you need additional feedback, though. Because this scenario deals with an invisible factor (motivation), I created an additional consequence with a motivation meter. The level of motivation increases and decreases depending on the choices the learner makes. This is another way to show consequences within the context of the scenario without becoming so academic as to say, "Sorry, that's incorrect."
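For readers who build scenarios in code rather than an authoring tool, here's a minimal sketch (again in TypeScript) of how a motivation meter like this could work. The five-point scale and the idea of matching a pose to each level come from this example; the specific names (applyChoice, poseForLevel) and numeric values are hypothetical.

    // Minimal sketch: a motivation meter on a five-point scale, with the
    // character's pose matched to the current level. Names and values are
    // hypothetical, not from any specific tool.

    const POSES = ["upset", "frustrated", "neutral", "interested", "happy"]; // five expressions, worst to best

    let motivation = 3; // start mid-scale

    function applyChoice(effect: number): void {
      // Each choice nudges motivation up or down, clamped to the 1-5 range.
      motivation = Math.min(5, Math.max(1, motivation + effect));
    }

    function poseForLevel(level: number): string {
      // Match the character photo to the current motivation level.
      return POSES[level - 1];
    }

    // Example: recommending a goal that's too difficult drops motivation,
    // so the learner sees the frustrated pose and the meter dips.
    applyChoice(-1);
    console.log(motivation, poseForLevel(motivation)); // 2 "frustrated"

In an authoring tool, the same logic would typically live in variables and triggers rather than code, but the structure is the same: adjust a number, clamp it to the scale, and use it to pick the photo and the meter state.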

If your learners are novices, you may still need to provide coaching or instructional feedback about their choices. I prefer to use a coach for that instructional feedback to maintain some realism, and I always pair it with consequences that are shown to the learners.

How do you handle feedback in branching scenarios? Do you have a great example of how you showed learners consequences rather than simply telling them they were right or wrong?

Intrinsic and Instructional Feedback in Learning Scenarios

A few years ago, I was a judge for a competition on scenario-based learning. While there were a few terrific submissions, I thought many of the courses missed the whole point of scenario-based learning. They started out fine: they provided some sort of realistic context and asked learners to make a decision. Then, instead of showing them the consequences of their decision, they just provided feedback as if it were any other multiple-choice assessment. "Correct, that is the best decision." Blah. Boring. And ineffective.

In her book Scenario-based e-Learning: Evidence-Based Guidelines for Online Workforce Learning, Ruth Clark labels the two types of feedback “intrinsic” and “instructional.” Instructional feedback is what we see all the time in e-learning; it’s feedback that tells you what was right or wrong and possibly guides or coaches you about how to improve.

With intrinsic feedback, the learning environment responds to decisions and action choices in ways that mirror the real world. For example, if a learner responds rudely to a customer, he will see and hear an unhappy customer response. Intrinsic feedback gives the learner an opportunity to try, fail, and experience the results of errors in a safe environment.

Intrinsic feedback is one of the features of scenario-based learning that sets it apart from traditional e-learning. When you show learners the consequences of their actions, they can immediately see why it matters. The principle or process you're teaching isn't just abstract content anymore; it's something with real-world implications, and it matters if they get it wrong. It's more engaging to receive intrinsic feedback. Learners are also more likely to remember the content because they've already seen what could happen if they don't make the right choices.

Intrinsic feedback can take a number of forms. Customer reactions (verbal and nonverbal), patient health outcomes improving, sales figures dropping, a machine starting to work again, and other environmental responses can be intrinsic feedback. The example below contains three pieces of intrinsic feedback, all on the left side: a facial expression, a conversation response, and a motivation meter at the bottom.

Screenshot of a branching scenario with intrinsic and instructional feedback

In this example, learners are trying to convince someone to make healthier eating choices using motivational interviewing. Motivation level is an "invisible" factor, so I made it visible with a motivation indicator in the lower left corner. As learners make better choices and the patient feels more motivated to change, the motivation meter shows their progress.

Scenarios can also use instructional feedback. In the above example, a coach at the top provides instructional feedback and guidance on learners’ choices. Clark recommends using both intrinsic and instructional feedback in most situations.

One issue with instructional feedback is that it can break the realism of a scenario. Using a coach can help alleviate that problem, as can having learners ask for advice from people inside a scenario (a manager, an HR rep, another worker, etc.). Using a conversational tone for the instructional feedback also helps keep it within the scenario. Instructional feedback in a scenario often doesn’t need to explicitly say that a choice was correct or incorrect; that’s clear enough from the intrinsic feedback. Focus your instructional feedback on explaining why a choice was effective or how it could have been better.

Feedback can also be delayed rather than happening immediately. Clark recommends more immediate feedback for novices but delayed feedback for experts or more advanced learners. Depending on the audience, for some branching scenarios I do immediate intrinsic feedback for each choice learners make. When learners make bad choices that cause them to fail and they need to restart the scenario, they receive instructional feedback with guidance on how to improve on their next attempt. They might be able to make two or three bad choices in a row before they hit a dead end in the scenario, so the instructional feedback is delayed. It keeps the momentum of the scenario moving forward but still provides support to learners to help them improve.

If you’re building scenario-based learning, don’t leave out the intrinsic feedback! Your learners will thank you.

ID and e-Learning Links (2/2/14)

  • Gavin Henrick on Moodle Repositories, including tables comparing features

    tags: moodle

  • Example of a branching scenario activity with a coach and a meter showing progress on every screen, built in ZebraZapps

    tags: scenarios e-learning zebrazapps branching

  • Research on feedback’s effect on performance. Feedback is generally helpful but can be detrimental, especially for more complex tasks. Goal setting can help mitigate the risks of feedback interventions.

    tags: feedback research learning

  • Research on the effects of feedback interventions. Feedback is not always beneficial for learning; in some cases, it can actually depress performance.

    tags: feedback instructionaldesign learning research

    • The MCPL literature suggests that for an FI to directly improve learning, rather than motivate learning, it has to help the recipient to reject erroneous hypotheses. Whereas correcting errors is a feature of some types of FI messages, most types of FI messages do not contain such information and therefore should not improve learning—a claim consistent with CAI research.

      Moreover, even in learning situations where performance seems to benefit from FIs, learning through FIs may be inferior to learning through discovery (learning based on feedback from the task, rather than on feedback from an external agent). Task feedback may force the participant to learn task rules and recognize errors (e.g., Frese & Zapf, 1994), whereas FI may lead the participant to learn how to use the FI as a crutch, while shortcutting the need for task learning (cf. J. R. Anderson, 1987).

    • In the MCPL literature, several reviewers doubt whether FIs have any learning value (Balzer et al., 1989; Brehmer, 1980) and suggest alternatives to FI for increasing learning, such as providing the learner with more task information (Balzer et al., 1989). Another alternative to an FI is designing work or learning environments that encourage trial and error, thus maximizing learning from task feedback without a direct intervention (Frese & Zapf, 1994).
  • Female voice over and audio editing for e-learning. Demos are on the website. She has done dialog for more conversational courses in the past, although that demo isn’t on her public website.

    tags: voiceover audio

  • Calculator for pricing custom e-learning based on a number of factors (graphics and multimedia, interactivity, instructional design)

    tags: e-learning pricing

  • This isn’t really about SCORM, but a question on pricing e-learning courses for perpetual licenses rather than annual per user fees

    tags: e-learning pricing

    • Most perpetual license deals I’ve seen in the eLearning space are usually priced at 3.5x the annual user price plus another 10-15% of the contract value for course maintenance and support.
  • Example of why experts often can’t teach well (or write good courses without an ID) based on research of NICU nurses who knew how to recognize infections but were such experts that their knowledge had become automatic and intuitive for them.

    tags: sme training research

    • When you’re a domain expert in your field, it’s difficult to step back and remember what it was like to be a beginner. Once we have knowledge, it’s very hard to remember what life was like without it.

      Instead of placing the burden of training on a subject-matter expert, it’s often more effective to establish a collaboration between subject-matter experts and trainers who are experts in breaking down information, recognizing the critical elements, and putting it back together in a way that’s digestible for people who aren’t experts.

Posted from Diigo. The rest of my favorite links are here.