Tag: Will Thalheimer

ID and eLearning Links (10/15/17)

Posted from Diigo. The rest of my favorite links are here.

Instructional Design and E-Learning Links

Immediate and Delayed Consequences in Branching Scenarios

In branching scenarios, we can use a combination of immediate and delayed consequences and feedback. Consequences are what happens as a result of decisions; feedback is what we tell learners after decisions.

Immediate & Delayed Consequences

Use Immediate Consequences Often

Immediate consequences are the intrinsic effects of decisions. A customer who responds angrily, software that doesn’t produce the desired result, or time lost on a project could all be immediate consequences. These consequences don’t directly tell the learner, “Sorry, that was incorrect.” Learners have to perceive and understand the cues in the scenario. They have to draw conclusions based on those cues.

If your learners will need to follow cues when they apply what they’re learning, it’s helpful to provide real-world consequences in your scenario so they can practice interpreting those cues.

Immediate consequences that simulate real-world cues can also be more engaging than an omniscient narrator dictating what you did right or wrong. They keep learners in the mindset of the story without hitting them over the head with a reminder that they’re learning something.

Use Immediate Feedback with Novices

Immediate feedback is different from intrinsic consequences. This is the instructional feedback or coaching that directly tells learners why their decisions are right or wrong. While this can pull people out of the “flow” of a story, immediate feedback can be helpful in some situations.

First, novice learners who are still building mental models of a topic may benefit more from immediate feedback. Novices may not have the expertise to sort through real-world cues and draw accurate conclusions from them. Therefore, it may be more important to provide immediate feedback after each decision in a branching scenario if your audience is new to the topic.

In his research report “Providing Learners with Feedback,” Will Thalheimer explains the benefits of immediate feedback for novices.

“On the surface of it, it just doesn’t make sense that when a learner is piecing together arrays of building blocks into a fully-formed complex concept, they wouldn’t need some sort of feedback as they build up from prerequisite concepts. If the conceptual foundation they build for themselves is wrong, adding to that faulty foundation is problematic. Feedback provided before these prerequisite mental modelettes are built should keep learners from flailing around too much. For this reason, I will tentatively recommend immediate feedback as learners build understanding.”

Provide Instructional Feedback Before a Retry

I always provide feedback before restarting a scenario. If a learner has reached an unsatisfactory ending, it’s beneficial to do a short debrief of their decisions and what went wrong. Especially for more experienced learners, some of that feedback may come well after the decision itself was made. You can summarize the feedback for several previous decisions on the path that led to the final one.

This feedback should happen before they are faced with the same scenario decisions again. Otherwise, they could make the same mistakes again (reinforcing those mistakes) or simply guess without gaining understanding.

Thalheimer’s research also supports this.

“When learners get an answer wrong or practice a skill inappropriately, we ought to give them feedback before they attempt to re-answer the question or re-attempt the skill. This doesn’t necessarily mean that we should give them immediate feedback, but it does mean that we don’t want to delay feedback until after they are faced with additional retrieval opportunities.”

Use Delayed Feedback with Experienced Learners

Thalheimer notes that delayed feedback may be more effective for retention (i.e., how much learners remember). That effect might be due to the spacing effect (that is, reviewing content multiple times, spaced out over time, is better for learning than cramming everything into a single event). The delay doesn’t have to be long; one study mentioned in Thalheimer’s report showed that delaying feedback by just 10 seconds improved outcomes.

Delayed feedback may also be more appropriate for experienced learners who are improving existing skills rather than novices building new skills. Experienced learners already have mental models in place, so they don’t have the same needs for immediate correction as novices. They can get the benefit of delayed feedback.

Use Delayed Feedback with Immediate Consequences

In branching scenarios, we can use a combination of immediate intrinsic consequences (e.g., an angry customer response) and delayed instructional feedback (e.g., “You didn’t acknowledge the customer’s feelings”). Feedback before a retry or restart counts as delayed if it includes feedback for multiple decisions. If you let learners make 2 or 3 wrong choices before a restart, the combined feedback is effectively delayed.
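
To make this concrete, here is a minimal sketch of one way to structure a decision so the intrinsic consequence appears immediately, in-story, while the instructional feedback is collected for a debrief before the retry. This is illustrative Python, not tied to any authoring tool; every name here (Choice, ScenarioState, debrief_before_retry) and the novice toggle from the earlier section are my own assumptions, not a prescribed implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Choice:
        label: str         # what the learner selects
        consequence: str   # immediate, in-story result (intrinsic)
        feedback: str      # instructional explanation of the decision
        correct: bool

    @dataclass
    class ScenarioState:
        pending_feedback: list = field(default_factory=list)

    def make_decision(state: ScenarioState, choice: Choice, novice: bool = False) -> None:
        # Always show the intrinsic consequence right away, staying inside the story.
        print(choice.consequence)
        if choice.correct:
            return
        if novice:
            # Novices still building mental models get the instructional
            # feedback immediately after the decision.
            print(choice.feedback)
        else:
            # Experienced learners get it later, at the pre-retry debrief.
            state.pending_feedback.append(choice.feedback)

    def debrief_before_retry(state: ScenarioState) -> None:
        # At an unsatisfactory ending, summarize the feedback for every wrong
        # decision on the path *before* the learner retries those decisions.
        print("Let's review what happened:")
        for item in state.pending_feedback:
            print("- " + item)
        state.pending_feedback.clear()  # fresh start for the retry

    # One decision, using the angry-customer situation from above:
    state = ScenarioState()
    make_decision(state, Choice(
        label="Explain the refund policy immediately",
        consequence='The customer snaps: "You aren\'t even listening to me!"',
        feedback="You didn't acknowledge the customer's feelings before explaining the policy.",
        correct=False,
    ))
    # ...two or three more decisions later, at a bad ending...
    debrief_before_retry(state)

The key design choice is that debrief_before_retry runs before the learner faces the same decisions again, so wrong choices get explained rather than reinforced by blind re-guessing.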

Use Delayed Consequences When Realistic

In real life, we don’t always know right away that our decisions were wrong. Sometimes the consequence isn’t obvious immediately. Sometimes a decision seems like a gain in the short run but causes problems in the long run. If that’s the kind of situation you’re training for, letting people continue on the wrong path for a little while makes sense. Neither limited branching nor immediate failure allows you to show delayed consequences.

Providing these delayed consequences gives you the learning benefits of delayed feedback and creates a more realistic, engaging story. Delayed consequences shouldn’t be forced into a scenario where they’re not realistic, but they are a good way to show the long-term effects of actions.

Think about how delayed consequences could be shown in these examples:

  • A bartender gives away many free drinks. The immediate consequence is that the customers are happy, but the delayed consequence is a loss of profit for the bar.
  • A sales associate sells a customer a product that is less expensive but meets the customer’s needs. The immediate consequence is that the sales associate makes less commission that day, but the delayed consequence is that the customer is loyal and refers 2 friends. In this case, the total commission earned is higher even though the immediate sale was lower (a rough worked example follows this list).
  • A doctor could skip a screening question with a patient. The immediate consequence is finding something that looks like the problem, but the delayed consequence is that the actual underlying problem remains.
  • A manager asks an ID to create training. The ID gets started building it right away, trusting that the team requesting the training knows their needs. The immediate consequence is a happy manager, but the delayed consequence is ineffective training that doesn’t actually solve the business problem.
  • If you’re teaching ethics, a small ethical lapse early in the scenario might not seem like a big deal. The immediate consequence might be meeting a deadline or increased recognition. In the long run, that small lapse leads to a continued need to cover up your actions. The Lab: Avoiding Research Misconduct is an example with delayed consequences in some paths.
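
To see why the sales example above can work out this way, here is a rough back-of-the-envelope calculation. Every number in it (the prices, the 5% commission rate, the two referrals) is invented purely for illustration:

    # All numbers are hypothetical, purely for illustration.
    commission_rate = 0.05

    # Path A: push a more expensive product; the customer doesn't come back.
    path_a = 800 * commission_rate                  # $40 commission, once

    # Path B: sell the cheaper product that actually meets the need; the
    # loyal customer later refers 2 friends who make similar purchases.
    path_b = (500 + 2 * 500) * commission_rate      # $75 commission in total

    print(path_a, path_b)  # 40.0 75.0

Even before counting the referred customers’ own future loyalty, the lower immediate sale comes out ahead.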


Book Review: Performance-Focused Smile Sheets

On a scale from 1 to 5, how useful are your current level 1 evaluations or “smile sheets”?

  1. Completely worthless
  2. Mostly worthless
  3. Not too bad
  4. Mostly useful
  5. Extremely useful

Chances are, your training evaluations aren’t very helpful. How much useful information do you really get from those forms? If you know that one of your courses is averaging a 3.5 and another course is averaging a 4.2, what does that really mean? Do these evaluations tell you anything about employee performance?

Personally, I’ve always been a little disappointed in my training evaluations, but I never really knew how to make them better. In the past, I’ve relied on standard questions used in various organizations that I’ve seen over my career, with mixed results. Will Thalheimer’s book Performance-Focused Smile Sheets changes that by giving guidelines and example questions for effective evaluations.


Raise your hand if most of your evaluation questions use Likert scales. I’ve always used them too, but Thalheimer shows in the book how we can do much better. After all, how much difference is there really between “mostly agree” and “strongly agree,” or between other vaguely worded scale points? And what’s an acceptable answer: is “mostly agree” enough, or is only “strongly agree” a signal of a quality course?

The book starts with several chapters of background and research, including how evaluation results should correspond to the “four pillars of training effectiveness.” Every question in your evaluation should lead to some action you can take if the results aren’t acceptable. After all, what’s the point of including questions if the results don’t tell you something useful?

The chapter of sample questions with explanations of why they work and how you might adapt them is highly useful. I will definitely pull out these examples again the next time I write an evaluation. There’s even a chapter on how to present results to stakeholders.

One of the most interesting chapters is the quiz, where you’re encouraged to write in the book. Can you identify what makes particular questions effective or ineffective? I’d love to see him turn this book into an interactive online course using the questions in that quiz.

I highly recommend this book if you’re interested in creating evaluations that truly work for corporate training and elearning. If you’re in higher education, the book may still be useful, but you’d have to adapt the questions since the focus is really on performance change rather than long-term education.

The book is available on Amazon and on SmileSheets.com. If you need a discount for buying multiple copies of the book, use the second link.

Protagonists Should Be Like Your Learners

When you write a story for learning, you need a few essential elements such as a protagonist (the main character), the protagonist’s goal, and the challenges the protagonist faces. The protagonist should be someone your learners identify with. In workplace learning, that means the character has the same or a similar job as the learners. The learners should recognize the problems the character is dealing with and ideally share the protagonist’s goal.

Learners should see a bit of themselves in the characters in your scenarios so they can picture themselves making the same kinds of decisions. When learners identify with your protagonist, they care about what happens to the character. They may be emotionally invested in seeing the protagonist succeed, especially in complex scenarios.

Joan is designing her first branching scenario

Example Protagonist Selection

Let’s review an example. Joan is an instructional designer working on a branching scenario. She has designed and developed many courses in the past, but this is the first time she has used a nonlinear format. She’s feeling a little nervous about getting it right. She’s creating training for front-line managers on how to handle requests for reasonable accommodations for disabilities. Which character should Joan use as the protagonist for her scenario?

  1. Mark, a technical writer with mobility issues who requires assistive technology
  2. Luisa, the VP of HR and an expert in accessibility issues
  3. Cindy, a manager with a team of 8 direct reports

Feedback on Your Choice

Mark would be a good choice for protagonist if this course were for employees to learn how to request reasonable accommodation. Someone like Luisa might be your SME for a course, but she has much more expertise than Joan’s learners. Cindy is a manager, which puts her in the same role as the learners. Joan will be able to put Cindy into situations similar to those managers might encounter. That will allow the learners to practice making the kinds of decisions they need to make in their jobs.

Other Characteristics

Joan might also be able to give Cindy other characteristics that make her similar to her learners. As part of her needs analysis, Joan interviewed two managers who had been through the process themselves. Both managers expressed reluctance to consult HR with questions about accommodations, even in situations where that was the best decision. Joan decides to create an option in the branching scenario where Cindy tries to handle the problem herself without HR but causes a costly misstep. Joan builds into the scenario the possibility of checking with HR before each decision and rewards that action with points in the final score.

Example compliance training with options to look up information

(This example scenario is also used in Motivating Learners to Look Up Compliance Policies Themselves.)

Stepping Back

At the risk of getting a little too meta, think back to Joan, the instructional designer. When you read that she was nervous about creating her first branching scenario, did that strike a chord with you? If you’re thinking about how to create your first scenario, that probably resonates. Even if you have created many branching scenarios in your career, you might still remember what it felt like to be unsure of yourself. If you’re an ID, you can probably envision yourself in this scenario. That gives you a connection to the character and helps engage you. You as the reader want to pick the right protagonist in the example so Joan’s course will be successful.

Characters in Cultural Context

Keep the culture of the workplace in mind as well. Your protagonist and other characters should reflect the organizational culture. In his report Using Culturally, Linguistically, and Situationally Relevant Scenarios, Dr. Will Thalheimer recommends:

In simulating workplace cues, consider the range of cues that your learners will pay attention to in their work, including background objects, people and their facial expressions, language cues, and cultural referents…

Utilize culturally-appropriate objects, backgrounds, actors, and narrators in creating your scenarios. Consider not just ethnicity, but the many aspects of culture, including such things as socio-economics, education, international experience, immersion in popular culture, age, etc.

Thalheimer’s recommendations from his research are also summarized on his blog.

How have you made your protagonists similar to your learners? Have you ever seen an attempt at scenario-based learning that was unsuccessful because the learners couldn’t identify with the main character?

Image credits

Debunker Club Works to Dispel the Corrupted Cone of Learning

A new group called The Debunker Club is working to dispel myths and misinformation in the learning field. From their website:

The Debunker Club is an experiment in professional responsibility. Anyone who’s interested may join as long as they agree to the following:

  1. I would like to see less misinformation in the learning field.
  2. I will invest some of my time in learning and seeking the truth, from sources like peer-reviewed scientific research or translations of that research.
  3. I will politely, but actively, provide feedback to those who transmit misinformation.
  4. At least once a year, I will seek out providers of misinformation and provide them with polite feedback, asking them to stop transmitting their misinformation.
  5. I will be open to counter feedback, listening to understand opposing viewpoints. I will provide counter-evidence and argument when warranted.

This year, coinciding with April Fool’s Day 2015, the Debunker Club is running an experiment. We’re making a concerted effort to contact people who have shared the Cone of Experience (also known as the Cone of Learning or the Pyramid of Learning).

Many iterations of this cone exist. A Google image search for “cone of learning” returns dozens of results, most of which are false. If you’ve seen a chart claiming, “People remember 10% of what they read, 20% of what they hear, 30% of what they see,” etc., you’ve seen a variation on this theme.

Image search results for Cone of Learning

The original cone was developed by Edgar Dale and didn’t include any numbers. The later versions are the “corrupted cone” with fictitious statistics added. Will Thalheimer’s post from 2009 debunking these claims is where I learned it was incorrect. Common sense might give you a hint that these numbers aren’t really based in research, though. Think about it: how many times have you seen research where all the categories broke into even 10% segments?

As part of the Debunker Club’s efforts, I discovered a post on Dane’s Education Blog called A Hierarchy of Learning. Although this post cites a great debunking article (Tales of the Undead…Learning Theories: The Learning Pyramid), the blog author only says that he “appreciate[s] what it conveys.”

I left the following comment on his Learning Pyramid post.

Thanks for collecting so many resources on your blog. I can see that you’ve worked really hard to share many links and ideas with your readers.

However, the information above, though it may appear to have scientific support, has been exhaustively researched and found to have no basis in science. In fact, the “Tales of the Undead” link you cite debunks it.

An article from the scientific journal Educational Technology shows no research backing for the information. (Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Mythical Retention Chart and the Corruption of Dale’s Cone of Experience. Educational Technology, Nov/Dec 2014, 54(6), 6-16.)

The information presented is likely to produce more harm than good, promoting poor learning designs and hurting learners.

While we might abstract some beneficial notions from the percentages portrayed in the misleading information — namely that encouraging realistic practice has benefits — there are numerous faulty concepts within the bogus percentages that can do real harm. For example, by having people think that there are benefits to seeing over hearing, or hearing over reading, we are sending completely wrong messages about how learning works.

Most importantly, recent advances in learning science have really come together over the last two decades. The misleading information was first reported in 1914, with no research backing. It’s better to follow more recent findings than information that has no scientific basis. See, for example, the book Make it Stick by Brown, Roediger, and McDaniel. Julie Dirksen’s Design for How People Learn is another great selection.

I’m part of a larger community of folks called the Debunker Club who are attempting to encourage the use of proven, scientifically-based learning factors in the learning field.

I’m going to be posting about this misleading information on my blog. I hope you’ll comment and respond to my post if you wish. I (and the debunker community in general) want to learn how other people feel about the issues and ideas surrounding the original information and our approach to debunking myths and sharing evidence.

If you’re interested in dispelling misinformation and improving the learning field, please join the Debunker Club and participate in the conversation.