Tag: Will Thalheimer

Elearning Trends for 2018

At the end of last year, Bryan Jones from eLearningArt reached out to me for my predictions on the top 3 eLearning trends for 2018.

He then took the responses from me and 56 other experts and put together a summary video of the top trends here and an article of the top eLearning trends here.

Here are the top 3 trends that I picked, as well as some commentary:

Mobile (#5 on the overall list)

Mobile learning has been happening for over 10 years, and this trend will continue in 2018. Now that mobile learning has been around for a while, we’re learning how to do it more effectively than just putting long courses on a smaller screen. The impending demise of Flash will require significant effort over the next few years to convert and upgrade old Flash courses to HTML5, which also makes content accessible on more devices.

Microlearning (#1 on the overall list)

Microlearning will continue to be a buzzword for 2018, although I predict we’ll still be hammering out what we actually mean by the term. As we move to more mobile learning, shorter learning and performance support will be more prevalent.

Science-based learning (#8 on the overall list)

Maybe this is my wishful thinking rather than my prediction, but I do see an increasing trend toward science-based and evidence-based learning. While plenty of myths are still perpetuated by the unscrupulous and unaware, I see backlash against pseudoscience. We are fortunate in our field to have folks like Will Thalheimer, Patti Shank, and Julie Dirksen who are working to debunk myths and make research more accessible to practitioners.

 

ID and eLearning Links (1/21/18)

    • Will Thalheimer shares some new questions using the techniques in his Performance-Based Smile Sheet book, including a simplified version of his “world’s best smile sheet question.”

      tags:assessment training

      • Recently, in working with a company to improve their smile sheet, a first draft included the so-called World’s Best Smile Sheet Question. But they were thinking of piloting the new smile sheet for a course to teach basic electronics to facilities professionals. Given the topic and audience, I recommended a simpler version:

        How able will you be to put what you’ve learned into practice on the job?  Choose one.

        A. I am NOT AT ALL ready to use the skills taught.
        B. I need MORE GUIDANCE to be GOOD at using these skills.
        C. I need MORE EXPERIENCE to be GOOD at using these skills.
        D. I am FULLY COMPETENT in using these skills.
        E. I am CAPABLE at an EXPERT LEVEL in using these skills.

        This version nicely balances precision with word count.

    • I asked in Julie Dirksen’s Facebook group if there was any eye tracking research specific to elearning. I’ve read research related to general web reading and usability, but I wondered if there are any differences in attention when people are reading to deliberately and consciously learn. Brian McGowan helpfully pulled together this list of resources as a starting point for research.

      tags:e-learning research usability attention

    • Companies with more remote workers have more women in leadership roles because the focus is on productivity and results, not office politics or “face time.”

      tags:research telecommuting

      • The study’s authors speculate that the reason the numbers are so high is because women at remote or mostly remote companies are more likely to be fairly evaluated.

        “It’s because remote work requires companies to focus on the most important aspects of work—productivity, progress, results—rather than less important things like face time in the office, office politics, traditional notions of what leadership ‘looks like,’ popularity or likability, or hours spent at your desk,” they write.

Posted from Diigo. The rest of my favorite links are here.


ID and eLearning Links (10/15/17)

Posted from Diigo. The rest of my favorite links are here.


Immediate and Delayed Consequences in Branching Scenarios

In branching scenarios, we can use a combination of immediate and delayed consequences and feedback. Consequences are what happens as a result of decisions; feedback is what we tell learners after decisions.

Immediate & Delayed Consequences

Use Immediate Consequences Often

Immediate consequences are the intrinsic effects of decisions. A customer who responds angrily, software that doesn’t produce the desired result, or time lost on a project could all be immediate consequences. These consequences don’t directly tell the learner, “Sorry, that was incorrect.” Learners have to perceive and understand the cues in the scenario. They have to draw conclusions based on those cues.

If your learners will need to follow cues when they apply what they’re learning, it’s helpful to provide real-world consequences in your scenario. It’s beneficial to practice interpreting cues.

Immediate consequences that simulate real-world cues can also be more engaging than an omniscient narrator dictating what you did right or wrong. They keep learners in the mindset of the story without hitting them over the head with a reminder that they’re learning something.

Use Immediate Feedback with Novices

Immediate feedback is different from intrinsic consequences. This is the instructional feedback or coaching that directly tells learners why their decisions are right or wrong. While this can pull people out of the “flow” of a story, immediate feedback can be helpful in some situations.

First, novice learners who are still building mental models of a topic may benefit more from immediate feedback. Novices may not have the expertise to sort through real-world cues and draw accurate conclusions from them. Therefore, it may be more important to provide immediate feedback after each decision in a branching scenario if your audience is new to the topic.

In his research report “Providing Learners with Feedback,” Will Thalheimer explains the benefits of immediate feedback for novices.

“On the surface of it, it just doesn’t make sense that when a learner is piecing together arrays of building blocks into a fully-formed complex concept, they wouldn’t need some sort of feedback as they build up from prerequisite concepts. If the conceptual foundation they build for themselves is wrong, adding to that faulty foundation is problematic. Feedback provided before these prerequisite mental modelettes are built should keep learners from flailing around too much. For this reason, I will tentatively recommend immediate feedback as learners build understanding.”

Provide Instructional Feedback Before a Retry

I always use feedback before restarting a scenario.  If a learner has reached an unsatisfactory ending in a scenario, it’s beneficial to do a short debrief of their decisions and what went wrong. Especially for more experienced learners, some of that feedback may be delayed from when they made the decision. You can summarize the feedback for several previous decisions on the path that led to the final decision.

This feedback should happen before they are faced with the same scenario decisions again. Otherwise, they could make the same mistakes again (reinforcing those mistakes) or simply guess without gaining understanding.

Thalheimer’s research also supports this.

“When learners get an answer wrong or practice a skill inappropriately, we ought to give them feedback before they attempt to re-answer the question or re-attempt the skill. This doesn’t necessarily mean that we should give them immediate feedback, but it does mean that we don’t want to delay feedback until after they are faced with additional retrieval opportunities.”

Use Delayed Feedback with Experienced Learners

Thalheimer notes that delayed feedback may be more effective for retention (i.e., how much learners remember). That effect might be due to the spacing effect (that is, reviewing content multiple times, spaced out over time, is better for learning than cramming everything into a single event). The delay doesn’t have to be long; one study mentioned in Thalheimer’s report showed that delaying feedback by just 10 seconds improved outcomes.

Delayed feedback may also be more appropriate for experienced learners who are improving existing skills rather than novices building new skills. Experienced learners already have mental models in place, so they don’t have the same needs for immediate correction as novices. They can get the benefit of delayed feedback.

Use Delayed Feedback with Immediate Consequences

In branching scenarios, we can use a combination of immediate intrinsic consequences (e.g., an angry customer response) and delayed instructional feedback (e.g., you didn’t acknowledge the customer’s feelings). Feedback before a retry or restart could count as delayed if it includes feedback for multiple decisions. If you let learners make 2 or 3 wrong choices before a restart, the combined feedback will effectively be delayed.
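As a sketch, this combination can be modeled in code. The example below is a hypothetical illustration, not a feature of any authoring tool (the `Choice` and `ScenarioState` names are mine): each choice shows its intrinsic, in-story consequence immediately, while the instructional feedback for mistakes is banked and delivered as a combined debrief before the retry.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Choice:
    label: str                # the option the learner picks
    consequence: str          # immediate, in-story consequence
    feedback: str             # instructional feedback, held for the debrief
    next_node: Optional[str]  # id of the next decision point; None at an ending
    is_mistake: bool = False

@dataclass
class ScenarioState:
    deferred_feedback: list = field(default_factory=list)

    def choose(self, choice: Choice) -> str:
        """Show the intrinsic consequence now; bank coaching for mistakes."""
        if choice.is_mistake:
            self.deferred_feedback.append(choice.feedback)
        return choice.consequence

    def debrief(self) -> list:
        """Before a retry, surface the combined feedback for every misstep."""
        notes = self.deferred_feedback
        self.deferred_feedback = []
        return notes

# Usage: a learner makes two poor choices, then reaches an unsatisfactory ending.
state = ScenarioState()
interrupt = Choice("Interrupt the customer", "The customer raises her voice.",
                   "You didn't acknowledge the customer's feelings.", "node_2", True)
refund = Choice("Offer a refund immediately", "She hangs up, still frustrated.",
                "Jumping to a refund skipped diagnosing the real issue.", None, True)
print(state.choose(interrupt))  # immediate consequence only; no coaching yet
print(state.choose(refund))
for note in state.debrief():    # delayed feedback, delivered before the retry
    print("-", note)
```

The design mirrors the recommendation above: learners stay inside the story while they play, and the coaching arrives as delayed, combined feedback at the natural break before a restart.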

Use Delayed Consequences When Realistic

In real life, we don’t always know immediately that we’ve made a mistake. Sometimes the consequence isn’t obvious right away. Sometimes a decision seems like a gain in the short run but causes problems in the long run. If that’s the kind of situation you’re training for, letting people continue down the wrong path for a little while makes sense. Neither limited branching nor immediate failure allows you to show delayed consequences.

Providing these delayed consequences has the advantage of better learning from delayed feedback, plus it creates a more realistic and engaging story. Delayed consequences shouldn’t be forced into a scenario where they aren’t realistic, but they are a good way to show the long-term effects of actions.

Think about how delayed consequences could be shown in these examples:

  • A bartender gives away many free drinks. The immediate consequence is that the customers are happy, but the delayed consequence is a loss of profit for the bar.
  • A sales associate sells a customer a product that is less expensive but meets the customer’s needs. The immediate consequence is that the sales associate makes less commission that day, but the delayed consequence is that the customer is loyal and refers 2 friends. In this case, the total commission earned is higher even though the immediate sale was lower.
  • A doctor could skip a screening question with a patient. The immediate consequence is finding something that looks like the problem, but the delayed consequence is the actual underlying problem remaining.
  • A manager asks an ID to create training. The ID gets started building it right away, trusting that the team requesting the training knows their needs. The immediate consequence is a happy manager, but the delayed consequence is ineffective training that doesn’t actually solve the business problem.
  • If you’re teaching ethics, a small ethical lapse early in the scenario might not seem like a big deal. The immediate consequence might be meeting a deadline or increased recognition. In the long run, that small lapse leads to a continued need to cover up your actions. The Lab: Avoiding Research Misconduct is an example with delayed consequences in some paths.

Looking for More?

Read more about branching scenarios:

 

Book Review: Performance-Focused Smile Sheets

On a scale from 1 to 5, how useful are your current level 1 evaluations or “smile sheets”?

  1. Completely worthless
  2. Mostly worthless
  3. Not too bad
  4. Mostly useful
  5. Extremely useful

Chances are, your training evaluations aren’t very helpful. How much useful information do you really get from those forms? If you know that one of your courses is averaging a 3.5 and another course is averaging a 4.2, what does that really mean? Do these evaluations tell you anything about employee performance?

Personally, I’ve always been a little disappointed in my training evaluations, but I never really knew how to make them better. In the past, I’ve relied on standard questions used in various organizations that I’ve seen over my career, with mixed results. Will Thalheimer’s book Performance-Focused Smile Sheets changes that by giving guidelines and example questions for effective evaluations.


Raise your hand if most of your evaluation questions use Likert scales. I’ve always used them too, but Thalheimer shows in the book how we can do much better. After all, how much difference is there between “mostly agree” and “strongly agree” or other vaguely worded scales? What’s an acceptable answer: is “mostly agree” enough, or is only “strongly agree” a signal of a quality course?

The book starts with several chapters of background and research, including how evaluation results should correspond to the “four pillars of training effectiveness.” Every question in your evaluation should lead to some action you can take if the results aren’t acceptable. After all, what’s the point of including questions if the results don’t tell you something useful?

The chapter of sample questions with explanations of why they work and how you might adapt them is highly useful. I will definitely pull out these examples again the next time I write an evaluation. There’s even a chapter on how to present results to stakeholders.

One of the most interesting chapters is the quiz, where you’re encouraged to write in the book. Can you identify what makes particular questions effective or ineffective? I’d love to see him turn this book into an interactive online course using the questions in that quiz.

I highly recommend this book if you’re interested in creating evaluations that truly work for corporate training and elearning. If you’re in higher education, the book may still be useful, but you’d have to adapt the questions since the focus is really on performance change rather than long-term education.

The book is available on Amazon and on SmileSheets.com. If you need a discount for buying multiple copies of the book, use the second link.

 
