
Feedback in Branching Scenarios: What Works for Novices, Experts, and Everyone

When we provide feedback in branching scenarios, we have several questions to consider.

  • Should we provide consequences (intrinsic feedback) or coaching (instructional feedback)?
  • Should we provide immediate feedback or delayed feedback?
  • What works for novices versus experts?

Intrinsic and Instructional Feedback

In Scenario-based e-Learning: Evidence-Based Guidelines for Online Workforce Learning, Ruth Clark recommends combining intrinsic and instructional feedback.

Intrinsic feedback is the consequence of an action. It’s what happens because of the learner’s decisions. If you have a scenario where an employee falls off a ladder, a customer agrees to buy a more expensive product, or a patient recovers from a medical emergency, that’s intrinsic feedback. You show the learner what happens.

Instructional feedback is coaching that tells the learner about their choice rather than showing them. In a branching scenario, instructional feedback could come from a coach or character who guides learners. Instructional feedback doesn’t necessarily mean telling people directly whether their choice was correct or incorrect; learners should be able to figure that out from the intrinsic feedback. Instead, instructional feedback can focus on addressing misunderstandings or explaining why a choice had a certain result.

Novices may need more instructional feedback than experts. Experts are less likely to have problems with cognitive load from sorting through multiple pieces of information in a scenario. Experts are better at diagnosing their own problems based on contextual information like intrinsic feedback. Novices, on the other hand, may need more direct coaching to make sense of the intrinsic feedback, especially when they fail a scenario.

Immediate and Delayed Feedback

When we build branching scenarios, immediate consequences provide realism and keep learners engaged. Every time learners make a decision, something happens: the customer responds, the equipment breaks, or sales go up.

Note that “immediate” here refers to when the learner receives the feedback, not how quickly the results would happen in real life. If a learner chooses to skip recommended equipment maintenance to save money, you could jump ahead three months to show that equipment breaking down and costing more money in the long run. As long as you show the consequence right away, it’s immediate feedback: the learner gets information about their choice as soon as they make it.

Delayed consequences happen in branching scenarios when you show one consequence immediately, but a different consequence appears later.

For example, let’s take a scenario where a manager asks an ID to create training. The learner chooses to have the ID start building it right away, trusting that the team requesting the training knows their needs without further analysis.

  • The immediate consequence is that the ID’s manager is happy.
  • The delayed consequence is that the ID creates ineffective training that doesn’t actually solve the business problem.
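
If you build your scenarios in a tool that supports variables (or directly in code), one way to wire up a delayed consequence is to record the earlier choice and check it again in a later scene. Here’s a minimal, hypothetical sketch in Python; the scene text, names, and structure are my own illustration, not something from Clark’s book:

```python
# Minimal sketch: an early choice sets a flag that triggers a delayed
# consequence in a later scene of the branching scenario.

state = {"skipped_analysis": False}

def choose_start_building_now():
    """Learner decision: have the ID start building without further analysis."""
    state["skipped_analysis"] = True
    # Immediate consequence, shown right away:
    return "Your manager is pleased that development is already underway."

def three_months_later():
    """A later scene where the delayed consequence plays out."""
    if state["skipped_analysis"]:
        return ("The course launches, but support tickets keep coming in. "
                "The training never addressed the real business problem.")
    return "The course launches, and support tickets drop noticeably."

print(choose_start_building_now())
print(three_months_later())
```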

You can also use delayed feedback, or coaching delivered to the learner later. In his report on Providing Learners with Feedback, Will Thalheimer suggests that feedback should be provided before learners try again. While that research was more related to retaking tests, I think that’s a good guideline for scenario-based learning. If learners fail a scenario and are asked to try again, give them some feedback to help them learn from their mistakes and make better choices next time.

Novices may benefit from more immediate feedback and coaching, while experts may be fine just receiving coaching at the end of a scenario.

Recommendations for Feedback

Here are my overall recommendations for feedback in scenario-based learning. These are based on a combination of research reviews from Clark and Thalheimer, along with recommendations from Cathy Moore, Michael Allen, and others, plus my own experience.


For Everyone

  • Provide frequent, immediate consequences that show learners what happens as a result of their decisions.
  • Provide coaching before learners retry a scenario.
  • Use delayed consequences in scenarios where they are realistic, although note that novices may need more coaching to help them understand delayed consequences.

For Novices

  • Provide immediate coaching for novices, especially to correct misconceptions or incorrect strategy selection.

For Experts

  • Use more delayed coaching with expert learners.

Don’t Assume the Recommendations Are Perfect

None of these recommendations are correct 100% of the time for every situation or every group of learners. I’m fairly confident recommending frequent immediate consequences and coaching before a retry, but you may find exceptions even to those recommendations. The research on feedback is sometimes contradictory, so there is little firm guidance.

To quote Will Thalheimer, describing conflicting research results: “First, it tells us that we should be skeptical of absolutism. In particular, it would be perilous for us to say, ‘Immediate feedback is always better,’ or, ‘Delayed feedback is always better.’”

Let’s use the research to guide our decisions about feedback, but let’s also acknowledge that the research has limitations. Sometimes we have to use our best judgment about how to support our learners.

What I Learned at LSCon

I had a great experience at the eLearning Guild’s Learning Solutions Conference last week. The days were long, but the time was really valuable. My session on Avoiding Voice Over Script Pitfalls went very well, with an active, engaged audience. We even had a voice over artist and an editor attending, which was perfect for the session. I’ve had some requests to give a virtual version of my session, so stay tuned for that.

It was so much fun to meet people in person who I’d only met online. I’ve built so many relationships with people online, but it’s great to see them live and connect in a different way.

I took about 30 pages of notes over the 3 days. While everything is still fresh in my mind, I want to record some highlights: one thing I can use from each session I attended. These aren’t necessarily the most important points from the speakers; in fact, some of them came from tangents. I’m focusing on what I think I can apply in my own work.

Information on all sessions can be found on the Learning Solutions Conference website.

Tim Gunn: A Natty Approach to Learning and Education

“There’s nothing more powerful than the word no.” Gunn talked about this in terms of advocating for designers’ intellectual property rights, but I think it applies to working with clients and SMEs as well.

Connie Malamed: Crash Course in Information Graphics for Learning

I loved the ideas for data visualization from this presentation. I don’t do infographics often, but I do need to present data and charts in courses (including one of my current projects). My big takeaway is that I need to do more sketching for charts. I’ve started to do more pencil and paper sketching for course layouts thanks to Connie’s last book, but visual brainstorming for charts would be helpful too.

Mark Sheppard: Building a Learning and Social-Collaborative Ecosystem in Slack

One note is that Learning Locker is working with xAPI integrations that can talk to Slack and pull data. Even without xAPI, you can get other data from Slack, like how many emojis were used to answer a poll.
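
For context, xAPI data travels as JSON “statements” (actor, verb, object) sent to a learning record store (LRS) such as Learning Locker. Here’s a rough, hypothetical sketch of what recording a Slack poll answer as an xAPI statement might look like; the endpoint URL, credentials, verb, and IDs are placeholders I made up, not details from the session:

```python
import requests  # assumes the requests library is installed

# Hypothetical sketch: record a Slack poll answer as an xAPI statement in an LRS.
# The URL, key/secret, and activity IDs below are placeholders.
LRS_ENDPOINT = "https://lrs.example.com/data/xAPI/statements"
LRS_KEY, LRS_SECRET = "key", "secret"

statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/responded",
             "display": {"en-US": "responded"}},
    "object": {"id": "https://example.com/slack/polls/weekly-check-in",
               "definition": {"name": {"en-US": "Weekly check-in poll in Slack"}}},
    "result": {"response": ":thumbsup:"},  # e.g., the emoji used to answer the poll
}

response = requests.post(
    LRS_ENDPOINT,
    json=statement,
    auth=(LRS_KEY, LRS_SECRET),
    headers={"X-Experience-API-Version": "1.0.3"},
    timeout=10,
)
response.raise_for_status()
```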

Julie Dirksen: Diagnosing Behavior Change Problems

How many times has a client or SME given you a vague objective like, “Improve customer service”? That’s a nice business goal, but what does that mean for measurable performance? What behavior do you want to change? Julie shared her “photo test” for identifying behaviors. What would that behavior look like if you took a photo or video of it? Asking that question can help get to an observable behavior you can measure.

Karen Kostrinsky: Storytelling for Learning Impact

Think about the titles for your courses. What’s the most important takeaway? How can you put that takeaway in the title?

This session also had some discussion around the difference between scenarios and stories. Some people raised objections to using stories. I’m planning some future blog posts around those objections and questions.

Glen Keane: Harnessing Creativity in a Time of Technological Change

My favorite quote (I’ve already used it with a client during a video call): “Technology is like a 3-year-old at a formal dinner. Just when you want it to be on its best behavior, it starts acting up.” On a more serious note, Keane talked about how creativity means he can see it in his head, but he has to figure out how to get you to see it too. That’s a challenge we face creating elearning. We can see it in our heads (or the SMEs can see it in their heads), but we have to get it into a format learners can use.

Jane Bozarth: New Ideas for Using Social Tools for Learning

Jane shared lots of inspiration in this session (who knew that the TSA had a great Instagram account?). What I’m going to use first is a Pinterest board for sharing book lists. I started a draft version, but I want to switch the order (I forgot to load them backwards) and move this to a professional account rather than my personal one.

Jennifer Hofmann: Mindsets, Toolsets, and Skillsets for Modern Blended Learning

One quote stood out: “If you can test it online, you can teach it online.” When you think about blended learning, think about goals and objectives first, then assessment. Decide on the instructional strategy, technique, and technology after you figure out the assessment. Maybe some parts of the skill can’t be taught and assessed online, but think about the parts that can be.

Will Thalheimer: Neuroscience and Learning: What the Research Really Says

The big takeaway is that we should be skeptical of claims that we can use neuroscience to improve learning. The reality is that we don’t know enough about neuroscience to really improve learning design yet. Sometimes what people claim is neuroscience (which means fMRI data) is actually earlier cognitive psychology research with an incorrect label.

Panel: What’s Wrong with Evaluation?

This was with Will Thalheimer, Julie Dirksen, Megan Torrance, and Steve Forman, with JD Dillon moderating. Can’t you tell from just the list of names that this was a good discussion?

Julie Dirksen made the point that we as instructional designers don’t get enough feedback on our own work. We don’t really know whether what we’re doing is working or not. It takes 10,000 hours (give or take) to become an expert, but that only works if you get the right kind of feedback to continuously improve your practice.

On a related note, Megan Torrance asked, “Why don’t we do A/B testing on training?” I saw an example of that at the DemoFest, but I admit I’ve never done it myself. Maybe there’s a way to set that up for a future project so I can test what method really works (and get feedback for my own practice in the process).
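
To picture what that might look like: one simple approach is to randomly assign learners to two versions of a course and compare assessment pass rates afterward. Here’s a hypothetical sketch with made-up numbers (a standard two-proportion z-test); nothing here comes from the panel or from DemoFest:

```python
import math

# Hypothetical A/B test on training: compare pass rates for two course versions.
passed_a, total_a = 62, 100   # version A (made-up numbers)
passed_b, total_b = 74, 100   # version B (made-up numbers)

p_a, p_b = passed_a / total_a, passed_b / total_b
p_pool = (passed_a + passed_b) / (total_a + total_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
z = (p_b - p_a) / se
# Two-sided p-value from the normal approximation
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"Pass rate A: {p_a:.0%}, Pass rate B: {p_b:.0%}")
print(f"z = {z:.2f}, p = {p_value:.3f}")
```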

Jennifer Nilsson: Best Practices Training Should Steal from Software Development

We talk a lot about stealing agile methods from software development, but Jennifer’s presentation was about other proven practices. For example, software developers add comments to their code to explain what something does and why it was done a certain way. We can’t always add comments to our development tools the way you can in true coding, but we can add notes in an off screen text box. That’s an easy solution that will save a lot of time if I have to go back to a complicated interaction a year later.

Diane Elkins: Responsive Design: A Comparison of Popular Authoring Tools

The first thing I’m going to change as a result of this session is what questions I ask clients after they say they want a mobile solution. I haven’t been asking enough follow-up questions to understand what clients really mean by “responsive.” Do they mean tablets only? Are they OK with landscape only for phones? Is a scalable solution enough, or do they really want it fully responsive (adaptive)?

Julia Galef: Embracing a Mindset of Continuous Learning

We all use motivated reasoning sometimes and ignore evidence that doesn’t support the outcome we want. One way to check if you’re vulnerable to self-deception on a specific topic is the “button test.” Imagine you could press a button and find out the absolute, complete truth about something. If you find yourself hesitating to push that button, you might be vulnerable to motivated reasoning on that topic. If you know that, you can be aware of your cognitive biases and be more careful.

Photos

I took photos during the sessions and of the lovely sketchnotes taken for many sessions (including sessions I didn’t attend). Email readers, you may need to click through to the post to see the gallery of images.

Benefits of Scenario-Based Learning

Why are scenarios effective for learning? They provide realistic context and emotional engagement. They can increase motivation and accelerate expertise. Here’s a selection of quotes explaining the benefits.


Accelerating Expertise with Scenario-Based e-Learning – The Watercooler Newsletter: Ruth Clark on how scenario-based elearning accelerates expertise and when to use it

What is Scenario-Based e-Learning?

  1. The learner assumes the role of an actor responding to a job realistic situation.
  2. The learning environment is preplanned.
  3. Learning is inductive rather than instructive.
  4. The instruction is guided.
  5. Scenario lessons incorporate instructional resources.
  6. The goal is to accelerate workplace expertise.

As you consider incorporating scenario-based e-Learning into your instructional mix, consider whether the acceleration of expertise will give you a return on investment.  For example, interviews with subject matter experts indicated that automotive technicians must complete about 100 work orders to reach a reasonable competency level in any given troubleshooting domain.  Comparing delivery alternatives, OJT would require around 200+ hours, instructor-led training would require around 100 hours, and scenario-based e-Learning simulations require approximately 33–66 hours.

Finally, many learners find scenario-based e-Learning more motivating than traditional instructional formats.  Solving a work-related problem makes the instruction immediately relevant.

The Benefits of Scenario Based Training: Scenario-based training better reflects real-life decision making

There is no linear path through the situations they are subjected to. The situations are complex. Learners often fail, and they learn by reflection, becoming much better at the judgements they make next time, even though next time the environment and the scenarios presented are different.

After completing a few exercises, they build their own view of the patterns that are evident and are able to move into a new scenario with confidence, even if the environment and scenario are radically different.

Learning on reflection before plunging into the next scenario helps to build the patterns in the participants’ minds that are the evidence that they have learnt.

Quizzes based on scenarios with a “What would you do next?” question build quick and fun repetition into the training programme, helping transfer from short-term memory to long-term memory.

Scenario-based-learning: PDF explaining theory and how to decide if SBL is the right strategy

Scenario-based learning is based on the principles of situated learning theory (Lave & Wenger, 1991), which argues that learning best takes place in the context in which it is going to be used, and situated cognition, the idea that knowledge is best acquired and more fully understood when situated within its context (Kindley, 2002).

SBL usually works best when applied to tasks requiring decision-making and critical thinking in complex situations. Tasks that are routine to the students will require little critical thinking or decision-making, and may be better assessed using other methods.

Checklist: Is SBL the right option? (Clark, 2009)
* Are the outcomes based on skills development or problem-solving?
* Is it difficult or unsafe to provide real-world experience of the skills?
* Do your students already have some relevant knowledge to aid decision-making?
* Do you have time and resources to design, develop, and test an SBL approach?
* Will the content and skills remain relevant for long enough to justify the development of SBL?

Learning through storytelling | Higher Education Academy: Why storytelling works for learning

Stories are effective tools for learning due to their ability to facilitate the following cognitive processes: i) concretizing, ii) assimilation, and iii) structurizing (Evans and Evans 1989).

Top 7 Benefits You Get From Scenario-Based Training: Infographic on benefits. “Falling forward,” accelerating time, critical thinking, shared context, engaging emotions, retention, trigger memories

Scenarios allow “falling forward”: Providing a safe space to fail helps build the capacity to fix mistakes as you would in real life

Book Review: Performance-Focused Smile Sheets

On a scale from 1 to 5, how useful are your current level 1 evaluations or “smile sheets”?

  1. Completely worthless
  2. Mostly worthless
  3. Not too bad
  4. Mostly useful
  5. Extremely useful

Chances are, your training evaluations aren’t very helpful. How much useful information do you really get from those forms? If you know that one of your courses is averaging a 3.5 and another course is averaging a 4.2, what does that really mean? Do these evaluations tell you anything about employee performance?

Personally, I’ve always been a little disappointed in my training evaluations, but I never really knew how to make them better. In the past, I’ve relied on standard questions used in various organizations that I’ve seen over my career, with mixed results. Will Thalheimer’s book Performance-Focused Smile Sheets changes that by giving guidelines and example questions for effective evaluations.


Raise your hand if most of your evaluation questions use Likert scales. I’ve always used them too, but Thalheimer shows in the book how we can do much better. After all, how much difference is there between “mostly agree” and “strongly agree,” or between other vaguely worded scale points? And what counts as an acceptable answer? Is “mostly agree” enough, or is only “strongly agree” a signal of a quality course?

The book starts with several chapters of background and research, including how evaluation results should correspond to the “four pillars of training effectiveness.” Every question in your evaluation should lead to some action you can take if the results aren’t acceptable. After all, what’s the point of including questions if the results don’t tell you something useful?

The chapter of sample questions with explanations of why they work and how you might adapt them is highly useful. I will definitely pull out these examples again the next time I write an evaluation. There’s even a chapter on how to present results to stakeholders.

One of the most interesting chapters is the quiz, where you’re encouraged to write in the book. Can you identify what makes particular questions effective or ineffective? I’d love to see him turn this book into an interactive online course using the questions in that quiz.

I highly recommend this book if you’re interested in creating evaluations that truly work for corporate training and elearning. If you’re in higher education, the book may still be useful, but you’d have to adapt the questions since the focus is really on performance change rather than long-term education.

The book is available on Amazon and on SmileSheets.com. If you need a discount for buying multiple copies of the book, use the second link.

 


When Is Audio Narration Helpful?

In a discussion on eLearning Heroes, Judith Reymond asked about the research on when or whether audio narration is helpful to adult learners.


In Clark and Mayer’s eLearning and the Science of Instruction, they say that the research generally supports using narration with on-screen visuals. Adult learners retain more from a narration plus visuals approach than from reading on-screen text. They call this the “modality principle.”

Generally speaking, when you have narration, you shouldn’t also have that same text on the screen. This is called the “redundancy principle.” Clark and Mayer note some exceptions when text should be shown on screen (pp. 87-88, 107-108 in the 1st ed):

  • Complex Text: Complex text like mathematical formulas may need to be both on-screen and in narration to aid memory. (In practical experience, I also do this for text that has to be memorized word for word, such as screening questions for addiction.)
  • Key Words: Key words that highlight steps in a process or technical terms.
  • Directions: Directions for practice exercises. “Use onscreen text without narration to present information that needs to be referenced over time, such as directions to complete a practice exercise.”
  • No Graphics: When there are no graphics or only limited graphics on the screen.
  • Language Difficulties: When the audience has language difficulties. I have used redundant on-screen text for an audience with very low literacy and a high percentage of learners with English as a second language. It might be enough to simply provide a transcript or closed captions in those situations so people who don’t need it can ignore or turn off the text.

In practical terms, I’ve found that if every page has narration and you suddenly have no narration for a practice exercise, some learners think something’s broken on the page. I generally have the narrator say something short to introduce the practice exercise, but leave the directions as on-screen text.

However, it’s also tiring to listen to a voice. I usually don’t provide audio feedback on practice activities to give people a break. I’ll sometimes provide other kinds of interaction or content delivery to provide a break from the audio (tabs or “click to reveal” text).

In the book, Clark and Mayer say this:

“Does the modality principle mean you should never use printed text? Of course not. We do not intend for you to use our recommendations as unbending rules that must be rigidly applied in all situations. Instead, we encourage you to apply our principles in ways that are consistent with the way that the human mind works—that is, consistent with the cognitive theory of multimedia learning rather than the information delivery theory.”

The principle of avoiding redundant on-screen text is sometimes treated as sacrosanct. I’ve seen some big names in the field practically yell that this is a firm rule that should never be broken. In real life, it’s not as clear cut, as even Clark and Mayer acknowledge. There’s plenty of redundant on-screen text that has no business being there. You should be thoughtful and intentional if you’re going to provide on-screen text. Generally, it shouldn’t be there, and you need a real reason to break the redundancy principle.

What are your experiences with audio, especially with on-screen text? What have you found works with your audiences?