I’m presenting at the eLearning Guild’s Learning Solutions Conference again this year on “Better Feedback for Scenario-Based eLearning.”
You can watch a two-minute trailer for my session (if the video isn’t embedded below, watch it on YouTube).
When you create a scenario, you work hard to make it realistic and relevant for your learners. Unfortunately, even otherwise engaging scenarios sometimes include abstract feedback like “Incorrect. Please try again.” Simply saying the choice is right or wrong can make learners lose interest and focus, and it doesn’t help them learn from their mistakes.
You will learn how to show learners the consequences of their decisions rather than telling them they’re right or wrong in scenario-based eLearning. This is the difference between “intrinsic feedback” and “instructional feedback.” We will explore several different options for intrinsic feedback, such as progress meters, character responses, and environmental changes. You’ll learn guidelines for when to use immediate feedback and when to delay the feedback in scenarios. We’ll discuss how to design feedback to meet the needs of both novice and expert learners. You’ll also learn when direct instructional feedback is beneficial for learning.
In this presentation, you’ll learn:
- multiple methods to show the consequences of decisions in scenarios
- when to use immediate or delayed feedback
- how to provide appropriate feedback for novice and expert learners
- when to use intrinsic feedback (showing consequences) and instructional feedback (direct coaching)
- how to work with SMEs to get information to provide realistic consequences
- how to write better feedback for short scenarios and complex branching scenarios
Read More about Feedback
This session draws from several previous blog posts (as well as some additional information from other sources).
- Show, Don’t Tell For Scenario Feedback
- Intrinsic and Instructional Feedback in Learning Scenarios
- Immediate and Delayed Consequences in Branching Scenarios
- Using Time as Scenario Feedback
If you’re attending the Learning Solutions Conference, I hope to see you there!
We often talk about conversational writing for elearning. A conversational tone flows better in voice over and leads to better learning outcomes.
However, I occasionally see examples of elearning where the narrator pretends to be in a literal conversation with the learners.
Do you know what kinds of questions generate deeper responses from clients? That’s right, open-ended questions.
I understand why someone might write in this tone, but I find it very patronizing in elearning. One of my SMEs called it the "Blue's Clues" method of writing: you ask a question, then pause while people answer it. This is a pitfall in conversational writing you can avoid.
Great for Preschoolers
You see this strategy often in television shows for preschoolers. Daniel Tiger asks the audience to find an object on the screen of a certain color or type. After a pause of a few seconds, Daniel points out the right answer (which is highlighted on the screen).
It’s a great strategy if your audience is preschoolers. For adults…not so much.
Does Your Audience Really Know the Answer?
One problem is that this strategy only works in situations where you’re confident the audience already knows the answer. It has to be something obvious, or you can’t say, “That’s right” and assume they were correct. If it’s that obvious, give your learners some credit for their existing knowledge.
As you already know, open-ended questions generate deeper responses from clients.
Even “as you know” should be used with caution. It’s only safe to use if you really are confident that people know the information. Maybe it’s review from earlier in the course, prior training, or your learner analysis showed that this is prior knowledge you can build on.
Reflection and Connection Questions
Reflection questions that ask learners to connect their own experiences or to brainstorm multiple ideas are fine. It’s the questions where you’re leading them to a single right answer that annoy me.
These kinds of questions make people think, and there's no single right or wrong answer:
- What kinds of objections do your clients raise?
- Think about a time when the scope of a project changed. How did you handle it?
- Have you ever had a customer similar to the one in the scenario?
One way to avoid the pitfall of patronizing questions is to replace them with reflection or connection questions.
What Pitfalls Annoy You?
The "Blue's Clues" style of question is one of my pet peeves in writing for learning. Do you have a pet peeve of your own? Is there a pitfall you wish you could make disappear? You can share (or just vent!) in the comments.
Nicole is creating a branching scenario practicing communication techniques for nutrition counselors to better understand their clients’ goals. She has written a simulated conversation between a counselor and a client. Her SME, Brian, provided this feedback after reviewing the prototype.
The conversation overall does a good job giving plausible choices for questions and showing realistic responses from the client. I want this to be really exciting for learners, like a game. Let’s add a timer for each decision. That way, they’ll be motivated to answer quickly and keep pushing through the scenario.
What do you think? How should Nicole respond to Brian?
A. That’s a great idea! I think that will enhance the learner experience.
B. I’m not sure. Let me do some research.
C. Timing might not be the best form of feedback for this particular course.
Remember your answer; we’ll come back to this question at the end of the post.
When Time is Effective
Time can be a very effective consequence in some learning situations. Check out the Lifesaver training on what to do in emergency situations for an example that uses time effectively as feedback.
In the first scenario with Jake, you help someone in cardiac arrest. Each question has a five-second timer, and you are scored on both accuracy and speed.
Later in the scenario, you simulate performing CPR by pressing two keys on your keyboard in the same rhythm as CPR. While you practice, you see a scale from good to bad showing how close you are to the ideal timing. This lets you adjust your rhythm. After you finish the 30 seconds of simulated CPR, you see a percentage score for your accuracy.
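The rhythm feedback described above can be sketched in code. This is a hedged illustration, not the actual Lifesaver implementation: it assumes a target CPR rate of 100–120 compressions per minute, i.e., an ideal interval of roughly 0.5–0.6 seconds between keypresses, and scores each interval by how far it drifts from that window.

```python
# Hypothetical sketch of rhythm-based feedback (not the Lifesaver code).
# Assumes an ideal CPR rate of 100-120 compressions per minute,
# i.e. 0.5-0.6 seconds between keypresses.

IDEAL_MIN = 0.5  # seconds per compression at 120/min
IDEAL_MAX = 0.6  # seconds per compression at 100/min

def interval_score(interval: float) -> float:
    """Return 1.0 for an on-rhythm interval, falling toward 0.0
    as the interval drifts away from the ideal window."""
    if IDEAL_MIN <= interval <= IDEAL_MAX:
        return 1.0
    # Distance to the nearest edge of the ideal window.
    drift = IDEAL_MIN - interval if interval < IDEAL_MIN else interval - IDEAL_MAX
    # Anything 0.3 s or more off-rhythm scores 0.
    return max(0.0, 1.0 - drift / 0.3)

def accuracy_percent(press_times: list[float]) -> float:
    """Score a series of keypress timestamps as a percentage,
    like the score shown after 30 seconds of simulated CPR."""
    intervals = [b - a for a, b in zip(press_times, press_times[1:])]
    if not intervals:
        return 0.0
    return 100.0 * sum(interval_score(i) for i in intervals) / len(intervals)
```

A live "good to bad" meter could call `interval_score` on each new keypress and map the result to a position on the scale, while `accuracy_percent` produces the summary score at the end.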
This feedback works in the Lifesaver training because timing really is a critical part of the skill being taught. Speed of response matters in these emergency situations, as does knowing the right rhythm for CPR.
Time can work in other contexts too, such as manufacturing tasks, making sandwiches in a chain restaurant, or safety training.
When Time is Counterproductive
If the skill you're practicing and assessing requires critical thinking and careful consideration, measuring time can be counterproductive. For simulated conversations where you want learners to pause and think about their options, it's better not to use a timer.
You might be thinking, “But in a real conversation, people need to think quickly. Doesn’t that mean we should use timers?” That’s a question about fluency, which requires more practice over time. If your goal is to get people to that point of fluency, you might add a timer, but not for the initial practice. Teach the skill without a timer first, then provide additional practice opportunities to build fluency and speed in the skill.
Do Timers Improve Motivation?
Does a timer increase motivation for getting the right answer? Maybe, if the learners are already motivated prior to starting and time makes sense in the context of the activity. Many games use time as a way to keep players engaged and excited.
I suspect that in practice, unnecessary timers encourage people to guess randomly and click whatever they can reach fastest. Timers may actually decrease learners' motivation to be truly cognitively engaged with the learning experience. They may be more motivated to click through quickly than to read carefully and understand their decisions.
Timers can also create accessibility challenges. Learners with visual impairments who use a screen reader and keyboard navigation will generally need more time to answer. Learners with mobility impairments may have trouble manipulating a mouse or keyboard quickly. Depending on your audience, adding timers may prevent some learners from being successful in your elearning courses, even if they could do the real task (like having a conversation) without a problem.
Revisiting the Communication Scenario Example
Think back to Nicole’s scenario at the beginning of this post. She’s teaching communication skills to nutrition counselors, using a simulated conversation. Her SME, Brian, suggested adding a timer.
What do you think? Does a timer seem helpful in this situation?
Probably not. In this training, it's more important for learners to think carefully about their choices and responses than to be speedy. Feedback like the client's expression, or a scale showing the client's motivation to change their eating behavior, would be more beneficial than feedback on how quick learners are.
Time can work as feedback in learning scenarios, but it should be used sparingly, and only when it is actually relevant to the skill being practiced or assessed.
Do you have any examples of time used successfully as feedback in a scenario? I’d love to see some more samples. Share them in the comments.