Do People Need to Learn, or Can They Look It All Up?

I have been part of several discussions recently that questioned the value of creating courses and delivering formal training. There’s a perception among some people (including some L&D folks) that as long as you have Google and a good network of resources, you can look up anything you need. The other, related idea is that everything can be learned on the job with performance support, without formal training. In this post, I’ll examine the first question.

Question 1: Do People Need to Bother Learning?

The first argument asks whether people need to bother learning anything at all, or whether they can just look it up when they need it. Do you really need to remember anything if you have a mobile phone and a search engine always available?

For example, Bruce Graham started a lively conversation in the Articulate Heroes community by describing someone he met at a conference. She said she takes all the elearning in her organization, regardless of quality, but doesn’t bother to remember much because she knows she can always look it up later. Bruce noted that Henry Ford took a similar approach to building cars: he assembled a group of experts and made them available any time he had a question.

As Bruce put it:

“He did not need to learn, just have access to knowledge.

If this is how people are REALLY now using online learning, and using our product(s), do all our clever animations, graphics, interactions and so on actually matter any more?

Let’s just give out facts, because millennials know how to access them, and will go back when they need them.

Why do they need to bother learning?”

Sometimes You Can Look It Up

I think plenty of things can just be looked up at the time of need. I don’t need to memorize the recipes for most of the dishes I cook; I can just read the recipe to get the exact amounts and steps. For those sorts of tasks, we should probably be creating job aids (recipes and hints for work tasks) rather than courses. At a minimum, we should be creating training plus job aids, or training that helps people learn how to use performance support.

I recently wrote a course where one of the main goals is for people to know where to find and how to use the resources. We don’t care if they can remember all 10 points and 50+ subpoints of this policy. We care that they’re aware that the policy exists and that they can navigate the website to look up the policy when they need it. Therefore, the content delivery is very light. The practice activities are questions like “look up in Table 1 what you need for this safety precaution” and “use this self-assessment to determine what components of the standard you’re currently meeting or not.”

Deeper, Internalized Expertise

Some tasks require a deeper expertise though. A musician can’t stop in the middle of a song to look up a fingering. A salesperson can’t ask a customer to “hold that thought” while he fires up the elearning on objection handling. A doctor can’t ask a patient to wait while she pulls up the example audio of what a heart murmur sounds like for comparison. A line manager can’t walk out in the middle of a meeting to review the online course about delegation. Those skills require internalizing knowledge deeply enough that you can use them at the time of need. You can’t have everything be “just in time.”

Finding Information Isn’t Learning

Steve Flowers argued that searching is fine for finding information, but that’s not the same as training. In his words:

“We have unprecedented access to good and bad information. To perfectly valid facts and information that can help us get things done. To perfectly misleading and wrong information that can lead us down the wrong path.

This is the core problem with the way many view training and learning. This conflation of movement of information with the efficiency of a training solution is flat wrong. It’s not about storing information in our heads. It’s about being able to adapt, and adapt quickly, to whatever challenges the task you’ve trained for presents. This rarely hinges on our ability to recall information. Doesn’t mean information isn’t important. But that’s only one ingredient.

Cake != Flour. It’s more than that.

Information != Knowledge != Behavior != Task Success != Results

Work is more complicated than, ‘Let me Google that’ for many types of things that we do. Adept search skills are great. Helpful. But that domain expertise does not transfer to all other domains equally.”

I think Steve makes a really important point here. Training is more than just sharing information. It’s also about giving people opportunities to practice skills and get feedback so they can improve their performance.

Which Tasks Need to Be Trained?

Sometimes, looking things up (like a recipe or a table of standards) is enough. Sometimes, it’s not. So how do we figure out which tasks are skills that need to be trained and which ones just need a job aid or a searchable resource?

Julie Dirksen shared an idea in her Learning Solutions Conference presentation that I think helps make that distinction.

Think of a task or topic where you might create training or performance support. Is it reasonable to think that someone can be proficient without practice? If people can be proficient without practice, you don’t need training. If people need practice to be proficient, that’s a skill where training might be helpful.

Performance support might also be helpful, especially after the initial training when people are practicing on the job. If it’s something you need to practice, though, just searching for information won’t be enough.
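Julie’s question boils down to a simple decision rule. Purely as an illustration (this sketch is my own, not from her presentation, and the function and parameter names are hypothetical), here’s that logic in Python:

```python
# Hypothetical sketch of the "skill vs. lookup" decision rule described above.
def recommend_intervention(proficient_without_practice: bool,
                           must_perform_without_lookup: bool) -> str:
    """Suggest a starting point: job aid, training, or both."""
    if proficient_without_practice:
        # No practice needed, so a searchable resource or job aid is enough.
        return "job aid / searchable resource"
    if must_perform_without_lookup:
        # Skills used in the moment (the musician, the salesperson):
        # knowledge must be internalized through training and practice.
        return "training with practice and feedback"
    # A skill, but lookup is still possible at the time of need,
    # so combine training with performance support.
    return "training plus job aid / performance support"


# E.g., objection handling: practice is required and there's no time to look it up.
print(recommend_intervention(proficient_without_practice=False,
                             must_perform_without_lookup=True))
```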

Should We Create Courses?

In my next post, I’ll expand more about whether or not we should create courses. (Hint: I think we should, at least sometimes.)

Feedback in Branching Scenarios: What Works for Novices, Experts, and Everyone

When we provide feedback in branching scenarios, we have several questions to consider.

  • Should we provide consequences (intrinsic feedback) or coaching (instructional feedback)?
  • Should we provide immediate feedback or delayed feedback?
  • What works for novices versus experts?

Intrinsic and Instructional Feedback

In Scenario-based e-Learning: Evidence-Based Guidelines for Online Workforce Learning, Ruth Clark recommends combining intrinsic and instructional feedback.

Intrinsic feedback is the consequence of an action. It’s what happens because of the learner’s decisions. If you have a scenario where an employee falls off a ladder, a customer agrees to buy a more expensive product, or a patient recovers from a medical emergency, that’s intrinsic feedback. You show the learner what happens.

Instructional feedback is coaching that tells the learner about their choice rather than showing them. In a branching scenario, instructional feedback could come from a coach or character who guides learners. Instructional feedback doesn’t necessarily have to mean telling people directly whether their choice was correct or incorrect. Learners should be able to figure that out from the intrinsic feedback. Instead, instructional feedback can focus on addressing misunderstandings or explaining why a choice had a certain result.

Novices may need more instructional feedback than experts. Experts are less likely to have problems with cognitive load from sorting through multiple pieces of information in a scenario. Experts are better at diagnosing their own problems based on contextual information like intrinsic feedback. Novices, on the other hand, may need more direct coaching to make sense of the intrinsic feedback, especially when they fail a scenario.

Immediate and Delayed Feedback

When we build branching scenarios, immediate consequences provide realism and keep learners engaged. Every time learners make a decision, something happens: the customer responds, the equipment breaks, or sales go up.

Note that “immediate” here refers to when the learner receives the feedback, not how quickly the results would happen in real life. If a learner chooses to ignore recommended equipment maintenance to save money, you could jump ahead three months in the scenario to show that equipment breaking and costing more money in the long run. As long as you show the feedback right after the decision, it’s immediate: the learner gets information about their choice right away, even though the depicted consequence is months later.

Delayed consequences happen in branching scenarios when you show one consequence immediately, but a different consequence appears later.

For example, let’s take a scenario where a manager asks an ID to create training. The learner chooses to have the ID start building it right away, trusting that the team requesting the training knows their needs without further analysis.

  • The immediate consequence is that the ID’s manager is happy.
  • The delayed consequence is that the ID creates ineffective training that doesn’t actually solve the business problem.

You can also use delayed feedback, or coaching delivered to the learner later. In his report on Providing Learners with Feedback, Will Thalheimer suggests that feedback should be provided before learners try again. While that research was more related to retaking tests, I think that’s a good guideline for scenario-based learning. If learners fail a scenario and are asked to try again, give them some feedback to help them learn from their mistakes and make better choices next time.

Novices may benefit from more immediate feedback and coaching, while experts may be fine just receiving coaching at the end of a scenario.
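If you build branching scenarios programmatically, these feedback types map naturally onto a small data structure. Here is a minimal sketch in Python; the class and field names are my own hypothetical illustration, not from any real authoring tool.

```python
from dataclasses import dataclass


@dataclass
class Choice:
    """One decision a learner can make at a scenario node."""
    label: str
    immediate_consequence: str     # intrinsic feedback, shown right after the decision
    delayed_consequence: str = ""  # intrinsic feedback that surfaces at a later node
    coaching: str = ""             # instructional feedback, e.g. from a mentor character
    next_node: str = ""            # id of the node this choice branches to


# The ID example above: start building the training without further analysis.
build_now = Choice(
    label="Have the ID start building the course right away",
    immediate_consequence="The ID's manager is happy with the fast start.",
    delayed_consequence=("Months later, the training proves ineffective and "
                         "doesn't solve the business problem."),
    coaching=("Skipping analysis risks solving the wrong problem; "
              "a short needs assessment first would reduce that risk."),
    next_node="node_manager_followup",
)


def before_retry(choice: Choice) -> None:
    """Per Thalheimer's guideline, show coaching before the learner tries again."""
    if choice.coaching:
        print(choice.coaching)
```

Whether coaching appears immediately or only before a retry then becomes a scheduling decision, which matches the novice and expert recommendations below.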

Recommendations for Feedback

Here are my overall recommendations for feedback in scenario-based learning. These are based on a combination of research reviews from Clark and Thalheimer, along with recommendations from Cathy Moore, Michael Allen, and others, plus my own experience.


For Everyone

  • Provide frequent, immediate consequences that show learners what happens as a result of their decisions.
  • Provide coaching before learners retry a scenario.
  • Use delayed consequences in scenarios where they are realistic, although note that novices may need more coaching to help them understand delayed consequences.

For Novices

  • Provide immediate coaching for novices, especially to correct misconceptions or incorrect strategy selection.

For Experts

  • Use more delayed coaching with expert learners.

Don’t Assume the Recommendations Are Perfect

None of these recommendations are correct 100% of the time for every situation or every group of learners. I’m fairly confident recommending frequent immediate consequences and coaching before a retry, but you may find exceptions even to those recommendations. The research on feedback is sometimes contradictory, so there is little firm guidance.

To quote Will Thalheimer, describing conflicting research results: “First, it tells us that we should be skeptical of absolutism. In particular, it would be perilous for us to say, ‘Immediate feedback is always better,’ or, ‘Delayed feedback is always better.’”

Let’s use the research to guide our decisions in providing feedback, but let’s also acknowledge that the research has limitations. Sometimes we have to use our best judgment about how best to support our learners.


What I Learned at LSCon18

Last month, I attended the Learning Solutions 2018 Conference in Orlando. Once again, it was a great experience. I had fun meeting people in person whom I have known online for years, like Judy Katz, Tracy Parish, Cammy Bean, and Clark Quinn, plus seeing people again from last year.

Now that I’ve had a few weeks to process and reflect, I want to summarize some of what I learned. I did a similar post last year, and it helped me reinforce and remember what I learned. This is my own “spaced repetition” to help me use these ideas. These comments aren’t always the most important thing each speaker said, but rather one thing I took away from the session that I think I can apply in my own work.

Diane Elkins: Microlearning Morning Buzz

One of the things I appreciated about Diane’s discussion was the balanced approach. This wasn’t the “microlearning will solve all of our problems!” hyperbole I see from many sources. We talked about how microlearning is sometimes a solution: sometimes on its own, sometimes in combination with other forms of training.

Diane also shared a really great idea for training that must fill a certain minimum time to satisfy a legal or regulatory requirement. Instead of a big content dump (which is sometimes padded just to fill the required time), why not deliver only the minimum content plus a lot of practice and maybe reflection?

As a side note, Diane’s hand puppet demonstrations of pointless conversations with SMEs are hilarious.

Kai Kight: Composing Your World

As a musician and former music educator, I found it really fun to hear a session start with violin and to watch the reactions of the audience.

One of the ideas from his keynote was to not get so wrapped up in the notes that you forget who you’re playing for. That applies to our work (and many fields); we have to always keep thinking about the audience and what they need.

Kevin Thorn: Comics for Learning

This was a session I attended specifically because it’s outside of what I normally do for work. I’m not quite sure how I’m going to apply this yet, but thinking about the various visual styles for comics gives me some new ideas.

Kevin shared a ton of research, examples, and resources. One that I need to dig into more is the Visual Language Lab.

Ann Rollins and Myra Roldan: Low-Cost, High-Impact AR Experiences

This was another session where I have no experience with the topic, but I was curious to see some possibilities. My biggest takeaway is that several tools for simple AR are pretty affordable and easy enough to get started with. Simple things like showing a video or a little information to explain features are very doable. We tested Zappar in the session to create a quick sample, but Layar also looks promising.

Julie Dirksen: Strategies for Supporting Complex Skill Development

I took 5 pages of notes from this session, so this could be several blog posts on its own.

How do you know if something is a skill or not? Is it reasonable to think that someone can be proficient without practice? If not, it’s a skill.

One idea I’m going to start using immediately is for self-paced elearning where I ask learners to type a longer answer. I have been providing a model answer for comparison to help learners evaluate their own work, which is a good start. Julie talked about giving learners a checklist to guide their self-evaluation even further. I can implement that right now.

Tracy Parish: Free eLearning Design Tools

This was a discussion about free tools and how people use them, based largely on Tracy’s immense list of free tools.

Platon: Powerful Portraits

This was an engaging keynote because he has so many great stories about famous (and not famous) people.

One idea he shared was that if you can get people to see themselves in the story you put forward, maybe you can build bridges to connect people.

Cammy Bean: Architecting for Results

The big idea from this presentation was to think about broader systems for learning. Instead of content in a single event, it’s a journey over time. It’s a mix of what you do before the training, during the training, and after the training, but we often focus just on the middle portion.

The mix is going to be a little different for every program. This model is one way to think about the different pieces.

  1. Engage
  2. Diagnose
  3. Learn/Understand
  4. Apply
  5. Assess
  6. Reinforce

The simplified version is Prepare – Learn – Practice – Reflect.

Connie Malamed: Design Critique Party

The best thing I took away from this session was actually the protocol for requesting and giving critiques.

The protocol for the designer requesting critique:

  1. State your objective.
  2. Walk people through the experience.
  3. Say what you’d like to get feedback on.
  4. Become an impartial observer.

The protocol for giving critique:

If your objective is ____________, then _________ [critique].

Joe Fournier: Novel Writing Tricks

One takeaway from this session was how the concept of a theme might apply to learning. The theme ties to the objective; it’s the emotional or value shift in a story. In learning, the theme might be how reporting employee theft is better for everyone.

Panel: Evolution of Instructional Design

This panel included Connie Malamed, Diane Elkins, Kevin Thorn, and Clark Quinn.

Diane Elkins pointed out that in classroom training, we often ask questions that only a few people will answer, and we don’t track them. If it’s OK to do that in classroom training, why not do it in elearning too? It’s OK to ask learners to type longer answers even though some of them will skip it. Let’s not punish the people who are willing to do the work and learn more just because we can’t track it or not everyone will do it.

David Kelly moderated the panel, and he pointed out how our field has a lot of “shiny object syndrome.” We’re often looking for “one tool to rule them all” that will fix every problem in every situation. That just doesn’t exist.

David Kelly: Shifting from Microlearning to Micromoments

There is no definition of microlearning. There are lots of opinions, some of which are labeled as definitions.

Maybe we should be thinking about micro as in microscope: something that narrows the scope of focus to a tiny part of the whole.

Bethany Vogel & Cara Halter: cMOOCs Can Be Effective

They used Intrepid Learning as the platform, which may be worth exploring more.

In their model, a MOOC is a time-bound online program that contains highly contextualized spaced, micro, and social learning. “Massive” and “Open” aren’t really part of their model, so I admit I’m not sure why they’re calling it a MOOC (other than that’s what their clients are asking for). I think you could do this same model with moderated social, spaced learning and call it “blended” or a “journey.” The experience was good, even if I might quibble with the label.

Photos

You can get a feel for the conference here.


Better Feedback for Scenario-Based eLearning Presentation

If you weren’t able to attend my session at the Learning Solutions Conference in Orlando, you can still hear me speaking on this topic. This recording is from a virtual version of the same presentation which I gave to the Online Network of Independent Learning Professionals on March 1 to prepare for the conference.

If you’re reading this in email or RSS and the video doesn’t appear above, try watching it directly on YouTube.

Watch for my next post where I’ll share some of the things I learned at the conference.

Interested in more on this topic? Read all my posts on Storytelling and Scenarios, including several on using feedback to support learning.
