When Is Audio Narration Helpful?

In a discussion on eLearning Heroes, Judith Reymond asked about the research on when or whether audio narration is helpful to adult learners.

[Image: speaker and sound waves]

In e-Learning and the Science of Instruction, Clark and Mayer say that the research generally supports using narration with on-screen visuals. Adult learners retain more from visuals explained by narration than from visuals explained by on-screen text. They call this the “modality principle.”

Generally speaking, when you have narration, you shouldn’t also have that same text on the screen. This is called the “redundancy principle.” Clark and Mayer note some exceptions when text should be shown on screen (pp. 87-88, 107-108 in the 1st ed.):

  • Complex Text: Complex text like mathematical formulas may need to be both on-screen and in narration to aid memory. (In practical experience, I also do this for text that has to be memorized word for word, such as screening questions for addiction.)
  • Key Words: Key words highlighting steps in a process or technical terms
  • Directions: Directions for practice exercises. “Use onscreen text without narration to present information that needs to be referenced over time, such as directions to complete a practice exercise.”
  • No Graphics: When there are no graphics or limited graphics on the screen
  • Language Difficulties: When the audience has language difficulties. I have used redundant on-screen text for an audience with very low literacy and a high percentage of learners with English as a second language. It might be enough to simply provide a transcript or closed captions in those situations so people who don’t need it can ignore or turn off the text.

In practical terms, I’ve found that if every page has narration and you suddenly have no narration for a practice exercise, some learners think something’s broken on the page. I generally have the narrator say something short to introduce the practice exercise, but leave the directions as on-screen text.

However, it’s also tiring to listen to a voice. I usually don’t provide audio feedback on practice activities, to give people a break. I’ll sometimes use other kinds of interaction or content delivery (tabs or “click to reveal” text) to break up the audio.

In the book, Clark and Mayer say this:

“Does the modality principle mean you should never use printed text? Of course not. We do not intend for you to use our recommendations as unbending rules that must be rigidly applied in all situations. Instead, we encourage you to apply our principles in ways that are consistent with the way that the human mind works—that is, consistent with the cognitive theory of multimedia learning rather than the information delivery theory.”

The principle of avoiding redundant on-screen text is sometimes treated as sacrosanct. I’ve seen some big names in the field practically yell that this is a firm rule that should never be broken. In real life, it’s not as clear-cut, as even Clark and Mayer acknowledge. Plenty of redundant on-screen text has no business being there, so the default should be to leave it out. Be thoughtful and intentional, and have a real reason, if you’re going to break the redundancy principle.

What are your experiences with audio, especially with on-screen text? What have you found works with your audiences?

Do Learning Styles Really Work?

Does tailoring your content for different learning styles really work? Maybe there’s a better use of your time and resources.

View the presentation for more. (If you’re reading this in email or a feed reader and don’t see anything below, you may have to click through to view the presentation.)

Ready to learn more?

Debunker Club Works to Dispel the Corrupted Cone of Learning

A new group called The Debunker Club is working to dispel myths and misinformation in the learning field. From their website:

The Debunker Club is an experiment in professional responsibility. Anyone who’s interested may join as long as they agree to the following:

  1. I would like to see less misinformation in the learning field.
  2. I will invest some of my time in learning and seeking the truth, from sources like peer-reviewed scientific research or translations of that research.
  3. I will politely, but actively, provide feedback to those who transmit misinformation.
  4. At least once a year, I will seek out providers of misinformation and provide them with polite feedback, asking them to stop transmitting their misinformation.
  5. I will be open to counter feedback, listening to understand opposing viewpoints. I will provide counter-evidence and argument when warranted.

This year, coinciding with April Fool’s Day 2015, the Debunker Club is running an experiment. We’re making a concerted effort to contact people who have shared the Cone of Experience (also known as the Cone of Learning or the Pyramid of Learning).

Many iterations of this cone exist. A Google image search for “cone of learning” returns dozens of results, most of which are false. If you’ve ever seen a chart claiming that “people remember 10% of what they read, 20% of what they hear, 30% of what they see,” and so on, you’ve seen a variation on this theme.

[Image: search results for “Cone of Learning”]

The original cone was developed by Edgar Dale and didn’t include any numbers. The later versions are the “corrupted cone,” with fictitious statistics added. Will Thalheimer’s post from 2009 debunking these claims is where I learned it was incorrect. Common sense might also give you a hint that these numbers aren’t really based on research. Think about it: how many times have you seen research where every category breaks into even 10% segments?

As part of the Debunker Club’s efforts, I discovered a post on Dane’s Education Blog called A Hierarchy of Learning. Although this post cites a great debunking article (Tales of the Undead…Learning Theories: The Learning Pyramid), the blog author says only that he “appreciate[s] what it conveys.”

I left the following comment on his Learning Pyramid post.

Thanks for collecting so many resources on your blog. I can see that you’ve worked really hard to share many links and ideas with your readers.

However, the information above, though it may appear to have scientific support, has been exhaustively researched and found to have no basis in science. In fact, the “Tales of the Undead” link you cite debunks it.

An article from the scientific journal Educational Technology shows no research backing for the information. (Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Mythical Retention Chart and the Corruption of Dale’s Cone of Experience. Educational Technology, Nov/Dec 2014, 54(6), 6-16.)

The information presented is likely to produce more harm than good, promoting poor learning designs and hurting learners.

While we might abstract some beneficial notions from the percentages portrayed in the misleading information — namely that encouraging realistic practice has benefits — there are numerous faulty concepts within the bogus percentages that can do real harm. For example, by having people think that there are benefits to seeing over hearing, or hearing over reading, we are sending completely wrong messages about how learning works.

Most importantly, recent advances in learning science have really come together over the last two decades. The misleading information was first reported in 1914, with no research backing. It’s better to follow more recent findings than information that has no scientific basis. See, for example, the book Make it Stick by Brown, Roediger, and McDaniel. Julie Dirksen’s Design for How People Learn is another great selection.

I’m part of a larger community of folks called the Debunker Club who are attempting to encourage the use of proven, scientifically-based learning factors in the learning field.

I’m going to be posting about this misleading information on my blog. I hope you’ll comment and respond to my post if you wish. I (and the debunker community in general) want to learn how other people feel about the issues and ideas surrounding the original information and our approach to debunking myths and sharing evidence.

If you’re interested in dispelling misinformation and improving the learning field, please join the Debunker Club and participate in the conversation.

My History of Live Blogged Notes

When I attend webinars or participate in online courses and conferences, I usually live blog my notes. That helps me remember what I attended and what I learned, and it lets me share that knowledge with others. In a recent discussion about how I learned instructional design without getting a master’s degree, someone asked what courses and webinars I’ve attended. Because I have done so much live blogging, I was able to provide proof of my ongoing professional development efforts. These posts go back to 2007, so some of the content and references are dated. Generally, newer posts are at the top of each category.

Storytelling and Scenario-Based Learning

Synchronous Learning

Attention and Motivation

Trends and Future Predictions

Games and Simulations

LMSs and Other Tools

Learning Communities

Other Topics

Image credit: Matthew Bowden http://www.digitallyrefreshing.com (http://www.sxc.hu/photo/145972) [Attribution], via Wikimedia Commons

ID and e-Learning Links (2/2/14)

  • Gavin Henrick on Moodle Repositories, including tables comparing features

    tags: moodle

  • Example of a branching scenario activity with a coach and a meter showing progress on every screen, built in ZebraZapps

    tags: scenarios e-learning zebrazapps branching

  • Research on feedback’s effect on performance. Feedback is generally helpful but can be detrimental, especially for more complex tasks. Goal setting can help mitigate the risks of feedback interventions.

    tags: feedback research learning

  • Research on the effects of feedback interventions. Feedback is not always beneficial for learning; in some cases, it can actually depress performance.

    tags: feedback instructionaldesign learning research

    • The MCPL literature suggests that for an FI to directly improve learning, rather than motivate learning, it has to help the recipient to reject erroneous hypotheses. Whereas correcting errors is a feature of some types of FI messages, most types of FI messages do not contain such information and therefore should not improve learning—a claim consistent with CAI research.

      Moreover, even in learning situations where performance seems to benefit from FIs, learning through FIs may be inferior to learning through discovery (learning based on feedback from the task, rather than on feedback from an external agent). Task feedback may force the participant to learn task rules and recognize errors (e.g., Frese & Zapf, 1994), whereas FI may lead the participant to learn how to use the FI as a crutch, while shortcutting the need for task learning (cf. J. R. Anderson, 1987).

    • In the MCPL literature, several reviewers doubt whether FIs have any learning value (Balzer et al., 1989; Brehmer, 1980) and suggest alternatives to FI for increasing learning, such as providing the learner with more task information (Balzer et al., 1989). Another alternative to an FI is designing work or learning environments that encourage trial and error, thus maximizing learning from task feedback without a direct intervention (Frese & Zapf, 1994).
  • Female voice-over and audio editing for e-learning. Demos are on the website. She has done dialog for more conversational courses in the past, although that demo isn’t on her public website.

    tags: voiceover audio

  • Calculator for pricing custom e-learning based on a number of factors (graphics and multimedia, interactivity, instructional design)

    tags: e-learning pricing

  • This isn’t really about SCORM, but a question on pricing e-learning courses for perpetual licenses rather than annual per-user fees (see the worked example after this list)

    tags: e-learning pricing

    • Most perpetual license deals I’ve seen in the eLearning space are usually priced at 3.5x the annual user price plus another 10-15% of the contract value for course maintenance and support.
  • Example of why experts often can’t teach well (or write good courses without an ID), based on research on NICU nurses who knew how to recognize infections but were such experts that their knowledge had become automatic and intuitive for them.

    tags: sme training research

    • When you’re a domain expert in your field, it’s difficult to step back and remember what it was like to be a beginner. Once we have knowledge, it’s very hard to remember what life was like without it.

      Instead of placing the burden of training on a subject-matter expert, it’s often more effective to establish a collaboration between subject-matter experts and trainers who are experts in breaking down information, recognizing the critical elements, and putting it back together in a way that’s digestible for people who aren’t experts.
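
To make the perpetual-license rule of thumb quoted a couple of items above concrete, here’s a minimal Python sketch. The 3.5x multiplier and the 10-15% maintenance range come from the quote; the function name, the default 12.5% rate, and the $40/user example are hypothetical illustrations, not anything from the linked discussion.

```python
def perpetual_license_quote(annual_price: float, maintenance_rate: float = 0.125) -> float:
    """Estimate a one-time perpetual license price from the annual per-user price.

    Rule of thumb from the quoted discussion: 3.5x the annual price,
    plus 10-15% of that contract value for course maintenance and support.
    The default 12.5% rate is just the midpoint of the quoted range.
    """
    if not 0.10 <= maintenance_rate <= 0.15:
        raise ValueError("maintenance rate outside the quoted 10-15% range")
    contract_value = 3.5 * annual_price
    return contract_value + maintenance_rate * contract_value

# Hypothetical example: a course priced at $40/user/year.
# 3.5 * 40 = 140, plus 12.5% maintenance (17.50) = $157.50 per user, one time.
print(perpetual_license_quote(40.0))  # 157.5
```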

Posted from Diigo. The rest of my favorite links are here.