Debunker Club Works to Dispel the Corrupted Cone of Learning

A new group called The Debunker Club is working to dispel myths and misinformation in the learning field. From their website:

The Debunker Club is an experiment in professional responsibility. Anyone who’s interested may join as long as they agree to the following:

  1. I would like to see less misinformation in the learning field.
  2. I will invest some of my time in learning and seeking the truth, from sources like peer-reviewed scientific research or translations of that research.
  3. I will politely, but actively, provide feedback to those who transmit misinformation.
  4. At least once a year, I will seek out providers of misinformation and provide them with polite feedback, asking them to stop transmitting their misinformation.
  5. I will be open to counter feedback, listening to understand opposing viewpoints. I will provide counter-evidence and argument when warranted.

This year, coinciding with April Fool’s Day 2015, the Debunker Club is running an experiment. We’re making a concerted effort to contact people who have shared the Cone of Experience (also known as the Cone of Learning or the Pyramid of Learning).

Many iterations of this cone exist. A Google image search for “cone of learning” returns dozens of results, most of which show the corrupted version with made-up percentages attached. If you’ve seen something like this that said, “People remember 10% of what they read, 20% of what they hear, 30% of what they see,” etc., you’ve seen a variation on this theme.

Image search results for Cone of Learning

The original cone was developed by Edgar Dale and didn’t include any numbers. The later versions are the “corrupted cone” with fictitious statistics added. Will Thalheimer’s post from 2009 debunking these claims is where I learned it was incorrect. Common sense might give you a hint that these numbers aren’t really based in research, though. Think about it: how many times have you seen research where all the categories break into even 10% segments?

As part of the Debunker Club’s efforts, I discovered a post on Dane’s Education Blog called A Hierarchy of Learning. Although this post cites a great debunking article (Tales of the Undead…Learning Theories: The Learning Pyramid), the blog author says only that he “appreciates what it conveys.”

I left the following comment on his Learning Pyramid post.

Thanks for collecting so many resources on your blog. I can see that you’ve worked really hard to share many links and ideas with your readers.

However, the information above, though it may appear to have scientific support, has been exhaustively researched and found to have no basis in science. In fact, the “Tales of the Undead” link you cite debunks it.

An article from the scientific journal Educational Technology shows no research backing for the information. (Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Mythical Retention Chart and the Corruption of Dale’s Cone of Experience. Educational Technology, Nov/Dec 2014, 54(6), 6-16.)

The information presented is likely to produce more harm than good, promoting poor learning designs and hurting learners.

While we might abstract some beneficial notions from the percentages portrayed in the misleading information — namely that encouraging realistic practice has benefits — there are numerous faulty concepts within the bogus percentages that can do real harm. For example, by having people think that there are benefits to seeing over hearing, or hearing over reading, we are sending completely wrong messages about how learning works.

Most importantly, recent advances in learning science have really come together over the last two decades. The misleading information was first reported in 1914, with no research backing. It’s better to follow more recent findings than information that has no scientific basis. See, for example, the book Make It Stick by Brown, Roediger, and McDaniel. Julie Dirksen’s Design for How People Learn is another great selection.

I’m part of a larger community of folks called the Debunker Club who are attempting to encourage the use of proven, scientifically-based learning factors in the learning field.

I’m going to be posting about this misleading information on my blog. I hope you’ll comment and respond to my post if you wish. I (and the debunker community in general) want to learn how other people feel about the issues and ideas surrounding the original information and our approach to debunking myths and sharing evidence.

If you’re interested in dispelling misinformation and improving the learning field, please join the Debunker Club and participate in the conversation.

My History of Live Blogged Notes

When I attend webinars or participate in online courses and conferences, I usually live blog my notes. That helps me remember what I attended and what I learned, and it lets me share that knowledge with others. In a recent discussion about how I have learned about instructional design without getting a master’s degree, someone asked me what courses and webinars I’ve attended. Because I have done so much live blogging, I was able to provide proof of my ongoing professional development efforts. These posts go back to 2007, so some of the content and references are dated. Generally newer posts are at the top of each category.

Woman typing on laptop

Storytelling and Scenario-Based Learning

Synchronous Learning

Attention and Motivation

Trends and Future Predictions

Games and Simulations

LMSs and Other Tools

Learning Communities

Other Topics

Image credit: Matthew Bowden http://www.digitallyrefreshing.com (http://www.sxc.hu/photo/145972) [Attribution], via Wikimedia Commons

ID and e-Learning Links (2/2/14)

  • Gavin Henrick on Moodle Repositories, including tables comparing features

    tags: moodle

  • Example of a branching scenario activity with a coach and a meter showing progress on every screen, built in ZebraZapps

    tags: scenarios e-learning zebrazapps branching

  • Research on feedback’s effect on performance. Feedback is generally helpful but can be detrimental, especially for more complex tasks. Goal setting can help mitigate the risks of feedback interventions.

    tags: feedback research learning

  • Research on the effects of feedback interventions. Feedback is not always beneficial for learning; in some cases, it can actually depress performance.

    tags: feedback instructionaldesign learning research

    • The MCPL literature suggests that for an FI to directly improve learning, rather than motivate learning, it has to help the recipient to reject erroneous hypotheses. Whereas correcting errors is a feature of some types of FI messages, most types of FI messages do not contain such information and therefore should not improve learning—a claim consistent with CAI research.

      Moreover, even in learning situations where performance seems to benefit from FIs, learning through FIs may be inferior to learning through discovery (learning based on feedback from the task, rather than on feedback from an external agent). Task feedback may force the participant to learn task rules and recognize errors (e.g., Frese & Zapf, 1994), whereas FI may lead the participant to learn how to use the FI as a crutch, while shortcutting the need for task learning (cf. J. R. Anderson, 1987).

    • In the MCPL literature, several reviewers doubt whether FIs have any learning value (Balzer et al., 1989; Brehmer, 1980) and suggest alternatives to FI for increasing learning, such as providing the learner with more task information (Balzer et al., 1989). Another alternative to an FI is designing work or learning environments that encourage trial and error, thus maximizing learning from task feedback without a direct intervention (Frese & Zapf, 1994).
  • Female voice over and audio editing for e-learning. Demos are on the website. She has done dialog for more conversational courses in the past, although that demo isn’t on her public website.

    tags: voiceover audio

  • Calculator for pricing custom e-learning based on a number of factors (graphics and multimedia, interactivity, instructional design)

    tags: e-learning pricing

  • This isn’t really about SCORM, but a question on pricing e-learning courses for perpetual licenses rather than annual per user fees

    tags: e-learning pricing

    • Most perpetual license deals I’ve seen in the eLearning space are usually priced at 3.5x the annual user price plus another 10-15% of the contract value for course maintenance and support.
  • Example of why experts often can’t teach well (or write good courses without an ID), based on research on NICU nurses who knew how to recognize infections but were such experts that their knowledge had become automatic and intuitive for them.

    tags: sme training research

    • When you’re a domain expert in your field, it’s difficult to step back and remember what it was like to be a beginner. Once we have knowledge, it’s very hard to remember what life was like without it.

      Instead of placing the burden of training on a subject-matter expert, it’s often more effective to establish a collaboration between subject-matter experts and trainers who are experts in breaking down information, recognizing the critical elements, and putting it back together in a way that’s digestible for people who aren’t experts.

Posted from Diigo. The rest of my favorite links are here.

Research in Gamification of Learning and Instruction

Last week, I posted a rebuttal to Ruth Clark’s claim that “Games Don’t Teach.” In that post, I shared several links to research about the effectiveness of games for learning. If you are interested in a more in-depth review of research, Karl Kapp’s new book The Gamification of Learning and Instruction has an entire chapter titled “Research Says…Games are Effective for Learning.” This chapter focuses on two areas of the research: meta-analysis studies and research on specific elements of games.

Gamification of Learning and Instruction

The meta-analysis section has a useful table providing a quick summary of the major findings of each meta-analysis reviewed. Here are a few points from that research:

  • “Game-based approach produced significant knowledge-level increases over the conventional case-based teaching methods.” (Wolfe, 1997)
  • “An instructional game will only be effective if it is designed to meet specific instructional objectives and used as it was intended.” (Hays, 2005)

In the elements of games section, Karl summarizes several individual studies and their findings in the following areas:

  • Reward structures
  • Player motivation (both intrinsic and extrinsic)
  • Avatars
  • Player perspective

Gamification in learning is often viewed very superficially as just adding extrinsic motivators like badges and leaderboards. In this book, Karl recommends going beyond that shallow understanding to look at the ways that games can be effective and to use those elements to enhance learning.

If you’re interested in more information about the book, check out the other posts in the blog book tour.

References (as cited in The Gamification of Learning and Instruction):

Hays, R.T. (2005). The effectiveness of instructional games: A literature review and discussion. Naval Air Warfare Center Training Systems Division (No. 2005–004).

Wolfe, J. (1997). The effectiveness of business games in strategic management course work. Simulation & Gaming, 28(4), 360–376.

Ruth Clark Claims “Games Don’t Teach”

Ruth Clark posted an article at ASTD titled “Why Games Don’t Teach.” It’s a deliberately provocative title, meant to draw attention and cause controversy. A more accurate title would be “Some Games Aren’t Effective at Making People Remember Content,” but that’s a lot less likely to grab attention.

Before I continue, I want to say that I enjoyed her book, e-Learning and the Science of Instruction, and I have found some of the research there valuable. I respect her past contributions to the field.

However, I think Clark didn’t do a very careful review of the literature before writing her post, and I don’t think that one study is enough for her to make such a broad claim dismissing games for learning.

Oregon Trail for the iPhone

According to Ruth Clark, you didn’t learn anything playing Oregon Trail, Carmen Sandiego or Lemonade Stand

Let’s look at her summary of the research:

The goal of the research was to compare learning efficiency and effectiveness from a narrative game to a slide presentation of the content. Students who played the Crystal Island game learned less and rated the lesson more difficult than students who viewed a slide presentation without any game narrative or hands on activities. Results were similar with the Cache 17 game. The authors conclude that their findings “show that the two well-designed narrative discovery games…were less effective than corresponding slideshows in promoting learning outcomes based on transfer and retention of the games’ academic content” (p. 246).

The research is behind a paywall, of course, but the abstract is online. (Update 8/11/2014: A copy of the original article can now be found outside the paywall.)

Adams, D.M., Mayer, R.E., MacNamara, A., Koenig, A., and Wainess, R. (2012). Narrative games for learning: Testing the discovery and narrative hypotheses. Journal of Educational Psychology, 104, 235-249.

Next, let’s look at how the authors summarize their own work and see how it compares to Clark’s summary (emphasis mine).

Overall, these results provide no evidence that computer-based narrative games offer a superior venue for academic learning under short time spans of under 2 hr. Findings contradict the discovery hypothesis that students learn better when they do hands-on activities in engaging scenarios during learning and the narrative hypothesis that students learn better when games have a strong narrative theme, although there is no evidence concerning longer periods of game play.

Gee, that “under two hours” point seems like an important limitation of the research, maybe one that should have been mentioned by Clark when claiming that games have no value and “don’t teach.”

It’s also possible that there are flaws in the research.

  • The research says that the games were “well designed,” but maybe they actually weren’t. Maybe they were “well designed” by the standards of traditional courses, but not by the standards of games. Without seeing the full article, I can’t tell.
  • The learners did worse at “retention,” but honestly, I wouldn’t expect a narrative game to be all that effective at helping people memorize content. If retention was the goal, a narrative discovery style game probably was the wrong approach, which brings us back to the previous point about whether the course was well designed for the goals.
  • One of the benefits of games for learning is application and behavior change, something this research didn’t measure. I’m not terribly surprised that a game with hands-on practice didn’t help people simply recall information that well. I would have liked to see some measure of how well the learners could apply the concepts. But, as is also typical of Clark’s work, the focus is on whether people recall content, not whether they can apply it. This, strictly speaking, is a limitation of the research and not a flaw, but it is something we should consider when looking at how we apply this research to our work.

I think there’s a case to be made that the games themselves weren’t actually “well designed” as claimed. They didn’t allow for real practice, just a different format for receiving content. In the discussion on this post in the eLearning Guild’s LinkedIn group, Cathy Moore made this observation:

I don’t have access to the full study cited in Ruth’s article, but based on the description of the games in the abstract, (1) they don’t simulate a realistic situation that’s relevant to the learners and (2) they teach academic info that learners aren’t expected to apply in real life. The material was tested on college students in an academic setting, not adults on the job.

By requiring learners to explore (or slog through, in my opinion!) an irrelevant treasure hunt, you’re adding cognitive load or at the least distracting the brain from the content. It seems likely to me that putting the material in a more relevant context, such as using your knowledge of pathogens to protect patients in a hospital, would have changed the results of the study.

As Ruth herself says in the comments to the article, “I think it’s about designing a simulation (which I don’t equate directly to games) in a manner that allows learners to practice job-relevant skills.” Neither of those games let students practice job- or life-relevant skills. They were entertaining and distracting ways of presenting information for a test.

Another limitation is that this research can’t address the question of engagement and completion rates. In the real world, getting people to complete online learning is often a challenge. If your traditional text-based, click-next slide presentation course has a completion rate of less than 20%, then a game that is engaging enough to make people want to finish and gets completion rates above 90% is a big improvement, even if that game technically produced lower retention rates in a controlled lab environment. Learning doesn’t always have to be drudgery, although sometimes we equate “worthwhile” with “unpleasant.” There is value in making it interesting enough to keep people’s attention, and maybe even making it an enjoyable experience.

In the previously mentioned discussion, Tahiya Marome made this point:

For the brain, play is learning and learning is play. That traditional educational structures have sucked that dry and replaced it with a grim Puritanical work is learning and learning is work structure doesn’t mean we have to leave it that way. It may take us a while to figure out exactly how, but we can make educating oneself playful and a great, life long game again. We can. Our brains are wired for it.

Clark has some legitimate points about the definition of games being fuzzy and that the design of the game should match the learning outcomes. For example, I agree with her that adding a timer to critical thinking tasks can be counterproductive. Adding a timer to skill practice for skills that really do need to be timed is good practice though. Think of help desk agents who are evaluated both on the quality of their service and how quickly they can solve problems; timed practice matches the learning outcomes.

If Clark is going to make the claim that “games don’t teach,” she needs to address all the research that contradicts her point. She makes this claim without even mentioning any of the other research; she just pretends nothing else exists beyond the one study cited. That is, frankly, an extraordinary claim, and extraordinary claims require extraordinary evidence. One study doesn’t discount the dozens of successful examples out there. It’s bad use of research to treat any individual study as applying in all situations, regardless of the limitations of the study. What we need to look at is the trend across the bulk of the research, not a single data point. There are definitely bad games out there, and games aren’t the solution in every situation, but that doesn’t mean games shouldn’t be one tool in our toolbox, as Clark claims.

Here’s a cursory review of a few examples of successful games for learning. This is by no means a comprehensive review, but this would be a good place for Clark to start refuting evidence if she wants to dissuade people from using games. Again, the point is not to look at any single study as being the end of the discussion, but to look at the overall findings and the types of strategies that have repeatedly been shown to work.

  • Immersive games beats classroom in maths, summarized by Donald Clark
  • Via Karl Kapp, from a past presentation:
    • “Trainees learn more from simulations games that actively engage trainees in learning rather than passively conveying the instructional material.”
    • “Trainees participating in simulation game learning experiences have higher declarative knowledge, procedural knowledge and retention of training material than those trainees participating in more traditional learning experiences.”
  • Eduweb has a collection of research related to games for learning. Here’s a highlight from the findings of one paper: “Summative evaluation of our WolfQuest wildlife simulation game finds that players report knowledge gain, stronger emotional attachment to wolves, and significant behavioral outcomes, with large percentages of players following their game sessions with other wolf-related activities, including such further explorations of wolves on the internet, in books and on television.”
  • Kurt Squire has done extensive research in games for learning. Clark basically needs to disprove all of his work to support her claim.
  • Clark Aldrich has created a number of successful games and simulations, such as Virtual Leader. “Using practiceware significantly increased retention and application, not just awareness of learned content.”
  • James Paul Gee has published a number of articles on games and learning.
  • Mark Wagner’s dissertation on MMORPGs in education found that “MMORPGs may help students develop difficult to teach 21st Century skills and may be used to support student reflection.”
  • The Educational Games Research blog features exactly what you would think it does based on the title.

Thanks to Cathy and Tahiya for giving me permission to quote them here!

I’d love to hear if any of you out there have designed games for learning and found them to be effective or not. I’ll have more to say about this topic next week in my post for the blog book tour for the Gamification of Learning and Instruction.