Last week, I posted a rebuttal to Ruth Clark’s claim that “Games Don’t Teach.” In that post, I shared several links to research about the effectiveness of games for learning. If you are interested in a more in-depth review of research, Karl Kapp’s new book The Gamification of Learning and Instruction has an entire chapter titled “Research Says…Games are Effective for Learning.” This chapter focuses on two areas of the research: meta-analysis studies and research on specific elements of games.
The meta-analysis section has a useful table providing a quick summary of the major findings of each meta-analysis reviewed. Here are a few points from that research:
- “Game-based approach produced significant knowledge-level increases over the conventional case-based teaching methods.” (Wolfe, 1997)
- “An instructional game will only be effective if it is designed to meet specific instructional objectives and used as it was intended.” (Hays, 2005)
In the elements of games section, Karl summarizes several individual studies and their findings in the following areas:
- Reward structures
- Player motivation (both intrinsic and extrinsic)
- Player perspective
Gamification in learning is often viewed very superficially as just adding extrinsic motivators like badges and leaderboards. In this book, Karl recommends going beyond that shallow understanding to look at the ways that games can be effective and to use those elements to enhance learning.
If you’re interested in more information about the book, check out the other posts in the blog book tour.
References (as cited in The Gamification of Learning and Instruction):
Hays, R. T. (2005). The effectiveness of instructional games: A literature review and discussion (Technical Report No. 2005-004). Naval Air Warfare Center Training Systems Division.
Wolfe, J. (1997). The effectiveness of business games in strategic management course work. Simulation & Gaming, 28(4), 360–376.
Ruth Clark posted an article at ASTD titled “Why Games Don’t Teach.” It’s a deliberately provocative title, meant to draw attention and cause controversy. A more accurate title would be “Some Games Aren’t Effective at Making People Remember Content,” but that’s a lot less likely to grab attention.
Before I continue, I want to say that I enjoyed her book, e-Learning and the Science of Instruction, and I have found some of the research there valuable. I respect her past contributions to the field.
However, I think Clark didn’t do a very careful review of the literature before writing her post, and I don’t think that one study is enough for her to make such a broad claim dismissing games for learning.
Let’s look at her summary of the research:
The goal of the research was to compare learning efficiency and effectiveness from a narrative game to a slide presentation of the content. Students who played the Crystal Island game learned less and rated the lesson more difficult than students who viewed a slide presentation without any game narrative or hands on activities. Results were similar with the Cache 17 game. The authors conclude that their findings “show that the two well-designed narrative discovery games…were less effective than corresponding slideshows in promoting learning outcomes based on transfer and retention of the games’ academic content” (p. 246).
Adams, D.M., Mayer, R.E., MacNamara, A., Koenig, A., and Wainess, R. (2012). Narrative games for learning: Testing the discovery and narrative hypotheses. Journal of Educational Psychology, 104, 235-249.
Next, let’s look at how the authors summarize their own work and see how it compares to Clark’s summary (emphasis mine).
Overall, these results provide no evidence that computer-based narrative games offer a superior venue for academic learning under short time spans of under 2 hr. Findings contradict the discovery hypothesis that students learn better when they do hands-on activities in engaging scenarios during learning and the narrative hypothesis that students learn better when games have a strong narrative theme, although there is no evidence concerning longer periods of game play.
Gee, that “under two hours” point seems like an important limitation of the research, maybe one that should have been mentioned by Clark when claiming that games have no value and “don’t teach.”
It’s also possible that there are flaws in the research.
- The research says that the games were “well designed,” but maybe they actually weren’t. Maybe they were “well designed” by the standards of traditional courses, but not by the standards of games. Without seeing the full article, I can’t tell.
- The learners did worse at “retention,” but honestly, I wouldn’t expect a narrative game to be all that effective at helping people memorize content. If retention was the goal, a narrative discovery style game probably was the wrong approach, which brings us back to the previous point about whether the course was well designed for the goals.
- One of the benefits of games for learning is application and behavior change, something this research didn’t measure. I’m not terribly surprised that a game with hands-on practice didn’t help people simply recall information that well. I would have liked to see some measure of how well the learners could apply the concepts. But, as is also typical of Clark’s work, the focus is on whether people recall content, not whether they can apply it. This, strictly speaking, is a limitation of the research and not a flaw, but it is something we should consider when looking at how we apply this research to our work.
I think there’s a case to be made that the games themselves weren’t actually “well designed” as claimed. They didn’t allow for real practice, just a different format for receiving content. In the discussion on this post in the eLearning Guild’s LinkedIn group, Cathy Moore made this observation:
I don’t have access to the full study cited in Ruth’s article, but based on the description of the games in the abstract, (1) they don’t simulate a realistic situation that’s relevant to the learners and (2) they teach academic info that learners aren’t expected to apply in real life. The material was tested on college students in an academic setting, not adults on the job.
By requiring learners to explore (or slog through, in my opinion!) an irrelevant treasure hunt, you’re adding cognitive load or at the least distracting the brain from the content. It seems likely to me that putting the material in a more relevant context, such as using your knowledge of pathogens to protect patients in a hospital, would have changed the results of the study.
As Ruth herself says in the comments to the article, “I think it’s about designing a simulation (which I don’t equate directly to games) in a manner that allows learners to practice job-relevant skills.” Neither of those games let students practice job- or life-relevant skills. They were entertaining and distracting ways of presenting information for a test.
Another limitation is that this research can’t address the question of engagement and completion rates. In the real world, getting people to complete online learning is often a challenge. If your traditional, text-based, click-next slide presentation course has a completion rate under 20%, then a game that is engaging enough to make people want to finish and pushes completion rates above 90% is a big improvement, even if that game technically produced lower retention rates in a controlled lab environment. Learning doesn’t always have to be drudgery, although sometimes we equate “worthwhile” with “unpleasant.” There is value in making it interesting enough to keep people’s attention, and maybe even an enjoyable experience.
In the previously mentioned discussion, Tahiya Marome made this point:
For the brain, play is learning and learning is play. That traditional educational structures have sucked that dry and replaced it with a grim, Puritanical “work is learning and learning is work” structure doesn’t mean we have to leave it that way. It may take us a while to figure out exactly how, but we can make educating oneself playful and a great, life long game again. We can. Our brains are wired for it.
Clark has some legitimate points about the definition of games being fuzzy and that the design of the game should match the learning outcomes. For example, I agree with her that adding a timer to critical thinking tasks can be counterproductive. Adding a timer to skill practice for skills that really do need to be timed is good practice though. Think of help desk agents who are evaluated both on the quality of their service and how quickly they can solve problems; timed practice matches the learning outcomes.
If Clark is going to make the claim that “games don’t teach,” she needs to address all the research that contradicts her point. Instead, she makes this claim without even mentioning any of the other research; she just pretends nothing else exists beyond the one study cited. That is, frankly, an extraordinary claim, and extraordinary claims require extraordinary evidence. One study doesn’t discount the dozens of successful examples out there. It’s a bad use of research to treat any individual study as applying in all situations, regardless of that study’s limitations. What we need to look at is the trend across the bulk of the research, not a single data point. There are definitely bad games out there, and games aren’t the solution in every situation, but, contrary to Clark’s claim, that doesn’t mean games can’t be one tool in our toolbox.
Here’s a cursory review of a few examples of successful games for learning. This is by no means a comprehensive review, but this would be a good place for Clark to start refuting evidence if she wants to dissuade people from using games. Again, the point is not to look at any single study as being the end of the discussion, but to look at the overall findings and the types of strategies that have repeatedly been shown to work.
- Immersive games beats classroom in maths, summarized by Donald Clark
- Via Karl Kapp, from a past presentation:
- “Trainees learn more from simulations games that actively engage trainees in learning rather than passively conveying the instructional material.”
- “Trainees participating in simulation game learning experiences have higher declarative knowledge, procedural knowledge and retention of training material than those trainees participating in more traditional learning experiences.”
- Eduweb has a collection of research related to games for learning. Here’s a highlight from the findings of one paper: “Summative evaluation of our WolfQuest wildlife simulation game finds that players report knowledge gain, stronger emotional attachment to wolves, and significant behavioral outcomes, with large percentages of players following their game sessions with other wolf-related activities, including such further explorations of wolves on the internet, in books and on television.”
- Kurt Squire has done extensive research in games for learning. Clark basically needs to disprove all of his work to support her claim.
- Clark Aldrich has created a number of successful games and simulations, such as Virtual Leader. “Using practiceware significantly increased retention and application, not just awareness of learned content.”
- James Paul Gee has published a number of articles on games and learning.
- Mark Wagner’s dissertation on MMORPGs in education found that “MMORPGs may help students develop difficult to teach 21st Century skills and may be used to support student reflection.”
- The Educational Games Research blog features exactly what you would think it does based on the title.
Thanks to Cathy and Tahiya for giving me permission to quote them here!
I’d love to hear whether any of you out there have designed games for learning and found them to be effective or not. I’ll have more to say about this topic next week in my post for the blog book tour for The Gamification of Learning and Instruction.
As part of David Kelly’s Learning Styles Awareness Day, I’m revisiting the idea of learning styles. I admit that when I was taught learning styles in my education program, I didn’t question it. It made intuitive sense, and I’d never heard a real criticism of the theory. When I started digging into the research though, I realized that the research support for learning styles is pretty flimsy.
If I think back to the way learning styles were taught to me though, it was never applied the way that the theory is “officially” supposed to work. The most common idea is that people have some sort of style, and if you match that style they will learn better. That’s what Will Thalheimer’s still-unanswered research challenge asks for: something where individuals receive training matched to their style. If you’re a visual learner, you would only receive learning via visual methods; if you’re an auditory learner, you’d listen to everything you learn, etc.
That was never how it was applied in the classroom though. For K-12 classroom applications, learning styles were really about providing multiple methods of learning for everyone in the class. In a physical classroom, you didn’t have the option of individualizing everything, so you tended to look for ways to hit the visual and auditory at the same time or for multiple activities to reinforce the same content.
As a music teacher, that might mean something like teaching rhythms through multiple channels. I’d start by having students listen to me chant and clap a rhythm (auditory), then have them echo that rhythm back (auditory and kinesthetic). After several minutes of echoing rhythms with a specific type of pattern, I’d draw a rhythm on the chalkboard (yes, actual chalk) and connect how it looks to how it sounds (visual and auditory). Then we’d practice reading some rhythms with similar patterns, with them looking, chanting, and clapping all together.
If I were teaching music today, I’d do that same kind of lesson, just not because of learning styles. That’s all based on the Kodály method, which does have research support (at least as far as I know; I haven’t dug into it since I rarely teach music anymore). But the idea of approaching concepts from multiple angles with different methods and media still makes sense. It isn’t because I’m matching to a particular style; it’s because I’m helping everyone learn through multiple channels. This might be what Tom Stafford from Mind Hacks is getting at when he says “Having thought about learning styles helps teachers improve their teaching and also helps increase their confidence and motivation.” I really wish he had provided a citation for the idea that thinking about learning styles helps teachers improve their teaching though; I’d like to know whether that’s just his opinion or something with data to support it.
So what does this mean for me as an instructional designer today, rather than a K-12 music and band teacher? As an instructional designer, I basically ignore learning styles. I do think about presenting information with both visuals and audio, but that’s based more on cognitive load theory than on learning styles. I’m also working to do better at visual presentation with graphics and not just words, because that is supported by research. As Judy Unrein noted, “…humans are such overwhelmingly visual creatures that if we simply catered better to that one sense, we could improve the vast majority of our designs.”
Judy’s idea of focusing on interaction preferences is an interesting one. People do have different preferences, and those preferences can change based on the context (and, I would add, the type of content). Giving learners some control over how they interact with the training does seem beneficial. If we don’t lock down the navigation, they can choose which parts they really need. In spite of the research, I personally find audio in e-learning to be generally obnoxious, so if I can turn it off and read the captions instead, I almost always will. I can read much faster than you can read to me, thank you very much, so I’m annoyed if you don’t give me the option of reading.
What about you? Is there anything in learning styles that you find useful in your own practice, or is it something you’ve abandoned in favor of other ideas?
Julie Dirksen’s Design for How People Learn is a great book for instructional designers because it actually is written using the principles taught. Some instructional design books use a “do as I say, not as I do” kind of approach: they talk about chunking content into manageable amounts, using effective visuals, and motivating learners, but they are filled with long, unbroken blocks of dry text. Design for How People Learn is an easy, fun read, with lots of visuals and realistic examples that touch on frustrating problems instructional designers face.
Julie says, “I recently heard the advice for authors that you should write the book you want to read but can’t find. That’s basically what I did.”
Lots of Images
Images are interspersed in every topic. It’s a lot of stick figures, but you’d be surprised at how effective stick figures can be at conveying a concept. For example, Chapter 2, “Who Are Your Learners?”, includes a series of stick figures facing different inclines representing the challenge of a course. It’s five variations of a single stick figure with a single angled line depicting a hill, but it still gets the point across. You can see how a novice learner is facing a steeper hill than an expert. I was a little surprised to not find any screenshots of actual courses, but the book doesn’t feel like it’s missing them.
When I was reading this book, I realized that I suddenly started using a lot more visuals in the course I was developing. The way the images were done in the book gave me more inspiration for my own course. Even if you’re an experienced instructional designer who is already familiar with most of the research and principles, this book is valuable as an example of well-done graphics for learning.
Stories and Examples
Although the book doesn’t include screenshots or examples of actual courses or training materials, the stories and examples do depict actual problems instructional designers face. For example, there’s an example of a new manager who has gone through training but isn’t applying the coaching skills taught. You’re given a description of her performance and asked to consider whether this is really a problem that can be fixed by training. It’s very realistic; you’ve probably seen or experienced a similar situation yourself. You can connect it to your experience, and it’s easy to see how this applies in your work. Julie explains benefits of using stories later in the book, but she applies the principle throughout.
The book includes lots of research about how we learn and remember, but it’s very accessible. The language is approachable and often humorous. The research is always framed in terms of “OK, so what does that mean for me when I’m creating a course? What do I do with that research?” I admit that there weren’t a lot of surprises for me in the research; it was mostly information I was already familiar with. I expect anyone with a master’s degree in instructional design, or who does a lot of independent reading and study, would find it to be the same. However, those who are just getting started in the field or are accidental instructional designers will find it to be a good foundation of research principles. The references at the end of each chapter are a good resource to dig deeper.
The Table of Contents and a sample chapter on motivation are both available on Julie’s site.