On Reddit, someone asked how to manage the complexity of branching scenarios and keep them from growing out of control. One of the issues with branching scenarios is exponential growth. If each choice has 3 options, you end up with 9 slides after just 2 choices and 27 after 3 choices. That’s 40 pages total (1 + 3 + 9 + 27) with only 3 decisions per path. For most projects, that’s more complexity than you want or need.
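The growth above is easy to check with a quick calculation. This is a minimal sketch, assuming every decision point offers the same number of options and every branch runs the same depth:

```python
def total_pages(options_per_choice, decisions):
    """Count pages in a fully branched scenario: 1 opening page,
    then options^1 pages after the first choice, options^2 after
    the second, and so on."""
    return sum(options_per_choice ** level for level in range(decisions + 1))

# Three options per choice, three decisions deep:
print(total_pages(3, 3))  # 1 + 3 + 9 + 27 = 40 pages
```

Even one more decision per path (`total_pages(3, 4)`) would more than triple the page count, which is why the techniques below focus on pruning and reusing branches.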
So how do you manage this complexity?
One way to make this branching easier to manage is by creating your scenarios in Twine. Twine makes it easy to draft scenarios and check how all the connections flow together, even as a scenario grows complex. Cathy Moore has an example of a scenario she built in Twine. That scenario has 57 total decision points, but it only took her 8 hours to create.
You can use Twine as your initial prototype, or you can use it as your final product. I have used Twine as my initial draft and prototype, then exported everything to Word as a storyboard for developers to build the final version in Storyline.
Planning a Scenario
Before I sit down to write a scenario, I always know my objectives: what am I teaching or assessing?
I usually have an idea of how long the ideal or perfect path will be. If you have a multi-step process, that’s your ideal path. If there are going to be 4 decision points on the shortest path, I know what those are before I start writing.
I also usually know at least some of the decision points based on errors or mistakes I need to address.
There’s a limit to how much you can plan before you just start writing it out though. I find it’s easier to just open up Twine and figure it out within that system.
Allow Opportunities to Fix Mistakes
One trick for managing the potentially exponential growth is to give learners a chance to get back on the right path if they make a minor error. If they make 2 or 3 errors in a row, they reach an ending and have to restart the whole thing.
For example, maybe you’re teaching a communication skill where they should start with an open-ended question before launching into a sales pitch. Choice A is the open-ended question (the best choice). Choice B is a closed question (an OK choice). Choice C is jumping right into the sales pitch without asking (bad choice). After the customer response for choice B, I’d give them an opportunity to use the open-ended question (A) as their follow up. Reusing some choices helps keep it from growing out of control. In this image, reusing choices cuts the total number of pages from 40 to 20.
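Reusing choices turns the scenario from a tree into a graph: several branches converge on the same follow-up page instead of each getting its own copy, so that page (and everything beneath it) only has to be built once. Here is a minimal sketch of the communication example above; the passage names and the exact link structure are hypothetical, not taken from the actual scenario:

```python
# Scenario modeled as a graph: each page maps to the pages its choices lead to.
# The closed question (B) routes back into the open-ended question's page (A),
# so that whole branch is reused rather than duplicated.
scenario = {
    "intro": ["open_question", "closed_question", "sales_pitch"],
    "open_question": ["good_ending"],
    "closed_question": ["open_question"],  # reused: back on the best path
    "sales_pitch": ["bad_ending"],
    "good_ending": [],
    "bad_ending": [],
}

def count_pages(graph, start):
    """Count the distinct pages reachable from the start.
    Each reused page is counted (and built) only once."""
    seen, stack = set(), [start]
    while stack:
        page = stack.pop()
        if page not in seen:
            seen.add(page)
            stack.extend(graph[page])
    return len(seen)

print(count_pages(scenario, "intro"))  # 6 distinct pages
```

Without the reuse, the same structure expanded as a pure tree would need a separate copy of the open-question branch under the closed question, so merging branches is where the savings come from.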
Make Some Paths Shorter
Not every path needs to be the same length. In the above image, one branch from choice C is shorter. It ends after 2 choices instead of 3. You might make a short path if people make several major errors in a row. Past a certain point, it makes sense to ask people to reset the scenario from the beginning or backtrack to a previous decision.
Good, OK, and Bad
In branching scenarios, not everything is as black and white as a clear-cut right or wrong answer. You can have good, OK, and bad choices and endings. In this example from my portfolio, green is good choices/endings, orange is OK choices/endings, and red is bad choices/endings. In this scenario, if you choose 39 (bad), you have 3 options: 40 (back on the good path, recovering from the mistake), 41 (OK), and 42 (a bad choice leading to a restart). This example has 15 endings, which is still more than I would like; if I were redoing it now, I would probably collapse a few more of those endings together.
Do you have any suggestions or tips for managing and reducing the complexity of branching scenarios? Please share in the comments.
Bloom’s taxonomy sometimes leads to unclear verb categorization and a weak connection to assessments. This framework focuses on performance objectives and ties the type of knowledge to verbs, instructional strategies, and types of practice or assessment. It draws partially on Merrill’s work. Procedural knowledge and declarative knowledge are handled differently.
“How to be a learning mythbuster” from Cathy Moore. Part of this is the broader problem that most people are lousy at understanding research and verifying sources. This isn’t exclusive to the learning profession. We should be better about avoiding the myths in our own field though.
We work in organizations that believe harmful myths. We’re pressured to work as if the myths are true, and we can’t or don’t take the time we need to keep our knowledge up to date and combat the myths.
Capterra’s analysis of top LMSs by customers, users, and social media popularity. Many people only review 2-3 LMSs before making a decision. This list gives people some additional choices to review while still being a manageable list. The explanation of their research is linked below the infographic.
Ruth Clark posted at ASTD an article titled “Why Games Don’t Teach.” It’s a deliberately provocative title, meant to draw attention and cause controversy. A more accurate title would be “Some Games Aren’t Effective at Making People Remember Content,” but that’s a lot less likely to grab attention.
Before I continue, I want to say that I enjoyed her book, eLearning and the Science of Instruction, and I have found some of the research there valuable. I respect her past contributions to the field.
However, I think Clark didn’t do a very careful review of the literature before writing her post, and I don’t think that one study is enough for her to make such a broad claim dismissing games for learning.
Let’s look at her summary of the research:
The goal of the research was to compare learning efficiency and effectiveness from a narrative game to a slide presentation of the content. Students who played the Crystal Island game learned less and rated the lesson more difficult than students who viewed a slide presentation without any game narrative or hands on activities. Results were similar with the Cache 17 game. The authors conclude that their findings “show that the two well-designed narrative discovery games…were less effective than corresponding slideshows in promoting learning outcomes based on transfer and retention of the games’ academic content” (p. 246).
Adams, D.M., Mayer, R.E., MacNamara, A., Koenig, A., and Wainess, R. (2012). Narrative games for learning: Testing the discovery and narrative hypotheses. Journal of Educational Psychology, 104, 235-249.
Next, let’s look at how the authors summarize their own work and see how it compares to Clark’s summary (emphasis mine).
Overall, these results provide no evidence that computer-based narrative games offer a superior venue for academic learning under short time spans of under 2 hr. Findings contradict the discovery hypothesis that students learn better when they do hands-on activities in engaging scenarios during learning and the narrative hypothesis that students learn better when games have a strong narrative theme, although there is no evidence concerning longer periods of game play.
Gee, that “under two hours” point seems like an important limitation of the research, maybe one that should have been mentioned by Clark when claiming that games have no value and “don’t teach.”
It’s also possible that there are flaws in the research.
The research says that the games were “well designed,” but maybe they actually weren’t. Maybe they were “well designed” by the standards of traditional courses, but not by the standards of games. Without seeing the full article, I can’t tell.
The learners did worse at “retention,” but honestly, I wouldn’t expect a narrative game to be all that effective at helping people memorize content. If retention was the goal, a narrative discovery style game probably was the wrong approach, which brings us back to the previous point about whether the course was well designed for the goals.
One of the benefits of games for learning is application and behavior change, something this research didn’t measure. I’m not terribly surprised that a game with hands-on practice didn’t help people simply recall information that well. I would have liked to see some measure of how well the learners could apply the concepts. But, as is also typical of Clark’s work, the focus is on whether people recall content, not whether they can apply it. This, strictly speaking, is a limitation of the research and not a flaw, but it is something we should consider when looking at how we apply this research to our work.
I think there’s a case to be made that the games themselves weren’t actually “well designed” as claimed. They didn’t allow for real practice, just a different format for receiving content. In the discussion on this post in the eLearning Guild’s LinkedIn group, Cathy Moore made this observation:
I don’t have access to the full study cited in Ruth’s article, but based on the description of the games in the abstract, (1) they don’t simulate a realistic situation that’s relevant to the learners and (2) they teach academic info that learners aren’t expected to apply in real life. The material was tested on college students in an academic setting, not adults on the job.
By requiring learners to explore (or slog through, in my opinion!) an irrelevant treasure hunt, you’re adding cognitive load or at the least distracting the brain from the content. It seems likely to me that putting the material in a more relevant context, such as using your knowledge of pathogens to protect patients in a hospital, would have changed the results of the study.
As Ruth herself says in the comments to the article, “I think it’s about designing a simulation (which I don’t equate directly to games) in a manner that allows learners to practice job-relevant skills.” Neither of those games let students practice job- or life-relevant skills. They were entertaining and distracting ways of presenting information for a test.
Another limitation is that this research can’t address the question of engagement and completion rates. In the real world, getting people to complete online learning is often a challenge. If your traditional text-based, click-next slide presentation course has a less than 20% completion rate, then a game that is engaging enough to make people want to finish and gets completion rates above 90% is a big improvement—even if that game technically produced lower retention rates in a controlled lab environment. Learning doesn’t always have to be drudgery, although sometimes we equate “worthwhile” with “unpleasant.” There is value in making it interesting enough to keep people’s attention, and maybe even an enjoyable experience.
In the previously mentioned discussion, Tahiya Marome made this point:
For the brain, play is learning and learning is play. That traditional educational structures have sucked that dry and replaced it with a grim Puritanical work is learning and learning is work structure doesn’t mean we have to leave it that way. It may take us a while to figure out exactly how, but we can make educating oneself playful and a great, life long game again. We can. Our brains are wired for it.
Clark has some legitimate points about the definition of games being fuzzy and that the design of the game should match the learning outcomes. For example, I agree with her that adding a timer to critical thinking tasks can be counterproductive. Adding a timer to skill practice for skills that really do need to be timed is good practice though. Think of help desk agents who are evaluated both on the quality of their service and how quickly they can solve problems; timed practice matches the learning outcomes.
If Clark is going to make the claim that “games don’t teach,” she needs to address all the research that contradicts her point. She makes this claim without even mentioning any of the other research; she just pretends nothing else exists beyond the one study cited. That is, frankly, an extraordinary claim, and extraordinary claims require extraordinary evidence. One study doesn’t discount the dozens of successful examples out there. It’s bad use of research to treat any individual study as applying in all situations, regardless of the limitations of the study. What we need to look at is the trends across the bulk of research, not a single data point. There are definitely bad games out there, and games aren’t the solution in every situation, but that doesn’t mean games shouldn’t be one tool in our toolbox like Clark claims.
Here’s a cursory review of a few examples of successful games for learning. This is by no means a comprehensive review, but this would be a good place for Clark to start refuting evidence if she wants to dissuade people from using games. Again, the point is not to look at any single study as being the end of the discussion, but to look at the overall findings and the types of strategies that have repeatedly been shown to work.
“Trainees learn more from simulations games that actively engage trainees in learning rather than passively conveying the instructional material.”
“Trainees participating in simulation game learning experiences have higher declarative knowledge, procedural knowledge and retention of training material than those trainees participating in more traditional learning experiences.”
Eduweb has a collection of research related to games for learning. Here’s a highlight from the findings of one paper: “Summative evaluation of our WolfQuest wildlife simulation game finds that players report knowledge gain, stronger emotional attachment to wolves, and significant behavioral outcomes, with large percentages of players following their game sessions with other wolf-related activities, including such further explorations of wolves on the internet, in books and on television.”
Clark Aldrich has created a number of successful games and simulations, such as Virtual Leader. “Using practiceware significantly increased retention and application, not just awareness of learned content.”
James Paul Gee has published a number of articles on games and learning.
Mark Wagner’s dissertation on MMORPGs in education found that “MMORPGs may help students develop difficult to teach 21st Century skills and may be used to support student reflection.”
Thanks to Cathy and Tahiya for giving me permission to quote them here!
I’d love to hear if any of you out there have designed games for learning and found them to be effective or not. I’ll have more to say about this topic next week in my post for the blog book tour for the Gamification of Learning and Instruction.