Online Learning Plagiarism Horror Stories

It’s a common complaint about online education that students plagiarize, but my horror stories aren’t about students. My horror stories are about developing online courses. (Some details changed to protect the not entirely innocent.)
Paste Copy Paste Copy

How NOT to Hide Your Tracks When Plagiarizing

I was reviewing a course from someone who had just left the company when I discovered some sentences that just didn’t quite fit. The tone shifted drastically mid-paragraph—always a red flag. So I started highlighting phrases to Google them…and suddenly realized there were links embedded in the text.

As it turned out, this course included extensive copying from other sources. One of the sources was a website that frequently linked to glossary terms. But the person who copied the content didn’t know how to remove the links after pasting them into Word, so he just changed the blue text to black and removed the underline. As soon as I hovered my mouse over the offending passages, I could follow the links right back to the original source. At least he made it easy for me to find the source and prove the content was copied.

Oh, the Irony

I’ve seen a number of subject matter experts plagiarize content for courses, but my all-time favorite story isn’t from a course I worked on myself. Another instructional designer had a SME who, frankly, really wasn’t a great writer. She had been struggling for weeks to coach him on the writing style and content. Finally, she received a draft that was right on track. She was so happy that she was getting better quality work from him…until she did a routine plagiarism check on it. More than one entire paragraph had been copied from a website without any attempt to paraphrase or cite the original.

The topic of the copied and pasted content? Business ethics.

Plagiarism Resources

Patti Shank recently asked whether people plagiarize because they don’t know or because they don’t care. I think it’s a combination of the two. Many people have never been taught what plagiarism is or how much paraphrasing is really required to make something your own. A lot of people don’t have a good system for keeping track of citations when they research, which makes it easy to lose track of which ideas came from which source. Especially in education, many people think that “fair use” covers any educational purpose, regardless of the amount of content or how widely you share it. Most people couldn’t tell you the four factors for determining fair use. This Fair Use Evaluator walks you through all the factors and gives you a time-stamped PDF to document your analysis.

Did I Plagiarize? is a great resource for explaining the types and severity of plagiarism. The same author also created a resource called Can I Use that Picture? which helps explain image copyright laws in the US. If your client says, “We don’t need a budget for images. Just go to Google Images; there’s lots of stuff there,” this might help explain why that isn’t a good plan.

Of course, educating people doesn’t help if they don’t care whether they plagiarize or not. Especially in higher education settings, where plagiarism can mean losing a job, I always tell SMEs at the start of a project that I’m sure it won’t be a problem but that I routinely check for plagiarism. I also explain that I expect everything to be cited, even if it’s paraphrased. Just setting the expectation helps reduce plagiarism.

Your Horror Stories

What about you? Do you have a great story about copied and pasted content finding its way into an online course, either as part of course development or as a student submission? What are your horror stories?

Image Credit: Paste Copy Paste Copy by wiredforlego

ID and E-Learning Links (4/12/15)

Posted from Diigo. The rest of my favorite links are here.

Debunker Club Works to Dispel the Corrupted Cone of Learning

A new group called The Debunker Club is working to dispel myths and misinformation in the learning field. From their website:

The Debunker Club is an experiment in professional responsibility. Anyone who’s interested may join as long as they agree to the following:

  1. I would like to see less misinformation in the learning field.
  2. I will invest some of my time in learning and seeking the truth, from sources like peer-reviewed scientific research or translations of that research.
  3. I will politely, but actively, provide feedback to those who transmit misinformation.
  4. At least once a year, I will seek out providers of misinformation and provide them with polite feedback, asking them to stop transmitting their misinformation.
  5. I will be open to counter feedback, listening to understand opposing viewpoints. I will provide counter-evidence and argument when warranted.

This year, coinciding with April Fool’s Day 2015, the Debunker Club is running an experiment. We’re making a concerted effort to contact people who have shared the Cone of Experience (also known as the Cone of Learning or the Pyramid of Learning).

Many iterations of this cone exist. A Google image search for “cone of learning” returns dozens of results, most of which present the fabricated statistics. If you’ve seen something like this that said, “People remember 10% of what they read, 20% of what they hear, 30% of what they see” etc., you’ve seen a variation on this theme.

Image search results for Cone of Learning

The original cone was developed by Edgar Dale and didn’t include any numbers. The later versions are the “corrupted cone,” with fictitious statistics added. Will Thalheimer’s 2009 post debunking these claims is where I learned the numbers were incorrect. Common sense might hint that these numbers aren’t really based in research, though. Think about it: how many times have you seen research where all the categories broke into even 10% segments?

As part of the Debunker Club’s efforts, I discovered a post on Dane’s Education Blog called A Hierarchy of Learning. Although this post cites a great debunking article (Tales of the Undead…Learning Theories: The Learning Pyramid), the blog author says only that he “appreciate[s] what it conveys.”

I left the following comment on his Learning Pyramid post.

Thanks for collecting so many resources on your blog. I can see that you’ve worked really hard to share many links and ideas with your readers.

However, the information above, though it may appear to have scientific support, has been exhaustively researched and found to have no basis in science. In fact, the “Tales of the Undead” link you cite debunks it.

An article from the scientific journal Educational Technology shows no research backing for the information. (Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Mythical Retention Chart and the Corruption of Dale’s Cone of Experience. Educational Technology, Nov/Dec 2014, 54(6), 6-16.)

The information presented is likely to produce more harm than good, promoting poor learning designs and hurting learners.

While we might abstract some beneficial notions from the percentages portrayed in the misleading information — namely that encouraging realistic practice has benefits — there are numerous faulty concepts within the bogus percentages that can do real harm. For example, by having people think that there are benefits to seeing over hearing, or hearing over reading, we are sending completely wrong messages about how learning works.

Most importantly, recent advances in learning science have really come together over the last two decades. The misleading information was first reported in 1914, with no research backing. It’s better to follow more recent findings than information that has no scientific basis. See, for example, the book Make it Stick by Brown, Roediger, and McDaniel. Julie Dirksen’s Design for How People Learn is another great selection.

I’m part of a larger community of folks called the Debunker Club who are attempting to encourage the use of proven, scientifically-based learning factors in the learning field.

I’m going to be posting about this misleading information on my blog. I hope you’ll comment and respond to my post if you wish. I (and the debunker community in general) want to learn how other people feel about the issues and ideas surrounding the original information and our approach to debunking myths and sharing evidence.

If you’re interested in dispelling misinformation and improving the learning field, please join the Debunker Club and participate in the conversation.

ID and E-Learning Links (3/22/15)

Posted from Diigo. The rest of my favorite links are here.

Intrinsic and Instructional Feedback in Learning Scenarios

A few years ago, I was a judge for a competition on scenario-based learning. While there were a few terrific submissions, I thought many of the courses missed the whole point of scenario-based learning. They started out fine: they provided some sort of realistic context and asked learners to make a decision. Then, instead of showing them the consequences of their decision, they just provided feedback as if it were any other multiple-choice assessment. “Correct, that is the best decision.” Blah. Boring. And ineffective.

In her book Scenario-based e-Learning: Evidence-Based Guidelines for Online Workforce Learning, Ruth Clark labels the two types of feedback “intrinsic” and “instructional.” Instructional feedback is what we see all the time in e-learning; it’s feedback that tells you what was right or wrong and possibly guides or coaches you about how to improve.

With intrinsic feedback, the learning environment responds to decisions and action choices in ways that mirror the real world. For example, if a learner responds rudely to a customer, he will see and hear an unhappy customer response. Intrinsic feedback gives the learner an opportunity to try, fail, and experience the results of errors in a safe environment.

Intrinsic feedback is one of the features of scenario-based learning that sets it apart from traditional e-learning. When you show learners the consequences of their actions, they can immediately see why it matters. The principles or processes you’re teaching aren’t just abstract content anymore; they have real-world implications, and it matters if learners get them wrong. It’s also more engaging to receive intrinsic feedback, and learners are more likely to remember the content because they’ve already seen what could happen if they don’t make the right choices.

Intrinsic feedback can take a number of forms. Customer reactions (verbal and nonverbal), patient health outcomes improving, sales figures dropping, a machine starting to work again, and other environmental responses can be intrinsic feedback. The example below contains three pieces of intrinsic feedback, all on the left side: a facial expression, a conversation response, and a motivation meter at the bottom.

Screenshot of a branching scenario with intrinsic and instructional feedback

In this example, learners are trying to convince someone to make healthier eating choices using motivational interviewing. Motivation level is an “invisible” factor, so I made it visible with a motivation indicator in the lower left corner. As learners make better choices and the patient feels more motivated to change, the motivation meter shows their progress.

Scenarios can also use instructional feedback. In the above example, a coach at the top provides instructional feedback and guidance on learners’ choices. Clark recommends using both intrinsic and instructional feedback in most situations.

One issue with instructional feedback is that it can break the realism of a scenario. Using a coach can help alleviate that problem, as can having learners ask for advice from people inside a scenario (a manager, an HR rep, another worker, etc.). Using a conversational tone for the instructional feedback also helps keep it within the scenario. Instructional feedback in a scenario often doesn’t need to explicitly say that a choice was correct or incorrect; that’s clear enough from the intrinsic feedback. Focus your instructional feedback on explaining why a choice was effective or how it could have been better.

Feedback can also be delayed rather than delivered immediately. Clark recommends more immediate feedback for novices but delayed feedback for experts or more advanced learners. Depending on the audience, in some branching scenarios I provide immediate intrinsic feedback for each choice learners make. When learners make bad choices that cause them to fail and force a restart of the scenario, they receive instructional feedback with guidance on how to improve on their next attempt. Because they might make two or three bad choices in a row before hitting a dead end, that instructional feedback is delayed. This keeps the momentum of the scenario moving forward while still supporting learners as they improve.
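If it helps to see the structure concretely, here is a minimal sketch of how a branching-scenario choice might model the two feedback types and the delayed debrief described above. This is purely illustrative—every class and function name here is a hypothetical invention, not the API of any real authoring tool. Each choice carries intrinsic feedback (shown immediately), a change to the invisible motivation meter, and coaching text that is held back until the learner hits a dead end.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Choice:
    text: str                  # what the learner says or does
    intrinsic: str             # in-world consequence, shown immediately
    motivation_delta: int      # change to the "invisible" motivation meter
    next_node: Optional[str]   # None marks a dead end that restarts the scenario
    coaching: str = ""         # instructional feedback, deferred until a dead end

@dataclass
class ScenarioState:
    motivation: int = 50
    pending_coaching: List[str] = field(default_factory=list)

def apply_choice(state: ScenarioState, choice: Choice) -> str:
    """Show intrinsic feedback now; defer instructional feedback."""
    state.motivation += choice.motivation_delta
    if choice.coaching:
        state.pending_coaching.append(choice.coaching)  # queued, not yet shown
    if choice.next_node is None:
        # Dead end: only now surface the accumulated instructional feedback
        debrief = " ".join(state.pending_coaching)
        return f"{choice.intrinsic} [Coach: {debrief} Try again.]"
    return choice.intrinsic  # scenario continues; learner sees only the consequence
```

A good choice returns only its in-world consequence and the scenario moves on; a bad choice that ends the scenario returns the consequence plus the accumulated coaching, mirroring the delayed-feedback pattern for more advanced learners.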

If you’re building scenario-based learning, don’t leave out the intrinsic feedback! Your learners will thank you.