Can Turnitin detect AI-produced writing?
You may have heard that Turnitin released a preview of its AI-detection tool in April 2023. Due to concerns about some of its features, all four campuses have decided not to implement this tool.
Please be aware that the existing version of Turnitin cannot detect content from ChatGPT or other AI tools. Even running the prompt for your writing assignment through an AI tool (e.g., Gemini, ChatGPT, or Claude) several times and uploading the results to Turnitin is unlikely to produce matches with student submissions that include AI-generated text. In most cases, the AI will produce results with enough variation to fool Turnitin.
What is an AI detector?
These are applications that predict the likelihood that a piece of writing was created by an AI or a human. Typically, they look at characteristics of the writing, particularly how random it is. Humans tend to use a greater variety of words; show more randomness in their spelling, grammar, and syntax; and generally write with more complexity than an AI. Some detectors give a verbal or graphical indication of how strongly they judge the text to be from a human or an AI. Others return results in terms of perplexity (a measure of randomness within a sentence) and burstiness (a measure of randomness between sentences) with scores, graphs, or color coding. Lower perplexity and burstiness scores suggest AI authorship, while higher ones point toward human authorship.
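To make those two measures concrete, here is a minimal sketch in Python of how a detector might compute a perplexity-like score for each sentence and use the spread across sentences as a burstiness signal. The per-token probabilities are hypothetical placeholders standing in for the output of a language model, and real detectors use their own models and formulas, so treat this as illustrative only.

```python
import math
from statistics import pstdev

def sentence_perplexity(token_probs):
    """Perplexity of one sentence from per-token probabilities assigned
    by a language model: exp of the average negative log-probability.
    Lower values mean the text is more predictable (less 'random')."""
    avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log)

def burstiness(sentence_perplexities):
    """One common proxy for burstiness: the spread (standard deviation)
    of perplexity across sentences. Human writing tends to vary more
    from sentence to sentence than AI-generated text."""
    return pstdev(sentence_perplexities)

# Hypothetical per-token probabilities for three sentences, as a language
# model might assign them (illustrative numbers, not real detector output).
doc = [
    [0.20, 0.05, 0.30, 0.10],  # surprising word choices -> higher perplexity
    [0.60, 0.70, 0.55, 0.65],  # predictable phrasing    -> lower perplexity
    [0.40, 0.15, 0.50, 0.25],
]
per_sentence = [sentence_perplexity(s) for s in doc]
print("per-sentence perplexity:", [round(p, 2) for p in per_sentence])
print("burstiness (std. dev.):", round(burstiness(per_sentence), 2))
```

In this toy example, the predictable second sentence gets the lowest perplexity; a paper made up entirely of sentences like it would also show low burstiness, which is the pattern these detectors associate with AI-generated text.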
We strongly discourage using ChatGPT or similar AIs (e.g., Bing AI Chat, Bard, Claude) to determine whether a paper was written by an AI or a human. They produce false results at a very high rate regardless of who or what wrote the paper, and they will produce plausible rationalizations to defend their answers if asked. This is the worst way to check for AI plagiarism.
Are AI detectors reliable?
No, at best they are indicative. Published claims of reliability vary greatly, between about 26% and 80% for free detectors; in other words, expect them to be wrong between a fifth and three-quarters of the time. Those figures apply to the free detectors already available in early 2023. Turnitin claims a false positive rate of 1% (i.e., about 1% of submissions containing no AI-generated content will be incorrectly flagged) and sets its threshold for flagging text as AI-created high in order to avoid more false positives; even so, in a stack of 200 entirely human-written papers, that rate would still mean roughly two incorrect flags. Those error rates may be many times higher for papers by non-native writers of English.
It is possible they will improve, but this should be viewed as an arms race rather than a stable situation. For instance, recent advances with ChatGPT (particularly using GPT-4) show that it is possible to coach it to write with more complexity and fluency, making it harder to detect. This is particularly true of students using well-engineered prompts. By prompt engineering, we mean the creation of fully developed questions and instructions (sometimes including data) to elicit the desired kind of results from the AI.
If you are going to use these tools, we recommend that you check your sample with more than one.
Are there privacy or other issues?
Free AI detectors have not been vetted by the University. For now, this is strictly a case of use at your own risk. You should never feed them any content that allows identification of the student. We also do not know whether or how they store or use submitted content. The same applies to feeding text from student papers back into an AI for evaluation.
Many faculty will enter parts of student papers into Google or other search engines to try to find matches, so this may not seem so different to you. You may wish to consider the differences between that and pasting or uploading all (or large parts of) a paper into an application. Beyond privacy, there may be questions of student copyright to consider.
Because we have a license and agreements with Turnitin that cover FERPA and meet the University's interpretation of student copyright, these considerations do not apply to Turnitin tools.
What should I look for when reading a paper?
There are also telltale signs to look for, though, again, these may change as the technology evolves. The list below is based on ChatGPT.
- Look at the complexity and repetitiveness of the writing. AIs are more likely to write less complex sentences and to repeat words and phrases more often.
- One telltale sign of an AI-generated paper in 2023 was made-up or mangled citations, which might include DOIs that pointed to other articles or nowhere; a mixture of correct and incorrect author, title, and publication information; books and articles that do not exist; and nonsensical information. Due to improvements in the tools and their access to web searches, this is less likely to happen in 2024 and may continue to improve; however, the tools still make mistakes, and the citations produced may not contain information pertinent to the content of the paper. It is still worthwhile looking for egregious factual errors. At least on some subjects, chatbots will insert information that is flatly impossible. While students might do this, in combination with other factors, the errors are often ones a human is unlikely to make. Remember, AIs do not understand what they are writing. The phrase "stochastic parrots" is often used to describe them, as they work out which words are most likely to follow other words and string them together.
- Look for grammar, syntax, and spelling errors. These are more likely to be mistakes a human author would make.
- If you have a writing sample from the student that you know is authentic, compare the style, usage, etc. to see if they match up or vary considerably.
- Does the paper refer directly to or quote the textbook or instructor? The AI is unlikely to have textbook access (yet) or know what is said in class (unless fed that in a prompt).
- Does the paper contain self-references to the AI by name or kind? In some cases, students have left in references that ChatGPT made to itself in the text.
- Consider giving ChatGPT your writing prompt and see how it compares to student submissions.
What are some ways I can structure writing assignments to discourage bad or prohibited uses of AI?
- Consider requiring students to quote from specific works, such as the textbook or from class notes. AI is unlikely to have access to either, though students might include quotes in the writing prompt.
- Add reflective features to the assignment. These could be written or non-written, such as having students discuss live, or record themselves (e.g., VoiceThread, Panopto), explaining what they found in their research and reflecting on their writing.
- Use ChatGPT or other tools as part of the writing process, for instance, brainstorming, but also have students critique the work, consider the ethics of using it, etc.
- Creating a writing assignment with scaffolding (including outlines, rough drafts, annotated bibliographies, incorporating feedback from peer reviews and the instructor, etc.) could help with some aspects of AI use. It might not be very effective for the outline or first draft, as those could still be generated by the AI. Asking for an annotated bibliography, given the current limitations of the software, is not something the AI could do with much success. Peer review and instructor feedback need to be detailed and substantive; otherwise, students could feed them back to the AI as further prompts, generating new versions of the paper in an iterative fashion.
- Contact your campus teaching-learning center, writing program, or Missouri Online for ideas and help working out assignments that promote use of AI in positive ways or mitigate possible harms.
What other resources are available?
- J. Scott Christianson, End the AI detection arms race: Christianson (MU Trulaske College of Business) discusses reasons to move away from a high level of concern about AI plagiarism and toward engaging with the technology to benefit both students and professors.
- Issues Posed by Generative AI for Teaching and Learning (Missouri Online): A comprehensive overview of AI-related issues to consider when teaching or designing a course or assignments.
- ChatGPT & Generative AI: Missouri Online blog post from February 2023.
- Cultivating a Culture of Academic Integrity in your Classroom and on Campus: recording of a session on academic integrity (including ChatGPT) from MU Teaching Renewal Week in January 2023. (Requires UM Panopto login.)
- AI Plagiarism Overview: this is a recorded version of a Missouri Online session expanding on these subjects. It was recorded on May 3, 2023. Given the changing nature of the topic, portions of it may be out of date when viewed.
- How To Check If Something Was Written with AI: long blog post on ways to detect AI writing. This is not focused on education, but some of the suggestions are useful. The author also discusses a mixture of for-fee and freely-available tools.
- A Toolkit for Addressing AI Plagiarism in the Classroom: this has many good suggestions, including a "Conversation Template: To Discuss AI-Plagiarism With Students" with useful ideas on how to approach a student you believe is using AI in unacceptable ways.
- AI Writing Detection: A Losing Battle Worth Fighting: Inside Higher Education article covering on-going attempts to detect AI writing, the challenges, and the pitfalls.
- Sarah Eaton, 6 Tenets of Postplagiarism: Writing in the Age of Artificial Intelligence: Sarah Eaton (University of Calgary) is a scholar studying academic integrity. She argues that our ideas about plagiarism are outdated and need to evolve along with the technology.
- Sarah Eaton, The Use of AI-Detection Tools in the Assessment of Student Work: this is an excellent piece on the problems of AI detection and considerations if you do decide to use it. It is written from a Canadian perspective, and so reflects a somewhat different set of legal and educational policy constraints than those found in the United States, but it is still valuable to US educators for its clarity of thought and its cautions.
- Maha Bali, Agentic and Equitable Educational Development For a Potential Postplagiarism Era: Maha Bali (The American University in Cairo) critiques and expands Eaton's ideas on post-plagiarism, specifically from a more global perspective.
- Turnitin AI Writing Resources (Turnitin LLC, 2023): Turnitin has created several documents (including rubrics) for working with the challenges of AI writing.
- Glossary of Artificial Intelligence Terms for Educators from NCSU
- James Zou, et al., GPT Detectors Are Biased Against Non-Native English Writers is a study of how AI detectors give high percentages of false positives to papers written by non-native English writers. It also shows that the papers are likely to fare better if polished by an AI.
- Lori Salem, et al., Evaluating the Effectiveness of Turnitin’s AI Writing Indicator Model looks at the Turnitin AI Indicator in depth, noting especially its difficulties with hybrid texts and that there seems to be little relationship between the flagged text and the passages actually written by AI.
- UNESCO guide to ChatGPT and Artificial Intelligence in Higher Education contains a primer on AI, suggestions for working with students, considerations of different types of assignments, as well as ethical and institutional considerations.
- Debby R. E. Cotton, et al., Chatting and cheating: Ensuring academic integrity in the era of ChatGPT - a paper summarizing research done in the early months of ChatGPT and also with earlier versions of GPT.
- Beth McMurtrie and Beckie Supiano, Caught Off Guard by AI: Professors Scrambled to React to ChatGPT This Spring - and Started Planning for the Fall. A summary of a faculty survey and interviews by the Chronicle of Higher Education regarding how professors reacted to AI in the Spring of 2023 and their plans for Summer and Fall 2023.
- ChatGPT, Artificial Intelligence, and Academic Integrity: MU Office of Academic Integrity statement on GenAI and plagiarism/cheating.