This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. You are welcome to use, remix, and share with attribution.
While there may not be completely chatbot-proof assignments, try some of the strategies listed below to mitigate the use of chatbots by students in your course.
You may also want to add specific language to your syllabus indicating that using a chatbot in your course will be considered cheating and/or plagiarism.
Move away from the five-paragraph essay. Chatbots can follow this formula easily; encourage your students' originality by assigning less formulaic writing.
Tip: If you want to stick with the five-paragraph essay, test out your prompt on an advanced chatbot like ChatGPT. Greene (2022) writes, "If it can come up with an essay that you would consider a good piece of work, then that prompt should be refined, reworked, or simply scrapped... if you have come up with an assignment that can be satisfactorily completed by computer software, why bother assigning it to a human being?"
Sticking with essays? Warner (2022) suggests focusing on process rather than product. Scaffolding learning and allowing students to explain their thinking and make learning visible along the way are strategies that may help you confirm student originality. Warner's philosophy aligns with that at Butler University: "I talk to the students, one-on-one about themselves, about their work. If we assume students want to learn - and I do - we should show our interest in their learning, rather than their performance."
In the short term, you can have your students write essays in class and on paper. This isn't a good long-term solution, however.
Note: Some students may have accommodations to type their work rather than handwrite it. Make sure to follow student accommodations when assigning work.
Idea from Ditch That Textbook
Using collaborative activities and discussions is one strategy to mitigate the use of chatbot responses in your class. While students may generate ideas with a chatbot, they will need to discuss with one another whether they want to use the chatbot's responses, whether those responses fit the prompt, and whether they are factually accurate.
Activities to try include:
These strategies can work for online courses with a few tweaks. For discussions, ask students to post a video recording rather than text. While students may still generate a response using ChatGPT, recording a video will require more interaction with the content than copying and pasting a text response would.
Idea from Ditch That Textbook
Engage your students in meaning-making activities to demonstrate their learning.
Consider low-tech activities like:
Consider technology-infused activities like:
* Note that a chatbot can provide an outline for these activities.
Idea from Ditch That Textbook
Brain dumps are an ungraded recall strategy. The practice involves pausing a lecture and asking students to write everything they can recall about a specific topic. Read more at:
Idea from Ditch That Textbook
Consider using planned or impromptu oral exams. You may also want to include language in your syllabus noting that you may conduct an oral exam if you suspect plagiarism through the use of a chatbot.
Idea from Darren Hudson Hick (2022).
When selecting readings, consider sourcing more obscure texts for your students to read. Chatbots may have less information in their training data on obscure texts. As an example, the New York Times reports that, "Frederick Luis Aldama, the humanities chair at the University of Texas at Austin, said he planned to teach newer or more niche texts that ChatGPT might have less information about, such as William Shakespeare’s early sonnets instead of 'A Midsummer Night’s Dream'" (Huang, 2023).
Contact your department's librarian for help sourcing new content.
(Note that ChatGPT is currently trained on data through 2021. Some educators suggest using newer writings and research, but this strategy isn't foolproof since the training models for chatbots are updated frequently.)
Coordinate times to take your class to conduct field observations; students can note their observations and write a reflection about their experience.
Idea from Kelley (2023)
Currently, Turnitin cannot detect content written by chatbots. Watch the video below for an example of Turnitin scores for 20 ChatGPT essays generated from the same prompt.
Although some chatbot detection tools exist, CAT does not currently recommend using them. We need to look further into these tools and the benefits and harms they may present to our students and faculty. Three reasons for our hesitation are accuracy, copyright issues, and data and privacy issues.
While CAT has not conducted robust testing, we have submitted some examples of chatbot-produced writing to three common AI detection tools. All three tools failed to detect chatbot writing in some way. We do not yet have enough evidence to know the rates of false positives and false negatives that these tools may produce. If you use a chatbot detection tool, a "fake" or AI-generated rating is not enough evidence to accuse a student of cheating or plagiarism; we encourage you to gather additional evidence through alternative assessments such as oral exams.
Some chatbot detection tools may have privacy policies that violate FERPA or have harmful data collection policies. For this reason, you will need to scrub any personally identifiable information—and, depending on the tool, use a code, only known to you, to match results with students—before submitting to a detection tool.
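If you do experiment with a detection tool, here is a minimal sketch in Python of one way to handle the scrubbing and coding step described above. It is only an illustration under assumptions: the file name, the simple name-matching rule, and the code format are all hypothetical, and a real workflow would also need to remove names from headers, file names, and document metadata.

```python
# A minimal, hypothetical sketch of the scrubbing step described above:
# replace each student's name with a random code before submitting text to a
# detection tool, and keep the name-to-code mapping in a file only you can access.
import csv
import re
import secrets

def make_code() -> str:
    """Generate a short random code that stands in for a student's name."""
    return secrets.token_hex(4)  # e.g. '9f2a1c3b'

def scrub(text: str, student_name: str, code: str) -> str:
    """Remove the student's name (case-insensitive) and label the text with the code."""
    cleaned = re.sub(re.escape(student_name), "[REDACTED]", text, flags=re.IGNORECASE)
    return f"Submission {code}\n\n{cleaned}"

if __name__ == "__main__":
    # Hypothetical submissions keyed by student name.
    submissions = {
        "Jane Doe": "Essay text by Jane Doe ...",
        "John Smith": "Essay text by John Smith ...",
    }
    mapping = []
    for name, essay in submissions.items():
        code = make_code()
        mapping.append({"code": code, "student": name})
        print(scrub(essay, name, code))  # this scrubbed version is what goes to the detector
    # Keep the mapping locally; do not upload it along with the submissions.
    with open("code_mapping.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["code", "student"])
        writer.writeheader()
        writer.writerows(mapping)
```

Because only you hold the mapping file, results returned by the tool can be matched back to students without exposing their identities to the vendor.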
Turnitin is planning to include detection software in the near future; their website states, "We will incorporate our latest AI writing detection capabilities—including those that recognize ChatGPT writing—into our in-market products for educator use in 2023" (Caren, 2022). Here's a preview of what is in the works (Chechitelli, 2023):
In addition to Turnitin's detection tool, OpenAI, the developer behind ChatGPT, is attempting to add watermarks to ChatGPT output (Wiggers, 2022). If these watermarks can be implemented, there may be new ways to detect ChatGPT output on the horizon. Scott Aaronson (2022), the OpenAI researcher working on watermarking, describes it in a blog post: "Basically, whenever GPT generates some long text, we want there to be an otherwise unnoticeable secret signal in its choices of words, which you can use to prove later that, yes, this came from GPT. We want it to be much harder to take a GPT output and pass it off as if it came from a human." In the post, he goes on to explain mathematically how this is possible. Aaronson points out that there is no regulation requiring AI safety measures, so there is no guarantee that tools released after ChatGPT will contain similar watermark features.
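To make the watermarking idea more concrete, below is a minimal, hypothetical Python sketch of a keyed statistical watermark. This is not Aaronson's or OpenAI's actual scheme (which works with the model's token probabilities); the secret key, the marking rule, and the 90% preference are illustrative assumptions meant only to show how a hidden bias in word choice can later be verified by someone who holds the key.

```python
# Toy illustration of a keyed statistical watermark (NOT the real OpenAI scheme).
# The generator prefers words whose keyed hash falls in a "marked" set; a detector
# who knows the key checks whether marked words appear more often than chance.
import hashlib
import random

SECRET_KEY = b"shared-secret"  # hypothetical key known only to the model provider
MARK_FRACTION = 0.5            # roughly half the vocabulary counts as "marked"

def is_marked(word: str) -> bool:
    """Return True if the keyed hash of the word falls in the marked set."""
    digest = hashlib.sha256(SECRET_KEY + word.lower().encode()).digest()
    return digest[0] < 256 * MARK_FRACTION

def watermarked_choice(candidates: list[str], rng: random.Random) -> str:
    """Pick a word, gently preferring marked candidates so the bias stays subtle."""
    marked = [w for w in candidates if is_marked(w)]
    if marked and rng.random() < 0.9:
        return rng.choice(marked)
    return rng.choice(candidates)

def marked_ratio(text: str) -> float:
    """Detector: fraction of words in the text whose keyed hash is marked."""
    words = [w for w in text.split() if w]
    if not words:
        return 0.0
    return sum(is_marked(w) for w in words) / len(words)

if __name__ == "__main__":
    rng = random.Random(0)
    vocabulary = ["students", "learning", "essay", "write", "discuss",
                  "reflect", "analyze", "revise", "draft", "argue"]
    generated = " ".join(watermarked_choice(vocabulary, rng) for _ in range(200))
    # In expectation, watermarked text scores above MARK_FRACTION, while ordinary
    # human writing hovers near it, which is the statistical signal the key unlocks.
    print(f"marked ratio of generated text: {marked_ratio(generated):.2f}")
```

The detection step is purely statistical: without the key, the bias is invisible; with it, long passages of watermarked text stand out because marked words occur more often than chance would predict.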
Aaronson, S. (2022, November 28). My AI safety lecture for UT Effective Altruism. Shtetl-Optimized: The blog of Scott Aaronson. Retrieved on January 11, 2023, from https://scottaaronson.blog/?p=6823.
Bowman, E. (2023, January 9). A college student created an app that can tell whether AI wrote an essay. NPR. Retrieved on January 10, 2023, from https://www.npr.org/2023/01/09/1147549845/gptzero-ai-chatgpt-edward-tian-plagiarism
Caren, C. (2022, December 15). AI writing: The challenge and opportunity in front of education now. Turnitin. Retrieved on January 10, 2023, from https://www.turnitin.com/blog/ai-writing-the-challenge-and-opportunity-in-front-of-education-now
Chechitelli, A. (2023, January 13). Sneak preview of Turnitin’s AI writing and ChatGPT detection capability. Turnitin. Retrieved on January 17, 2023, from https://www.turnitin.com/blog/sneak-preview-of-turnitins-ai-writing-and-chatgpt-detection-capability
Ditch That Textbook. (2022, December 17). ChatGPT, chatbots and artificial intelligence in education. Retrieved on January 6, 2023, from https://ditchthattextbook.com/ai/
Greene, P. (2022, December 11). No, ChatGPT is not the end of high school English. But here’s the useful tool it offers teachers. Forbes. Retrieved on January 9, 2023, from https://www.forbes.com/sites/petergreene/2022/12/11/no-chatgpt-is-not-the-end-of-high-school-english-but-heres-the-useful-tool-it-offers-teachers
Hick, D.H. (2022, December 15). Today, I turned in the first plagiarist I’ve caught using A.I. software to write her work [Facebook post]. Facebook. Retrieved on January 10, 2023, from https://www.facebook.com/title17/posts/pfbid0D8i4GuCUJeRsDJjM1JJtfkDYDMCb7Y7RdK2EoyVhRuctg9z2fhvpo1bB2WAxGBzcl
Huang, K. (2023, January 16). Alarmed by A.I. chatbots, universities start revamping how they teach. The New York Times. Retrieved on January 17, 2023, from https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html
Kelley, K.J. (2023, January 19). Teaching Actual Student Writing in an AI World. Inside Higher Ed. Retrieved on January 19, 2023, from https://www.insidehighered.com/advice/2023/01/19/ways-prevent-students-using-ai-tools-their-classes-opinion
Warner, J. (2022, December 11). ChatGPT can't kill anything worth preserving: If an algorithm is the death of high school English, maybe that's an okay thing. The Biblioracle Recommends. Retrieved on January 11, 2023, from https://biblioracle.substack.com/p/chatgpt-cant-kill-anything-worth
Watkins, R. (2022, December 18). Update your course syllabus for chatGPT. Medium. Retrieved on January 6, 2023, from https://medium.com/@rwatkins_7167/updating-your-course-syllabus-for-chatgpt-965f4b57b003
Wiggers, K. (2022, December 10). OpenAI’s attempts to watermark AI text hit limits. TechCrunch. Retrieved on January 10, 2023, from https://techcrunch.com/2022/12/10/openais-attempts-to-watermark-ai-text-hit-limits/