Which “AI” will win: artificial intelligence or academic integrity? That’s a question facing officials as cheating cases have soared at GW following the rise of generative artificial intelligence, or GAI, programs like ChatGPT and DALL-E.
The ability to brainstorm ideas, draft outlines and create images with only a few keystrokes has revolutionized higher education — so much for students’ independent, creative work. But AI and academic integrity aren’t necessarily opposed to each other.
Guidelines from the Office of the Provost released in April describe GAI tools as “an exciting addition to the learning process” and outline how professors might allow or restrict students from using them. From certifications to research programs to workshops, the University is exploring artificial intelligence and offering it to staff, faculty and students through software like Adobe Express.
For some students, that’s a shortcut. Putting aside the strange and frequently inaccurate results such programs can generate, GAI can write entire essays with a single query. But such uses can violate GW’s Code of Academic Integrity: Submitting GAI-generated content without an instructor’s permission is technically cheating. And GAI-related infractions helped fuel what officials said was a 476 percent increase in cheating at GW in fall 2023 compared with fall 2021 and fall 2022.
All of this adds up to a strange situation. Officials want the University to be at the forefront of artificial intelligence, while faculty have the mission of ensuring students use these tools responsibly — if at all — in the classroom. Moreover, individual professors’ policies on GAI can range from homework assignments built around artificial intelligence to outright bans on any such program. For all the optimism about GAI’s future, there’s a deep uneasiness about its present.
But looking solely at academic integrity, the problem isn’t just artificial intelligence. Prompt-based text and image creators are just a means to an end, and the truth is students have cheated, are cheating and will likely continue to cheat. But why?
The COVID-19 pandemic provides one answer. The number of academic integrity reports at GW and nationwide rose between 2020 and 2021. When students felt less accountable in remote courses and more stressed in a time of unprecedented crisis, they were more likely to consider cheating or plagiarizing. And as the pandemic waned in late 2022 and early 2023, GAI took off. For students who were exhausted, overwhelmed and performing below expectations, the sudden arrival of the programs created a perfect storm.
Some students lack the skills and knowledge they should already have, and they incorrectly believe artificial intelligence can paper over the cracks. This is an explanation for cheating, not an excuse. GAI can help students “think smarter, not harder,” but it’s up to them to maintain their academic integrity. So, while plans for faster, more flexible disciplinary hearings and further guidance on artificial intelligence don’t get to the heart of why cheating and GAI-related infractions have increased, they can help streamline GW’s handling of cases when they arise.
It’s difficult to predict the future of artificial intelligence, especially when its capabilities — and the implications of using it — continue to rapidly evolve. But for better and for worse, from formal research to last-minute assignments, these tools are here to stay. GWU, meet GAI.
The editorial board consists of Hatchet staff members and operates separately from the newsroom. This week’s staff editorial was written by Opinions Editor Ethan Benn based on discussions with Contributing Culture Editor Jenna Baer, Editorials Assistant Paige Baratta, Contributing Social Media Director Anaya Bhatt, Contributing Opinions Editor Riley Goodfellow and Social Media Director Ethan Valliath.