Officials are discussing how to expand the Code of Academic Integrity to cover students who use AI programs to complete their assignments, following a surge in public use of AI.
Provost Chris Bracey said during a Faculty Senate meeting Friday that officials will deliver a statement this week on how the academic integrity code will account for AI programs like ChatGPT, a program launched in November that can write code, academic papers and responses to simple prompts. But a handful of faculty said that instead of banning the program from the classroom, they plan to incorporate AI tools like ChatGPT into their curricula this spring to prepare students for a future in which AI use is rising across industries.
After signing up with a name and email address, ChatGPT users can enter prompts that generate poems, stories, essays, emails and more, allowing students to submit their assignment instructions and receive a completed paper in return.
“We are trying to figure out whether our existing code prohibits the use of this AI software that can respond to a prompt to generate an essay,” Bracey said. “Our general sense is that this does not qualify as plagiarism but rather falls under the general category of cheating.”
The Code of Academic Integrity defines cheating as the use of unauthorized materials during an academic exercise, while plagiarism is defined as the misrepresentation of ideas as one’s own.
ChatGPT’s popularity has spread rapidly across social media because of its ability to mimic human conversation patterns and text, with more than one million users already writing papers, articles, emails and more with the program. ChatGPT has also sparked debate in the academic world between some who fear that the bot’s ability to produce human-like writing will enable widespread cheating and others who see it as an opportunity to restructure teaching and encourage students to learn how to write better than the AI system.
OpenAI, the company behind ChatGPT, is reportedly close to releasing a more powerful model called GPT-4 with new multibillion-dollar investments from tech giant Microsoft.
Faculty said that because students have open access to ChatGPT online, professors should teach students how to best use the technology instead of banning it from their classrooms. They said they will encourage students to use the program to build critical skills like research and to strengthen their writing.
Lorena Barba, a professor of mechanical and aerospace engineering, said faculty should avoid the “fearmongering” associated with the rapidly advancing technology. She said faculty should design “authentic” and “relevant” assignments that students could not complete by cheating with AI tech.
“Faculty need to rethink their assessments to account for the fact that this tool is available to students, and, even better, teach students how to use it effectively,” Barba said in an email. “A few students will always seek to cut corners, and their learning will suffer. I will do my best to convince them otherwise, but, in the end, it’s their choice.”
Alexa Alice Joubin, a professor of English, said she has “embraced” AI in the classroom and taught students “prompt engineering,” the practice of crafting prompts that draw the most useful responses from ChatGPT, to build skills like designing research questions.
“AI is no different than when the calculator was first invented,” she said. “It was disruptive, but it didn’t kill math. The typewriter was invented and was disruptive, but it did not kill writing. AI is just another tool. I think overhyping it is really unhealthy because all those people who are doing the fearmongering have never spent time with this.”
Joubin said ChatGPT would provide enhanced learning opportunities in her English classes through exercises where students can critique AI essays without the “uncomfortable” elements of critiquing another student’s writing in class.
“Here is an essay, and it’s not written by a human, so we can do anything we want so you wouldn’t be afraid of offending,” Joubin said. “It’s like a lab for the humanities, it’s brilliant.”
Tadeusz Zawidzki, an associate professor of philosophy, said that while papers ChatGPT generates can “cause panic” among humanities professors, the program is limited by its inability to convey personal human experiences.
“One key limitation of GPT-3 tech is that it can draw only on the collective experience of humanity as it appears on the internet,” Zawidzki said in an email. “It has no sensory, emotional, embodied or socially embedded experiences of its own on which to draw. This is a source of information which will always constrain human creativity in ways unavailable to GPT-3 tech.”
Zawidzki said that as ChatGPT becomes more widespread, the University should teach students to specialize in academic writing grounded in personal human experience.
“Perhaps we should focus on skills at rendering experiences in language when teaching students to write since it seems like something that will always, truly distinguish human from GPT-3 writing,” he said.
Katrin Schultheiss, an associate professor of history, said the University should give professors guidance on how best to move forward with incorporating artificial intelligence into their classes.
“I absolutely think that the University should be issuing guidelines and organizing training,” she said. “When we had to go online when COVID came, the University reacted and had a bunch of workshops and did a real mobilization around this new technology, and I think there has to be a similar mobilization around this issue.”
Schultheiss said that despite its advantages, ChatGPT has also raised concerns because it occasionally presents factually incorrect information, owing in part to its “limited knowledge” of current events.
“People use this stuff all the time, and sometimes it’s wrong,” Schultheiss said. “But it looks right. You know, that can be problematic too.”
Schultheiss is co-organizing an introductory meeting with Barba and educational technology professor Ryan Watkins on Wednesday to discuss ChatGPT’s future as an educational tool and familiarize students and faculty with the technology.
“The idea behind this is an introduction to AI literacy,” she said. “The meeting is meant for people who haven’t thought much about it, for people like me who have just encountered this in the fall and are trying to wrap their heads around it.”