Editor’s Note: A previous version of this article, published on Nov. 12, incorrectly reported the existence of a new AI policy. The Star has removed the original article from its website and published an updated, accurate version below.
While Texas State does not have an official Artificial Intelligence (AI) policy, the Honor Code Council recommends faculty outline whether or not they allow AI in their courses. Professors across campus are split, with some promoting AI use and others opting out.
In January 2023, shortly after the launch of ChatGPT in November 2022, the Honor Code Council updated its policy to include “automated means” as an example of potential cheating. This change allows faculty to decide whether using such tools constitutes academic dishonesty, according to Honor Code Council Chair Rachel Davenport.
“When I wrote it, I was thinking automated means might mean like AI, like some tool that will do it for you,” Davenport said.
Davenport said allowing faculty to decide whether or not AI is permitted in their classrooms is consistent with the approach of peer institutions such as UT Austin.
“It wouldn’t make logical sense for us to [have a blanket AI policy],” Davenport said. “The other reason is we just don’t have that authority. Our job truly is not to be prescriptive, it’s just to help facilitate the process.”
If a faculty member suspects a student used AI in a way that violates a clearly stated prohibition in the syllabus or assignment guidelines, they should begin by contacting the student to discuss the alleged violation. If the professor still believes an Honor Code violation occurred after this discussion, or if the student does not respond within three business days, they must fill out the Honor Code Review Form. The case then proceeds to the office of the Assistant Vice Provost for Experiential and Academic Initiatives for next steps.
“If it’s not allowed, then you’re violating the policy. If the faculty member says, ‘totally, you can use it, or you can use it in these ways,’ you’re no longer violating,” Davenport said. “So that’s the way that we are trying to support faculty without coming at this with an explicit AI policy.”
One example of an AI syllabus statement on the faculty development website comes from Dr. Jelena Tešić, an assistant professor of computer science. It reads, “Treat ChatGPT like a fellow student in this class: Ask questions, but do not copy the answers. Ask for help, but do not copy the code.”
Carlos Balam-Kuk Solís, a lecturer in the Occupational and Workforce Leadership Studies Department, said generative AI allows his students to spend less time on busy work. In one of his classes, students develop apps; he lets them use generative AI to create the app logo so they can focus more on what the app does rather than on its design.
“Everybody’s concern was that AI was going to just allow people to sidestep the kind of guardrails that we have put in place through policy and practice around academic integrity. But over time, people have started to understand that AI is going to be part of our lives whether or not we like it,” Balam-Kuk Solís said.
Balam-Kuk Solís noted that hesitation around AI is a typical response to any new technology, citing the internet as a similar example. However, he encourages his students to use AI responsibly by directing them to university-sanctioned tools that protect privacy, such as Copilot.
Other faculty members haven’t fully integrated AI into their teaching, wary of the effects it may have on students’ learning. Katherine Warnell, assistant professor of psychology, said she does not allow AI and states that prohibition in her syllabus, but AI use can often go undetected.
“This is a global issue for education,” Warnell said. “I don’t think there’s a right way to write that policy that we’re just not doing. I don’t think anyone knows how to write that policy… Even if you disallow it in your syllabus, if a student says ‘No, I didn’t use it.’ That’s really hard.”
Texas State’s Honor Code states AI detectors may mistakenly flag legitimate software tools, such as Grammarly, as generative AI. If a faculty member suspects a violation, they are encouraged to discuss the issue with the student before officially reporting an Honor Code breach.
“You cannot rely on the detector,” Davenport said. “You’ve got to talk to your students and find out. There are false positives but also false negatives. I’ve found cases where it was clearly written by AI and Turnitin didn’t catch it.”
Bridget Dunn, psychology junior, said AI could be helpful for pushing students toward a goal, such as drafting ideas, but letting it do all the work, such as fully writing papers, would be an obvious misuse of the tool.
“I think when [faculty] completely try to prohibit it [AI], it creates an aggressive feel toward students since there are ways that you can use it without it being complete plagiarism,” Dunn said. “I think if they set some guidelines for how you can use it on different assignments…that’s a little bit more productive.”