Texas State has a new Artificial Intelligence (AI) policy in the Honor Code: if a syllabus does not say anything about AI use, students should assume it’s prohibited. Professors across campus are split, with some promoting AI use and others opting out.
Texas State’s Honor Code on AI went into effect on Sept. 6, 2024, less than two weeks after the first day of the fall semester.
Texas State’s Faculty Development Department provided resources to help faculty familiarize themselves with AI if they choose to, giving seminars, trainings and examples of different syllabus statements about AI.
“[Faculty are] in the best positions to decide what’s best for their students,” Candace Hastings, director of Faculty Development, said. “Most of them are navigating this new space as well and saying, ‘Okay, this is where it would help, and this is where it wouldn’t.'”
One example of an AI syllabus statement on the faculty development website comes from Dr. Jelena Tešić, an assistant professor of computer science. It reads: “Treat ChatGPT like a fellow student in this class: Ask questions but do not copy the answers. Ask for help, but do not copy the code.”
Hastings said there are two ways to use AI: one is transactional, where you passively receive information, and the other is transformational, where you engage with AI as a learning tool, verifying its information rather than circumventing real understanding.
Carlos Balam-Kuk Solís, lecturer in the Occupational and Workforce Leadership Studies Department, said generative AI allows his students to spend less time on busy work. In one of his classes, in which students develop apps, he allows them to use generative AI to create the app logo so they can focus more on what the app does than on its design.
“Everybody’s concern was that AI was going to just allow people to sidestep the guardrails that we have put in place through policy and practice around academic integrity. But over time, people have started to understand that AI is going to be part of our lives whether or not we like it,” Balam-Kuk Solís said.
Balam-Kuk Solís said hesitation around AI is a typical response to any new technology, citing the internet as a similar example. However, he encourages his students to use AI responsibly by directing them to university-sanctioned tools that protect privacy, such as Copilot.
Other faculty members haven’t fully integrated AI into their teaching, wary of the effects it may have on students’ learning. Katherine Warnell, assistant professor of psychology, said she does not allow AI and states so in her syllabus, but AI use can often go undetected.
“This is a global issue for education,” Warnell said. “I don’t think there’s a right way to write that policy that we’re just not doing. I don’t think anyone knows how to write that policy… Even if you disallow it in your syllabus, if a student says ‘No, I didn’t use it.’ That’s really hard.”
Texas State’s Honor Code states AI detectors may mistakenly flag legitimate software tools, such as Grammarly, as generative AI. If a faculty member suspects a violation, they are encouraged to discuss the issue with the student before officially reporting an Honor Code breach.
The faculty development website suggests that if professors use detection tools, it is “a supplement to, rather than a replacement for, your professional judgment and understanding of your student’s work.”
“Even if you’re not using it, you’re in an ecosystem where it exists, and then you’re impacted by it,” Warnell said. “Students are so afraid they’re going to get falsely flagged for using it… They’re really stressed they’re going to be dragged before the honor council when they didn’t use ChatGPT because it’s so hard to prove either way – how do you prove a negative?”
Bridget Dunn, psychology junior, said AI could be helpful for pushing students toward a goal, such as drafting ideas, but letting it do all the work, such as fully writing papers, would be an obvious misuse of the tool.
“I think when [faculty] completely try to prohibit [AI], it creates an aggressive feeling toward students since there are ways that you can use it without it being complete plagiarism,” Dunn said. “I think if they set some guidelines for how you can use it on different assignments…that’s a little bit more productive.”