Some professors at Knox have added an Artificial Intelligence disclosure-of-use policy to their syllabi in response to the widespread use of AI, while others spend time on the first day of class discussing it. Many professors have observed, however, that most students at Knox are averse to using AI, believing it harms the creative process or fails to provide the specific help they need for their assignments.
Professors at Knox have realized that Artificial Intelligence software such as ChatGPT is not going away and that a comprehensive AI policy is needed. Among them is Assistant Professor of History Jessa Dahl, who updated the syllabus for her class to include a disclosure policy.
“My policy is one that is based around disclosure and largely modeled after the expectations of the honor code. Students are required to disclose if they’ve used generative AI in an assignment, to make sure the content of the assignment is still accurate, relevant to the prompt, and doesn’t engage in copyright infringement, and they cannot cite AI as an authoritative source,” Dahl said.
Other professors, such as Associate Professor of English and Theatre Sherwood Kiraly, mention the use of technology in their syllabi but do not have a distinct AI policy. Instead, they incorporate discussions about AI software at the start of their classes.
“Professor of English, Nick Regiacorte, and I are team-teaching a new First Person/Persona class and I’m doing Playwriting/Screenwriting. In both cases the class is made up of students who want to write, and such people, it turns out, are the mortal enemies of AI writers,” said Kiraly. “[Students] expressed the opinion that using AI to write is antithetical to individual art, which of course it is. They also observed that in its present state Chat GPT isn’t much of an artist. Nick addressed the Chat GPT issue in our syllabus, as well. Neither of us has had occasion to question the living wellspring of our student writing so far this fall.”
Dahl and Kiraly both agree that students at Knox are generally opposed to the use of generative AI. In her 300-level class, Dahl split her students into groups to discuss ethical debates surrounding AI, such as labor and creativity, and surveyed them about using AI as a research aid.
“The vast majority of the students were either uninterested in or actively opposed to incorporating generative AI into class exercises [and] assignments. In addition to the ethics problems, the students decided that as upper-level history majors and minors they could do better history than the AI–and I agreed with them,” Dahl said.
Dahl also observed that AI has not changed grading in her classes, because she encourages students to engage with specific historical documents, something AI cannot do without resorting to broad generalizations. She also believes that AI has had little impact on grading because of the Honor Code system at Knox.
“When I was a student at Knox, I found that to be empowering–I was trusted to take un-proctored exams wherever I wanted to, regardless of the temptations or the opportunities for cheating that might have created. I trust my students to disclose to me how or if they use AI, because ultimately if they don’t, I’m not the one that’s getting cheated,” Dahl said. “Maybe I’m an honor code idealist–I think that in implementation, there are certainly changes and adjustments that need to be made to make sure the practical system can deal with AI and still live up to the principle of the idea behind it.”