UO faculty wrestle with advent of genAI in higher education

For the University of Oregon and other institutions of higher learning, the advent of generative artificial intelligence engines such as ChatGPT is posing logistical, technical and ethical questions as the technology changes the way instructors teach and how students learn. 

UO instructors and students are still finding their way around generative AI. A group of faculty members has been discussing the issue since fall, and this summer, faculty will gather at the UO Summer Teaching Institute to better understand how to navigate this new world.

The faculty group, one of UO’s Communities Accelerating the Impact of Teaching, or CAIT, is exploring questions raised by generative AI, such as what range of policies and procedures can best support teaching and learning in this new context.

The group has been tasked with producing a plan for this year’s Summer Teaching Institute, an annual week-long workshop sponsored by the Office of the Provost and hosted by the Teaching Engagement Program and UO Online. The theme of this year’s event is “Teaching in the Age of Artificial Intelligence.” The deadline to apply for the workshop is May 22. 

Ramón Alvarado, assistant professor of philosophy and an expert in data ethics, is the CAIT’s faculty facilitator. He said the 17 faculty members in the group come from different disciplines and bring with them different concerns about generative AI. 

“When we first started, we started out thinking we were all afraid of this new technology,” he said. “We assumed each one of us were worried about the same thing. Then we realized some of us were excited. And people that were worried, were worried for different reasons.”

Rather than adopt a one-size-fits-all policy on generative AI, the Office of the Provost encourages instructors to set the policies that make sense for their courses based on their goals for student learning. But it is essential that every faculty member communicate to students how generative AI can and can’t be used in class. 

To that end, the Teaching Engagement Program offers guiding principles for AI and teaching. One is that course policies should be learning-centered: like other learning tools available to students, generative AI can be used to develop important skills, including creativity, critical thinking, ethical decision-making and discerning use of resources. 

Another guiding principle is that instructors should prioritize transparency around the use of generative AI so students understand the policy in each class and the expectations for course activities and assignments. 

The Office of the Provost strongly encourages instructors to have an explicit policy on generative AI in their syllabi, one that draws any relevant distinctions between generative AI use as process and generative AI content as product. Instructors can review and adapt several sample course policies for their own courses. 

“Instructors may find they need nuanced policies that make distinctions between process and product,” said Lee Rumbarger, associate vice provost for teaching engagement. “For example, some colleagues may think it’s okay for AI to help students brainstorm paper topics, but that it’s not okay to submit papers with AI-generated language in the final products that students hand in.”

“The more instructors can be clear with students and open a line of communication about AI, the more it will help students navigate the different contexts they find themselves in as they move from course to course and from one set of expectations to another,” Rumbarger said.

The issue of using generative AI for writing assignments has different implications for different disciplines. 

“Writing in computer science is a secondary skill — you’re learning programming,” Alvarado said. “Whereas with creative writing, that is the one thing you teach and assess.”

Alvarado said in his course on critical reasoning, his policy on the use of generative AI boils down to this: “By the time you’re through with this course, you’re going to realize there is no use for generative AI technology, and if you use it, you’re missing out on what’s cool about thinking.” 

A survey conducted last year by the consulting firm Tyton Partners concluded the use of generative AI in higher education “continues to grow and will be sticky.” The survey found 49 percent of college students are regular users of the technology, and 75 percent of current users said they will continue to use it even if their institutions ban it. 

Alvarado’s hunch is that student use of generative AI peaked sometime in the middle of 2023, “when the hype level was the highest,” with roughly 60 to 70 percent of students trying it in different capacities: to start a paper, to correct a paragraph or, in some instances, even to do the work for them. Today, he estimates fewer than half his students are actively using generative AI. 

“The people who decided to use it, especially the ones who were serious about their college education, saw that it was very limited,” he said. “For critical thinking and engaged courses, it would not do the work for them, unless they wanted a C or C-plus in their courses.” 

Those who continue to use it treat it as a tool rather than as a way to do the work for them, Alvarado said. “People are realizing this is a limited-use technology,” he said. 

“It’s hard to use in my courses,” he said. “What I am hearing from colleagues in the humanities is that use has gone down.”

In the legal profession, where writing skill is paramount, generative AI holds the potential to be hugely disruptive. In legal education, instructors are grappling with how to teach students to use the tool, said Rebekah Hanley, a clinical professor of law and the 2023-24 Galen Scholar in Legal Writing.

“Things are changing so fast, we’ve barely had a chance to catch our breath and understand the lay of the land,” she said. “The legal profession is not uniformly embracing the technology, not yet. There are still folks who are resistant, reluctant, and nervous about it, given the high-level failures.”

Those failures include lawyers who used generative AI to write legal briefs that cited judicial decisions that do not exist.

But as more people experiment with and learn about the tool, they’ve come to realize that, used properly, generative AI “is the future of law practice, so we all have a responsibility to rethink what we’re doing and why we’re doing it, to effectively prepare students for that work,” Hanley said.

Ari Purnama, assistant professor of cinema studies and a CAIT participant, said when it comes to generative AI in higher education, “Context is important. Context is king.”

Purnama said his sense is that students in creative fields are using generative AI for brainstorming purposes. 

“They have a story idea, and they need some sort of sounding board to develop the idea further, and from there it becomes a coherent thing they can present or pitch,” he said. 

But using the technology to write a screenplay or script is problematic because students could run into plagiarism issues. Plus, the result probably won’t be any good. 

“I could notice immediately if someone used generative AI to create text completely, 100 percent,” he said.

“The skills I think will be helpful for me and students and faculty members who are going to be using this technology is to harness our prompt generating skills,” Purnama said. “It’s a huge skill I anticipate will be very prominent.”

—By Tim Christie, University Communications