Professors, students divided over AI technology in classrooms


Artificial intelligence apps. Courtesy of Adobe Stock.

Top Takeaways
  • AI rebels say learning is a process, and that artificial intelligence robs students of the chance to learn naturally.
  • California colleges grant students and staff free access to AI.
  • Professors use AI-detection software to scan for AI-generated coursework.

In September, Toddy Eames ushered her Cal State Dominguez Hills film students out of the classroom and into the Academy Museum of Motion Pictures in Los Angeles to watch “Jaws.” She called it a “unique” experience these days — a shared human activity that she hopes fosters critical and creative thinking among her students.

As a vocal advocate for preserving human creativity in film, Eames is among the so-called AI rebels, who reject the use of artificial intelligence in classrooms.

But she and other AI rebels are fighting an uphill battle as K-12 schools and colleges embrace artificial intelligence. This summer, Gov. Gavin Newsom pushed to implement it across the state’s education system — from ninth through 12th grade, community colleges, and the California State University system — to train students and prepare them for a “wide range of jobs” in the field. “AI is the future — and we must stay ahead of the game,” Newsom said when he signed an agreement with Google, Adobe, IBM and Microsoft.

Google’s Gemini AI entered into a partnership with California’s community college system in September, and the CSU chancellor’s office partnered with OpenAI in February, spending $16.9 million to grant every student and faculty member a ChatGPT account and to offer AI training modules to equip them with “AI skills needed in the workplace.”

Ed Clark, chief information officer for the CSU chancellor’s office, argued the partnership is about equity, not shortcuts. “We want CSU students to have access to this tool at the same level that Ivy League students do,” Clark said. “We don’t want them at a disadvantage in the workforce.”

Yet, despite the goals of the partnership, reactions are mixed among students and professors. Some professors are embracing AI in their classrooms, while others see no need for it. “It’s almost cruel to put it in the hands of a student who is not in a position to say no,” said Eames, who has taught film and media at Dominguez Hills for 10 years.

Eames considers AI a crutch — a tool that can do much of the work for students, bypassing the learning that comes from grappling with challenges in their coursework. “College is about process, not product,” she said. “If you’re just here to get the degree, then what’s the point?”

‘What’s the point … if we forget how to be human?’

Toddy Eames

When news of the CSU’s partnership with OpenAI reached faculty, Eames said she was “disappointed.” She said there was no prior notice, and at the time of the announcement, training was not provided to help faculty navigate the effect on their classrooms. Today, colleges offer such training.

Eames invited a Hollywood film producer to her class to test AI in real time, feeding it the Oscar-winning screenplay “Moonlight” alongside a “terrible script” with similar themes. She said the AI tool rated each script equally, strengthening her belief that AI can’t distinguish a good script from a bad one.

While Eames acknowledges that the technology could simplify lesson planning, she questions relying on computer algorithms to teach.

“You can’t put the genie back in the bottle,” Eames continued. “But you can choose to resist. That’s why I’m going to keep fighting my little fight in the classroom because at the end of the day, what’s the point of all this technology if we forget how to be human?”

Some professors, like Eames, said they face pressure to incorporate AI into their classroom curriculum. However, Leslie Kennedy, the CSU assistant vice chancellor for academic technology services, said the CSU system is not requiring faculty to use the technology.

“We support academic freedom 100 percent,” Kennedy said. “It’s up to them to decide what their tolerance levels are and how they’re going to incorporate it.” 

Clark also emphasized that the technology is not being positioned as a replacement for creativity but as a supplement. “This is not about automating thinking,” he added.

One student’s concern

Flynn Fluetsch

Flynn Fluetsch, 19, an anthropology student at Sacramento City College, is conflicted about their community college’s partnership with Google’s Gemini. AI isn’t allowed in half their classes, so they wonder why they should grow dependent on it.

While Fluetsch (who uses “they/them” pronouns) does see the value in using AI as a thesaurus or to organize ideas, they avoid it in their academic work and wish their college would slow down to allow students and faculty to adjust to the technology.

“I have to spend so much more time avoiding it or making sure that my work won’t get flagged as AI when it isn’t. It has yet to make my life easier,” Fluetsch said.

Fluetsch questioned the idea of workplace training, noting that AI is banned in many classrooms and some workplaces, including Amazon, Apple and Verizon. 

Professors use AI-detection software to scan for AI-generated coursework. Fluetsch mentioned being afraid of “false positives,” where their work is incorrectly flagged as AI-generated. “Sometimes I’ll see it and my originality score will be like an 80 percent, and sometimes I think, like, ‘Oh God, does that mean they think 20 percent of what I just wrote was AI?’”

Kennedy, of the CSU chancellor’s office, concurred that AI-detection tools have been unreliable, stating the CSU system has a long way to go to establish a consistent process. In the meantime, the system has licensed Turnitin, an AI-detection tool.

Overall, Fluetsch fears that AI is being pushed into spaces that aren’t ready for it. “Humanity has a tendency to be curious about something and keep pushing to its absolute limit,” Fluetsch said, “even if the outcome is bad.” 

‘I don’t use it for anything’

Nancy Ann Cheever

Nancy Ann Cheever, the journalism program coordinator at CSUDH, views AI’s rapid evolution as a growing concern for students and wants her journalism students to focus on their own reporting skills instead. 

“In my teaching, personally, I don’t use it for anything,” said Cheever. “I don’t use it to develop lesson plans. I don’t use it to come up with ideas for things. I like to do that on my own.”

Cheever said her primary concern is the lack of reliability in what generative AI produces, pointing out that the technology is trained on copyrighted material. “The big one is that it doesn’t produce accurate information,” she explained. 

Kennedy said that the CSU system is working to provide guardrails to help students use the technology responsibly moving forward. “The concerns about misinformation and copyright are exactly why we can’t leave them to figure it out alone,” she said.

Cheever said that while she sees potential for AI in fields like computer science and mathematics, she worries that the humanities could become “watered down” by generic computer algorithms. 

Mixed reactions

Evelyn Favela

Evelyn Favela, 22, a recent CSUDH sociology graduate, takes a deliberate stand against the spread of AI in education by avoiding the technology in her academic work. Rather than turning to chatbots, she turned to the university’s writing center and tutoring services and created her own structured study schedules to complete assignments. 

Favela recalled scrolling through online forums one evening, where she came across posts of people describing their “AI boyfriends” and relying on chatbots for emotional support. She said that the posts were unsettling and worried that growing dependence on AI could erode social skills and deepen isolation, pointing to what psychologists call “AI psychosis.” 

Favela said that she believes using AI to write her papers defeats the purpose of coming to campus. “If I relied on AI to write my papers completely for me, what is the point of me paying tuition to go to school?”

These concerns are shared by Meg Whitener, a 41-year-old philosophy alumna at Sonoma State who quit her job as an executive pastry chef to return to college and study the ethics of law. “Is the point of going to university to learn how to write a really good prompt,” Whitener said, “or is it to learn?”

She graduated this year, but it wasn’t easy, given that the university had announced it would close her department and six others. She was “shocked” that despite ongoing budget issues, the CSU system was spending money on AI rather than saving struggling programs.

Whitener became more involved in the arts at Sonoma State, channeling her thoughts on AI — and how it distorts “what it means to be human” — into her piece “Am I.” The work was featured in a campus gallery and a juried student show, where it sparked conversations between students and faculty about AI.

Despite her fears over where AI is taking education and the world, Whitener tries to be optimistic about the future. “We need to take those little moments, just take a step back and feel what it feels like to be real.”

Rylan Valdepena, a senior communications major at Sonoma State, and Dylan Smith, a senior journalism major at Cal State Dominguez Hills, are members of EdSource’s California Student Journalism Corps.


