Generative AI era: Renewed role for faculty expertise
There is a critical moment in workshops we run for faculty who are adapting to a world with generative artificial intelligence (AI). Users load their assignments into ChatGPT, often for the first time. They arrive convinced that some quirk of the assignment, or a "neat trick" they learned two years ago, will shield their expertise from an AI tool that can do the work as well as a trained expert.
Then they see it: The AI model, with adequate prompting and iteration, produces high-quality content in seconds.
We have felt the air leave rooms as some of the smartest people we know realize what this means for teaching, and then spiral into worry about what it means for their place in the world.
But, counter to what many fear, faculty expertise is far from obsolete. In fact, its centrality is renewed in an era when a chatbot can produce competent drafts in seconds.
What matters now isn't AI prompt hacks but the years of domain knowledge that let professors ask the right questions and teach students to do the same. As creation gets cheap, the premium shifts to human judgment, framing and ethical use, the things that keep powerful tools productive rather than misleading.
Consider this: If you are a novice in a domain, you will not know what questions to ask or whether the outputs are good. Professors, on the other hand, can get good results because they know what they are looking for. Zach is a professor of communication and teaches about the First Amendment. If he prompts ChatGPT for an overview of Section 230 of the Communications Decency Act and contemporary criticisms, he can immediately tell whether the response is any good and will probably spot what is missing. If an inexperienced user ran the same exercise, they would have no idea whether they were looking at quality outputs.
Every few weeks we get an example of an LLM used poorly: A Deloitte study for the Australian Government with factual errors, or a MAHA report rife with made-up citations and inaccuracies. Recently, even the California State University system was embarrassed by AI-generated errors in a legal filing.
These problems are spotted not by AI detectors but by people with domain-specific expertise. That is not to say humans are good at identifying AI writing; we continue to be quite bad at it. But domain experience and expertise are invaluable for identifying gaps and mistakes in both human and AI outputs.
The same skills that make professors good at finding problems in a graduate student's writing and logic make them good at catching errors and oversights from AI.
For some anti-intellectuals, the emergence of AI has granted their wish: an excuse to tell experts they are no longer needed. But the opposite is true. Our work may shift, with less energy spent creating content and more spent evaluating it, but the need and demand for the ability to judge the quality of outputs have increased.
OpenAI’s Sam Altman suggested that GPT-5 was “like having a team of Ph.D.-level experts in your pocket.”
That is sort of true. But if you are a novice in a field of study, you do not even know what questions to ask that AI Ph.D. researcher. Yes, these systems are capable and productive, but if you ask a Ph.D.-level talent in astronomy, whose only goal is to please you, to work together to prove the world is flat, that is exactly what it is going to do.
This phenomenon is why we continue to see stories of novice users partnering with LLMs to produce what they believe are revolutionary breakthroughs but turn out to be hallucinations. The most well-known example is that of Allan Brooks, who thought he had redefined physics with the help of ChatGPT. The LLM encouraged his delusion to the point that he contacted leading figures in the academy for confirmation. This is what happens when you do not have the domain expertise to guide powerful tools.
There is a reason doctoral researchers usually work under the direction of someone with a Ph.D.: that domain knowledge and experience are necessary to get the researcher to do their best work. This is also why teaching undergraduates remains one of the scholar’s most vital roles: helping students not just find answers but learn what questions to ask and how to ask them.
Building deep subject-level knowledge and experience in a world where AI exists is a distinct challenge, and one we have written about elsewhere. But expertise is the thing that actually unlocks the power of artificial intelligence.
The AI era makes subject-level expertise more essential, not less. We have to push back against the anti-intellectual narrative. Doing so reassures academics and professors who are wondering about their place in the world, and it helps the broader public understand that the power of these systems is accessible only to users with experience and expertise.
•••
Zach Justus is the director of faculty development and a professor of communication arts and sciences, and Nik Janos is a professor of sociology, at California State University, Chico. They write about the intersection of teaching/learning and generative AI and how it is disrupting higher education on their blog, Melts into Air.
The opinions expressed in this commentary represent those of the authors. EdSource welcomes commentaries representing diverse points of view. If you would like to submit a commentary, please review our guidelines and contact us.