Pluralism in practice
Last week, I published a review essay about Michael Pollan’s new book, A World Appears: A Journey into Consciousness, arguing for pluralism when it comes to thinking about consciousness in humans and machines.
Here I argue for pluralism when it comes to teaching with and about large language models. These are remarks prepared for a panel titled “The Future of AI and What it Means for Higher Education” at the capstone event for the American Association of Colleges and Universities’ 2025-2026 Institute on AI, Pedagogy, and the Curriculum. Applications for next year’s Institute are now open.
I’m grateful to Bryan Alexander, our moderator, and the other participants on the panel for their ideas as we prepared. The panel was fabulous, with a lively discussion happening in chat alongside the conversation on screen.
Teaching with and without AI
If you think it’s important to teach students how to use AI, that’s great. Go for it. If you want to teach students how not to use it, I think that’s great too. Again, go for it. Students are well served by a diversity of approaches to using AI (or not!). Encouraging discussion among those with different ideas is a better institutional approach to the intrusion of technologies we call artificial intelligence than mandatory AI literacy workshops and one-size-fits-none policies.
This plea for pluralism comes from a sense that battles between enthusiasts and critics of AI are not that important relative to larger problems. I’m talking about the sense that something is off in the functioning of institutions of higher education, and has been for a while now… Since COVID? Since iPhones? Since I was a student and life was good? The intrusions of AI intensify the feelings, providing a focus for unease or outrage. But most teachers and students feel overwhelmed, not by AI, but by all of it: the grind of the end of term as classroom joys fade into grading and being graded; the headlines describing how systems of higher education are under attack from the governments that have created and funded them; the depressing emails from administrators about belt-tightening.
Nothing I say here is meant to undermine a sense of solidarity in the face of external threats. Those who work in higher education and believe in its value should unite around academic freedom, public support for research, and the safety and success of students. But the need for unity should not extend to whether and how teachers use these technologies.
Uniformity of practice is not possible with the tools we use to teach because what we do as scholars and teachers is so various, and so is what AI technologies offer. The choices we make as educators about AI lead to conflict because AI is contentious. Some of this conflict is moral: objections to technology so transparently aimed at replacing humans. Some is political: objections to building data centers or the disregard for the rights of working artists and writers. Some is educational: objections to asking students to use these tools because of their potential harms or worries about deskilling.
AI Log is where I write about how AI is changing education and what we should do about it.
To receive each post in your email inbox, subscribe on Substack. To receive a notice in your LinkedIn feed, click here. Learn more about AI Log here.