AI in Academia: The Grind, The Gain, and the Great Recalibration

A few months ago, I was teaching a class of bright MBA students when one of them raised his hand in the middle of a lecture. He said he had misgivings about my arguments. And then, right there in class, he told me he had been using an AI tool to critique my points.

I wasn't prepared for the question, and I'll admit I felt mildly threatened.

But I learned a thing or two that day. Not just about the subject, but about how AI was changing the very nature of learning. I left the class thinking not about how students should avoid AI, but about how I could use AI to prepare better.

Now, my parents were both professors. I've been teaching a course at a top-tier business school for over a decade, in addition to my other work. I've seen academia up close—the passions, the programmes, and the politics. So when I came across the California Faculty Association (CFA) resolution on AI, I paid attention.

California, after all, is at the heart of the tech world. If any faculty association could chart the future of AI in academia, I thought it would be this one.

But what the CFA put out pointed in the opposite direction.

The CFA is pushing for strict rules on AI in universities, raising concerns that AI might replace roles, undermine hiring processes, and compromise intellectual property. As they put it:

“AI will replace roles at the university that will make it difficult or impossible to solve classroom, human resources, or other issues since it is not intelligent.”

I respect their concerns. But I also believe the real challenge isn’t what AI should do—it’s what humans should still do in a world where AI can do so much.

And that leads to some fundamental dilemmas.

A Moment to Recalibrate

The goal of education was always to teach thinking—knowledge was simply a measure of that thinking. Somewhere along the way, we confused the measure with the goal.

Instead of focusing on fostering deep thought, we turned education into a test of memory. AI now forces a reckoning. If AI can retrieve, process, and even generate knowledge faster, more accurately, and with greater depth than most students, what does that mean for education?

AI offers an opportunity not to restrict learning, but to recalibrate it—to return to the real goal: teaching students how to think, question, and navigate complexity.

Three Dilemmas Academia Must Confront

1. Who Does the Work—Humans or AI?

AI can grade essays, draft research papers, and provide instant feedback. It’s efficient. But efficiency isn’t learning.

Law firms now use AI for contract analysis. Junior lawyers “supervise” the process. The result? Many don’t develop the deep reading skills that once defined great legal minds. If universities follow the same path—letting AI mark essays and summarise concepts—students may pass courses but never truly engage with ideas.

Douglas Adams once said, “We are stuck with technology when what we really want is just stuff that works.” AI works—but at what cost?

2. Who Owns the Work?

Professors spend years developing course material. AI scrapes, reuses, and repackages it. Who owns the content?

The entertainment industry has been fighting this battle. Writers and musicians pushed back against AI-generated scripts and songs trained on their work. Academia isn’t far behind. If AI creates an entire course based on a professor’s lectures, who gets the credit? The university? The AI? Or the human who originally built it?

The CFA resolution warns about this:

“AI’s threat to intellectual property including use of music, writing, and the creative arts as well as faculty-generated course content without acknowledgement or permission.”

The same battle playing out in Hollywood is now knocking on academia’s door.

3. Does Efficiency Kill Learning? Or Is That the Wrong Question?

It is easy to assume that efficiency threatens deep learning. The grind—rewriting a paper, wrestling with ideas, receiving tough feedback—has long been seen as an essential part of intellectual growth.

AI makes everything smoother. But what if the rough edges were the point?

A medical student who leans on AI for diagnoses might pass exams. But will they develop the instincts to catch what AI misses? A student who lets AI refine their essay may get a better grade. But will they learn to think?

Victoria Livingstone, in an evocative piece for Time magazine, described why she quit teaching after nearly 20 years. AI, she wrote, had fundamentally altered the classroom dynamic. Students, faced with the convenience of AI tools, were no longer willing to sit with the discomfort of not knowing—the struggle of writing, revising, and working their way into clarity.

“With the easy temptation of AI, many—possibly most—of my students were no longer willing to push through discomfort.” – Victoria Livingstone

And therein lies the real challenge.

The problem isn’t efficiency itself—it is what is being optimised for.

If learning is about acquiring knowledge, AI makes that easier and more efficient. But if learning is about developing the ability to think, question, and synthesise complexity, then efficiency is irrelevant—because deep thinking requires time, struggle, and iteration.

So maybe the question isn’t “Does efficiency kill learning?” but rather:

What kind of learning should be prioritised in an AI-enabled world?

If efficiency removes barriers to learning, then we must ask:

What should learning look like when efficiency is no longer a limitation?

A Complex Problem Without Simple Answers

It is tempting to look for quick fixes—ban AI from classrooms, tweak assessments, introduce AI literacy courses. But this is not a simple or even a complicated problem. It is a complex one.

Dave Snowden's Cynefin framework would classify this as a complex problem: one that cannot be solved with predefined solutions but requires sense-making, experimentation, and adaptation.

Livingstone’s frustration is understandable. AI enables students to sidestep the very struggle that shapes deep learning. But banning AI will not restore those lost habits of mind. Universities cannot rely on rigid policies to navigate a world where knowledge is instantly accessible and AI tools continue to evolve.

Complex problems do not have rule-based solutions. They require adaptation and iteration. The real response to AI isn’t restriction—it is reimagination.

Engage with AI, rather than fight it. Encourage students to think critically about AI’s conclusions. Reshape assessments to focus on argumentation rather than recall.

In a complex system, progress does not happen through control. It happens through learning, adaptation, and deliberate experimentation.

Reimagination, Not Regulation

Saying no to AI is a false choice. AI will seep into academia like a rising tide that doesn't respect the lines drawn on the shore. The real challenge is not limiting AI, but reimagining education.

The CFA is right to demand a conversation about AI in education. But academia must go beyond drawing lines in the sand. It must reinvent itself.

AI is not the threat. The real danger is holding on to learning models that worked well in an earlier time.

That time is past.

It is time to unlearn. And recalibrate.