How artificial intelligence exposed the quiet bargain education made with itself and why the reckoning can no longer be deferred
There is a particular kind of institutional discomfort that does not announce itself. It accumulates quietly, in the gap between what we say we are doing and what we are actually doing, until something arrives to make the gap impossible to ignore. For British education, that something has a name: generative artificial intelligence. And the discomfort it has exposed was already there, long before any student typed a prompt and submitted the result.
The conversation in staffrooms, lecture theatres, and vice-chancellors’ offices has largely been framed as a problem of detection. How do we know if a student used AI? What tools can we deploy? How do we rewrite our academic integrity policies? These are understandable questions, and in their urgency they feel productive. But they are, in important respects, the wrong questions. They point outward, toward the student and the technology, when the more uncomfortable inquiry points inward, toward the institution itself.
The deeper question is not: how do we catch AI-generated work? The deeper question is: what does it tell us that the work could plausibly be submitted at all?
We were always grading the output. AI just made the consequences of that visible.
A Bargain Made in Plain Sight
For decades, the dominant model of academic assessment has been output-focused. The student produces an essay, an examination script, a dissertation chapter. The institution evaluates that product. This is not inherently wrong; written work is a legitimate and important form of intellectual expression. But somewhere along the way, the output came to be treated as synonymous with the learning itself. If the essay was good, the student had learned. If the examination was passed, the knowledge had been acquired. The product was taken as proof of the process.
This model depended, implicitly, on a fairly simple assumption: that producing a plausible essay required you to have done the thinking. The two things were entangled. You could not convincingly argue a position you had not wrestled with. You could not construct a coherent analysis without having worked through the material. The output was imperfect as a proxy for learning, but it was a proxy nonetheless.
Generative AI has severed that entanglement completely. A student can now submit a polished, well-referenced, grammatically impeccable essay on almost any topic without having engaged meaningfully with a single idea it contains. The output can be excellent. The learning can be zero. And the assessment, if it is purely output-focused, cannot reliably tell the difference.
What AI has done, in other words, is not create a new problem. It has made an old one impossible to ignore.
What the Assignment Was Actually For
Consider what happens when a student sits down to write an essay — a real one, without assistance. They read. They become confused. They make notes that do not quite cohere. They draft an argument they later recognise as wrong, and they have to work out why. They sit with a hard question and feel the specific frustration of not yet knowing how to answer it. They try a formulation, discard it, try another. At some point, if the assignment is well-designed and the student is engaged, something clicks. A connection is made. A position hardens. The argument begins to hold together.
This process is not incidental to education. It is education. The essay is the gym; the thinking is the workout. The point was never purely the document that emerged at the end. The point was the cognitive work required to produce it: the sustained attention, the tolerance of uncertainty, the discipline of converting tangled thought into clear prose, the intellectual courage required to commit to a position and defend it.
These are not decorative capacities. They are foundational ones. The ability to analyse a complex situation, hold competing considerations in tension, form a reasoned judgement, and communicate it with clarity and precision — these are precisely the skills that employers report finding absent in graduates, that citizens require to participate meaningfully in public life, and that, frankly, a functioning adult needs simply to navigate the world.
When a student submits AI-generated work, they have not merely circumvented an assessment task. They have skipped the developmental experience the assessment was designed to provide. They have not learned to argue a point. They have not built the habit of sustained intellectual effort. They have not discovered what it feels like to be genuinely wrong, or genuinely right, about something that matters. They have received a grade for work they did not do, and in doing so, they have been deprived of something they may not even know they are missing.
The Courage to Fail the Work
There is a reflexive sympathy, among educators who consider themselves progressive, for the student caught using AI. The system is broken, the argument goes. Assessment is reductive. The student was just trying to survive. And there is truth in this. Assessment systems in British higher education have often become bureaucratic, volume-heavy, and poorly aligned with genuine intellectual development. Students operate under real pressure, financial and otherwise. None of this is in dispute.
But sympathy for a student’s circumstances is not the same as doing them a service. Allowing AI-generated work to pass — or penalising it half-heartedly, inconsistently, in ways the student learns to game — is not compassionate. It is a postponement of accountability that compounds the original harm. The student who graduates without the ability to think rigorously will discover this deficit at the worst possible moment: in a professional context, under pressure, without a tool to do the thinking for them, and in a way no one around them will fail to notice.
Failing the work, firmly, consistently, and without apology, is therefore not an act of severity. It is an act of honesty. But it only carries genuine educational value if it is coupled with the conversation the failure should open. Not: you broke a rule. But: what do you actually think about this? Where does your analysis begin? What would you argue if the argument had to come from you?
This is the moment an educator earns their role. Not in the marking of outputs, but in the encounter with a student who has not yet learned to produce thought of their own, and who needs someone to believe it is possible before they can believe it themselves.
The Structural Problem Institutions Must Own
It would be convenient to locate the problem entirely with students. It is less convenient, but more accurate, to locate a significant portion of it with institutions. The conditions under which AI-assisted submission becomes tempting — even rational — have been created by structural choices that universities and schools have made over many years.
Assessment volume has ballooned. In many degree programmes, students submit work so frequently, on so many modules simultaneously, that deep engagement with any single piece becomes practically impossible. The incentive structure rewards throughput over thinking. Deadlines cluster. Word counts multiply. The message transmitted, whether or not it is intended, is: produce the thing. Meet the specification. Move on.
Feedback, meanwhile, has often become perfunctory. Students submit essays into a system that returns a number and a paragraph of generic commentary two weeks later, by which point they have moved on to the next module and the next deadline. The opportunity for genuine intellectual dialogue — the back-and-forth through which a student’s thinking actually develops — is structurally squeezed out.
Contact hours have declined in many institutions as student numbers have grown. The ratio of students to tutors makes it genuinely difficult to know, in any rich sense, how a student’s mind works. Assessment has increasingly had to function as a proxy not just for learning but for the kind of knowing that once came from regular personal contact.
None of this excuses academic dishonesty. But it does explain the environment in which AI submission has flourished. An institution that responds to that environment purely with surveillance and detection software, without examining the structural conditions it has created, is engaged in a performance of concern rather than a genuine response to crisis.
What a Genuine Response Looks Like
The path forward is neither simple nor cheap. It involves accepting that some of what has accumulated in assessment practice needs to be dismantled, and that the dismantling will require resources, political will, and an honest conversation with students about what education is actually for.
More oral assessment. More process-oriented tasks, in which students submit drafts, annotate their own reasoning, and are asked to discuss their work in real time. More assignments that cannot be meaningfully replicated by AI because they are grounded in the specific, the local, and the personal: a student’s own experience, their own argument, their own encounter with a piece of evidence or a human being in a particular place at a particular time.
More explicit teaching of the cognitive work itself. Many students have never been taught to argue. They have been taught to write. These are related but not identical skills. They have not been shown how to sit with a question they cannot immediately answer, how to read an opposing argument with genuine curiosity rather than defensive reflex, how to change their mind without feeling they have failed. These metacognitive capacities — awareness of one’s own thinking — are learnable. They are not being systematically taught.
And more honesty from institutions about the purpose of the exercise. Students respond to authenticity. If they believe an assessment is a bureaucratic hoop, they will look for the most efficient route through it. If they understand — genuinely understand, because an educator they trust has told them — that the struggle is the point, that the discomfort of not yet knowing is where the growth lives, many of them will choose to do the work. People generally want to become capable. They need to believe it is possible, and that someone is paying attention.
Students Must Learn to Walk
There is a metaphor that captures something essential here. We are handing out shoes to people we have never let touch the ground. We are giving students credentials for intellectual capacities they have not had the opportunity — or the requirement — to develop. We are then sending them into a world that will demand those capacities and expressing surprise when they are absent.
The student who submits AI-generated work is, in many cases, not a bad actor. They are a rational agent operating in a system that has told them, consistently and through its own incentive structure, that the output is what counts. They learned to optimise for the output. We taught them to.
This does not mean we shrug. It means we change the system with the same rigour we are currently directing at the student. It means that failing the AI-generated essay is the beginning of a responsibility, not the end of one. The question the educator must ask after that failure is not just: what will we do with this student? But: what will we change about how we teach, so that the next cohort does not face the same choice?
The Clock Has Not Just Run Out; It Has Been Ignored
The urgency is real. Students are graduating now with hollow portfolios of high-quality outputs and low-quality thinking. They are entering medicine, law, journalism, public policy, finance, and teaching. They are entering roles in which the quality of their reasoning has consequences for other people. The deficit will show up. In some cases it already has.
But the deeper urgency is not about the job market. It is about what education is for. At its most fundamental level, education is the project of helping a human being become more fully themselves: more capable of thought, more able to navigate complexity, more equipped to live a life of meaning and contribution. That project is not served by efficient credential production. It is served by the slow, uncomfortable, irreplaceable work of learning to think.
AI has arrived at a moment when education was already in a fragile relationship with that project. The temptation is to treat it as a new threat to be managed by new tools. The opportunity, and it is a genuine one, for those with the courage to take it, is to treat it as the catalyst for a long-overdue reckoning with what we have quietly allowed education to become.
The clock ran out. The question now is whether the institutions with the most to answer for are willing, finally, to answer for it.
💛 Love what we do at The New Art School & Design Education Talks podcast? Help keep design education alive!
✨ Join our mailing list: https://sendfox.com/thenewartschool | Explore more: https://linktr.ee/thenewartschool | @newartschool | https://newartschool.education/ | https://heretakis.medium.com/ | https://odysee.com/@thenewartschool:c
