The Un-Cheatable Assignment
Shifting from Policing Products to Assessing Process in the Age of AI
The Faulty Compass of AI Detection
In the wake of generative AI’s rapid integration into academic life, the first and most instinctive response from many educational institutions has been to reach for a technological fix: the AI detector. The impulse is understandable. Faced with an unprecedented challenge to traditional assessment, the promise of a tool that can cleanly distinguish between student work and machine-generated text feels like a necessary first line of defense. Moreover, it builds on the familiar framework of the plagiarism checker. If I can already check for plagiarism by uploading a student essay, why not also check for machine-generated text?
This has given rise to a veritable arms race, with schools investing significant resources in software designed to police the authenticity of student submissions and students finding clever ways around these detection tools. But as a growing body of peer-reviewed research from 2023 to 2025 demonstrates, this reliance on detection is not just a flawed strategy; it is a strategic dead end built on a foundation of technical futility and pedagogical harm.
As this arms race has proceeded, a critical conversation has unfolded in real time across labs and research papers. The central question has been a profoundly practical one: could we build a reliable compass to navigate this new terrain? The answer has unfolded as a story in three acts. The first act is a cautionary tale about the failure of a purely technological fix, a frantic, reactive arms race that ended in a strategic dead end. But from this failure, a second act emerged as a quiet but profound pedagogical revolution, wherein educators and researchers began to ask a better set of questions. This has led us to our third and final act: the articulation of a new strategic imperative, one that connects this new vision for the classroom directly to the durable, human competencies required to thrive in an AI-augmented world.
The First Act: No Easy Fix
The first act of this drama delivered a humbling truth: our nets for catching AI-generated text were full of holes. Foundational research from that period, crystallized in the landmark paper “Can AI-Generated Text be Reliably Detected?” by Sadasivan and colleagues, offered a comprehensive and damning verdict. In one of the most rigorous early studies, researchers subjected fourteen detection tools to a battery of tests against texts generated by five different language models, including ChatGPT, across a dataset of student writing from eight distinct academic disciplines. Their findings revealed that the detectors were not only fundamentally brittle but also dangerously biased. They could be consistently fooled by the simplest of countermeasures (a light coat of paraphrasing could completely erase the statistical fingerprints), but a more troubling flaw emerged: the tools performed significantly worse on texts written by non-native English speakers, frequently misclassifying human writing as AI-generated, as demonstrated in “GPT Detectors are Biased Against Non-Native English Writers.” This exposed a profound equity issue, demonstrating that a detection-first approach carries an unacceptably high risk of leveling false accusations against an already vulnerable student population. The conclusion was a stark warning against technological hubris: any claim of reliable detection needed to be treated with profound skepticism.
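To see why surface statistics are so fragile, consider a deliberately toy sketch. This is not the method of any real detector; the `type_token_ratio` function below is an invented stand-in for the kind of statistical fingerprint detectors measure, used only to show how a meaning-preserving paraphrase moves the number:

```python
def type_token_ratio(text: str) -> float:
    """A toy surface statistic: distinct words divided by total words."""
    words = text.lower().split()
    return len(set(words)) / len(words)

# Invented example sentences: same meaning, different surface style.
machine_text = ("the model generates text that is fluent and the text "
                "is consistent and the output is coherent")
paraphrased = ("its prose flows smoothly, stays on topic, and hangs "
               "together from start to finish")

# The meaning survives the paraphrase, but the statistic does not:
# any threshold tuned to the original style now misfires.
```

Here the repetitive "machine" sentence scores well below the paraphrase on this toy measure; a classifier thresholding on it would flag one and pass the other, even though they say the same thing. Real detectors use far richer features, but the paraphrase-attack literature shows they fail in essentially this way.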
With the direct approach faltering, the story’s first act turned to a more elegant solution. If we couldn’t identify the text from the outside, perhaps we could brand it from the inside. This led to innovative work on watermarking, such as the method proposed in “Watermarking Pre-trained Language Models with Backdooring.” The idea was to embed a secret signature, a kind of digital birthmark, into the model itself. This watermark could be designed to survive even when the model was later fine-tuned, offering a far more persistent signal than surface-level statistical analysis. It was a clever and promising pivot, suggesting a future where intellectual property could be traced and protected at its source.
But this hope, too, revealed the villain of the story to be stubbornly persistent. In a direct challenge, research like “On the Reliability of Watermarks for Large Language Models” showed that even these sophisticated internal signatures were vulnerable. The weapon that brought them down was the same one that had foiled the initial detectors: the paraphrase attack. An automated paraphrasing tool, itself an AI, could effectively wash away the invisible ink of the watermark, degrading it to the point of unreliability. The message was clear: in an adversarial context, where there is an incentive to evade, watermarks could not be a foolproof shield.
And so, the curtain fell on this first, frantic wave of research with a shared and sobering conclusion. The reliable detection of AI-generated text, whether through external analysis or internal branding, remains an unsolved problem. The simple act of rephrasing is the Achilles’ heel of our current methods. This realization, however, was not a failure. It was a gift of clarity. It proved that a technological arms race was unwinnable and, in doing so, forced us to ask a better question. It told us we were pointing our compass in the wrong direction.
The Second Act: A Quiet Revolution in Pedagogy
If the first act of our drama was about the failure of a technological fix, the second is the story of a quiet but profound pedagogical revolution. As the arms race of detection sputtered, a different kind of work was happening in classrooms and research studies around the world. A new cohort of scholars began to ask a more interesting set of questions: what if AI wasn’t a threat to be policed, but a partner to be leveraged? What if we shifted our focus from the finished product to the messy, human process of creation? This shift didn’t produce a single, dramatic breakthrough, but rather a slow, collaborative weaving that brought together one thread of evidence at a time into a new educational tapestry.
We can certainly argue that this act is hardly finished. Many figures are still adding new threads to this tapestry, but for now I want to investigate some of the core warp and weft of this expanding cloth. The foundational thread has been the simple but powerful idea of making the process the point. In their work on university writing instruction, Akiba and Garte offered a powerful proof of concept. They redesigned a writing assignment not as a single, high-stakes submission, but as a two-part journey. Students first used AI to get feedback on an ungraded draft, then earned their grade based on the quality of their human-led revisions. The mechanics are pedagogically routine, but the implications for the role of AI in writing curricula are transformative. By reframing AI as an “interactive writing partner” for a low-stakes task, the incentive to cheat evaporated. The design itself encouraged authentic engagement, turning AI from a tool of evasion into a tool for improvement and fostering a powerful sense of learner autonomy.
This classroom-level insight became the cornerstone for a more ambitious, curriculum-wide vision. Smith et al. took this process-based philosophy and scaled it into a year-long, longitudinal model for a diverse group of postgraduate students. Their work reads like a blueprint for building AI competency as a core skill. Using “experience mapping,” they scaffolded the curriculum from foundational ethics to advanced application, treating AI not as a contraband tool but as a central object of study. Their “process-based assessments” were brilliant in their simplicity (students had to document and critique their AI use, making their thinking visible). But their study also surfaced a crucial human element: even as students grew more confident, they harbored an “ethical hesitation,” a direct result of ambiguous and untrustworthy institutional policies.
It was a clear signal that pedagogical innovation cannot succeed in a policy vacuum.
This tension between pedagogy and policy is where the second act deepens. The work of Martin et al. delivered a finding that, on its surface, seems like a crisis but is, in fact, a liberation. When one group of students was given unrestricted access to AI and another was not, the teaching staff could not reliably tell their work apart. This wasn’t a sign of rampant cheating; it was definitive proof of the futility of detection. It underscored their central argument that our goal should not be policing, but the cultivation of “critical AI literacy.” At the same time, this finding amplifies the “ethical hesitation” found by Smith et al. If instructors cannot detect AI use, then students are left to navigate a high-stakes guessing game about what is permissible, a situation that erodes the very trust that a process-based model seeks to build.
From this complex web of evidence, a clear path forward begins to emerge, one that braids together process, equity, and policy. Wang and Ren provide a large-scale model for how this can work in practice. Their “transparent process” design had 140 undergraduates use AI for a collaborative Wikibook project, but with a critical requirement: they had to log all their AI interactions. This simple act of making the process visible inherently designed out the possibility of plagiarism. Their work, like that of Smith et al., also surfaced a profound and hopeful insight. For their largely international cohorts, generative AI acted as a powerful instrument of digital equity. It provided crucial language support that helped to level the academic playing field.
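Wang and Ren’s logging requirement suggests how lightweight such a “transparent process” can be. The sketch below is a hypothetical schema of my own devising, not their actual instrument: each exchange with an AI tool is appended as one JSON line, so the full trail of assistance can be inspected after the fact.

```python
import io
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class AIInteraction:
    """One logged exchange with an AI assistant (hypothetical schema)."""
    student: str
    tool: str                # e.g. "ChatGPT"
    purpose: str             # why the student consulted the AI
    prompt: str              # what the student asked
    how_output_was_used: str # what the student did with the response
    timestamp: str = field(default="")

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()


def append_log(stream, interaction: AIInteraction) -> None:
    """Append one interaction as a JSON line, keeping the process visible."""
    stream.write(json.dumps(asdict(interaction)) + "\n")


# Example: log one exchange to an in-memory buffer (a real course
# would append to a shared file the instructor can read).
log = io.StringIO()
append_log(log, AIInteraction(
    student="s1",
    tool="ChatGPT",
    purpose="feedback on an ungraded draft",
    prompt="What is unclear in my introduction?",
    how_output_was_used="rewrote the thesis sentence in my own words",
))
```

The point of the format is not surveillance but visibility: because every entry names the purpose and the use made of the output, the log itself becomes the reflective artifact that a process-based assessment can grade.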
If these studies provide the pedagogical “why” and “how,” Perkins et al. provide the institutional “what.” They address the “ethical hesitation” head-on with the tool they designed: the Artificial Intelligence Assessment Scale (AIAS). This five-level framework aims to help faculty build trust through transparency. It empowers educators to explicitly define the rules of engagement for every assignment, from “AI Prohibited” to “AI Task Completion, Human Evaluation.” It resolves the ambiguity that breeds anxiety and allows the kind of intentional, scaffolded curricula designed by Smith et al. to flourish. It provides the firm, reliable ground upon which a new, more honest partnership between students and faculty can be built.
What we have here, then, is not a collection of disconnected studies, but a coherent and compelling new philosophy of assessment taking shape. There are many others we could choose to cite, working on similar issues with similar aims. More important than the research cited is the trajectory it unveils. Akiba and Garte give us the classroom tactic. Smith et al. and Wang and Ren give us the curricular strategy. Martin et al. give us the liberating critique of the old way. And Perkins et al. give us the institutional policy to bring it all together. The message is unambiguous. The future is not about building better walls, but about designing better classrooms and fostering a more trusting, transparent, and equitable relationship with our students and our tools.
The Third Act: The Strategic Imperative
This brings us to the third and final act of our story, where the stakes of this pedagogical revolution are raised to the highest level. If the first act was about the failure of a technological strategy and the second was about the rise of a pedagogical one, this third act is about the emergence of a new strategic imperative. The shift toward process-oriented assessment is not merely a clever solution to a classroom management problem. It is the most vital and necessary adaptation higher education can make to fulfill its core mission in an AI-augmented world. It is the answer to the ultimate “So what?”
The brutal truth is that the world for which we are preparing our students has already irrevocably changed. The value of purely informational knowledge, the kind that can be easily retrieved and synthesized by a machine, has plummeted. This realization, however, should not cause panic. The imperative to focus on “what you can do with what you know” has long been the bedrock of professional training and applied learning. This is not a radical departure from our values, but a powerful clarification of them. This is not a pretext for eliminating theoretical inquiry; it is a claim about the character we seek to develop in our students. Our third act, our strategic imperative, is to design learning environments that cultivate a sense of wonder, de-escalating the transactional threat of “return on investment” by facilitating genuine transformation.
Process
The heart of this imperative lies in a simple, profound truth: the very skills cultivated by the virtue-forming, process-oriented pedagogy imagined in our second act are the same skills now demanded by the AI-augmented workforce. The “un-cheatable assignment” is also the most relevant one. When a student is asked to document their research trail, reflect on their use of AI, engage in peer review, and revise their work based on multiple forms of feedback, they are not performing academic busywork. They are rehearsing the core practices of the modern knowledge worker, cultivating a durable, tool-agnostic disposition that is resilient, adaptive, and uniquely human.
This approach addresses the well-documented “motivational crisis” in education by shifting away from disposable assignments: work created solely for a grade and destined for the digital trash bin. Instead, we can empower students to become active creators of knowledge, tasking them with building something of lasting value. At the heart of this transformation is the concept of hands-on, renewable assignments, and one of the most powerful examples is tasking students with designing learning experiences for others.
Consider the findings of “Learning and Motivational Processes When Students Design Curriculum-Based Digital Learning Games,” a study that explored this very idea. When students were challenged to create their own digital games based on their curriculum, they were compelled to engage with the material on a much deeper level. They weren’t just studying to pass a test; they were working to understand the subject so thoroughly that they could effectively teach it to others through the complex, interactive medium of a game.
This kind of project is inherently engaging because it taps into our most powerful intrinsic motivations: curiosity, the deep satisfaction of mastering a difficult skill, and the fundamental human need to connect with a community. The struggle to design a functional, educational game becomes a form of “hard fun.” This is a rewarding, iterative process that forges a far more memorable and durable understanding of the material than rote memorization ever could. It pushes students beyond simply remembering facts and into the higher-order cognitive work of analysis, evaluation, and creation.
The benefits of this pedagogical model ripple far beyond a single course. When students, particularly working adults, are empowered to shape a project that has a direct application to their professional lives, the learning becomes immediately relevant and motivating (as demonstrated in “A Conceptual Framework for Non-Disposable Assignments”). These renewable assignments become tangible artifacts for a professional portfolio, demonstrating practical skills in communication, teamwork, and project management more powerfully than a transcript ever could. Furthermore, the very act of creating for a real audience fosters a vibrant “community of learning.” As students design, test, and share their creations, they learn from and with one another, building not just knowledge but the crucial social and collaborative skills that are the bedrock of professional life.
Ultimately, by empowering students to be creators, we shift the focus of education from the transactional act of earning a grade to the transformational process of building something meaningful. This is how we de-escalate the threat of ROI. We don’t ignore it; we transcend it. By creating environments of experimentation that push students beyond what they thought themselves capable of, we make the entire learning journey more engaging, the knowledge gained more permanent, and the skills developed more valuable for the world that awaits them after graduation. This is the ultimate purpose of the shift from product to process.
Specification
However, designing a brilliant, renewable assignment is only half the battle. If we pour all our energy into creating these rich, engaging, process-oriented experiences but continue to assess them with the blunt, anxiety-inducing instrument of traditional grading, we create a profound cognitive dissonance. We tell students we value their creative process, but we reward them based on a system that prioritizes point accumulation over genuine risk-taking and iterative learning. This is where the second pillar of our pedagogical revolution becomes essential: Specifications Grading.
At its core, Specifications Grading, or “specs grading,” is a philosophical shift disguised as a grading system. It moves away from the subjective and often demoralizing world of percentage points and toward a clear, transparent model of competency. The first step in this shift is recognizing the profound, and often hidden, harm of traditional grading. For students, a score of 82% is a piece of data shrouded in fog. It simultaneously says “pretty good” and “not good enough,” but it rarely communicates anything precise about what was done well and what needs improvement. This ambiguity generates anxiety and pushes students to play a defensive, grade-maximizing game rather than taking the intellectual risks necessary for deep learning.
The second step is to replace this ambiguity with radical clarity. Specs grading does this by replacing the hundred-point scale with a simple binary: the work either meets the clearly defined standards (“specifications”) for a given task, or it does not yet. Think of it less like a judge at a beauty pageant holding up a score of 8.7, and more like a pilot’s pre-flight checklist. The criteria are objective, transparent, and known to everyone in advance. The wing flaps are either in the correct position, or they are not. This clarity removes the instructor’s subjective “judgment” from the equation and replaces it with a shared, objective standard.
This clarity, in turn, allows for the third and most important step: transforming the entire assessment process from a series of high-stakes judgments into a low-stakes, iterative conversation. When the goal is no longer to scrape together as many points as possible, but to meet a clear standard of quality, the entire dynamic shifts. The focus moves from arguing about partial credit to collaborating on how to meet the specifications. This detoxifies the relationship between student and instructor, creating a partnership aimed at achieving mastery rather than a transaction aimed at producing a grade.
This system is built on a simple, powerful premise: the first attempt at any meaningful work is rarely the best. Therefore, the invitation to revise is not an exception to the rule; it is the rule itself. In most specs grading systems, a “Not Yet Satisfactory” submission is not a failure; it is the expected and normal first step in a dialogue. Students are typically given a set number of “tokens” or opportunities to resubmit their work without penalty, turning the feedback loop into the primary engine of learning. This explicitly reframes “failure” as “iteration,” a concept that is fundamental to how all meaningful work gets done in the professional world.
Crucially, this is not about lowering standards. In fact, it’s about raising them. Because students have the opportunity to revise, the bar for “Satisfactory” can be set at a genuinely high level of proficiency (a level that might seem punishing in a traditional, one-and-done grading system). The safety net of revision gives students the courage to attempt a high bar, knowing that a misstep is not a catastrophe but a learning opportunity. The goal is not to “fail” students, but to create a structure where the only path to success is through a rigorous, iterative process of feedback and improvement that leads to genuine mastery.
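The mechanics of the checklist-plus-tokens model are simple enough to sketch in a few lines of code. The class below is an illustrative toy of my own, not a tool from the specs-grading literature: each specification is a binary criterion, assessment returns only “Satisfactory” or “Not Yet” with the missing items named, and a small token budget governs penalty-free resubmission.

```python
from dataclasses import dataclass


@dataclass
class Spec:
    """One binary criterion: the work meets it, or it does not yet."""
    description: str


@dataclass
class SpecsGrader:
    specs: list   # the full, public checklist for the assignment
    tokens: int = 2  # penalty-free resubmissions available

    def assess(self, met_indices: set) -> str:
        """Binary verdict: name every unmet spec instead of a score."""
        missing = [s.description for i, s in enumerate(self.specs)
                   if i not in met_indices]
        if not missing:
            return "Satisfactory"
        return "Not Yet: " + "; ".join(missing)

    def spend_token(self) -> bool:
        """Use one revision token; returns False once they are exhausted."""
        if self.tokens == 0:
            return False
        self.tokens -= 1
        return True


# Example: a two-item checklist (hypothetical criteria).
grader = SpecsGrader(specs=[
    Spec("documents and critiques all AI use"),
    Spec("revises the draft in response to peer feedback"),
])
```

Notice what the design makes impossible: there is no partial credit to argue over, and a “Not Yet” verdict arrives with the exact list of what to fix, which is precisely the feedback loop the prose above describes.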
Tailoring Transgression
This brings us to the heart of the matter. A pedagogy built on renewable assignments and specs grading sets a high bar for authentic, applied work. This creates a powerful new role for AI, not as a tool for cheating, but as an essential partner in learning. In this model, AI becomes a necessary scaffold, a specialized coach that allows students to achieve ambitious project goals that would otherwise be out of reach. It enables a profound shift from personalization (where a system serves up pre-packaged content) to what we might better call tailoring. A student is no longer just a recipient of a standardized assignment; they become an artisan, using this powerful new tool to tailor the project to their unique skills, interests, and professional aspirations.
This is where the work of the great feminist thinker bell hooks becomes so resonant. What we are describing is a pedagogy of transgression. By integrating AI as a learning partner, we transgress the traditional boundaries of the classroom. The instructor is no longer the sole arbiter of knowledge, a role they could never truly fill. Instead, they become the architect of a learning environment, a mentor guiding students as they use powerful tools to build knowledge for themselves. A history student wanting to present their research as a data visualization, for instance, can partner with an AI to learn the necessary coding skills: a competency far beyond the history professor’s expertise. The professor’s role is to guide the historical thinking, while the AI helps execute the tailored vision. This is a transgression of authority in the service of a deeper, more authentic learning.
This transgressive act, as hooks understood, is the very thing that allows for the creation of a true learning community. When we abandon the pretense of the all-knowing professor and embrace a model where instructor, student, and AI are all partners in a shared inquiry, we create a space for vulnerability, experimentation, and mutual growth. This is education as “the practice of freedom.” Students are freed from the anxiety of the single, high-stakes performance and empowered to engage in the messy, iterative, and deeply rewarding process of creation. By tailoring their work to a real audience and a real purpose, they form a community that extends beyond the classroom walls, building not just a portfolio of work, but a durable sense of their own agency and power in the world.
To create this kind of transgressive space, however, requires a profound shift in the teacher’s own disposition. It demands letting go of control. This does not mean diminished rigor; it means releasing the illusion of absolute authority. The instructor must move from being a “sage on the stage” to what we might call an “architect of discovery.” Their primary creative act is no longer the delivery of a perfect lecture, but the design of a compelling problem, the curation of a rich set of resources, and the scaffolding of a process through which students can build their own understanding.
This requires an ethic of hospitality, an ability to create a space where uncertainty is not a threat but an invitation, and where the instructor is willing to say, “I don’t know the answer to that, but let’s design a process to find out together.”
In this new ecosystem, AI functions as a kind of non-human intelligence working through hybridity with the student. The best metaphor is not a personal one, like a “tutor” or an “assistant,” but a functional one, like a weaver’s loom. A loom does not tell the weaver what to create, nor does it possess its own creative vision. It is, however, a sophisticated partner that extends the weaver’s capabilities, allowing for the creation of patterns and complexities that would be impossible by hand alone. The weaver provides the intent, the aesthetic judgment, and the critical eye; the loom provides the structure, the speed, and the amplification of that intent. The final tapestry is a true hybrid, an inseparable fusion of human vision and non-human capability. So too should it be in the classroom: the student brings the inquiry and the critical judgment, while the AI provides the scaffolding and the technical power, forming a hybrid intelligence aimed at a tailored, creative outcome.
Who is an educator now?
This vision, in turn, demands a new set of characteristics from the educator. First is a profound intellectual humility, an acknowledgment that they cannot be an expert in every tool or technique their students might wish to explore. Their expertise must shift from encyclopedic content knowledge to a mastery of process. Second is the cultivation of design thinking, seeing the course not as a series of content modules but as a holistic learning experience to be designed, tested, and iterated upon. Finally, and most importantly, is the development of a practice of metacognitive mentorship. The key questions for the instructor are no longer “Is this correct?” but “What was your process? Why did you choose that tool? What did you learn from the attempt, even if it failed?” This is the hard, human work of teaching in the age of AI: not to be the source of all answers, but to be the guide who helps students ask better questions of themselves, their tools, and the world.
When we weave these principles together (making the process the product, designing for connection, assessing for competence, and building something that lasts) they cease to be a mere list of best practices. They become the architectural blueprint for a more humane and powerful form of education. This is not simply a new way to grade or a new type of assignment; it is a coherent philosophy that reframes the entire purpose of our work. It is a commitment to building learning environments where the deepest human needs for connection, purpose, and mastery are not just acknowledged, but are placed at the very center of the experience.
This vision finds its most powerful expression when we place it in dialogue with bell hooks’s call for a transgressive pedagogy. The classroom she imagined—a space of freedom, community, and shared vulnerability—is precisely what this model seeks to build. Each principle is a deliberate transgression against the traditional, hierarchical model of education. Making the process the product transgresses the tyranny of the single, correct answer. Designing for connection transgresses the isolating individualism of the traditional essay. Assessing for competence transgresses the subjective power of the instructor as judge. And building something that lasts transgresses the artificial boundary between the classroom and the world.
It is here that AI, so often framed as the enemy of this humanistic vision, reveals itself as its most potent and unexpected catalyst. The AI partner, the weaver’s loom, is what makes this transgressive space a practical reality for every student. It is the tool that allows a learner to tailor a project to their deepest interests, to acquire a technical skill on the fly, to build something far more ambitious and beautiful than they could have alone. This is how we create education as the practice of freedom: by giving students not just permission, but the powerful, world-building tools they need to pursue their own most compelling questions. The challenge of generative AI is not, in the end, a technological problem. It is a pedagogical opportunity, a chance to build the classrooms of which we have always dreamed.