The Problem
Most assignments in higher education are built around a single question: can you produce the right output? Write the essay. Solve the problem. Select the right answer. Mark true or false. Submit the analysis. The output arrives, gets graded, and everyone moves on. The process that produced it stays invisible: the struggle, the revision, the moments where understanding either clicked or didn't. That process was always hard to see, but the output used to offer some tell. It no longer does, because the output can now be generated without any process at all.
That's the flaw. Not an AI flaw. A design flaw.
Professors feel this acutely. They read a submission and something is off. It isn't wrong, exactly; it's hollow. The argument is technically sound. The structure is clean. But there's no evidence of a mind working through it. No interesting wrong turns, no idiosyncratic phrasing, no sign that the person who submitted it ever wrestled with the material. It reads like a document, not like thinking. The process has been erased, and what's left is stale.
So the professor grades it anyway. The rubric grades outputs, and it was designed for a world where outputs were hard to fake.
"That world ended. The assignment didn't get the memo."
Why Banning Tools Solves Nothing
The instinct to restrict AI access during assignments is understandable. If the tool is causing the problem, remove the tool. Clean logic.
However, restricting AI doesn't restore the assignment's ability to measure understanding. It just makes AI feel like contraband: students will use it anyway, only covertly. Now you're playing whack-a-mole with something that isn't going away and that employers will actually require. That's a game I wouldn't sit down to play.
AI detectors take the same approach from a different angle. They aren't stupid; the engineers who built them were trying to identify something real. But they look at the output for signs of machine involvement instead of looking at the student for signs of human understanding. Even if they worked perfectly, which they don't, you'd still know nothing about whether the student learned or retained a single thing.
"Great. Another honor code update. That'll do it. More surveillance. More restrictions. More policies. None of them answer the actual question."
The assignment is the problem. Not the tools students are using to get around it.
The Real Insight
Here's the insight that rarely gets said out loud: the output was always a proxy. We wanted to measure understanding, and we measured production because production was hard enough that it implied understanding. That implication is gone, and with it the output's value as a proxy. To measure understanding now, we need a different signal.
The fix isn't to make production harder again. However good the models are at the time you're reading this, they are only going to get better; that's an arms race you'll lose. The fix is to stop measuring production entirely and start measuring process and comprehension directly.
What does a student's process actually look like? How did they approach the problem? Where did they get stuck? What questions did they ask? Where did they need extra help? What did they try that didn't work? What do they think now that they've finished? Those questions have always been more revealing than the finished product. We just never had a scalable way to ask them of every student on every assignment.
Remember the hollow submission from the opening? The one that reads like a document instead of thinking? Ask that student to walk you through their reasoning out loud and you'll know in ninety seconds whether anything real happened behind it. That's always been true. The oral exam has always known it. The rest of the assessment system just ignored it because it didn't scale. Prova surfaces that same signal precisely because it does scale.
What a Redesigned Assignment Actually Looks Like
Start by accepting that AI use is not the enemy. It's the environment, and you're going to have to get comfortable in it. Any course, professor, or assignment that pretends otherwise is the one that produces hollow outputs and graduates who can't perform.
Build around process, not product. Ask students to document how they approached the problem, what tools they used and why, and where their thinking changed. Not to catch them and label them cheaters, but to understand how they work so you can help them work better. That transparency is a teaching tool; don't let it feel like a trap. Students who sense you're genuinely curious about their process, rather than surveilling it for violations, will engage with it completely differently.
Then add the follow-up. After every submission, ask students to explain what they produced. Not a formal oral exam that requires scheduling and prep time: a targeted set of questions tied directly to the assignment, one that takes ten minutes and tells you immediately whether the understanding was there. Prova does this automatically, after every assignment, for every student, without adding hours to the professor's week.
The students who used AI and learned something will defend their work confidently. The ones who didn't will reveal themselves early, when there's still time to intervene and help them succeed. Both outcomes are better than what happens now, which is nothing.