AI Didn't Kill Cheating. It Killed Visibility.

· 7 min read · By Prova Team

Professors are still receiving assignments. Perfectly formatted. Properly cited. Completely useless as evidence of anything.

The Problem

A key takeaway from the ASU+GSV Summit this year: the assignment is broken. Not because students stopped doing work. Because the work stopped meaning anything.

Think about what an assignment was actually for. Not the grade. The grade was a byproduct. The assignment existed so a professor could see a student's thinking develop over time. You assign it, they struggle with it, they submit something that reflects where their understanding actually is. That was the whole mechanism. That was the feedback loop that teaching ran on.

That loop is gone.

A professor at a mid-sized university told us recently that she stopped trusting her own gradebook. Which makes sense! Who wants to spend forty minutes reading a beautifully written case analysis, only to have no idea whether the student who submitted it could explain a single concept in it if asked? The writing is clean. The argument holds together. And she knows, in the way experienced professors just know, that it didn't come from that student's brain.

"So she grades it. Because what else is she going to do? Call the student in and label them a cheater? It's all hearsay. What good would that do?"

After all, isn't the goal to be able to write as coherently as a computer?

That is happening in every department, at every university, right now. Professors grading work they don't believe in, producing grades they can't stand behind, and handing out credentials that might mean nothing whether they fully know it or not. The assignment didn't disappear. It just stopped telling the truth.

That's the whole problem right there.

Why Everything Being Done About It Is Wrong

The edtech industry has produced a collection of tools that share one quality: none of them actually solves assessment.

AI detectors aren't stupid. The people who built them understood that something had changed and tried to build a response. They just measured the wrong thing. They look at the output and try to determine if it sounds like a machine wrote it. But the problem was never the writing. The problem is whether the person who submitted the writing understands what's in it. A student can run AI output through a paraphraser, change thirty words, and walk right through every detector on the market. Which they do. Every day.

So yeah, Turnitin is going to fix this. Sure it is.

Proctoring software has the same fundamental flaw. It watches the student. It doesn't examine what the student knows. That's not its job. Its job is to intimidate and lock down. Watched doesn't mean understood.

"Reweighting grades toward finals is the academic equivalent of rearranging furniture after a flood. This doesn't solve the checkpoint assessment issue."

The instrument of assessment itself is compromised. To extend the metaphor: after a flood you don't rearrange the furniture, you call a restoration company and gut the floors. Changing where assessments sit in the semester, or how much they count, doesn't fix that.

None of these tools answer the question professors actually need answered: does this student know what they submitted? Are they learning?

Nobody fixed it. They just built things around it.

The Real Insight

Now, why does this matter more than other edtech problems? There are a thousand AI agent builders and AI tutoring tools on the market. Why spend time on this one? Because there is a massive disconnect between the actual problem and how we're trying to solve it. We're attacking cheating the wrong way. We're calling the police. We're writing strict policies to ban AI from the classroom. That framing is wrong, and a wrong framing only produces wrong answers. You get another Turnitin.

Of course students are using AI. Everyone is using AI. That's not the disease. That's the environment. The disease is that professors can no longer see thinking. And seeing thinking was the entire point of the exercise.

The written assignment was never a perfect proxy for understanding. It was a convenient one. We built an assessment infrastructure on the assumption that if someone could produce the right words in the right order, they probably understood the underlying material. It was a reasonable assumption for a long time. It stopped being reasonable the moment any student with a laptop could produce the exact right words in the exact right order in seven seconds.

Remember the feedback loop we mentioned at the beginning?

The one that teaching ran on? It's not just frayed. It's been replaced with a system that produces feedback on nothing: output with no understanding behind it.

Can you even call that a signal? Can you call that an assessment?

The oral exam, of course, was never fooled by this. Ask someone to explain their reasoning out loud, defend a position, respond to a follow-up they didn't anticipate, and you find out immediately what they actually know. Law schools never stopped doing it. Medical schools never stopped doing it. The rest of higher education didn't abandon it because it was inferior. They abandoned it because it didn't scale.

The problem was never the exam. It was simply the scale.

What Actually Needs to Happen

A professor needs one thing right now. Not a better detector. Not another report. They don't need more AI agents in their workflow. They desperately need a way to ask a student to explain their own work. They need it done at scale, across an entire course, without it consuming every hour they have. This is now possible.

That means the assignment stays. The submission stays. And then something happens after the submission that couldn't happen before: the student gets asked about it. Follow-up questions. Oral exam style. The kind of questions that take thirty seconds to answer if you understood the material and are impossible to fake if you didn't.

Universities owe their students a degree that means something. They owe employers graduates who can actually perform. Right now they're handing out credentials that neither side can trust. That's not a small problem. That's an existential one.

Prova puts the oral exam back into the process — for every student, every assignment, without adding forty hours to a professor's week. One added layer. The layer that would have been there all along if it were possible.

The assignment was supposed to show you how a student was thinking. Now it shows you what their AI can produce. Those are not the same thing. They never were. The universities that figure that out first are the ones whose degrees are going to mean something ten years from now. The ones that don't adapt are in for a rude decade.