The Better Question
How do we create an environment where our campus communities can learn to use AI responsibly?
The institutions that will thrive are not the ones trying to ban AI or pretend it doesn't exist. They are the ones creating safe spaces where students and faculty can experiment, learn, and yes—make mistakes—without compromising data security or academic integrity.
Because we're not just preparing students for jobs that will use AI.
We're preparing them to be thoughtful humans in a world where AI is everywhere.
And that requires practice.
Lots of it.
The Wrong Way to Respond
But right now, many universities are responding to AI the wrong way—by trying to detect it.
In the past two years, universities have rushed to deploy AI-detection tools in an attempt to maintain academic integrity. But the results have been messy.
"Students are now being forced to prove they didn't cheat."
A recent New York Times story described students recording hours-long videos of themselves doing homework, compiling detailed editing histories, and saving screenshots of their writing process—just in case a detection algorithm flags them incorrectly. One student had to submit a 15-page PDF documenting her work process to overturn a zero she received after a tool suspected AI use. Researchers have also found that AI-detection tools falsely flag human-written text around 6–7% of the time on average, creating real risk for honest students.
The problem is not just technical.
It's structural.
Most current solutions are built around the same premise: catch the cheaters.
Proctoring tools lock down computers, record webcams, monitor keystrokes, and flag "suspicious behavior." Detection software scans text for statistical patterns that might indicate AI use. Both approaches attempt to police submissions after the fact.
But in an AI-enabled world where students are expected to learn with AI, detection is a fragile strategy.
A Better Question Is Emerging
How do we design assessments where learning becomes visible?
This is the idea behind Prova.
The Problem With Detection-First Assessment
Detection tools face three core problems.
1. They punish honest students
When detectors misfire, the burden of proof shifts to students. As the NYT article shows, students now feel pressure to document their entire writing process to defend themselves.
2. They damage trust
When a professor receives a "78% AI-generated" score from a black-box algorithm, it's difficult to explain or defend that number to a student.
3. They focus on behavior, not learning
Proctoring tools can flag eye movement, background noise, or tab switching—but none of those signals actually demonstrate understanding.
They measure compliance, not learning.
And most importantly, they assume that AI must be banned or hidden, when the reality is that AI is here to stay.
The Opportunity: Assessment That Shows the Work
Educational researchers increasingly argue that AI should not simply automate existing testing models.
Instead, it creates an opportunity for new forms of assessment that capture real learning as it happens.
One recent education report put it this way:
"Instead of using AI as a new kind of Scantron, it could enable assessments that capture real-time performance as students work."
That shift—from static submissions to observable work—is exactly what Prova was built for.
The Prova Assessment Model
A Prova assessment is built around a simple idea:
Observe the work session, not just the final submission.
Instead of analyzing a finished essay or project after the fact, Prova captures the learning process itself.
Students complete assignments in a guided work session where instructors can see:
- how students approach the problem
- how they structure their thinking
- how they research and synthesize information
- how they use tools like AI
- how their ideas evolve during the task
This produces a transparent evidence trail of learning, without surveillance or lock-down restrictions.
Not Proctoring. Not Detection.
Prova is fundamentally different from proctoring software.
Traditional Proctoring Tools
- Lock down computers
- Monitor webcams and environments
- Flag "suspicious behavior"
- Focus on catching misconduct
Prova
- No installation required
- No invasive monitoring
- Provides insight into the work process
- Focuses on observing learning
Prova does not restrict what students can do on their computer. It does not require installing monitoring software. It does not turn exams into high-stress surveillance sessions.
Instead, it creates a lightweight, observable work environment where students simply complete the assignment as they normally would.
A Different Philosophy of Assessment
The AI era is forcing education to confront a difficult truth:
We cannot simply detect our way out of AI-assisted work.
But that does not mean integrity is impossible.
It means assessments must evolve.
The institutions that succeed will not be those that build stronger surveillance systems.
They will be the ones that design assessments where authentic learning is observable by default.
That is what Prova enables.
Not detection.
Not lockdown.
But evidence of real work.