Assessment

Medicine and Law Kept the Oral Exam. Did Everyone Else Quit?

· 7 min read · By Prova Team

They never stopped asking students to explain themselves out loud. Not just because they're traditional, but because they couldn't afford to find out what happens when you stop.

The Problem

Every year, medical schools graduate physicians who will make decisions that kill or save people. Every year, law schools graduate attorneys who will stand in front of judges and argue for clients whose freedom depends on their competence. These are not fields where a polished, cookie-cutter essay is sufficient evidence that someone knows what they're doing.

So those fields never built their assessment systems around cookie-cutter essays. They built them around performance under questioning. The USMLE Step 2 Clinical Skills exam, until its discontinuation in 2021, required medical students to demonstrate diagnostic reasoning in real time with standardized patients, and medical schools still run OSCEs on the same model. In law, moot court drills oral argument, and some jurisdictions include oral components in bar admission. Medical specialty boards conduct oral certifying examinations specifically because written tests, even rigorous ones, leave a gap between knowing the material and being able to apply it under pressure while someone is asking you to explain yourself.

That gap matters in medicine. It matters in law. And since nobody in higher education wants to admit it out loud, I will: it matters everywhere else too.

"A business school graduate who can't walk a client through their analysis is someone that will struggle on the job. An engineer who can't defend a design decision in a meeting is one that won't be attending the meetings much longer."

A business school graduate who can't walk a client through their analysis is someone who will struggle on the job. An engineer who can't defend a design decision in a meeting is someone who won't be attending those meetings much longer. A policy analyst who falls apart when a senator asks a follow-up question is putting their career at risk. The gap between producing answers and being able to explain them is not a medical school problem. It's a human problem. Medicine and law just had consequences serious enough that they couldn't pretend otherwise.

The rest of higher education could afford to look away. Until now.

Why Dropping the Oral Exam Was Always a Mistake

Higher education didn't abandon verbal assessment because someone proved it was inferior. No such proof exists, and that's worth saying clearly. The oral exam didn't lose on the merits. It just got squeezed out by scale.

Standardized testing has real virtues. It's consistent, gradable, and defensible. A multiple-choice exam can be administered to five hundred students simultaneously and scored by a machine. A written paper can be evaluated against a rubric with reasonable reliability. These are genuine advantages, and the people who built modern assessment infrastructure around them weren't wrong to value consistency. They built for the reality they knew.

But that reality has changed. Those systems assumed consistency was the same thing as validity, and that assumption no longer holds.

"A consistent measure of the wrong thing is still measuring the wrong thing, consistently."

Written assessments measure the ability to produce organized written output. They always have. For most of the twentieth century, that was close enough to a proxy for understanding that the gap didn't matter much. Today, AI has made that gap enormous.

Medicine knew this. The oral clinical exam persisted not out of nostalgia but because every attempt to replace it with written assessment produced graduates who could pass tests but couldn't diagnose patients. The field kept getting reminded, in the most direct way possible, that the proxy wasn't good enough.

The rest of higher education never got that reminder.

The Real Insight

So why does any of this matter for a marketing professor or a social science department chair? It matters because the same logic that kept oral exams in medical education applies directly to what's happening in every classroom right now.

Of course the oral exam reveals things written assessment can't; that's the point. Ask a medical student to explain their differential diagnosis and you find out immediately whether they're reasoning or reciting. Ask a law student to defend their brief against a hostile question and you find out whether they understood the argument or just assembled it. The format forces something that written output never does: real-time accountability for your own thinking. It's a hot seat they have to sit in before they earn their professional credentials.

That accountability is exactly what's missing from higher education's response to AI.

"AI closed that gap in the wrong direction. It made producing the output painfully easy for students who never developed the understanding behind it."

Remember the gap we described at the opening? The difference between knowing something and being able to explain it under questioning? AI closed that gap in the wrong direction. It made producing the output painfully easy for students who never developed the understanding behind it. And employers are noticing, in growing numbers, that a graduate who can't explain their own work is a liability, not an asset.

What Higher Education Can Learn From the Fields That Never Forgot

The lesson here isn't complicated. It's this: if the credential is supposed to mean something, the assessment has to verify the thing the credential claims. If you stay married to the old assessment formats, your credential's credibility erodes. If you adapt, the credential means what it says.

A medical degree claims this person can practice medicine. The oral clinical exam tests whether that's true. A law degree claims this person can reason through legal problems. The oral argument tests whether that's true. What does a computer science degree claim? What does a psychology degree claim? And what assessment, anywhere in the four years it takes to earn it, actually verifies that claim?

For most programs, the honest answer is: nothing rigorous. The written work used to be a reasonable proxy, but we've established that it isn't anymore. The fix isn't complicated: ask students to explain what they submitted, after every assignment. Not as a punishment, not as a gotcha, but as the normal expectation that you can account for your own work. That expectation shouldn't offend anyone. And Prova makes it scalable in a way medicine and law never had access to.

The fields with the highest stakes never stopped asking students to explain themselves. The rest of higher education stopped because it was inconvenient.