The UK Is Deciding How to Regulate Dental AI — Here's What Matters

Dr Ali Vatan

The MHRA has launched a national commission on AI regulation in healthcare. As a dentist, here's what I think they need to get right.

The MHRA has established a National Commission into the Regulation of AI in Healthcare and launched a call for evidence that ran from December 2025 to February 2026. This is the UK deciding how it wants to regulate AI in healthcare, and the decisions made here will shape what dental AI looks like in British practices for years to come.

I’ve been following this space closely. I want to lay out what I think they need to get right.

Where things stand

By the end of 2025, thirteen companies had received a combined twenty-nine FDA clearances for dental imaging AI products in the United States, all classified as Class II medical devices. Pearl and Overjet dominate, together accounting for roughly 34% of those clearances. These tools detect caries, assess periodontal disease, automate dental charting, and perform cephalometric analysis.

That’s the American picture. In the UK, the regulatory landscape is less defined. We’ve had the EU MDR framework, the UKCA marking process, and various piecemeal guidance documents, but no comprehensive dental-AI-specific regulatory framework. The MHRA’s national commission is the first serious attempt to build one.

Launched on 26 September 2025, the commission brings together global AI leaders, clinicians, regulators, and patient advocates to advise on a new framework, to be published in 2026. The call for evidence invited submissions from the public, patients, clinicians, technology companies, and healthcare providers. The questions they’re asking are the right ones: Is the current framework sufficient? How should safety be monitored post-deployment? How should responsibility and liability be shared across developers, deployers, and regulators?

Human in the loop is non-negotiable

If there’s one thing I’d want the commission to hear, it’s this: the human must stay in the loop. Always.

AI is a tool that humans use to do their job better. That’s the correct framing. It’s not a replacement for clinical judgement. It’s not an autonomous diagnostic system. It’s a powerful assistant that can identify patterns, flag abnormalities, and process information faster than any human, but the final decision must rest with a qualified clinician.

If you hand the entire decision to the AI, how is the human supposed to take responsibility for it? That’s the paradox regulation needs to address head-on. You can’t hold a dentist accountable for an AI’s conclusions if you’ve designed a system where the dentist is merely rubber-stamping output without understanding the reasoning behind it.

Clinicians must understand the reasoning behind AI conclusions. Not the technical architecture of the neural network (that’s neither realistic nor necessary) but the clinical logic. Why did the AI flag this area? What features is it detecting? What’s the confidence level? Clinicians need to retain the ability to challenge AI claims with their own critical thinking. If a dentist looks at a radiograph, sees what the AI has flagged, and disagrees based on clinical experience, they need the knowledge and confidence to act on their own judgement.

Regulation should protect and strengthen that dynamic, not undermine it.

Safety without stifling innovation

This is the tightrope. Regulate too lightly and you risk patients being harmed by poorly validated tools. Regulate too heavily and you push innovation offshore; developers build for the US market first and treat the UK as an afterthought.

The FDA’s 510(k) pathway, while imperfect, has provided a relatively clear route to market. Twenty-nine clearances in a few years is evidence the system works, at least in terms of getting products to clinicians. The UK needs a framework that’s at least as clear and navigable. Innovators need to know what’s required, how long it takes, and what evidence they must provide. Ambiguity is the enemy of investment.

At the same time, the framework needs genuine teeth. Post-market surveillance is arguably more important than pre-market testing. AI systems can behave differently across populations, imaging equipment, and clinical contexts. A tool validated on American radiographs may perform differently on images from UK practices using different equipment and serving different patient demographics.

What dental-specific regulation should address

Dental AI has particular characteristics that generic healthcare AI regulation might miss:

  • Workflow integration. Dental AI tools sit within practice management systems, imaging software, and clinical decision-making processes. Regulation needs to consider the full workflow, not just the AI module in isolation.
  • Training data transparency. Clinicians should know what data a tool was trained on. If a caries detection algorithm was trained primarily on adult permanent teeth, it might not perform well on paediatric mixed dentition. That information should be readily available.
  • Performance in UK populations. The UK has its own demographic profile, oral health patterns, and imaging conventions. Regulatory clearance should require evidence of performance in relevant populations, not just extrapolation from US data.
  • Clear liability frameworks. When an AI tool misses a diagnosis, or a clinician overrides an AI flag and gets it wrong, who bears responsibility? The current ambiguity serves no one.

The market is real

The global AI dental imaging market is projected to exceed $3 billion by 2034. In the US, AI diagnostic tools are already deployed across thousands of practices; VideaAI’s platform alone is used by over 90,000 clinicians and analyses hundreds of millions of radiographs annually.

The UK market is smaller but growing, and the MHRA’s decisions will determine whether British patients get access to these tools promptly or wait years behind other countries.

What I’d tell the commission

Keep the human in the loop. Make it a regulatory requirement, not a recommendation. Design the framework so clinicians are empowered to understand, evaluate, and challenge AI outputs, not just accept them.

Require transparency from developers about training data, performance metrics, and known limitations. Make post-market surveillance mandatory and meaningful. Create a clear, predictable pathway to market that gives innovators confidence to develop for the UK.

AI in dentistry is a tool that humans use to do their job better. The moment we lose sight of that, we’ve got the regulation wrong.

References

  • MHRA. “MHRA seeks input on AI regulation at ‘pivotal moment’ for healthcare.” GOV.UK, December 2025.
  • MHRA. “Regulation of AI in Healthcare — Call for Evidence.” GOV.UK.
  • MHRA. “National Commission into the Regulation of AI in Healthcare.” GOV.UK.
  • “FDA-Approved AI Solutions in Dental Imaging: A Narrative Review of Applications, Evidence, and Outlook.” PMC.
  • Innolitics. “The Dental AI Revolution: A Comprehensive Analysis of 510(k) Clearances (2021–2025).”