
The rise of AI deepfake medicine: when a doctor you trust isn’t really a doctor

We’ve all heard warnings about fake news, but we are now facing something far more dangerous: fake doctors.

And not just anonymous avatars giving dodgy advice online, but AI-generated videos using the real faces and real voices of respected medical professionals.

A recent investigation by Full Fact uncovered hundreds of deepfake clips circulating on TikTok and other platforms. These videos show well-known doctors apparently endorsing supplements, “miracle cures,” or made-up medical conditions. None of it is real. None of it was said. But the technology is so convincing that even trained clinicians could be fooled at a glance.

This is not a future threat. It is happening right now.


When medical authority is hijacked

Deepfakes allow anyone to take footage of a doctor speaking at a conference, in Parliament, or even in an educational video, and manipulate it into something entirely fabricated.

One example:

A professor of children’s health appeared, without his knowledge, in a deepfake video promoting a menopause supplement containing everything from turmeric to Himalayan shilajit. AI rewrote his words, altered his mouth movements, and presented him as an enthusiastic endorser of a product he’d never heard of.

His real expertise didn’t matter. His reputation did.

Fraudsters know exactly what they’re doing: patients trust doctors. Borrowing that trust, even artificially, sells products. As a practising heart surgeon with educational content online, I can tell you: this could happen to any medical professional.


Why this is so dangerous for patients

Deepfake medical advice isn’t just misleading — it can be harmful.

  • Vulnerable groups, such as women experiencing menopause, are specifically targeted.
  • Fictional “symptoms” are invented to create demand — one video referenced a condition called “thermometer leg,” which does not exist.
  • Supplements are marketed as “science-backed” without any proper evidence.
  • Patients arrive in clinics quoting advice from videos their doctor never made.

When people cannot distinguish genuine medical guidance from AI-generated manipulation, the entire foundation of evidence-based medicine becomes shaky.

If you believe your doctor said one thing online and tells you something different in person, who do you trust?


The platforms aren’t ready

TikTok took six weeks to remove one doctor’s deepfakes — and even then, only admitted some of them violated guidelines.

Platforms have no reliable system for verifying who appears in health-related videos. There is no fast-track for doctors to request takedowns.

And supplement companies can simply say the videos were made by “unaffiliated creators,” absolving themselves of responsibility while continuing to benefit from the sales.

Offline, impersonating a doctor is a criminal offence. Online, there is effectively no consequence.


What must change now


If we want to protect patients and preserve trust in medical expertise, three things must happen:

1. Mandatory watermarking for AI-generated content

Every platform should require visible labelling for synthetic media — especially when health claims are involved.


2. Legal accountability for businesses profiting from medical deepfakes

If a company benefits financially from fraudulent impersonation, “we can’t control our affiliates” cannot be a defence.


3. Verification for healthcare professionals in online video content

If you appear in a video giving medical advice, platforms should verify your identity just as they verify advertisers.


What doctors can do

Medical professionals cannot solve this alone, but we can take proactive steps:

  • Monitor your digital footprint — know what content exists of you.
  • Educate patients — remind them to check your official channels.
  • Create trusted spaces online — a verified hub where your genuine advice lives.

If deepfakes are going to steal our voices, we must make sure our real ones are easy to recognise.



The bigger question

If AI can convincingly impersonate any doctor to endorse any treatment, how do we preserve public trust?

Do we retreat from public communication, making ourselves less accessible and leaving a vacuum for misinformation? Or do we step forward, speak clearly, and help shape the safeguards before the technology outpaces our ability to control it?

For the sake of patient safety and the future of evidence-based medicine, I believe we must choose the latter.

