The recent Reuters investigation into AI-enhanced medical devices has sparked renewed debate about artificial intelligence in healthcare. Reports of surgical complications, misidentified anatomy, and device malfunctions following AI integration are concerning and deserve serious attention. But the conversation that follows must be more sophisticated than simply whether AI belongs in medicine. That debate is already over. AI is here, and patients are using it whether we approve or not.
The real question is how we ensure AI serves health rather than undermines it.
The Regulatory Pathway Matters
What the Reuters report actually exposes is not that AI is inherently dangerous, but that there is a critical gap in how we govern its introduction into medical practice. When AI is embedded in medical devices such as surgical navigation systems, it enters a regulated pathway. These devices must meet FDA standards, undergo testing, and demonstrate safety before approval. The question we should be asking is whether the AI components within these devices are held to the same rigorous standards as the devices themselves.
This matters because regulation creates accountability. When AI is integrated into a medical device, someone owns the outcome. There are review processes, incident reporting systems, and mechanisms for corrective action. The device operates within a professional context where a skilled surgeon acts as a critical safety layer, a human checkpoint in what safety engineers call the Swiss cheese model of risk management.
Compare this to the AI tools patients are already using directly: chatbots offering medical advice, symptom checkers making diagnostic suggestions, wellness apps providing treatment recommendations. These tools operate largely outside regulatory oversight, without professional mediation, and without any duty of care. This is where the real governance challenge lies, and where our response as a profession will determine whether we remain relevant to the patients we serve.
The Patient AI Revolution Is Already Here
Here is the uncomfortable truth that the medical profession must confront: large and rapidly growing numbers of people are already using AI to inform health decisions. While precise usage statistics are still emerging, surveys suggest millions are turning to AI chatbots for symptom checking, health questions, and medical advice. For many, particularly those without consistent access to professional care, these tools have become a regular source of health guidance.
This creates a paradox that should deeply concern us. AI in patient hands is simultaneously:
Potentially safer than ever before: For the first time in human history, someone in a remote village, an underserved community, or simply between appointments has access to medical knowledge that would have been impossible to obtain a generation ago. They can ask questions, explore symptoms, understand conditions, and access evidence-based guidance that previously required professional consultation.
Potentially more dangerous than ever before: That same person, without training to assess probability, interpret context, or weigh trade-offs, may act on AI outputs with confidence that far exceeds their reliability. They may miss red flags, dismiss critical symptoms, delay necessary care, or pursue harmful interventions based on plausible-sounding but fundamentally flawed reasoning.
The same tool. The same technology. Two entirely different outcomes depending on how it is used.
The Defensive Response Is Failing
The medical profession's instinct has often been defensive: restrict access, warn against use, insist on professional mediation, delay adoption until perfect safety can be guaranteed. This response is understandable. It is also failing.
Defensive reasoning has always struggled against disruptive innovation. When photography emerged, portrait painters argued it lacked artistry. When calculators appeared, mathematicians worried about computational skill loss. When the internet democratised information, encyclopaedia publishers predicted chaos. The pattern repeats: established expertise resists tools that threaten to redistribute capability.
But here is what actually happens when we argue against use rather than engage with it: we remove ourselves from the conversation at precisely the moment our guidance matters most. We become irrelevant to the millions already using these tools. We lose the opportunity to shape how AI is used, when professional input is sought, and what safeguards might actually work.
When we say "patients shouldn't use AI," what patients hear is "doctors don't understand the tools I'm already relying on." Trust doesn't increase. It fragments.
The Access Argument We Cannot Ignore
The ethical dimension here is stark and unavoidable.
Billions of people worldwide lack access to consistent, sufficient, and reliable medical care. Not because they don't want it. Not because they don't need it. Because the system we have built cannot reach them. The distribution of medical expertise is profoundly unequal, concentrated in wealthy countries, urban centres, and settings where people can afford regular care.
We cannot, in good conscience, argue that populations we are failing to serve should be denied access to tools that offer them something approximating medical knowledge for free. That position is morally indefensible.
Yes, AI used poorly can cause harm. But so does untreated illness, delayed diagnosis, and decisions made in the complete absence of medical guidance. For someone who cannot access a doctor for weeks or months, who cannot afford consultation fees, or who lives where specialists simply don't exist, AI may represent the best health guidance they will ever receive.
The question is not whether they should use it. They already are. The question is whether we will help them use it wisely.
The Swiss Cheese Model: Where Safety Actually Lives
Safety engineers use the Swiss cheese model to explain how complex systems prevent catastrophic failure. Multiple layers of defence exist, each imperfect, each with holes, but aligned so that failures in one layer are caught by others.
In surgical settings, AI-enhanced devices operate within multiple protective layers:
- Regulatory pre-market approval
- Professional training and credentialing
- Institutional protocols and oversight
- Real-time human judgment from skilled practitioners
- Incident reporting and continuous improvement systems
- Legal accountability and professional liability
When an AI surgical navigation system contributes to harm, every one of these layers is activated. The surgeon can override. The institution investigates. Regulators are notified. The manufacturer must respond. Learning occurs.
In patient-facing AI use, most of these layers are absent. There is no professional checkpoint. No institutional oversight. No regulatory requirement for incident reporting. No legal liability when harm occurs. The patient operates alone, making consequential decisions based on outputs they may lack the training to properly interpret.
This is not an argument against patient AI use. It is already happening at massive scale. It is an argument that we need to build new safety layers appropriate to this reality.
What "Proper Use" Actually Requires
The challenge with AI in patient hands is not the technology itself. It is context collapse. Professional medical judgment involves:
- Assessing probability in individual circumstances, not populations
- Recognising when symptoms cluster into patterns that matter
- Knowing what can safely wait versus what demands immediate action
- Understanding how multiple conditions, medications, and risks interact
- Tolerating uncertainty rather than forcing premature conclusions
- Recognising the limits of information and when escalation is necessary
AI can provide information. It cannot reliably provide this contextual judgment, especially when the user lacks the framework to critically evaluate its outputs.
In my book A Question of Good Health, I use a simple example: asking AI about matcha tea. A generic query produces enthusiastic lists of benefits. A more carefully structured question that demands balanced critique, evidence quality, and potential downsides produces substantively different guidance. Same tool. Dramatically different value.
Most people don't know how to structure that second question. They accept the first answer and act on it. When the stakes are symptom interpretation, medication decisions, or whether to seek care, that difference matters enormously.
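To make the contrast concrete, consider an illustrative pair of framings (hypothetical wording, not quoted from the book):

- Generic: "Is matcha tea good for my health?"
- Structured: "Summarise the evidence for and against the health claims made about matcha tea. Rate the quality of that evidence, note who should be cautious about drinking it, and state clearly what remains uncertain."

The second framing does not make the AI smarter. It simply forces the output to surface the uncertainty, evidence quality, and downsides that the generic question allows it to omit.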
What We Should Be Doing Instead
Rather than arguing against AI use, a battle already lost, the profession should be actively engaged in making that use safer and more valuable:
Develop and teach AI literacy: Help patients understand what AI can and cannot do reliably. Teach them to recognise when outputs require professional verification. Show them how to structure questions that surface uncertainty rather than false certainty.
Create navigation protocols: Establish clear guidance on when AI outputs should trigger professional consultation. Not "never use AI," but "if AI suggests X, here's when you need to speak to someone."
Build hybrid models: Design ways for AI interactions to feed into professional oversight. If someone is using AI to manage a chronic condition, how can that information reach their doctor appropriately? How can we create feedback loops rather than parallel universes?
Advocate for governed AI: Push for regulatory frameworks that require patient-facing health AI to meet minimum standards: transparent limitations, incident reporting, clear disclaimers, and pathways to professional care when needed.
Participate actively: Medical professionals, including clinicians as well as health system architects and entrepreneurs, should be involved in designing, testing, and improving patient-facing AI tools. Our absence from this work doesn't stop it. It just makes it less informed.
The Baseline We're Actually Comparing Against
When we evaluate AI-related adverse events in medical devices, we must resist the temptation to measure against perfection. Medicine has never been risk-free. Every surgical procedure carries inherent uncertainty and potential for harm.
If a surgical navigation system helped 90 surgeons successfully complete complex procedures where historically only 80 would have succeeded, but also contributed to 2 additional complications, the arithmetic is not simple. Those 2 harms are real, serious, and demand investigation. They also occur in a context where 10 additional procedures succeeded that would historically have failed, a net gain of 8 good outcomes once the new harms are counted.
When we evaluate patient AI use, the comparison is even more complex. What is the alternative? For someone without access to care, the baseline isn't "perfect medical consultation." It's folk remedies, internet searching, delayed action, or no action at all. AI that gets it right 70% of the time may still be better than those alternatives, while simultaneously being far worse than professional care.
This doesn't make errors acceptable. It simply demands that our evaluation framework be honest and grounded in real life.
What AI Uniquely Offers
Artificial intelligence brings capabilities that human practitioners, however skilled, cannot reliably replicate:
- Tireless consistency: AI doesn't have bad days, doesn't lose focus during hour 14 of a shift, doesn't let personal stress affect judgment
- Comprehensive recall: AI remembers every relevant guideline, drug interaction, and contraindication without cognitive load
- Pattern recognition at scale: AI can analyse thousands of similar cases instantly, identifying patterns that might take a human career to recognise
- Real-time feedback: AI can warn of deviations from safe parameters in the moment, not after the fact
- Democratic access: AI can bring sophisticated medical reasoning to places and people that would never otherwise receive it
These advantages are not trivial. In complex procedures, in diagnostic reasoning, in medication management, and in health decision support, AI offers genuine potential to reduce harm that currently occurs through human limitation, fatigue, and the simple reality that no practitioner can know everything.
The Path We Must Take
The future is not "AI or doctors." It is "AI and doctors, working together in ways that serve patients."
For medical devices, this means ensuring AI components are scrutinised as rigorously as mechanical or software components. It means transparent reporting when AI-assisted procedures produce unexpected outcomes. It means ongoing surveillance as systems learn and evolve, not just pre-market approval.
For patient-facing AI, the path is less clear but no less critical. We need:
- Professional engagement, not resistance
- Education about intelligent use, not warnings against any use
- Hybrid care models that integrate AI into professional oversight
- Governance frameworks that create accountability without preventing access
- Honest evaluation that compares AI against realistic alternatives, not impossible ideals
Most importantly, we need the medical profession to show up. To participate. To help shape how millions of people are already using these tools, rather than standing aside and declaring it shouldn't be happening.
The Real Risk
The greatest risk is not that AI will cause harm in medicine. It will, and it is. The greatest risk is that we will fail to govern it intelligently, allowing it to proliferate in unaccountable forms while restricting it in settings where professional judgment could make it safer.
We cannot stop patients from using AI any more than we could stop them from using Google. What we can do is help them use it better, know when to seek human guidance, and build systems that catch errors before they become irreversible.
The alternative, professional withdrawal from the AI conversation, doesn't protect patients. It abandons them to navigate these tools alone, without the benefit of expertise that could make the difference between harmful misuse and genuine empowerment.
The question before us is not whether AI belongs in medicine, but whether we will engage seriously with how it is used and who it serves. The Reuters investigation should sharpen that question, not settle it. We owe it to patients to get this right, which means engaging with both the promise and the peril, rather than defaulting to defensive resistance.
The technology is already here. Patients are already using it. Millions who lack access to care are already benefiting from it, even as others are being harmed by it.
Our choice is not whether this happens. Our choice is whether we will be part of making it work, or whether we will watch from the sidelines as it unfolds without us. This is a pivotal moment of choice for many, with their place in the healthcare influence stack at stake. But let us be clear: that choice is not "to AI or not to AI", however often it is framed that way.
I know which choice serves patients better.
This perspective does not dismiss the harms documented in recent reporting, nor the serious risks of unmediated AI use in health decisions. It argues that addressing those harms requires engagement and governance, not retreat, and that the ethical case for helping millions without access to care use these tools wisely is simply too strong to ignore.
