The AI Accountability Paradox

by Andrew Vincent | Feb 12, 2026

Is the AI Governance Debate Hyper-urgent or Hypocritical?

I care deeply about governance. I have spent my entire working life within regulated healthcare and life science environments, and I have been consistently vocal about the need for robust governance frameworks and principles. Governance is not bureaucracy. It is how we keep people safe, and honest. So when I hear clinicians and regulators raising concerns about patients using ChatGPT for medical advice, I am instinctively sympathetic. The argument is compelling: doctors are accountable for what they tell you; AI is not. That asymmetry is real, and it matters.

But I am also troubled, because something about it doesn't quite hold up to scrutiny. Not the principle, but the consistency. The more I examine it, the more I find myself asking an uncomfortable question: if unaccountable health information is the problem, why are we only talking about AI?

The World We Already Live In

Patients have been making health decisions based on unregulated, unaccountable sources of information for as long as modern medicine has existed. A newspaper health column that cherry-picks a single study and ignores three contradicting meta-analyses. A wellness influencer with two million followers telling people to stop their statins. A Google search that leads to a forum where strangers swap anecdotal treatment advice.

None of these sources carries professional registration. None owes a duty of care. None is subject to clinical governance. The Advertising Standards Authority (ASA) in the UK can investigate a misleading advert. The Federal Trade Commission (FTC) in the US can pursue deceptive health marketing. Health Canada can act on false product claims. But the sanctions are administrative, not clinical. Nobody gets struck off. Nobody faces a negligence suit.

I am not saying these sources are equivalent to a doctor. Obviously they are not. But here is my point: if the concern is that patients are receiving health information from sources that are not governed, not accountable, and not required to disclose risk, then that concern existed long before ChatGPT. We just weren't this agitated about it.

So why now? Why AI specifically? And is the intensity of this reaction really about governance at all, or something deeper?

Is This Really About Governance?

I think there are legitimate governance arguments for treating AI differently, and I will come to those. But I want to pause on a prior question, because I think it matters.

When an influencer with no medical qualification tells millions of people to abandon a prescribed treatment, the professional response is typically a weary sigh, perhaps a pointed tweet, occasionally an ASA complaint. When a newspaper publishes a sensationalised health story that distorts the evidence, we roll our eyes and move on. We do not convene panels. We do not call for legislative intervention. We do not describe it as a crisis, and certainly not a major one.

But when a patient uses ChatGPT to ask about their symptoms, or, perhaps more accurately, when patients in their millions do so, the reaction is qualitatively different. The language is urgent. The framing is existential. And I find myself wondering whether what we are really responding to is not the governance gap, but the disruption.

AI is not just another information source. It is a technology that can synthesise a medical evidence base no individual clinician could read in a lifetime and deliver it, conversationally, to anyone with a phone. That is an extraordinary capability, and it is reasonable to find it unsettling. It challenges assumptions about who holds medical knowledge, who mediates access to it, and what the role of the clinician looks like in a world where patients can interrogate the evidence base directly.

I am not suggesting that anyone raising governance concerns is acting in bad faith, or that the discussion is not legitimate. But I do think we owe it to ourselves to ask honestly whether the governance argument, correct in principle, is being applied with a selectivity that reveals something else underneath. If it is really about unaccountable health information, we should be equally alarmed about influencers and newspaper health pages. The fact that we are not suggests that something beyond governance is driving the conversation. Whether that something is fear of disruption, anxiety about professional relevance, or simply the human instinct to resist what we do not yet understand, it is worth naming. Because if the concern is not applied evenly, we have to ask why.

Testing the Four Arguments

Four arguments are commonly made for treating AI as categorically different, and each deserves scrutiny.

The first is authority: AI sounds confident and authoritative, so patients will trust it too much. Fair enough. But have you read a broadsheet health article recently? Or a book by a doctor-turned-wellness-guru? The authority those formats carry, especially when a medical credential is attached to a commercial venture, is arguably more dangerous precisely because the trust is borrowed from the profession itself.

The second is scale: AI reaches millions, and every output is unique, making oversight impossible. True. But a single viral TikTok reaches tens of millions in a day. Scale is a feature of modern media, not a feature unique to AI.

The third is hallucination: AI can fabricate information, including invented references. This is a real technical limitation, and it concerns me. But newspapers publish inaccurate health claims regularly. Influencers distort evidence routinely. The question is whether we are concerned about inaccuracy in health information, or only about inaccuracy when AI produces it.

The fourth is the relational argument: the doctor-patient relationship carries a duty of care that AI cannot replicate. This is the strongest argument, and it is correct. But it goes to the nature of the clinical encounter, not to the nature of health information. When a patient asks a chatbot a question, they are not entering a clinical relationship. They are seeking information, in the same way they seek it from a book or a search engine.

The Part Nobody Wants to Talk About

There is another dimension to this debate, and it is the one I find most troubling in its absence. The accountability argument assumes that the patient has an alternative, that somewhere a governed, accountable clinician is available, and financially accessible, to provide the information the chatbot should not be providing.

For an extraordinary number of people, that assumption is false. And the gap is not limited to specialist referrals. A great deal of the concern about AI in healthcare relates to the kind of general advice that would typically sit within primary care: understanding symptoms, weighing up whether something needs attention, knowing what questions to ask. This is precisely the domain where access is most constrained.

In the United Kingdom, a 2024 Care Quality Commission (CQC) survey found that 59% of people reported difficulty accessing GP services. NHS England data shows that while around half of GP appointments are delivered same-day, 67 million appointments in 2024 involved a wait of two weeks or more. And when patients do get through the door, the traditional ten-minute appointment is widely acknowledged as insufficient for the complexity of what patients now present with. Beyond primary care, over 7.4 million cases sat on NHS referral-to-treatment waiting lists in 2025, with nearly 200,000 patients waiting over a year. In Canada, the Fraser Institute reported in late 2025 that the median wait from GP referral to treatment had reached 28.6 weeks. In the United States, over two-thirds of federally designated primary care shortage areas are in rural communities. The WHO estimates a global shortfall of 10 million healthcare workers by 2030.

I write this from the Cayman Islands, where I have seen first-hand what healthcare access looks like in a small island jurisdiction. The standard of care available here is high. But the reality for a vast proportion of the population is that health insurance is structured to favour specialist care, with minimal, in many cases almost non-existent, cover for primary care appointments. The very layer of healthcare where general advice, early assessment, and ongoing management sit is the layer that many residents cannot readily afford to access. Patients needing sub-specialist input frequently have to travel overseas to the US, Canada, or the UK, a financial and logistical barrier that some can absorb and many cannot. The same dynamic, in different forms, repeats across the Caribbean, across Pacific island nations, across rural communities on every continent.

And then there is the global picture. According to the WHO and the World Bank, 4.5 billion people, over half the world's population, do not have full coverage for essential health services. Nearly 2 billion face severe financial hardship from out-of-pocket healthcare costs. Meanwhile, approximately 91% of the global population owns a mobile phone. In many parts of the world, huge numbers of people have a phone in their pocket but no realistic access to a doctor.

For these patients, AI does not compete with a doctor. AI fills a void where no doctor is available. The patient in rural Manitoba waiting six months for a referral is not choosing ChatGPT over their physician. The patient in the Cayman Islands whose insurance does not cover a GP visit is not choosing AI over primary care. They are choosing it because the alternative is nothing, and whether we like to acknowledge it or not, the AI alternative comes with the backing of the world's medical evidence base, even if navigating that evidence still requires care.

And here is where the governance argument, if applied carelessly, becomes something worse than inconsistent. It becomes inequitable. To say that health information should only be trusted within the clinical encounter, while doing nothing to ensure that encounter is accessible or affordable, is to privilege those who already have a doctor and penalise those who do not. If governance is a moral imperative, and I believe it is, then so is access. You cannot champion one while ignoring the other. And if a reason for our governance concerns is the quality of advice, then we should in fact favour AI over articles, influencers and advertising.

If Not Restriction, Then What?

I want to be careful here, because the answer to a governance gap is not to shrug and accept the risk. But nor is it to regulate AI into oblivion or to treat restriction as the only tool available. There is a middle path, and I think it is more productive than either extreme.

Rather than trying to prevent patients from using AI for health information (which, practically speaking, is impossible anyway), we could focus on designing access routes that manage the potential for harm.

Consider what is possible even with today's technology. AI tools could be built with rules that require them to present risk information alongside any treatment suggestion, to explain probability in plain language, and to flag uncertainty where the evidence base is contested or incomplete. They could be designed to prompt patients with safety questions: "What other medications are you currently taking?", "Are you pregnant or breastfeeding?", "Do you have any allergies to medications?" They could be programmed to generate explicit contraindication warnings, to tell the patient not just what a treatment might help with, but the circumstances in which it must not be used.

None of this replaces clinical judgment. But it does something that a newspaper article, an influencer post, and a Google search result conspicuously do not: it builds safeguards into the information itself.
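
To make that concrete, here is a minimal, purely illustrative sketch in Python of what such a rule layer might look like. It is a sketch under stated assumptions, not a product design: the marker list, the DraftAnswer structure, and the wording are all hypothetical, and a real system would need clinical input, proper language understanding, and regulatory review rather than simple keyword checks.

    # Illustrative sketch only: a rule layer a health-information chatbot
    # could apply to a draft answer before showing it to a patient.
    # All names here (DraftAnswer, apply_safety_rules, the marker list)
    # are hypothetical, invented for this example.
    from dataclasses import dataclass, field

    # Rule 3 material: safety questions the tool prompts the patient with.
    SAFETY_QUESTIONS = [
        "What other medications are you currently taking?",
        "Are you pregnant or breastfeeding?",
        "Do you have any allergies to medications?",
    ]

    # Crude keyword markers suggesting the draft contains a treatment
    # suggestion; a real system would use something far more robust.
    TREATMENT_MARKERS = ("take ", "dose", "mg", "tablet", "medication", "treatment")

    @dataclass
    class DraftAnswer:
        text: str
        evidence_contested: bool = False  # set upstream where evidence is mixed
        contraindications: list[str] = field(default_factory=list)

    def apply_safety_rules(draft: DraftAnswer) -> str:
        """Return the draft answer with mandatory safeguards appended."""
        parts = [draft.text]

        # Rule 1: any treatment suggestion must carry risk information.
        if any(marker in draft.text.lower() for marker in TREATMENT_MARKERS):
            parts.append(
                "Risk note: all treatments carry potential side effects. "
                "Check the patient information leaflet and speak to a "
                "pharmacist or clinician before starting anything new."
            )
            # Rule 2: explicit contraindication warnings, not just benefits.
            for item in draft.contraindications:
                parts.append(f"Do NOT use this if: {item}")
            # Rule 3: prompt the patient with safety questions.
            parts.append("Before acting, please consider:")
            parts.extend(f"- {q}" for q in SAFETY_QUESTIONS)

        # Rule 4: flag uncertainty where the evidence base is contested.
        if draft.evidence_contested:
            parts.append(
                "Uncertainty note: the evidence on this topic is mixed or "
                "incomplete; treat this answer as a starting point, not a "
                "conclusion."
            )

        return "\n\n".join(parts)

    if __name__ == "__main__":
        draft = DraftAnswer(
            text="Ibuprofen 200 mg may help with this kind of pain.",
            contraindications=[
                "you have a history of stomach ulcers",
                "you take blood-thinning medication",
            ],
        )
        print(apply_safety_rules(draft))

Crude as this sketch is, it makes the architectural point: the safeguards live in the delivery layer and are applied to every answer, which is precisely what a newspaper column or an influencer post cannot offer.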

Beyond the technology, there is the patient. We could invest in educating people on how to use AI-generated health information sensibly, just as we have (imperfectly) tried to educate the public on how to evaluate health claims in the media. Teach patients to treat AI as a starting point, not a conclusion. Teach them to verify, to cross-reference, and to bring what they find to their next clinical encounter. That is not a counsel of perfection. It is a practical acknowledgment that patients will use these tools regardless, and that equipping them to do so wisely is better than pretending they won't.

Consent, Autonomy, and What Patients Actually Do

The informed consent framework, established through cases like Montgomery in the UK, Canterbury v Spence in the US, and Reibl v Hughes in Canada, exists to protect patient autonomy within the clinical relationship. When a patient consults a chatbot, they are not in that relationship. They are doing what patients have always done: seeking information to inform their own decisions.

The counterargument is that patients may not understand AI's limitations. Perhaps. But surveys consistently show that a large proportion of the public cannot distinguish between peer-reviewed research and marketing, between a registered dietitian and a self-styled nutritionist. The information literacy gap is real, but it is not AI-specific. And if the answer is education, that answer applies to all sources, not to one.

Where I Land, I Think

I am not arguing against regulation. The EU AI Act, the evolving product liability frameworks, the questions being asked about Section 230 immunity in the US: these are legitimate and necessary developments. AI deployed as a medical device should be regulated as one. AI providers should be transparent about limitations. Product liability should apply where outputs cause harm.

But I am arguing against exceptionalism, and perhaps against a hypocritical existential framing. Against the instinct to treat AI as a unique governance crisis while remaining largely silent about the newspapers, influencers, search engines, and wellness brands that have been providing unregulated health information for decades. And I am arguing against the assumption that restriction is the primary answer, when smarter design, built-in safeguards, and patient education offer a path that is both more realistic and more equitable.

Most of all, I am arguing that any governance framework must begin by acknowledging what the current debate does not: that for millions of patients around the world, the alternative to AI is not a doctor. The alternative to AI is nothing. A governance model that ignores that reality is not protective. It is exclusionary.

And that, I would suggest, should trouble us at least as much as the chatbot.

The Author

The author, Andrew Vincent, is the founder of Optimal Healthcare Ltd, based in the Cayman Islands, Chairman of the Cayman Heart Foundation, and author of A Question of Good Health, which examines a question whose answer affects us all, every day, yet which precious few ever stop to think about: what do we really mean by health?

References

Case Law

  • Montgomery v Lanarkshire Health Board [2015] UKSC 11 (UK)
  • Canterbury v Spence 464 F.2d 772 (D.C. Cir. 1972) (US)
  • Reibl v Hughes [1980] 2 SCR 880 (Canada)

Legislation and Regulation

  • EU Artificial Intelligence Act, Regulation (EU) 2024/1689
  • EU Product Liability Directive, Directive (EU) 2024/2853
  • US Communications Decency Act, Section 230
  • US Federal Trade Commission Act, Section 5
  • Canada Food and Drugs Act
  • Cayman Islands Health Practice Act (2017 Revision)
  • EU Medical Device Regulation (MDR), Regulation (EU) 2017/745

Data Sources

  • WHO: projected global shortfall of 10 million healthcare workers by 2030
  • WHO/World Bank (2025): 4.5 billion people lack full coverage for essential health services; nearly 2 billion face severe financial hardship from healthcare costs
  • GSMA (2025): approximately 91% of the global population owns a mobile phone; 5.81 billion unique mobile subscribers
  • CQC (2024): 59% of people reported difficulty accessing GP services in the UK
  • NHS England: 67 million GP appointments in 2024 involved a wait of two weeks or more; RTT waiting list at 7.4 million cases in 2025
  • Fraser Institute (December 2025): Canadian median wait, GP referral to treatment, 28.6 weeks
  • US Rural Health Information Hub (September 2024): 66.33% of primary care Health Professional Shortage Areas (HPSAs) are in rural areas
  • FDA: over 1,000 AI/ML-enabled medical devices approved/cleared as of early 2025

Academic Sources

  • Budhu JA et al., "Health Equity Considerations in the Age of Artificial Intelligence," Neurology 105(12) (2025)
  • Osonuga A, "Bridging the digital divide: artificial intelligence as a catalyst for health equity," ScienceDirect (2025)