The AI-doctor dilemma: who do you trust when your GP and ChatGPT disagree?

Do people in the UK trust AI over their GP?

Imagine this: you’ve got worrying symptoms, so you ask AI for advice. It suggests one thing. Your GP says something completely different. Who do you trust?

If you’re scratching your head, you’re not alone. In a new survey of 940 Medichecks customers, 41.7% said they wouldn’t know who to believe if AI and their doctor disagreed.

And that’s not all. More than half of people surveyed (55.7%) have already turned to AI for medical advice, and a striking 95.7% said they’d do so again. Perhaps most surprisingly, nearly a quarter (23.8%) would actually follow AI’s advice over their doctor’s. One respondent put it bluntly: “I would 100% trust AI over doctors.”

Is AI quietly becoming the new first stop for health advice?
 

The five-year countdown


When asked who’ll make more accurate health predictions by 2030, over half of people surveyed (51.7%) backed AI over human doctors. Only 19.1% still put their faith in flesh-and-blood physicians.

That confidence in AI doesn’t come from nowhere. Pooled analyses show that today’s generative AI models already perform at a similar level to non-expert doctors, averaging around 52% diagnostic accuracy (and rising) [1]. Meanwhile, Microsoft’s AI Diagnostic Orchestrator reports accuracy rates of up to 85% on complex case studies, outperforming physicians working without decision-support tools [2].

Even so, most people still see AI as something that supports doctors rather than replaces them. Diagnosis is only one part of medicine. Clinicians navigate uncertainty, hold difficult conversations, interpret context, manage risk, and respond to the emotional realities of illness. These are all tasks that go far beyond pattern recognition.

For now, physicians remain the lead decision-makers. But how long until AI edges closer to error-free diagnosis? If AI capability keeps doubling every six to 12 months, as some benchmarks suggest it has, 2030 starts to look less like a bold prediction and more like a conservative one.
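To make the arithmetic behind that claim concrete, here is a quick illustrative calculation. The doubling rate itself is an assumption, not an established fact; the sketch only shows how fast such compounding runs over five years:

```python
# Illustrative only: how a hypothetical capability metric compounds
# if it doubles every 6 or every 12 months over five years (2025-2030).
def fold_increase(years: float, doubling_months: float) -> float:
    """Multiplicative increase after `years`, given one doubling per `doubling_months`."""
    doublings = years * 12 / doubling_months
    return 2 ** doublings

print(fold_increase(5, 12))  # 32.0   -> five doublings
print(fold_increase(5, 6))   # 1024.0 -> ten doublings
```

Even at the slower rate, that is a 32-fold improvement by 2030, which is why the prediction reads as conservative if the trend holds.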
 

The gender and age divide


The Medichecks survey revealed a striking demographic split. Young people are leading the AI charge: nearly three-quarters (73.8%) of 25–34 year-olds have used AI for medical queries, dropping steadily to just 34.9% of people over 75. This mirrors broader research showing that more tech-savvy demographics embrace AI first, while older generations remain more cautious. Gen Z, having grown up with AI woven into everyday life, are often described as ‘AI-native’.

“We need the human element too.”

Women across all ages show more hesitation than men, a trend echoed by a large-scale study of 13,806 hospital patients [3]. In our own survey, 50.9% of women reported using AI for a medical concern, compared with 63.7% of men. But the reasons go beyond familiarity with the tool. Women were more likely to worry about healthcare becoming impersonal, to feel uneasy about the ethics of AI, and to say they would want training before using it [4]. Broader research also suggests that long-standing gender stereotypes surrounding technology shape how confidently different groups approach emerging tools [5].

As one Medichecks customer explained: “I like having the doctor give notes on all my tests. I’d prefer it if it doesn’t go to AI as it will lose its personal touch.” Another added:
“I think it [AI] might become more savvy than the mind of one GP, but there is no human connection and interaction there.”
 

AI is already here (you just don’t know it)


Many people don’t realise that AI is already contributing to decisions in healthcare. Right now, 100% of stroke units across England use AI to analyse brain scans [6], at least 54% of radiology departments deploy AI in clinical practice [7], and half of UK hospital trusts have implemented AI to help diagnose conditions like lung cancer [8].

The NHS's July 2025 10-Year Health Plan explicitly aims to make Britain ‘the most AI-enabled health system in the world’ [9]. A massive Microsoft trial involving 30,000 NHS workers demonstrated AI could save each staff member 43 minutes daily, equivalent to reclaiming five weeks per year [10].
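The “five weeks per year” figure lands in the right ballpark on rough assumptions of our own (not the trial’s): a five-day week, 46 working weeks a year, and a 37.5-hour working week:

```python
# Rough sanity check of "43 minutes/day ~ five weeks/year".
# Assumed (not from the trial): 5-day weeks, 46 working weeks/year,
# and a 37.5-hour working week.
minutes_per_day = 43
working_days_per_year = 5 * 46                       # 230 days
hours_saved = minutes_per_day * working_days_per_year / 60
weeks_saved = hours_saved / 37.5
print(round(hours_saved, 1), round(weeks_saved, 1))  # 164.8 4.4
```

On these assumptions the saving is around four and a half working weeks; with more working weeks or a shorter contracted week it rounds up to five.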

Real-world results are even more dramatic. At NHS Grampian, an AI system called Mia analysed the mammograms of over 10,000 women. It picked up every case of breast cancer that radiologists identified, plus 11 additional cases they had missed [11]. One patient, Barbara from Aberdeen, said, “My cancer was so small that the doctors said it would not have been picked up by the human eye.”

In this context, AI holds genuine potential to reduce human error and compensate for human limitations.
 

AI gets it wrong too


AI is remarkably capable, especially at specific, isolated tasks, but it’s far from flawless. And crucially, clinicians don’t automatically improve just by adding it.

In fact, one 2024 study of 50 doctors diagnosing complex cases found that adding AI barely budged their accuracy [12]. AI alone achieved 90% accuracy, doctors alone scored 74%, and doctors working with AI managed just 76%. The challenge is knowing how to integrate AI into clinical practice effectively. As the technology matures and training improves, this integration should get smoother, but right now, we’re still figuring it out.

And when AI assistance is removed, especially after doctors have grown accustomed to it, performance can drop below pre-AI baselines. A 2025 multicentre study found that endoscopists’ detection rates for pre-cancerous polyps fell by around 20% after the AI tools they’d been using were taken away, suggesting their skills had atrophied through over-reliance [13].

Then there’s the risk of errors and hallucinations. Google’s Med-Gemini analysed brain scans and identified problems in the “basilar ganglia”, a body part that doesn’t exist, likely conflating the basilar artery and the basal ganglia. Google labelled this a typo, while others called it an AI hallucination. Either way, it shows how dangerous errors can creep into a healthcare setting.

Beyond clinical errors, AI’s limitations can be dangerous in mental health contexts. Multiple teenagers have died by suicide after extensive conversations with AI chatbots, which are built to maximise engagement and validate users’ thoughts, not to provide therapeutic care [14]. These bots often lack fundamental safety guardrails and frequently fail to signpost users to crisis resources.

Like humans, AI makes mistakes. But unlike humans, AI doesn’t always know when it doesn’t know. And that uncertainty can be dangerous.
 

The regulatory scramble


Medical authorities are racing to catch up with technology already embedded in patient care. The General Medical Council (GMC) updated its guidance in 2024 to emphasise that doctors remain fully responsible for AI-assisted decisions [15]. As AI continues to develop, doctors will likely be expected to take part in AI-related education and training.

The British Medical Association (BMA) has established seven core principles for healthcare AI, including robust safety assessments, but admits there is a “paucity of robust evidence” for real-world clinical improvements [16]. Most studies evaluate AI in controlled settings rather than messy, complex patient encounters. The BMA also stresses that patients should have the right to opt out of AI involvement in their care.

An AI system is only ever as good as its training data. Algorithmic bias remains a persistent problem: models trained on skewed or unrepresentative data can amplify deeply rooted societal biases, with serious consequences for patient care.
 

So, who should you trust?


Right now, your GP. AI has no legal accountability, limited real-world testing, and can't understand your full context. It's a helpful research tool, not a replacement clinician.

But that calculation is shifting fast. If AI diagnostic accuracy continues improving at its current pace, doubling every 6–12 months, we're looking at exponential change. By 2030, AI may routinely outperform human doctors on pattern recognition, while drawing on your complete medical history in ways no individual GP ever could.

The question isn't whether AI will transform healthcare, but whether we'll build the infrastructure to use it safely. As the Health Foundation puts it, we need to "fund the change, not just the technology": training clinicians to work alongside AI, establishing clear accountability, and ensuring human judgment and interaction aren’t lost in the process.

For now, use AI to prepare better questions, then take those questions to a doctor who can examine you and take responsibility for your care.

The future of medicine isn’t human or machine, it’s both working in concert, combining algorithmic precision with something AI will never be able to replicate: a human being who shares the room with you, who can be held accountable for getting it wrong, and who understands that behind every data point is a life that matters.

 



References

  1. Takita H, Kabata D, Walston SL, Tatekawa H, Saito K, Tsujimoto Y, et al. A systematic review and meta-analysis of diagnostic performance comparison between generative AI and physicians. npj Digit Med. 2025;8: 175. doi:10.1038/s41746-025-01543-z
  2. Limb M. Microsoft claims AI tool can outperform doctors in diagnostic accuracy. BMJ. 2025;390: r1385. doi:10.1136/bmj.r1385
  3. Busch F, Hoffmann L, Xu L, Zhang LJ, Hu B, García-Juárez I, et al. Multinational Attitudes Toward AI in Health Care and Diagnostics Among Hospital Patients. JAMA Netw Open. 2025;8: e2514452. doi:10.1001/jamanetworkopen.2025.14452
  4. Global Evidence on Gender Gaps and Generative AI - Working Paper - Faculty & Research - Harvard Business School. [cited 21 Nov 2025]. Available: https://www.hbs.edu/faculty/Pages/item.aspx?num=66548
  5. Russo C, Romano L, Clemente D, Iacovone L, Gladwin TE, Panno A. Gender differences in artificial intelligence: the role of artificial intelligence anxiety. Front Psychol. 2025;16: 1559457. doi:10.3389/fpsyg.2025.1559457
  6. NHS England. How artificial intelligence is helping to speed up the diagnosis and treatment of stroke patients. 26 Sept 2024 [cited 20 Nov 2025]. Available: https://www.england.nhs.uk/blog/how-artificial-intelligence-is-helping-to-speed-up-the-diagnosis-and-treatment-of-stroke-patients/
  7. Clinical radiology workforce census 2023. Royal College of Radiologists; 2023. Available: https://www.rcr.ac.uk/media/5befglss/rcr-census-clinical-radiology-workforce-census-2023.pdf
  8. New Commission to help accelerate NHS use of AI. In: GOV.UK [Internet]. [cited 20 Nov 2025]. Available: https://www.gov.uk/government/news/new-commission-to-help-accelerate-nhs-use-of-ai
  9. 10 Year Health Plan for England: fit for the future. In: GOV.UK [Internet]. 30 July 2025 [cited 20 Nov 2025]. Available: https://www.gov.uk/government/publications/10-year-health-plan-for-england-fit-for-the-future
  10. Major NHS AI trial delivers unprecedented time and cost savings. In: GOV.UK [Internet]. [cited 21 Nov 2025]. Available: https://www.gov.uk/government/news/major-nhs-ai-trial-delivers-unprecedented-time-and-cost-savings
  11. Breast Screening & Artificial Intelligence | Research News. In: Prevent Breast Cancer Charity UK [Internet]. 21 Mar 2024 [cited 21 Nov 2025]. Available: https://preventbreastcancer.org.uk/breast-screening-and-artificial-intelligence/
  12. Goh E, Gallo R, Hom J, Strong E, Weng Y, Kerman H, et al. Large Language Model Influence on Diagnostic Reasoning: A Randomized Clinical Trial. JAMA Netw Open. 2024;7: e2440969. doi:10.1001/jamanetworkopen.2024.40969
  13. Budzyń K, Romańczyk M, Kitala D, Kołodziej P, Bugajski M, Adami HO, et al. Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study. The Lancet Gastroenterology & Hepatology. 2025;10: 896–903. doi:10.1016/S2468-1253(25)00133-5
  14. Chatterjee R. Their teenage sons died by suicide. Now, they are sounding an alarm about AI chatbots. NPR. 19 Sept 2025. Available: https://www.npr.org/sections/shots-health-news/2025/09/19/nx-s1-5545749/ai-chatbots-safety-openai-meta-characterai-teens-suicide. Accessed 3 Dec 2025.
  15. Artificial intelligence and innovative technologies. [cited 21 Nov 2025]. Available: https://www.gmc-uk.org/professional-standards/learning-materials/artificial-intelligence-and-innovative-technologies
  16. Principles for artificial intelligence (AI) and its application in healthcare. [cited 21 Nov 2025]. Available: https://www.bma.org.uk/advice-and-support/nhs-delivery-and-workforce/technology/principles-for-artificial-intelligence-ai-and-its-application-in-healthcare