🩺 📰 18/8/25: AI in Healthcare This Week

AI in healthcare presents some of the greatest potential for impact, but also some of the highest risks. On one side, we’re seeing breakthroughs in admin efficiency, global health, and even novel approaches to drug discovery and diagnosis. On the other, the same opportunities bring high-stakes safety risks, scepticism, the danger of skill erosion, and overhyped promises. This week, I’m rounding up the most important news at the intersection of healthcare and AI.
🏥 AI in Hospital Discharges
Both sides of the Atlantic are trialling AI to free up beds and reduce paperwork.
- West Tennessee Healthcare just announced a pilot of Dragonfly Navigate, a new Xsolis product designed to help case managers coordinate discharges faster and cut unnecessary hospital days (Tennessee Lookout). The tool surfaces real-time data on when patients should be discharged and where they should go next for their care.
- In London, Chelsea and Westminster NHS Trust is testing a human-in-the-loop AI tool that auto-generates discharge summaries from patient records, part of a wider NHS plan to reduce admin load and speed up patient flow (The Guardian). The promise: hours shaved off discharge delays and beds freed up sooner.
✍️ The AI Scribe Gold Rush
Ambience Healthcare just raised $243 million from investors including a16z and the OpenAI Startup Fund to scale its ambient scribe tech (more info here).
The admin grind in healthcare is a widespread problem: endless note-taking, coding, and documentation that lead to delays in care and clinician burnout. AI scribes promise to solve this by summarising consultations and updating EHRs automatically.
But competition is heating up, with Tandem, Nabla, Abridge, and even Epic moving fast on the same problem. This is a crowded space, and as with most things in healthcare - trust, compliance, accuracy, and outcomes will decide who wins.
🧑‍⚕️ When AI Makes Doctors Worse
Not all progress is positive. A new study in The Lancet Gastroenterology & Hepatology (reported in TIME) shows that gastroenterologists who regularly used AI support during colonoscopies became worse at spotting cancer without it. After only three months of routine AI use, detection rates dropped by about 6% when the AI was removed.
Whilst critics have warned against drawing conclusions from a single study, this is the first evidence that de-skilling is a real risk in medicine, and that over-reliance on the technology could "dull human pattern recognition" over time. The question is: do we really want our doctors so fundamentally reliant on a technology that they can't make life-changing decisions without it?
🧃 Precision Nutrition for Maternal Health
On a brighter note, researchers are using AI-driven precision nutrition to boost maternal and child health in underserved regions (Bioengineer.org).
It's an exciting step toward reducing health disparities in low-income settings: personalised interventions that draw on genetics, microbiome data, local health trends, and socio-economic context.
👩‍⚕️ Patients Arriving with AI Diagnoses
Doctors are reporting a new dynamic: patients showing up with AI-generated test interpretations, prescriptions, or self-diagnoses (Economic Times).
This shifts the power balance in the consultation room, adding pressure on clinicians and risking an erosion of trust in healthcare professionals. It's a new kind of "second opinion" that doctors are having to navigate. Whilst AI can be a useful source of information, it cannot yet reliably diagnose or treat, given the risks of hallucinations and dangerous advice.
⚖️ The Blockers Still Standing
- Adoption remains slow: Whilst the use of AI in healthcare is growing, surveys suggest fewer than a third of health systems have embedded AI into daily workflows. Regulatory complexity, staff training, and leadership buy-in are among the key hurdles.
- Safety and oversight: Watchdogs like ECRI flagged unmanaged AI as one of the top health-tech hazards of 2025.
- Regulation catching up: Under the EU AI Act, most healthcare AI is classed as "high-risk", which means stricter testing, traceability, and reporting requirements.
Healthcare shows AI’s double edge more starkly than any other sector. On one side, we’re starting to see tools that genuinely reduce costs, free up staff, and even improve global health equity. On the other, structural blockers remain - regulation, adoption gaps, and the uncomfortable fact that AI can de-skill doctors if used carelessly.
For founders innovating in the space, the opportunity for impact is real, but the bar is high. Success will come from tackling unglamorous but critical pain points like discharge delays or note-taking, while building trust and compliance in from the ground up.