This study could change how every frontline clinic in the world delivers care. Penda Health and OpenAI revealed that an AI tool called AI Consult, embedded into real clinical workflows in Kenya, reduced diagnostic errors by 16% and treatment errors by 13% across nearly 40,000 live patient visits. This is what it looks like when AI becomes a real partner in care. The clinical error rate went down and clinician confidence went up.
🤨 But this isn't just about numbers. It's a rare glimpse into something more profound: what happens when technology meets clinicians where they are, and earns their trust.
🦺 Clinicians described AI Consult not as a replacement but as a safety net. It didn't constantly demand attention. It didn't override judgment. It whispered, quietly highlighting when something was off, offering feedback, improving outcomes. And over time, clinicians adapted: they made fewer mistakes even before the AI intervened.
🚦 The tool was designed to be not just intelligent but invisible when appropriate, and loud only when necessary. A red-yellow-green interface kept autonomy in the hands of the clinician, surfacing insights only when care quality or safety was at risk.
📈 Perhaps most strikingly, the tool seemed to be teaching, not just flagging. As clinicians engaged, they internalized better practices. The "red alert" rate dropped by 10%, not because the AI got quieter, but because the humans got better.
🗣️ This study invites us to reconsider how we define "care transformation." It's not just about algorithms being smarter than us. It's about designing systems humble enough to support us, and wise enough to know when to speak.
🤫 The future of medicine might not be dramatic robot takeovers or AI doctors. It might be this: thousands of quiet, careful nudges. A collective step away from the status quo, toward fewer errors, more reflection, and ultimately more trust in both our tools and ourselves.
#AIinHealthcare #PrimaryCare #CareTransformation #ClinicalDecisionSupport #HealthTech #LLM #DigitalHealth #PendaHealth #OpenAI #PatientSafety
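The red-yellow-green tiering described in the post can be sketched in a few lines. This is a hedged illustration, assuming a hypothetical upstream risk score in [0, 1]; the thresholds and messages are invented for the example and are not Penda Health's actual configuration.

```python
# A minimal sketch of a red-yellow-green alert tier, assuming a hypothetical
# upstream safety model that returns a risk score in [0, 1]. Thresholds are
# illustrative only, not Penda Health's actual configuration.
def triage_alert(risk_score: float):
    """Map a risk score to an alert tier; stay silent on green."""
    if risk_score >= 0.8:
        return "red", "Stop: potential patient-safety issue. Review before proceeding."
    if risk_score >= 0.4:
        return "yellow", "Heads-up: consider revisiting this plan."
    return "green", None  # no interruption: the clinician keeps full autonomy

# Most visits surface nothing at all, so the tool stays invisible.
tier, message = triage_alert(0.25)  # -> ("green", None)
```

The design choice this encodes is the one the post praises: silence is the default, and only the highest tier interrupts the clinician.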
Using AI as a Reviewer in Clinical Workflows
Summary
Using AI as a reviewer in clinical workflows means integrating artificial intelligence systems to assist clinicians by checking, verifying, or drafting notes, diagnoses, and reports, while always keeping human oversight. This technology acts as a partner, aiming to reduce errors, speed up processes, and support decision-making without replacing professional judgment.
- Collaborate with AI: Let AI systems flag inconsistencies or suggest improvements while maintaining the final say on patient care decisions.
- Streamline documentation: Use AI tools to generate drafts or preliminary reports, freeing up time for clinicians to focus on patient interaction.
- Validate and verify: Always review AI-generated outputs for context and accuracy, ensuring that technology augments your expertise rather than overriding it.
Clinicians working collaboratively with a custom GPT diagnostic assistant outperformed those using traditional resources, highlighting the power of AI–human synthesis for complex diagnostic reasoning.
1️⃣ A randomized trial with 70 US physicians tested two workflows using a GPT-based assistant: AI-first (the AI suggests before the clinician) and AI-second (the clinician suggests before the AI).
2️⃣ Both AI workflows significantly outperformed conventional tools: 85% (AI-first) and 82% (AI-second) vs. 75% with traditional resources.
3️⃣ Diagnostic accuracy gains were strongest in "clinically actionable decisions" (final diagnosis and next steps), especially when the AI went first (8.9% higher than AI-second).
4️⃣ AI support narrowed the spread of low scores, helping raise the floor of performance without sacrificing clinician autonomy.
5️⃣ When clinicians went first, the AI sometimes anchored its "independent" response to their input despite explicit instructions not to, suggesting the model was influenced by human suggestions.
6️⃣ Clinicians were more likely to anthropomorphize and interact conversationally with the AI when it came second, indicating that workflow order influences trust and engagement.
7️⃣ Using AI first was slightly faster per case (by ~1.5 minutes), though not significantly so unless strict protocol adherence was enforced.
8️⃣ Despite high overall performance, the AI's recommendations occasionally worsened clinician answers (~8% of cases), pointing to real-world safety considerations.
9️⃣ Post-study, clinician willingness to use AI in complex reasoning increased significantly, with nearly all participants reporting high satisfaction and perceived value.
🔟 The study shows that workflow design, not just model capability, is crucial to unlocking AI's potential in diagnostic support.
✍🏻 Selin Everett, Bryan Bunning, Priyank Jain, Ivan Lopez, Anup Agarwal, Manisha Desai, Robert Gallo, Ethan Goh, MD, Vinay Kadiyala, Zahir Kanjee, Jacob Koshy, Andrew Olson, Adam Rodman, Kevin Schulman, Eric Strong, Jonathan Chen, Eric Horvitz. From Tool to Teammate: A Randomized Controlled Trial of Clinician-AI Collaborative Workflows for Diagnosis. medRxiv. 2025. DOI: 10.1101/2025.06.07.25329176
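The two orderings compared in the trial can be sketched as a workflow skeleton. `ask_model` below is a stub standing in for the GPT-based assistant call (the study used a real model); only the ordering of turns is the point of the sketch.

```python
# Illustrative skeleton of the two clinician-AI workflow orderings from the
# trial. ask_model is a stand-in for an LLM call; the trial observed that the
# model sometimes anchored on a clinician's prior answer when one was supplied.
def ask_model(case: str, prior=None) -> str:
    return f"AI differential for: {case}"

def ai_first(case: str, clinician_answer) -> dict:
    ai = ask_model(case)                     # AI suggests first
    final = clinician_answer(case, seen=ai)  # clinician finalizes with AI in view
    return {"order": "AI-first", "ai": ai, "final": final}

def ai_second(case: str, clinician_answer) -> dict:
    draft = clinician_answer(case, seen=None)  # clinician commits first
    ai = ask_model(case, prior=draft)          # AI responds afterwards
    return {"order": "AI-second", "ai": ai, "final": draft}
```

In AI-second, the clinician's draft is already committed before the model speaks, which is the property the anchoring finding (point 5️⃣ above) calls into question on the model's side, not the clinician's.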
-
AI wrote an X-ray report at AIIMS, but here's the real story.
A viral image of an AI-generated chest X-ray report from AIIMS is making the rounds on social media today. It's a significant milestone for medical technology and a reminder of how far AI has come in supporting clinical workflows.
AI can genuinely add value:
- Faster preliminary reads
- Standardized reporting
- Triage support in high-volume settings
- Improved access in resource-limited areas
But it's equally important to note what the report itself emphasizes: AI findings are preliminary and must be clinically correlated. Radiologists bring context, pattern recognition, and decision-making that no algorithm can replace. They integrate patient history, nuance, and complex findings into a final diagnosis and treatment plan.
AI is a powerful assistant, not a standalone diagnostician. The real future of radiology isn't "AI vs. Radiologist." It's AI + Radiologist working together to deliver safer, faster, and smarter patient care.
-
A recent study in NEJM AI (https://lnkd.in/eedJrt6N) shows that clinical text generated by LLMs can be reliably verified using another AI, i.e., LLM-as-a-Judge (a second LLM that checks whether statements generated by the first LLM are true).
👉🏼 In this study:
→ One model generates the clinical note
→ Another model assesses whether each statement is supported / not supported / not addressed, based on retrieved EHR evidence
→ Inconsistencies and hallucinations are automatically flagged
👉🏼 Key findings:
→ Agreement with clinicians reached 93%
→ Higher consistency than clinician–clinician review (88%)
→ Best performance when grounded in real patient data
💡 My key insight: This study challenges the simplistic idea that "human-in-the-loop" automatically equals #safety. For certain verification tasks, a well-designed AI system can be more consistent than humans, while clinicians remain responsible for interpretation, context, and final accountability.
This is where the real discussion should take place: not whether humans should remain involved, but where human judgment truly adds value, and where transparent #AI-design and structured #AI-oversight may actually be safer. That conversation is essential for the responsible deployment of ambient AI, clinical documentation, and decision support, and should be central to how we operationalize AI governance in healthcare.
I guess there is much to learn and discuss...
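The generate-then-judge loop can be made concrete with a small sketch. The study used a second LLM as the judge; the keyword-overlap `judge_statement` below is a deliberately crude offline stand-in, keeping only the three-label scheme (supported / not supported / not addressed) from the paper.

```python
# Hedged sketch of an LLM-as-a-Judge verification loop. A real system would
# call a second LLM here; this keyword-overlap judge is a crude stand-in so
# the example runs offline. Only the three-label scheme mirrors the study.
def judge_statement(statement: str, evidence: list) -> str:
    """Label one generated statement against retrieved EHR evidence."""
    terms = set(statement.lower().split())
    best = max((len(terms & set(s.lower().split())) for s in evidence), default=0)
    if best >= 3:
        return "supported"       # evidence substantially backs the claim
    if best >= 1:
        return "not_supported"   # topic appears, but the claim isn't backed
    return "not_addressed"       # nothing in the record touches this

def verify_note(statements: list, evidence: list):
    """Flag every statement the judge cannot ground in the record."""
    labels = {s: judge_statement(s, evidence) for s in statements}
    flagged = [s for s, label in labels.items() if label != "supported"]
    return labels, flagged
```

The structural point survives the crude judge: every statement in the draft is checked against retrieved evidence, and anything not fully supported is surfaced for human review rather than silently accepted.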
-
Week 9: Generative AI for Clinical Notes
Last week, we talked about gathering data from wearables and RPM. This week, we're talking about documentation. Clinical documentation is essential but often burdensome, and it is one of the most common causes of burnout. Generative AI, particularly Large Language Models (LLMs), is transforming how we draft notes, helping clinicians document more efficiently and accurately.
Key Definitions
- Generative AI / LLMs: Deep learning models trained on massive text datasets, capable of producing coherent, context-aware narratives. In clinical settings, they can generate structured sections like HPI, ROS, or A&P from input prompts or audio.
- Ambient AI Scribes: Systems that "listen" to physician–patient encounters and generate draft notes automatically (e.g., Microsoft's Dragon Copilot, Abridge, Heidi Health) to streamline documentation.
Healthcare Example
A comparative study at Oregon Health & Science University (OHSU) evaluated ChatGPT‑4 in generating structured medical notes from recorded physician–patient dialogues. It demonstrated the feasibility (although not completely error-free) of using GPT‑4 to produce accurate, structured notes, validating generative AI's role as an assistive tool in real-world clinical documentation workflows. https://lnkd.in/e2VdMefU
Key Takeaway
Generative AI is reshaping clinical documentation, not by replacing providers, but by acting as an assistant in the note-writing process. Tools like ChatGPT-4 can draft SOAP notes or summaries from audio, freeing clinicians to spend less time typing and more time thinking. The goal isn't perfection; it's faster, safer workflows with human review built in. Ambient AI scribes are here and already in use, and we have to learn how to incorporate them into our workflows.
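The "human review built in" point can be sketched as a tiny draft-then-sign-off flow. `draft_note` below stubs a hypothetical scribe/LLM call (not any vendor's real API); the only real logic is that nothing is final until the clinician signs off.

```python
# Small sketch of draft-then-review documentation. draft_note stubs a
# hypothetical ambient-scribe call; sign_off is the human gate that makes
# the note final. Not modeled on any vendor's actual API.
from dataclasses import dataclass

@dataclass
class DraftNote:
    subjective: str = ""
    objective: str = ""
    assessment: str = ""
    plan: str = ""
    signed_off: bool = False

def draft_note(transcript: str) -> DraftNote:
    # Stand-in for a model structuring a SOAP note from an encounter transcript.
    return DraftNote(subjective=transcript)

def sign_off(note: DraftNote, clinician_edits: dict) -> DraftNote:
    # Human review is the gate: the clinician edits each section, then signs.
    for section, text in clinician_edits.items():
        setattr(note, section, text)
    note.signed_off = True
    return note
```

In a real workflow the `signed_off` flag is what separates an assistive draft from a chart entry, which is exactly the boundary the post argues we need to keep.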