AI Boosts Emergency Medical Decisions with Cautious Trust

I’m thrilled to sit down with Ivan Kairatov, a biopharma expert with a profound understanding of technology and innovation in the healthcare industry. With a robust background in research and development, Ivan has been at the forefront of exploring how artificial intelligence can transform emergency medical settings. Today, we’ll dive into his insights on the integration of AI tools in high-stakes environments, the challenges of gaining trust from medical professionals, and the future potential of these technologies in saving lives.

Can you share the primary focus of your work with AI in emergency medical settings, particularly in high-pressure scenarios like trauma care?

Absolutely, Matthias. My work has centered on leveraging AI to support clinicians in emergency situations where every second counts. The main goal is to enhance decision-making by providing real-time, actionable insights. We’ve particularly focused on trauma care, like pediatric resuscitation, because these scenarios are incredibly complex and emotionally charged. The challenge lies in processing vast amounts of data—patient vitals, injury details, and more—under extreme time constraints. AI can help synthesize this information quickly, allowing doctors to focus on critical interventions rather than getting bogged down by data overload.

What inspired your team to zero in on emergency care as the area to apply AI technology?

Emergency care is a unique beast. It’s where the stakes are highest, and the margin for error is razor-thin. We saw a gap in how technology was being applied in these dynamic settings compared to more controlled environments like diagnostics or radiology. In emergencies, especially with children, the emotional and cognitive load on providers is immense. We believed AI could act as a supportive tool, not to replace human judgment, but to augment it by filtering noise and highlighting what’s most critical. It’s about giving clinicians a clearer picture when they need it most.

How did you approach the design of an AI tool like DecAide to meet the needs of emergency medical providers?

Designing DecAide was a deeply collaborative process. We started by engaging directly with emergency care providers to understand their workflows and pain points. Through surveys and interviews, we learned what information they rely on most during resuscitations—things like vital signs, injury mechanisms, and patient history. From there, we built a prototype that presents this data in a concise, visual format, using color coding to flag abnormalities. The goal was to make the interface intuitive so it wouldn’t add cognitive burden in an already stressful situation.

Can you walk us through how you decided what specific patient data to prioritize on the display?

Sure. We prioritized data based on what providers told us was mission-critical during emergencies. Vital signs like heart rate and blood pressure are non-negotiable, so those are front and center. We also highlight any abnormalities or sudden changes with visual cues to grab attention instantly. Details like the mechanism of injury—say, a car accident versus a fall—help contextualize the situation and guide treatment paths. It was all about balancing clarity with comprehensiveness, ensuring providers could glance at the display and immediately grasp the patient’s status without digging through clutter.
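
To make that prioritization concrete, here is a minimal sketch of how a display might flag abnormal vitals with color cues. The field names, threshold values, and the flag_vital helper are hypothetical illustrations under assumed normal ranges, not DecAide's actual implementation.

```python
# Hypothetical sketch of threshold-based vital-sign flagging.
# The ranges below are illustrative placeholders, not clinical reference values.

NORMAL_RANGES = {
    "heart_rate": (60, 100),   # beats per minute
    "systolic_bp": (90, 140),  # mmHg
    "resp_rate": (12, 20),     # breaths per minute
    "spo2": (95, 100),         # percent
}

def flag_vital(name: str, value: float) -> str:
    """Return a display color: green if within range, red otherwise."""
    low, high = NORMAL_RANGES[name]
    return "green" if low <= value <= high else "red"

# Example: a hypothetical hypotensive, tachycardic patient
patient = {"heart_rate": 132, "systolic_bp": 78, "resp_rate": 24, "spo2": 91}
for name, value in patient.items():
    print(f"{name}: {value} [{flag_vital(name, value)}]")
```

The design point the sketch captures is that the comparison logic stays trivial so the provider's glance, not the software, does the interpretive work.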

What’s the difference between the two versions of DecAide you developed, and why did you create them?

We created two versions of DecAide to test different levels of AI support. The first version focuses purely on information synthesis—it pulls together key patient data and presents it in an organized way without offering any guidance. The second version goes a step further by providing specific treatment recommendations, like suggesting a blood transfusion, along with a probability of success based on risk models. We wanted to explore whether providers preferred just having the data at their fingertips or if they valued actionable advice, and also to see how each approach impacted their decision-making accuracy and trust in the system.
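
As a rough illustration of the difference between the two versions, the sketch below contrasts a synthesis-only mode with a recommendation mode that attaches a probability from a risk model. The data fields, the decision rule, and the 82% figure are all hypothetical stand-ins; a real system would call a validated risk model rather than a hard-coded threshold.

```python
from dataclasses import dataclass

@dataclass
class PatientSnapshot:
    heart_rate: int
    systolic_bp: int
    mechanism: str  # e.g. "motor vehicle collision"

def synthesize(p: PatientSnapshot) -> str:
    """Version 1: organize key patient data without offering guidance."""
    return f"HR {p.heart_rate} | SBP {p.systolic_bp} | Mechanism: {p.mechanism}"

def recommend(p: PatientSnapshot) -> str:
    """Version 2: add a treatment suggestion plus a model probability.

    The rule and probability here are placeholders for illustration only.
    """
    if p.systolic_bp < 90 and p.heart_rate > 120:
        return "Consider blood transfusion (est. success: 82%)"
    return "No recommendation"

snapshot = PatientSnapshot(heart_rate=132, systolic_bp=78,
                           mechanism="motor vehicle collision")
print(synthesize(snapshot))  # what Version 1 shows
print(recommend(snapshot))   # the extra line Version 2 adds
```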

What stood out to you from the experiment involving emergency care providers using DecAide?

The experiment was eye-opening. We tested 35 providers across various scenarios, and the results showed a clear improvement in decision accuracy when they had both AI-synthesized information and recommendations: correct decisions rose to about 64%, compared with around 56% in both the no-support and information-only conditions. What struck me was the diversity in how providers interacted with the tool. Some embraced the recommendations as a helpful second opinion, while others were more cautious, often waiting to make their own call before even glancing at the AI’s suggestion. It highlighted how personal and situational trust in technology can be.

Why do you think some providers were hesitant about following AI recommendations in these scenarios?

Hesitancy often stemmed from a fear of losing autonomy. Emergency providers are trained to rely on their instincts and experience, especially in chaotic moments, so having an AI suggest a course of action can feel like an intrusion. Some worried it might bias their thinking or lead to over-reliance. Others felt the recommendations lacked transparency—they wanted to know the data or logic behind the suggestion. It’s a valid concern; without understanding the ‘why,’ it’s hard to fully trust a machine’s advice in a life-or-death situation.

Did the AI tool affect the speed of decision-making among the providers, or was that largely unchanged?

Interestingly, the speed of decision-making remained pretty consistent across all conditions—whether they had no AI support, just synthesized information, or full recommendations. This suggests that DecAide didn’t slow them down, which is crucial in emergencies. In fact, many providers made their decisions before the AI recommendation even appeared on the display, indicating they were using the tool as a confirmation rather than a primary guide. It’s a promising sign that the technology can integrate into their workflow without disrupting the pace.

When some participants ignored the AI recommendations, what factors do you think played a role in that behavior?

Trust was a big factor. If providers didn’t fully understand how the AI arrived at its recommendation or if it felt too generic, they were more likely to disregard it. Some also mentioned that the recommendations sometimes lacked the nuance their clinical judgment provided—like accounting for subtle patient cues that a system might miss. It’s a reminder that AI in these settings needs to be transparent and tailored. Without showing the underlying data or reasoning, it’s tough to win over professionals who’ve spent years honing their expertise.

Looking ahead, what is your forecast for the role of AI in emergency medical settings over the next decade?

I’m optimistic but cautious. Over the next decade, I foresee AI becoming a more integral part of emergency care, especially as systems get better at explaining their reasoning and adapting to individual provider preferences. We’ll likely see AI tools evolve to handle more complex scenarios and integrate seamlessly with electronic health records for real-time, personalized insights. However, the human element will remain central—trust and collaboration between clinicians and technology will be key. If we can address concerns around transparency and autonomy through better design and training, AI could truly revolutionize how we save lives in critical moments.
