UC Pioneers AI-Powered Medical Training With AMA Grant

Today we’re joined by Ivan Kairatov, a biopharma expert at the forefront of integrating technology and innovation into medical training. We’ll be exploring a groundbreaking initiative that uses ambient AI to transform medical education, moving from sporadic feedback to a model in which every clinical interaction becomes a rich learning experience. This conversation will delve into how tools like AI-powered glasses and real-time feedback are being developed to refine trainees’ clinical reasoning and communication, the ethical considerations of deploying such technology in real patient encounters, and the collaborative effort required to scale this innovation from a university setting to a large healthcare system.

Medical trainees often complete thousands of clinical hours with feedback on only a fraction of patient encounters. How will your ambient AI system turn more of these interactions into learning opportunities, and what key metrics will it track to improve clinical reasoning?

It’s a foundational problem we’re aiming to solve. Think about it: it’s like a professional athlete practicing for thousands of hours but only getting a coach’s notes on one or two sessions. It’s simply not an efficient way to achieve excellence. Our ambient AI system is designed to be that ever-present coach. By using unobtrusive sensors and devices in the clinical environment, we can capture the nuances of far more patient interactions without being intrusive. We’re moving to a model where every single patient encounter becomes a data-rich learning opportunity. The system will focus on two core pillars: clinical reasoning and communication skills. It will analyze the questions a trainee asks, the diagnostic pathways they explore, and how effectively they connect with and convey information to the patient, providing personalized feedback to refine those critical competencies.

Your team is developing AI glasses with a heads-up display and a smartphone app for feedback. Could you walk me through how a medical resident might use these tools during a patient consultation to receive real-time guidance on their communication skills and diagnostic approach?

Imagine a resident entering a patient’s room wearing what looks like a standard pair of eyeglasses. As they begin the consultation, the heads-up display could subtly project crucial information into their line of sight—perhaps key points from the patient’s history or a reminder to check for a specific symptom. The system’s ambient sensors would discreetly capture the dialogue and interaction. Later, after the encounter, the resident pulls out their smartphone. The app provides a detailed, personalized breakdown of the consultation. It might highlight that they used overly technical jargon, or it could praise them for asking a particularly insightful diagnostic question. It’s not about judgment; it’s about providing immediate, actionable data that helps them reflect and refine their approach before their very next patient encounter.
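To make the kind of post-encounter communication feedback described above concrete, here is a minimal, purely illustrative sketch: scanning a consultation transcript for technical jargon a patient may not understand. The jargon list, function name, and scoring rule are hypothetical assumptions for illustration, not the project's actual method.

```python
import re

# Hypothetical mapping of clinical jargon to plain-language alternatives.
JARGON = {
    "myocardial infarction": "heart attack",
    "hypertension": "high blood pressure",
    "edema": "swelling",
}

def flag_jargon(transcript: str) -> list[str]:
    """Return one feedback note for each jargon term found in the transcript."""
    notes = []
    for term, plain in JARGON.items():
        # Match whole terms only, case-insensitively.
        if re.search(r"\b" + re.escape(term) + r"\b", transcript, re.IGNORECASE):
            notes.append(f"Consider saying '{plain}' instead of '{term}'.")
    return notes

notes = flag_jargon(
    "You may have had a myocardial infarction; we'll monitor for edema."
)
```

In a real system this kind of check would be one small signal among many, alongside analysis of diagnostic questioning and patient rapport, and would rely on far more sophisticated language models than a keyword list.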

The 2-Sigma AI platform will be extended from simulations to authentic patient encounters. What are the greatest technical and ethical challenges in capturing this real-world data, and what specific steps are you taking to ensure patient privacy is protected throughout the process?

This is, without a doubt, the most critical part of the project. Moving from a controlled simulation to a live, authentic patient encounter introduces immense responsibility. The greatest technical challenge is ensuring the AI can accurately capture and interpret the complexities of a real clinical setting, with all its background noise and unpredictable variables. Ethically, the paramount challenge is protecting patient privacy. We cannot and will not compromise on this. Every step is being built with a privacy-first mindset. This involves rigorous data anonymization protocols, ensuring no personally identifiable information is stored, and implementing robust, encrypted systems. Furthermore, obtaining informed consent from patients will be a transparent and meticulous process, ensuring they understand exactly what data is being captured and how it is being used solely for the purpose of training better physicians.

With plans to test this technology on 600 trainees, what are the key milestones for this four-year project? Could you describe what a successful transition from a simulated environment to daily use with real patients will look like for a trainee?

This is a four-year, multi-stage journey. Our first major milestone is refining the AI algorithms and the feedback delivery systems—the glasses and the smartphone app—within our 2-Sigma AI simulated environment. Once we’ve validated the technology there, we will begin a phased rollout with the 600 trainees across our two test sites, starting with low-stakes clinical scenarios. A successful transition will be almost invisible to the trainee in terms of workflow disruption. The technology should feel like an extension of their own learning process, not an obstacle. Success looks like a resident finishing a patient visit, glancing at their phone for feedback, and having an ‘aha’ moment about their communication style or diagnostic line of questioning—a moment that directly and positively impacts how they care for their next patient.

Collaborating with Arizona State University and HonorHealth is a key part of this initiative. How will this partnership help scale the project from a university setting to a large healthcare system, and what unique insights do you expect each partner to contribute?

This collaboration is essential for taking the project from a promising concept to a scalable, real-world solution. The University of Cincinnati brings the core AI development and educational informatics expertise. Arizona State University contributes its strengths in medical engineering, helping us refine the hardware and the user interface of the technology itself. HonorHealth, as a large healthcare system, provides the crucial clinical environment to test and validate this technology at scale. They give us direct access to the realities of daily medical practice, ensuring that what we build is not just innovative in a lab but practical and effective on the hospital floor. This synergy is what will allow us to create a truly impactful model for precision medical education.

What is your forecast for precision medical education?

I believe we are at the very beginning of a transformation as profound as the one data analytics brought to professional sports. Precision medical education will move us away from a one-size-fits-all curriculum to a deeply personalized journey for every single trainee. We will be able to identify a resident’s specific strengths and weaknesses in areas like diagnostic reasoning or patient empathy with incredible accuracy and provide targeted interventions to help them improve. The future physician will be trained in a continuous feedback loop, where technology augments human mentorship, not replaces it. This will ultimately lead to a more competent, confident, and compassionate physician workforce, fully equipped to deliver the highest quality of care to every patient.
