A pediatrician opens a child’s electronic chart and scans years of well-visit notes, immunization stamps, brief school concerns, and sleep complaints. In the space between those ordinary lines, an algorithm quietly surfaces a pattern suggesting the child’s risk for attention-deficit/hyperactivity disorder is already elevated. The finding does not deliver a diagnosis; it spotlights a trajectory that might otherwise be missed in 15-minute appointments and crowded schedules.
Nut Graph: Why This Story Matters
Attention problems often appear long before formal evaluation, yet many children wait years for a diagnosis that unlocks supports at school and home. Delays can impair academic progress, strain family dynamics, and compound mental health risks. At the same time, primary care is stretched thin, and subtle signals scattered across time are easy to overlook.
A team at Duke trained a specialized artificial intelligence model to read what clinicians already document: routine electronic health records from early life through kindergarten. By age five, the model estimated later ADHD risk with strong accuracy and did so consistently across sex, race, ethnicity, and insurance status—an uncommon result in predictive tools. The promise is pragmatic: timely prompts that help clinicians steer families toward monitoring, standardized screening, and, when warranted, earlier referral.
Inside the Study: How the Signals Add Up
Researchers analyzed longitudinal EHRs from more than 140,000 children, focusing on everyday data such as developmental screens, clinic visits, behavioral notes, prescriptions, and referrals. Instead of zeroing in on a single red flag, the model searched for sequences—when concerns appeared, how they clustered, and whether they recurred alongside sleep or learning notes.
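The study does not publish its code here, but the core idea of turning scattered chart entries into ordered, per-child event sequences can be sketched briefly. The snippet below is a minimal illustration under assumed details: the event categories, the age-five cutoff, and the clustering window are hypothetical choices, not specifics from the Duke model.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class EHREvent:
    child_id: str
    age_months: int   # child's age when the entry was recorded
    category: str     # e.g. "developmental_screen", "behavior_note", "sleep_note"

def build_sequences(events, max_age_months=60):
    """Group each child's records into a time-ordered sequence through age five."""
    sequences = defaultdict(list)
    for e in events:
        if e.age_months <= max_age_months:
            sequences[e.child_id].append(e)
    for child_id in sequences:
        sequences[child_id].sort(key=lambda e: e.age_months)
    return sequences

def sequence_features(seq):
    """Illustrative features: how often concerns appear, whether they recur,
    and whether behavior and sleep notes cluster within the same six months."""
    counts = defaultdict(int)
    for e in seq:
        counts[e.category] += 1
    behavior_ages = [e.age_months for e in seq if e.category == "behavior_note"]
    sleep_ages = [e.age_months for e in seq if e.category == "sleep_note"]
    co_occurring = any(abs(b - s) <= 6 for b in behavior_ages for s in sleep_ages)
    return {
        "n_behavior_notes": counts["behavior_note"],
        "behavior_recurred": counts["behavior_note"] >= 2,
        "behavior_and_sleep_within_6mo": co_occurring,
    }

# Hypothetical usage: per-child features like these could feed any classifier;
# the actual study trained a specialized model on far richer longitudinal data.
events = [
    EHREvent("c1", 30, "behavior_note"),
    EHREvent("c1", 33, "sleep_note"),
    EHREvent("c1", 48, "behavior_note"),
]
print(sequence_features(build_sequences(events)["c1"]))
```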
Elliot Hill, the study’s lead author, described the approach plainly: “We asked the model to learn the story that routine care is already telling, just scattered across years of entries.” Senior author Matthew Engelhard underscored the boundary line: “This is decision support, not diagnosis. A risk score should prompt a thoughtful conversation, not a label.”
Voices From Clinic and Lab: Trust, Equity, and Use
In validation tests, the model’s risk estimates for five-year-olds showed strong predictive performance and held steady across demographic groups. That consistency matters, said coauthor Geraldine Dawson: “Many algorithms work on average and fail at the margins. We monitored subgroup performance from the start and required stability.”
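One way to make that kind of subgroup audit concrete is to compute the same discrimination metric for each group and flag any group that drifts too far from the overall figure. The sketch below is illustrative only: the group labels, tolerance band, and toy data are assumptions, not the study’s evaluation code.

```python
from itertools import product

def auroc(scores, labels):
    """Pairwise AUROC estimate: probability a random positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return float("nan")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

def subgroup_audit(scores, labels, groups, tolerance=0.05):
    """Compare each subgroup's AUROC to the overall AUROC and flag large gaps."""
    overall = auroc(scores, labels)
    report = {"overall": overall}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        g_auc = auroc([scores[i] for i in idx], [labels[i] for i in idx])
        report[g] = {"auroc": g_auc, "flagged": abs(g_auc - overall) > tolerance}
    return report

# Hypothetical example: toy risk scores split across two insurance groups.
scores = [0.9, 0.2, 0.7, 0.4, 0.8, 0.3]
labels = [1, 0, 1, 0, 1, 0]
groups = ["public", "public", "public", "private", "private", "private"]
print(subgroup_audit(scores, labels, groups))
```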
Clinicians sketched practical scenarios. A family doctor in the study’s network recalled an EHR prompt tied to repeated fidgeting concerns and sleep disruption: “The alert nudged me to use a structured screener right then. The referral moved faster, and the school team put supports in place within weeks.” Caregivers noted similar benefits. One parent said, “Having something concrete to discuss made it easier to get help without waiting for a crisis.”
Ethics advisors on the project emphasized oversight. As Naomi Davis explained, “Models drift, populations change, and small biases snowball. Continuous auditing, transparent reporting, and clear next-step pathways are table stakes.” The group linked this work to earlier efforts using real-world data for adolescent mental health risks, arguing that proactive, auditable tools can widen—not narrow—access to care.
What Comes Next: From Pilot to Practice
The team laid out measured next steps: embed risk prompts into well-visit workflows at or before age five, pair every alert with standardized screening and referral options, and document clinician reasoning to keep judgment at the center. Health systems planned governance dashboards to watch calibration and subgroup performance over time, and to align prompts with care coordination, school liaisons, and behavioral health.
Policy and research agendas pointed to multi-site trials, clear consent language, privacy safeguards, and reimbursement models that recognize earlier assessment and evidence-based interventions. If technology, training, and trust advance in tandem, the quiet clues already living in EHRs could guide more children to timely evaluation and support when it matters most.
