Chloe Botaine engages Ivan Kairatov, a biopharma expert with deep knowledge of technology and innovation in the industry, for a detailed discussion of the new AI-based system LILAC (Learning-based Inference of Longitudinal imAge Changes). Topics include how the system works, its technical underpinnings, proof-of-concept demonstrations, adaptability, broader impact, and future applications.
Can you briefly explain what LILAC (Learning-based Inference of Longitudinal imAge Changes) is and how it works? What makes LILAC different from traditional methods for analyzing longitudinal image datasets?
LILAC is an AI-based system for analyzing longitudinal image datasets, that is, series of images of the same subject taken over time. Unlike traditional methods, which typically require substantial pre-processing and study-specific customization, LILAC automatically corrects for irrelevant variation and detects the changes that matter in the images. That flexibility and sensitivity make it useful across a diverse range of medical and scientific applications.
What specific AI approach does LILAC utilize, and why was this approach chosen? How does LILAC automatically perform corrections and identify relevant changes in different imaging contexts?
LILAC employs a machine-learning approach, chosen for its ability to learn patterns from large volumes of imaging data without extensive pre-processing. It automatically adjusts for nuisance factors such as differences in viewing angle, scale, and other imaging artifacts, which lets it highlight the important changes within an image series across a variety of contexts.
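To make the idea concrete, here is a minimal PyTorch sketch of a pairwise change model along these lines: a shared encoder embeds two images of the same subject, and a small head predicts the change from the difference of their features. The architecture, layer sizes, and names below are illustrative assumptions for this interview, not the published LILAC implementation.

```python
# Hypothetical sketch of a pairwise longitudinal-change model in PyTorch.
# Illustrates the general idea (shared encoder, feature difference, small
# prediction head); it is NOT the authors' published LILAC code.
import torch
import torch.nn as nn

class PairwiseChangeNet(nn.Module):
    def __init__(self, out_dim: int = 1):
        super().__init__()
        # Shared convolutional encoder applied to both time points.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Head maps the feature difference to the target
        # (e.g., a temporal-order logit, a time interval, or a score change).
        self.head = nn.Linear(32, out_dim)

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        # Subtracting shared features cancels subject-specific appearance that
        # both time points have in common, leaving the change signal.
        return self.head(self.encoder(img_b) - self.encoder(img_a))

model = PairwiseChangeNet()
earlier = torch.randn(4, 1, 128, 128)  # batch of earlier images
later = torch.randn(4, 1, 128, 128)    # later images of the same subjects
print(model(earlier, later).shape)     # torch.Size([4, 1])
```

Working on feature differences, rather than on a single image, is one simple way a model can become insensitive to appearance that is constant within a subject while staying sensitive to change over time.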
Could you describe the proof-of-concept demonstrations that were conducted with LILAC? What level of accuracy did LILAC achieve in determining the order of embryo development images? How did LILAC perform in predicting time intervals and cognitive scores from MRI images?
In the proof-of-concept demonstrations, LILAC was trained on microscope images of in-vitro-fertilized embryos, images of healing tissue, and MRI scans of aging brains. It ordered the embryo-development images correctly about 99% of the time. On the brain MRIs, it predicted the time interval between scans and cognitive scores with substantially lower error than baseline methods, demonstrating that it can discern time-related changes and predict relevant outcomes.
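As an illustration of how a temporal-ordering task like this can be set up, the following sketch builds on the hypothetical model above: each pair of images is shown in a randomly flipped order and the model is trained to recover the true order. This is a generic formulation under my own assumptions, not the authors' training code, and the reported ~99% figure refers to their held-out data, not this snippet.

```python
# Hypothetical training step for the temporal-ordering task, assuming a
# pairwise model such as PairwiseChangeNet above.
import torch
import torch.nn.functional as F

def ordering_step(model, img_t1, img_t2, optimizer):
    """One training step on a batch of (earlier, later) image pairs."""
    batch = img_t1.shape[0]
    swap = torch.rand(batch) < 0.5               # reverse roughly half the pairs
    mask = swap[:, None, None, None]
    first = torch.where(mask, img_t2, img_t1)
    second = torch.where(mask, img_t1, img_t2)
    labels = swap.float().unsqueeze(1)           # 1 = pair shown in reversed order
    logits = model(first, second)
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Fraction of pairs whose order was recovered correctly in this batch.
    acc = ((logits > 0).float() == labels).float().mean().item()
    return loss.item(), acc

# Usage sketch:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# loss, acc = ordering_step(model, earlier, later, optimizer)
```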
How can LILAC be adapted to highlight the most relevant image features for detecting changes or differences? In what ways could LILAC provide new clinical and scientific insights?
LILAC can be tailored to focus on the image features most pertinent to a given study. By identifying those features, it has the potential to reveal new clinical and scientific insights, especially in fields with significant variability across individuals or where the underlying processes are not well understood.
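One generic way to surface which image regions drive a model's prediction is gradient-based attribution. The short sketch below assumes the hypothetical pairwise model from earlier and is only one common technique, not necessarily the mechanism LILAC itself uses.

```python
# Hypothetical saliency sketch: gradient magnitude of the predicted change
# with respect to the later image, assuming PairwiseChangeNet from above.
import torch

def change_saliency(model, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
    """Per-pixel map of how strongly the later image drives the predicted change."""
    model.eval()
    img_b = img_b.clone().requires_grad_(True)
    model(img_a, img_b).sum().backward()
    # Absolute gradient per pixel, maximized over channels.
    return img_b.grad.abs().amax(dim=1)
```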
What are your plans for demonstrating LILAC in real-world medical settings? How do you foresee LILAC being used in the study of prostate cancer treatment responses?
The immediate plan is to demonstrate LILAC’s capabilities in real-world settings by predicting treatment responses from MRI scans of prostate cancer patients. This application could help in personalizing treatment plans based on predicted responses, thereby improving patient outcomes.
In which other medical or scientific fields do you see LILAC being particularly useful? How might LILAC help in situations where there is a lot of variability across individuals?
Beyond oncology, LILAC has potential applications in neurology, cardiology, and any field involving chronic disease management. In cases with high individual variability, LILAC’s ability to automatically adjust and identify critical changes can offer more personalized and accurate insights.
What were some of the key challenges you faced while developing LILAC? Are there any areas where you see potential for further refinement or improvement of the system?
One major challenge was ensuring the system’s accuracy across different imaging contexts and dealing with diverse types of artifacts within the data. Future refinements could include enhancing LILAC’s precision in detecting even subtler changes and expanding its adaptability to newer imaging technologies.
Can you tell us about the collaboration between Weill Cornell Medicine, Cornell’s Ithaca campus, and Cornell Tech in developing LILAC? Who were the major contributors to this project, and what roles did they play?
This project was a collaborative effort involving expertise from multiple institutions. Key contributors included Dr. Mert Sabuncu from Weill Cornell Medicine, who spearheaded the project, and Dr. Heejong Kim, who was instrumental in the design and development of LILAC. The collaboration leveraged diverse skills in radiology, electrical engineering, and computer science, driving the innovation behind LILAC.
Do you have any advice for our readers?
For those interested in the intersection of AI and medical imaging, my advice is to stay curious and continually seek out interdisciplinary knowledge. Understanding both the technical and clinical aspects can open up numerous opportunities for innovation and impactful research.