Can AI-Driven Optimization Solve the Antibiotic Crisis?

Ivan Kairatov is a leading voice in biopharmaceutical innovation, with an intricate understanding of how computational power can reshape drug discovery. With a background spanning research and development, Kairatov has dedicated his career to bridging the gap between digital prediction and clinical reality. In this conversation, he explores the transformative potential of ApexGO, a generative AI framework that moves beyond simple database screening to actively engineer superior antibiotic candidates. We discuss the mechanics of iterative molecular design, the role of Bayesian optimization in navigating chemical space, and how this systematic approach could soon revolutionize treatments for drug-resistant infections and beyond.

ApexGO focuses on refining imperfect peptides rather than just screening large databases. How does this iterative refinement process work in practice, and what specific advantages does it offer over traditional screening? Please describe the cycle from the initial molecular edit to the final lab validation.

Traditional screening is a bit like looking for a needle in a haystack; you are limited by what is already in the pile. ApexGO changes the game because it doesn’t just look for the needle; it takes a piece of straw and systematically re-engineers it into metal. The process begins with a “lead” peptide—a molecule that shows some promise but isn’t quite effective enough to be a drug. The AI then proposes precise molecular edits, such as swapping out specific amino acids, while a predictive model evaluates whether these changes will actually boost antimicrobial activity. This creates a feedback loop where the system moves toward versions that are statistically more likely to work when synthesized. In our work, we saw that this iterative method led to 72% of the generated molecules outperforming their original “parent” versions, a feat rarely achieved through manual trial and error.
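The propose-evaluate-keep cycle Kairatov describes can be sketched as a simple hill-climb in Python. This is a minimal illustration, not ApexGO's actual method: the peptide representation, the single-residue edit, and especially the `toy_activity_score` stand-in for a trained predictive model are all hypothetical.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def toy_activity_score(peptide: str) -> float:
    # Stand-in for a learned predictor: rewards cationic (K, R) and
    # hydrophobic (L, I, F, W) residues, a crude proxy for the kinds of
    # features associated with antimicrobial activity.
    cationic = sum(peptide.count(a) for a in "KR")
    hydrophobic = sum(peptide.count(a) for a in "LIFW")
    return cationic + 0.5 * hydrophobic

def propose_edit(peptide: str, rng: random.Random) -> str:
    # One "molecular edit": swap a single residue at a random position.
    pos = rng.randrange(len(peptide))
    new_aa = rng.choice(AMINO_ACIDS.replace(peptide[pos], ""))
    return peptide[:pos] + new_aa + peptide[pos + 1:]

def refine(lead: str, rounds: int = 200, seed: int = 0) -> str:
    # Greedy feedback loop: keep an edit only if the predictor
    # scores the edited peptide higher than the current best.
    rng = random.Random(seed)
    best, best_score = lead, toy_activity_score(lead)
    for _ in range(rounds):
        candidate = propose_edit(best, rng)
        score = toy_activity_score(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

lead = "GASDTEQNPG"  # hypothetical "lead" peptide
improved = refine(lead)
```

In a real pipeline the scoring function would be a trained model and the surviving candidates would go on to synthesis and lab validation, but the shape of the loop is the same: edit, predict, keep what improves.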

Modern discovery requires balancing the exploration of unknown chemical regions with the exploitation of known active zones. How does Bayesian optimization help navigate this vast molecular space, and what specific criteria determine whether the model should prioritize a “risky” candidate over a more certain one?

The molecular space we are dealing with is truly staggering, far too large for any human or traditional computer program to map out entirely. Bayesian optimization acts as our compass, helping us make highly informed choices about which molecular “territories” to explore next without wasting time on dead ends. The model balances two needs: it exploits known “hot spots” where success is likely, but it also takes calculated risks on “uncertain” regions that might hold revolutionary improvements. If a specific region of the search space consistently shows high antimicrobial scores, the model focuses its effort there to fine-tune the candidates. However, if the model realizes it lacks data on a certain structural configuration, it will prioritize a “risky” candidate to learn more about the underlying chemical rules, effectively teaching itself how to be a better designer.
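The exploit-versus-explore decision rule can be illustrated with a bandit-style upper-confidence-bound (UCB) sketch, a simplified cousin of the acquisition functions used in full Gaussian-process Bayesian optimization. The "regions" of chemical space and their scores here are invented for illustration.

```python
import math
import random

def ucb_choose(stats, total, kappa=1.4):
    # stats maps region -> (num_trials, running mean score).
    # Unexplored regions are tried first (pure exploration); otherwise
    # pick the region with the best mean-plus-uncertainty bonus, so
    # under-sampled "risky" regions can outrank well-known ones.
    best_region, best_ucb = None, -math.inf
    for region, (n, mean) in stats.items():
        if n == 0:
            return region
        ucb = mean + kappa * math.sqrt(math.log(total) / n)
        if ucb > best_ucb:
            best_region, best_ucb = region, ucb
    return best_region

def optimize(score_fn, regions, budget=200, seed=0):
    rng = random.Random(seed)
    stats = {r: (0, 0.0) for r in regions}
    for t in range(1, budget + 1):
        region = ucb_choose(stats, t)
        reward = score_fn(region, rng)
        n, mean = stats[region]
        stats[region] = (n + 1, mean + (reward - mean) / (n + 1))
    return stats

# Noisy scores for three hypothetical structural "territories".
true_means = {"region_A": 0.3, "region_B": 0.7, "region_C": 0.5}
def noisy_score(region, rng):
    return true_means[region] + rng.gauss(0, 0.1)

stats = optimize(noisy_score, true_means.keys())
best = max(stats, key=lambda r: stats[r][1])
```

The `kappa` term controls the risk appetite: a larger value spends more of the budget probing uncertain regions, a smaller one concentrates on fine-tuning known hot spots.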

Lab results show that 85% of these AI-generated molecules successfully halted bacterial growth, with some even matching the efficacy of last-resort drugs like polymyxin B. What were the most surprising outcomes during the mouse trials, and how do these success rates compare to traditional antibiotic development?

The most striking moment was seeing the computational predictions hold up so robustly in living systems, which is where most AI models usually fail. In our mouse trials, two specific peptides engineered by the system reduced bacterial counts at levels comparable to polymyxin B, which is currently our “last-resort” defense for the most dangerous drug-resistant infections. Achieving an 85% success rate in halting bacterial growth is almost unheard of in early-stage discovery, where the vast majority of candidates usually wash out. Historically, antibiotics were found by sheer luck or accidental discovery, much like penicillin was nearly a century ago. This study proves we are transitioning from a period of biological “accidents” to a period of systematic, machine-guided engineering that can produce hundreds of viable candidates in just a few months.

Moving from laboratory success to human therapeutics requires balancing high potency with safety, stability, and longevity in the body. What are the primary hurdles in optimizing these early-stage peptides for human use, and how could AI-driven agents eventually streamline these pharmacological design choices?

Even with high potency against bacteria, these peptides are currently early-stage candidates that face significant physiological hurdles before they can reach a pharmacy shelf. We have to ensure they are safe for human cells, stay stable enough to reach the site of infection, and don’t get cleared out of the body too quickly by the kidneys. These factors—safety, stability, and longevity—are often at odds with one another, making the “perfect” drug a complex puzzle. In the near future, we envision AI agents that don’t just optimize for killing bacteria but simultaneously reason through these pharmacological trade-offs. By drawing on massive datasets of human physiology and previous clinical failures, these next-generation tools could predict how a molecule will behave in a human being before we ever start a clinical trial.
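Two common ways to make such competing objectives computable are weighted-sum scoring and Pareto dominance, sketched below. The property names and weights are purely illustrative assumptions, not anything reported for these peptides.

```python
from dataclasses import dataclass

@dataclass
class CandidateProfile:
    # Hypothetical predicted properties, each scaled to [0, 1].
    potency: float    # antimicrobial activity
    safety: float     # low toxicity to human cells
    stability: float  # resistance to degradation in the body
    half_life: float  # retention before renal clearance

def composite_score(p, weights=(0.4, 0.3, 0.2, 0.1)):
    # Weighted-sum scalarization: one simple way to trade off
    # competing objectives; the weights are illustrative, not calibrated.
    values = (p.potency, p.safety, p.stability, p.half_life)
    return sum(w * v for w, v in zip(weights, values))

def dominates(a, b):
    # Pareto dominance: a is at least as good as b on every axis
    # and strictly better on at least one.
    fa = (a.potency, a.safety, a.stability, a.half_life)
    fb = (b.potency, b.safety, b.stability, b.half_life)
    return all(x >= y for x, y in zip(fa, fb)) and any(x > y for x, y in zip(fa, fb))

potent_but_toxic = CandidateProfile(0.9, 0.2, 0.6, 0.5)
balanced = CandidateProfile(0.7, 0.8, 0.7, 0.6)
```

A weighted sum forces a single ranking, while Pareto dominance preserves the tension Kairatov describes: a highly potent but toxic candidate neither beats nor is beaten outright by a balanced one on every axis, which is exactly the trade-off space a design agent would have to reason through.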

The logic of optimizing molecules for specific biological functions could potentially extend to oncology or immunology. How might this generative approach be adapted to target tumors or modulate immune responses, and what fundamental changes would be required in the underlying predictive algorithms?

The core logic of this generative approach is universal: if you can define a biological function and measure it, you can optimize for it. To adapt this for oncology, we would shift the “reward” criteria of the model from killing bacteria to recognizing specific markers on tumor cells or triggering an immune response against a growth. The underlying predictive algorithms would need to be retrained on specific datasets, such as the interactions between peptides and the human immune system’s T-cells. Instead of just looking for “antimicrobial activity,” the model would evaluate “binding affinity” or “selective toxicity” to ensure it attacks the cancer without harming healthy tissue. This shift would transform AI from a simple search engine for antibiotics into a flexible designer for the entire spectrum of modern medicine.

What is your forecast for AI-powered antibiotic discovery?

I forecast that within the next decade, the “trial and error” era of drug discovery will be viewed as a historical relic, replaced by a standard of “predictive engineering.” We are moving toward a future where we can generate thousands of potent, safe, and stable therapeutic candidates in a fraction of the time it used to take to find just one. As antibiotic resistance continues to rise globally, these AI-driven technologies will be our most critical tool, allowing us to outpace evolving pathogens by designing new defenses in months rather than decades. Eventually, we will see these platforms becoming so flexible that they can “reason” through the entire lifecycle of a drug, from the first molecular edit to the final clinical dosage, ensuring that we are never again caught without an effective treatment.
