Matthias Aizenberg sits down with Ivan Kairatov, a biopharma expert whose work straddles gene editing and viral delivery, to unpack a new way of building better gene-editing vehicles by rethinking the cells that make them. Rather than only re-engineering virus-like particles, his team interrogated nearly every human gene in producer cells to find the bottlenecks that truly govern yield and potency. The study’s most striking result: disabling a single gene that brakes guide RNA output boosted functional cargo across multiple editors and even four other delivery platforms. Here, Ivan shares what it took to build that platform, why some knockouts paradoxically raise particle counts but blunt delivery, and how these engineered cell lines could speed the path from benchtop insight to patient-ready therapies.
Gene-editing delivery remains a major bottleneck for therapies. What specific hurdles do you see in efficiency, safety, and scalable manufacturing, and how would you prioritize them? Can you share examples where a single constraint dominated outcomes and how your team addressed it?
I rank efficiency and potency first, because without enough functional cargo per particle, downstream safety and COGS simply don’t pencil out. Safety comes next—especially minimizing off-target effects and avoiding latent viral sequences—which is why we’ve leaned into virus-like particles that enter cells but carry no viral genes. Scalable manufacturing is the third leg of the stool: consistent batches, high yields, and stable producer lines that can run week after week. In one campaign, a single constraint—guide RNA scarcity inside producer cells—dominated everything; we could make plenty of shells, but editing stalled. Disabling one gene that acted as a brake on guide RNA output flipped that dynamic, letting each particle carry more functional cargo and shifting our attention from particle counts to particle quality.
Engineered virus-like particles enter cells without carrying viral genes. How do you weigh their safety profile against delivery efficiency in therapeutic contexts? What metrics or thresholds convince you a design is ready for preclinical scaling?
The absence of viral genes is a foundational safety feature that reduces the risks of genomic integration and long-lived expression, which regulators and clinicians appreciate. We still push hard on efficiency, because a safe vehicle that can’t deliver enough editor to the right cells won’t help patients. Before scaling, I look for consistent potency across multiple gene editors and particle designs, not just a one-off win—our most promising design improved performance across several editors and even four other delivery systems. I also look for clean cargo composition and minimal unintended RNA or protein passengers; when the safety signals and functional readouts align over repeated runs, that’s when we greenlight preclinical-scale batches.
Most improvements target the particles themselves, yet you focused on producer cells. What drove that pivot, and what early signals suggested cell-intrinsic factors were limiting yield or potency? Can you walk through key experiments that changed your mind?
We hit diminishing returns on particle tweaks—capsid swaps, linkers, and fusion configurations—while batch-to-batch variability suggested the bottleneck lived inside the producer cells. Early QC showed we could make abundant protein components, yet the delivered editing activity lagged, which hinted at cargo loading and RNA handling issues. That led us to a genome-wide approach in which nearly every gene was switched off somewhere in the producer-cell population, one gene per cell. The clincher was reading the genetic tags in the particles themselves and seeing clear pathway-level patterns: some knockouts spiked protein output but dulled potency, while one gene’s removal boosted functional guide RNA loading across the board. After that, focusing on producer biology wasn’t a gamble—it was the shortest path to better medicines.
You used a genome-wide approach where nearly every gene was switched off in different producer cells. How did you set up the library, control for off-target effects, and track outcomes? What data quality checks gave you confidence in hit calling?
We built a pooled library so that, at scale, almost every human gene was knocked out somewhere in the population, always enforcing one gene per cell to preserve clear attribution. To control for off-target effects, we used multiple guides per gene and tracked concordance; hits needed to show consistent directionality across guides. The elegance came from the particles themselves: the packaged cargo carried a small genetic tag identifying which gene had been disabled in the producer cell, so we could read outcomes directly from the final product. Internally, we looked for strong pathway enrichment and reproducibility across replicates; when entire pathways rose to the surface with tight variance, we knew the signal wasn’t a mirage.
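To make that multi-guide concordance rule concrete, here is a minimal sketch of gene-level hit calling. It assumes guide-level log2 fold changes between tag counts in harvested particles and the input library; the gene names, thresholds, and `call_hits` function are illustrative stand-ins, not the team’s actual pipeline.

```python
import pandas as pd

# Hypothetical guide-level results: log2 fold change of each guide's tag
# counts in harvested particles versus the input library.
df = pd.DataFrame({
    "gene":  ["BRAKE1"] * 4 + ["GENE2"] * 4 + ["NTC"] * 4,
    "guide": [f"g{i}" for i in range(12)],
    "lfc":   [2.1, 1.8, 2.4, 1.9,  0.9, -0.4, 1.2, 0.1,  0.1, -0.2, 0.0, 0.3],
})

def call_hits(df, min_guides=3, min_abs_lfc=1.0):
    """Call a gene a hit only if enough of its guides clear a magnitude
    threshold and all of the strong guides agree in direction."""
    hits = []
    for gene, g in df.groupby("gene"):
        strong = g[g["lfc"].abs() >= min_abs_lfc]
        if len(strong) >= min_guides and strong["lfc"].gt(0).nunique() == 1:
            hits.append((gene, g["lfc"].median()))
    return sorted(hits, key=lambda h: -abs(h[1]))

print(call_hits(df))  # -> [('BRAKE1', 2.0)]
```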
Particles carried genetic tags indicating which gene was disabled in the producer cell. What were the technical steps to ensure accurate tag packaging and readout, and where did errors creep in? How did you benchmark sensitivity and false discovery rates?
We designed the tags to piggyback on the same packaging logic used for functional cargo so their fate mirrored the biology we cared about. Library balance at transduction, stable producer expansion, and clean harvests helped minimize skew; on the back end, careful extraction and sequencing protocols preserved representation. Errors crept in mostly through uneven amplification and occasional packaging bias under stress conditions, which we flagged by cross-checking particle protein abundance against tag distribution. Sensitivity and false discovery were benchmarked by requiring multi-guide agreement per gene and pathway-level coherence; when both the single-gene signals and the broader pathways aligned, we accepted the hits and set aside outliers.
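A standard way to put a number on false discovery in pooled screens is to score non-targeting controls with the same rule and treat any that pass as false positives. A rough sketch under that assumption follows; the `empirical_fdr` helper and the simulated scores are illustrative, and this is a generic approach rather than the exact benchmark used here.

```python
import random

def empirical_fdr(gene_scores, control_scores, threshold):
    """Estimate FDR at a threshold: the pass rate of pseudo-genes built
    from non-targeting guides, scaled to the size of the real gene set,
    divided by the number of real genes called."""
    called = sum(1 for s in gene_scores if s >= threshold)
    if called == 0:
        return 0.0
    control_pass = sum(1 for s in control_scores if s >= threshold)
    expected_false = control_pass / len(control_scores) * len(gene_scores)
    return min(1.0, expected_false / called)

random.seed(0)
genes    = [random.gauss(0, 1) for _ in range(1000)] + [2.8, 3.5, 4.0]  # simulated
controls = [random.gauss(0, 1) for _ in range(500)]
print(round(empirical_fdr(genes, controls, threshold=2.5), 3))
```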
The strongest hit was a gene that brakes guide RNA output in producer cells. How did disabling it change guide RNA abundance, loading, and functional cargo per particle? What fold-improvements did you see, and how did those translate to editing outcomes in target cells?
Knocking out that brake gene removed a choke point in guide RNA biogenesis, so producer cells generated more guides and packaged more of them per particle. We saw a qualitative shift from “particles with shells” to “particles with purpose,” where the functional cargo defined potency rather than sheer count. Because the effect acted at the level of guide RNA supply and loading, it scaled across several different editors without us rewriting their core designs. In target cells, that translated into higher editing activity per delivered particle and more reliable performance at the same input dose—an efficiency win that we could feel at the bench and see in downstream assays.
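One way to keep that distinction honest in the numbers is to track editing events per delivered particle rather than total activity. A minimal sketch with made-up values (the conversation gives no fold figures, so the 4.0x below is purely illustrative):

```python
def potency_per_particle(edited_fraction, cells_treated, particles_delivered):
    """Editing events per delivered particle: the quantity that separates
    'particles with shells' from 'particles with purpose'."""
    return edited_fraction * cells_treated / particles_delivered

# Hypothetical before/after at the same particle dose.
baseline = potency_per_particle(0.08, cells_treated=1e6, particles_delivered=1e9)
knockout = potency_per_particle(0.32, cells_treated=1e6, particles_delivered=1e9)
print(f"fold improvement per particle: {knockout / baseline:.1f}x")  # 4.0x
```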
The improvement extended across multiple gene editors and four other delivery systems. What common mechanisms explain that cross-platform boost, and where did performance diverge? Can you share comparative potency data and any surprising edge cases?
The common mechanism is simple but powerful: guide RNA supply and loading are shared dependencies across cargo types and particle architectures. By increasing the availability and packaging of functional guides in producer cells, we lifted the tide for several editors and four other delivery vehicles built in outside labs. Divergence showed up when a platform’s limiting factor wasn’t RNA at all—for example, when capsid-receptor engagement or endosomal escape dominated, the gains were more muted. The edge cases taught us to pair producer-cell engineering with context-specific particle tweaks; when both levers aligned, the potency improvements were most striking.
Some gene knockouts increased particle protein components but reduced delivery potency. What biological trade-offs drive that paradox, and how do you decide when higher particle counts are still worth it? Describe a decision framework with measurable criteria.
Those paradoxes arise when assembly outpaces meaningful cargo loading—more shells, less payload. Certain knockouts upregulated structural protein synthesis yet interfered with the steps that package guides and editing complexes, so potency dipped even as counts rose. Our framework weighs functional editing per particle over total physical titer; if potency falls while counts rise, we only proceed in specialized scenarios where protein cargo is the limiting reagent. We stage-gate decisions using head-to-head assays and accept “more particles” only when they rescue a cargo-limited process or slot into a manufacturing niche that values throughput over per-particle activity.
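That framework condenses to a small amount of gating logic. Here is a sketch; the 1.5x titer override, the 0.8 potency floor, and the `BatchComparison` fields are assumed thresholds for illustration, not published criteria.

```python
from dataclasses import dataclass

@dataclass
class BatchComparison:
    potency_per_particle_ratio: float  # candidate / reference line
    physical_titer_ratio: float        # candidate / reference line
    cargo_limited_process: bool        # is protein cargo the bottleneck?

def stage_gate(b: BatchComparison) -> str:
    """Potency per particle outranks raw counts, with one carve-out:
    higher counts can still win when the process is cargo-limited."""
    if b.potency_per_particle_ratio >= 1.0:
        return "advance"
    if b.cargo_limited_process and b.physical_titer_ratio > 1.5:
        return "advance for cargo-limited niche"
    if b.potency_per_particle_ratio < 0.8:
        return "reject: potency loss exceeds tolerance"
    return "hold: rerun head-to-head assays"

print(stage_gate(BatchComparison(0.7, 2.4, cargo_limited_process=True)))
# -> advance for cargo-limited niche
```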
In specialized production settings where protein cargo is limiting, the same modified cells substantially boosted potency. What manufacturing scenarios fit this profile, and how would you optimize upstream and downstream steps to capture that gain? Any process maps or timelines to illustrate?
When the editor protein or a fusion component is hard to express stably, producer lines that favor assembly can be an asset. In those cases, we lean on short, repeatable production runs so the cells stay in the sweet spot, and we synchronize harvests to catch peak output. Upstream, media and feed strategies are tuned to maintain the phenotypes uncovered by the screen; downstream, gentle clarification and polishing preserve fragile complexes and the packaged guides. A practical timeline looks like this: qualify the engineered line, run pilot batches to lock in the window, then cycle production and release with rapid analytics so we don’t miss the potency peak.
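Locking in that window is, at its simplest, peak-finding on rapid-analytics readouts. A sketch, assuming hourly potency measurements and a 90%-of-peak cutoff (both the data and the cutoff are illustrative):

```python
def harvest_window(potency_by_hour, keep_fraction=0.9):
    """Return the hours during which potency stays within keep_fraction
    of the peak -- the stretch worth harvesting."""
    peak = max(potency_by_hour.values())
    return [hour for hour, potency in sorted(potency_by_hour.items())
            if potency >= keep_fraction * peak]

readings = {24: 0.4, 36: 0.7, 48: 1.0, 60: 0.95, 72: 0.6}
print(harvest_window(readings))  # -> [48, 60]
```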
Beyond single-gene knockouts, you’re exploring other cellular changes. Which perturbations—overexpression, epigenetic tuning, or metabolic rewiring—look most promising, and why? Describe the screening design, readouts, and how you’ll validate hits for translational use.
We’re expanding beyond “off switches” to include overexpression modules and chromatin-focused tweaks that nudge RNA biogenesis and cargo handling. Epigenetic tuning is attractive because it can coordinate whole pathways rather than whacking a single node. Our screens reuse the same particle-borne tag logic, now layered with readouts that track both particle composition and delivery potency across several gene editors and four other platforms. Translational validation follows a ladder: reproduce in multiple producer lines, lock the phenotype over time, and show that the gains persist under scaled production and in cell types that matter clinically.
You’re sharing engineered producer cell lines with other teams. What standardization, QC assays, and tech transfer practices help collaborators reproduce results? Can you outline a checklist for labs adopting these cells, including common pitfalls and troubleshooting tips?
Standardization starts with frozen master stocks, documented passage limits, and matched media and feeds. Our QC bundle includes particle protein quantification, tag representation checks, and potency assays across several gene editors, so collaborators see the same fingerprints we do. Tech transfer pairs SOPs with small pilot runs; we ask teams to reproduce core benchmarks before scaling. The checklist is simple: thaw from the master, respect passage ceilings, confirm the signature QC panel, and watch for drift in tag distribution—if it shifts, reset from the bank rather than chasing ghosts.
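Watching for drift in tag distribution comes down to comparing a batch’s tag-count profile against the banked reference with any distribution distance. A sketch using Jensen-Shannon divergence; the 0.05 alarm threshold and the counts are assumptions for illustration.

```python
from math import log2

def js_divergence(p_counts, q_counts):
    """Jensen-Shannon divergence between two tag-count profiles,
    computed over the union of observed tags."""
    tags = set(p_counts) | set(q_counts)
    p_total, q_total = sum(p_counts.values()), sum(q_counts.values())
    p = {t: p_counts.get(t, 0) / p_total for t in tags}
    q = {t: q_counts.get(t, 0) / q_total for t in tags}
    m = {t: (p[t] + q[t]) / 2 for t in tags}
    kl = lambda a, b: sum(a[t] * log2(a[t] / b[t]) for t in tags if a[t] > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

reference = {"tagA": 5000, "tagB": 4800, "tagC": 5100}
batch     = {"tagA": 5200, "tagB":  500, "tagC": 5000}
if js_divergence(reference, batch) > 0.05:  # assumed alarm threshold
    print("tag drift detected: reset from the master bank")
```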
Improving delivery into immune cells and neurons is a priority. What targeting strategies, entry pathways, or cargo configurations have shown the best signal so far? Share specific potency benchmarks, off-target assessments, and any cell-type–specific manufacturing tweaks.
In both immune cells and neurons, we’re layering cell-type targeting motifs onto particles while leveraging producer-cell edits that enrich functional guides. The encouraging signal is that the guide-centric improvements carry over, even as we fine-tune entry pathways with capsid selections built for those cells. Off-target risk remains a watchpoint, so we pair our potency assays with RNA and protein profiling to make sure we’re not smuggling unintended passengers. Manufacturing-wise, we adjust the production window to capture the most potent fraction; for delicate neuron-focused batches, we keep downstream steps especially gentle to protect cargo integrity.
Scaling for clinical manufacturing brings regulatory and CMC challenges. How would you adapt this platform for GMP settings—raw materials, analytics, release criteria, and stability? Walk through a step-by-step plan from pilot to Phase 1-ready production.
Step one is to qualify the engineered producer line under a documented history—bank it, test it, and lock passage numbers. We then port media, feeds, and critical reagents to GMP-grade or well-qualified alternatives, running bridging batches to confirm equivalence. Release criteria blend identity, purity, particle composition, and functional potency across several gene editors, with stability studies that track both particle proteins and the packaged guides over time. From there, a pilot under GMP-like controls sets the stage for Phase 1 readiness: finalize SOPs, validate analytics, complete stability pulls, and execute engineering runs that mirror clinical lots.
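Release criteria of that shape are naturally expressed as a per-lot specification check. A minimal sketch, with placeholder assay names and thresholds standing in for the real release panel:

```python
# Hypothetical release panel: each entry is (measured value, minimum spec).
lot = {
    "identity_tag_match":    (0.999, 0.995),  # fraction of reads matching the bank
    "purity_main_peak":      (0.97,  0.95),   # main-peak fraction by size exclusion
    "potency_editor_A":      (1.08,  0.80),   # relative to a reference lot
    "potency_editor_B":      (0.91,  0.80),
    "packaged_guide_vs_ref": (1.15,  0.90),   # guide content per particle
}

failures = [name for name, (value, spec) in lot.items() if value < spec]
print("release: PASS" if not failures else f"release: FAIL {failures}")
```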
What safeguards help prevent unwanted genomic or transcriptomic changes in producer cells that might affect particle content? Describe your orthogonal assays, from proteomics to RNA profiling, and any in-process controls that caught issues before batch completion.
We watch the cells as closely as we watch the particles. Routine RNA profiling flags unexpected transcripts that could hitch a ride, while proteomics keeps us honest about the particle’s protein makeup. In-process, we sample tag distributions and potency to catch drift early; when we see deviations—especially in stress conditions—we halt, troubleshoot, and often revert to a clean bank. Those orthogonal checks let us intercept issues before they reach release testing, protecting both safety and consistency.
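The halt-and-troubleshoot trigger can be as plain as control limits around the historical in-process mean. A sketch of a three-sigma rule on rolling potency samples; the limits and the data are illustrative.

```python
from statistics import mean, stdev

def in_control(history, new_sample, n_sigma=3):
    """Flag an in-process sample that falls outside n_sigma control
    limits built from prior in-spec batches."""
    mu, sigma = mean(history), stdev(history)
    return abs(new_sample - mu) <= n_sigma * sigma

history = [1.02, 0.97, 1.05, 0.99, 1.01, 0.98, 1.03]  # normalized potency
if not in_control(history, new_sample=0.62):
    print("deviation: halt, troubleshoot, consider reverting to the bank")
```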
Do you have any advice for our readers?
Treat delivery as a systems problem, not a single widget to be optimized in isolation. Lean into platforms that let the product report on itself—the particle-borne genetic tags were a turning point for us—and insist on signals that reproduce across several gene editors and even across four other delivery systems. When trade-offs appear, choose potency per particle over raw counts unless you’re clearly in a cargo-limited niche. Most of all, don’t be afraid to pivot upstream: the biggest gains we saw came from listening to the producer cells and letting their biology show us where the levers really are.
