I had the chance to speak with Petrina Kamya, Global Head of AI Platforms and VP at Insilico Medicine, as well as President of Insilico Medicine Canada. Petrina’s background spans both chemistry and computation.
Insilico Medicine is what Petrina calls a “tech bio”—developing both AI platforms and therapeutic assets, with a flexible model for licensing both. Their pharma.ai platform was created to address challenges in drug discovery all the way from target identification to the clinic. In just a few years, they’ve gone from two core products to a suite of about 12, all built with a heavy emphasis on validation.
When I think about AI in drug development, I think about all the failures in clinical trials. I’ve always wondered: are the molecules themselves to blame, or is the reason for so many failures rooted in everything that surrounds their testing—patient selection, procedures, trial design? Petrina confirmed that the two biggest reasons for failure are safety and efficacy. Many failures turn out to be preclinical issues—either the wrong target was selected, or the molecule causes unintended side effects. AI and machine learning are being used to better predict both, by identifying high-confidence disease targets and designing safer molecules.
But predicting toxicity is still a major challenge. There are models at every stage—from in silico predictions to in vitro and animal models—but each layer adds complexity, and good data to train AI models is notoriously hard to come by. A lot of data around failed molecules never makes it into the public domain because it’s proprietary. That means valuable insights about toxicity are often lost, though some substructures known to be problematic are at least captured in public databases. I realize that companies need a return on their investment and even failure data has competitive value. But you have to wonder how much money is wasted chasing dead ends that could have been avoided.
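As a concrete illustration of those public substructure alerts, here is a minimal sketch using RDKit’s built-in filter catalogs (the PAINS and Brenk alert sets). The example molecule and the choice of catalogs are mine for illustration, not anything Insilico described:

```python
from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

# Build a catalog of publicly known problem substructures:
# PAINS (assay-interference motifs) and Brenk (reactivity/toxicity alerts).
params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
params.AddCatalog(FilterCatalogParams.FilterCatalogs.BRENK)
catalog = FilterCatalog(params)

# An arbitrary example molecule (a rhodanine core, chosen for illustration).
mol = Chem.MolFromSmiles("O=C1CSC(=S)N1")

# Flag any catalogued substructure alerts before committing lab time.
if mol is not None and catalog.HasMatch(mol):
    for entry in catalog.GetMatches(mol):
        print("alert:", entry.GetDescription())
else:
    print("no catalogued alerts for this structure")
```

Checks like this cost almost nothing to run before any lab work, which is part of why it stings that so much proprietary failure data never feeds back into the public alert sets.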
The other question I always have is about the mechanics of drug binding. Most approaches focus on the active site—the orthosteric site—where the protein normally interacts with its natural ligand. I asked about the possibility of other strategies like allosteric binding (where a drug binds somewhere else on the protein to inhibit function). Petrina validated that idea and pointed to degraders as well—molecules designed to bring a protein into contact with the cellular machinery that destroys it. These newer modalities, including molecular glues, offer ways to selectively disable problem proteins without relying on traditional binding.
Nothing is straightforward. Allosteric sites can offer greater selectivity, which could reduce toxicity. But finding those sites is incredibly difficult because proteins are dynamic and mobile. It’s not just about structure; it’s about motion within the protein itself and the context it operates in.
The body’s backup systems—redundant pathways, mutations, and rescue mechanisms—can undermine even well-designed drugs. This is especially relevant in oncology. Proteins like KRAS have so many variants that it’s not enough to design one effective inhibitor—you often need a panel of drugs to address different mutations. Petrina noted that the human body has many fallback mechanisms, which makes targeting disease pathways more difficult but also explains why drugs that seem perfect in vitro don’t always deliver in the clinic.
Getting back to clinical trials, AI is mostly being applied operationally right now—to optimize patient selection, identify clinical sites with the right patient profiles, and monitor for trial reporting issues. The big advantage is in stratifying patients to improve the signal-to-noise ratio. As Petrina noted, sometimes a drug works for a subset of patients, but that signal is lost in the broader trial data. That resonated with my previous interview with Kurt Mussina, who used AI to identify ideal site locations based on logistics and patient demographics—a very practical, high-impact use of the technology.
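To make that signal-to-noise point concrete, here is a toy simulation (all numbers are invented) of a trial where only a biomarker-positive subgroup actually responds. Comparing a pooled analysis against a stratified one shows how a real effect can wash out:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)

# Hypothetical trial: only biomarker-positive patients (~30% of
# enrollment) benefit; effect size and noise are invented for illustration.
n = 400
biomarker_pos = rng.random(n) < 0.3          # responder subgroup
treated = rng.random(n) < 0.5                # 1:1 randomization
benefit = np.where(treated & biomarker_pos, 1.0, 0.0)
outcome = benefit + rng.normal(0.0, 2.0, n)  # noisy clinical endpoint

# Pooled analysis: responders are diluted by non-responders.
pooled = ttest_ind(outcome[treated], outcome[~treated])
print(f"pooled p-value:     {pooled.pvalue:.3f}")

# Stratified analysis: biomarker-positive patients only.
m = biomarker_pos
strat = ttest_ind(outcome[treated & m], outcome[~treated & m])
print(f"biomarker+ p-value: {strat.pvalue:.3f}")
```

With these made-up numbers, the pooled comparison often misses significance while the subgroup comparison finds it—exactly the kind of buried signal Petrina described.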
What if we could recover some therapies that previously failed because they weren’t tested on the right people? AI could help salvage and reposition those compounds by uncovering hidden signals in the data. You have to believe that improvements in AI will find a few lost nuggets—digging back through data with better tools to find value that’s already there.
Developing therapies isn’t the only application of new-molecule discovery. Insilico is also working with companies in the herbicide space, and as Petrina explained, discovering herbicides isn’t all that different from designing drugs for people. You still need target specificity, safety, and cost-efficiency—but at an even greater scale of production. If people or animals are exposed, or if the herbicide lingers in the environment, it has to meet a high safety bar.
The unique challenge here is scale, and it comes down to economics. We may spare no expense to extend a human life with doses in the milligram range. In agriculture, you’re looking for a simple compound that is cheap, can be produced in massive quantities, and can be stored under almost any conditions. It’s an entirely different set of constraints.
AI in discovery isn’t about magic. It’s about building better foundations—more accurate models, more validated data, and more thoughtful decision-making—to improve every step from discovery to clinical success.
Your deepest insights are your best branding. I’d love to help you share them. Chat with me about custom content for your life science brand. Or visit my website.