
AI Can Now Bypass Our Biological Security Systems: What Microsoft Just Discovered



By Dr. Wil Rodriguez

TOCSIN Magazine



Imagine if someone could rewrite a recipe for a deadly poison, changing all the ingredients and instructions, but still end up with the same lethal result. Now imagine they could do this in a way that fools all the safety systems designed to stop them. That’s essentially what Microsoft researchers just discovered AI can do with dangerous biological materials—and it’s a wake-up call for everyone concerned about biosecurity.


On October 2, 2025, Microsoft published groundbreaking research in the prestigious journal Science that reveals a critical vulnerability in the systems we rely on to keep dangerous biological materials out of the wrong hands. The implications extend far beyond the laboratory, touching on national security, public health, and the future of medical innovation.



The Problem: A Dangerous Loophole



Here’s what you need to know: When scientists want to create proteins in a lab, they typically order custom DNA sequences from specialized companies. It’s like ordering building blocks that they’ll use to construct what they need. These aren’t your average online purchases—we’re talking about the fundamental code of life itself.


To keep everyone safe, these companies use sophisticated screening software—think of it as a security checkpoint that compares each order against a database of known dangerous substances like toxins and diseases. This system has been our primary defense line, working quietly in the background to prevent bad actors from obtaining materials that could be weaponized.


If someone tries to order the DNA for something harmful—say, a deadly toxin or a dangerous pathogen—the system flags it and stops the order. It’s been our safety net against bioterrorism, and until now, we thought it was pretty reliable.


But Microsoft’s research team, led by chief scientific officer Eric Horvitz and senior applied bioscientist Bruce Wittmann, discovered a serious problem: artificial intelligence can now redesign these dangerous substances in ways that slip right past our security systems. It’s as if the lock on your door opened for a forged key it has no way of recognizing.



How AI Breaks Through



The researchers used AI tools that are designed to help create new proteins—technology that’s normally used for good things like developing new medicines, creating better enzymes for industrial processes, and understanding how diseases work at the molecular level. These tools have been celebrated as game-changers in biology and medicine.


But the Microsoft team showed these same tools could be misused in frightening ways.


Here’s the scary part: The AI created over 75,000 different versions of toxic proteins. Most of these redesigned toxins would pass through the security screening undetected, even though they could still be just as dangerous as the originals. The success rate was alarmingly high, which means the vulnerability wasn’t just theoretical; it was real and readily exploitable.


Think of it this way: If security is looking for the word “dangerous,” the AI can rewrite it as “hazardous,” “perilous,” or “risky”—completely different words that mean the same thing. The security system doesn’t recognize the new version, but the end result is just as harmful. In biological terms, the AI was changing the “spelling” of the genetic code while keeping the deadly “meaning” intact.


The researchers call this “paraphrasing” the genetic code. Just as you might rewrite a sentence to avoid plagiarism detection software while keeping the same message, AI can rewrite genetic sequences to avoid biosecurity detection while potentially keeping the same dangerous function.
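
For readers who want to see the logic in miniature, here is a deliberately simplified sketch in Python. It is not the actual screening software, whose methods are more sophisticated and largely confidential; the watchlist sequence, the 60 percent identity threshold, and the rewritten variant below are all invented for illustration. The point is only that a check tuned to known sequences can miss a heavily rewritten one.

# Toy illustration only: real biosecurity screening uses far more sophisticated
# methods, such as alignment against curated databases of sequences of concern.
# Every sequence and number below is invented for this example.

def percent_identity(a: str, b: str) -> float:
    """Fraction of positions at which two sequences have the same letter."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

# Hypothetical "toxin" protein fragment on a screening watchlist.
WATCHLIST = ["MKTWLVFAAILSAQAMA"]

# Flag any order that is at least 60 percent identical to a listed sequence.
FLAG_THRESHOLD = 0.6

def screen_order(sequence: str) -> str:
    for listed in WATCHLIST:
        if percent_identity(sequence, listed) >= FLAG_THRESHOLD:
            return "FLAGGED for human review"
    return "PASSED screening"

# An exact copy of the listed sequence is caught...
print(screen_order("MKTWLVFAAILSAQAMA"))   # FLAGGED for human review

# ...but a heavily "paraphrased" variant falls below the threshold and passes,
# even though, in the scenario the researchers studied, such a variant might
# still fold and function much like the original.
print(screen_order("MRSYIVLGAVLTAENLA"))   # PASSED screening

In the real study the redesigned proteins were far more numerous and far more carefully constructed, but the underlying weakness is the same: screening tuned to known sequences struggles with sequences it has never seen before.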


What makes this particularly concerning is that the AI tools used in this research aren’t secret military technology—they’re open-source programs that anyone with sufficient computing power can access. Some are even available for free online. The barrier to entry for this kind of work has dropped dramatically in recent years.



The Good News: They Fixed It (Mostly)



Before publishing their findings, Microsoft did something responsible and, frankly, quite rare in today’s world of instant information sharing. Instead of just announcing “Hey, we found this huge security hole,” they spent 10 months working quietly with DNA companies, government agencies, and biosecurity experts to fix the problem first.


This wasn’t easy. It required bringing together competitors in the DNA synthesis industry, getting government agencies on board, and coordinating across different countries and time zones. But everyone recognized the stakes were too high for the usual bureaucratic delays or corporate rivalry.


They developed a “patch”—like a software update for your computer or phone—that makes the screening systems better at catching these AI-redesigned threats. This patch has already been distributed to DNA synthesis companies worldwide, strengthening our defenses before the vulnerability became public knowledge.


The collaboration itself was remarkable. As Horvitz noted, one of the biggest surprises wasn’t just the technical findings, but how quickly and effectively people from different sectors could come together and work at speed. Companies that normally compete with each other sat down at the same table. Government regulators worked alongside private industry. Academic researchers shared insights with corporate scientists.


Adam Clore from Integrated DNA Technologies, one of the major DNA manufacturers involved in the project, was refreshingly honest about the situation: “We’re in something of an arms race.” The patch helps, but it’s not perfect. Some AI-created threats can still get through, and as AI gets smarter, the security systems will need to keep evolving too.


This isn’t a one-and-done fix. It’s the beginning of an ongoing process of testing, updating, and strengthening our defenses. Think of it like antivirus software on your computer—it needs regular updates to stay effective against new threats.



Why This Matters to You



You might be thinking, “I’m not a scientist. I don’t work in a lab. Why should I care about this?”


Here’s why: The same AI technology that could be misused to create biological threats is also revolutionizing medicine in ways that could directly benefit you and your loved ones. We’re talking about potential breakthroughs in cancer treatment, new vaccines that could be developed in months instead of years, personalized therapies tailored to your specific genetic makeup, and cures for diseases we can’t effectively treat today.


Right now, AI is helping researchers design new proteins that could neutralize snake venom, making antivenom treatments safer and more effective. It’s being used to develop enzymes that break down plastics, potentially helping solve our pollution crisis. Scientists are using these tools to understand Alzheimer’s disease at a molecular level, opening pathways to treatments that might actually slow or reverse cognitive decline.


Startups like Generate Biomedicines and Isomorphic Labs (an Alphabet company spun out of DeepMind) are using AI protein design to accelerate drug discovery dramatically. What used to take years of trial and error in the lab can now be simulated and tested virtually in weeks or months. The potential for medical breakthroughs is enormous.


This is what experts call a “dual-use” technology—it can be used for tremendous good or serious harm. It’s like how a knife can be used to prepare food or as a weapon. Nuclear technology can power cities or level them. The internet can connect people and spread knowledge, or spread misinformation and enable cybercrime. The tool itself isn’t inherently bad; it depends on who’s using it and for what purpose.


The challenge we face as a society is this: how do we capture the benefits of this AI technology while preventing the dangers? How do we enable life-saving research while blocking life-threatening misuse?



Understanding the Stakes



To really grasp why this matters, consider what’s at stake. We live in a world where biological threats—whether naturally occurring pandemics or deliberately engineered pathogens—represent one of the most serious risks to global security and public health.


The COVID-19 pandemic showed us how quickly a biological threat can spread and how devastating the consequences can be. Now imagine if someone with malicious intent could use AI to design something even more dangerous, specifically engineered to evade our detection systems and defenses.


At the same time, we’re on the cusp of a biological revolution. The ability to design custom proteins opens doors we couldn’t even approach before. We could potentially:


  • Design proteins that target and destroy cancer cells with precision

  • Create universal vaccines that protect against entire families of viruses

  • Develop treatments for genetic diseases by designing corrective proteins

  • Engineer solutions to environmental problems like oil spills or toxic waste

  • Produce sustainable alternatives to chemicals and materials that currently harm our planet



These aren’t science fiction dreams—they’re active areas of research that AI is accelerating dramatically. The Microsoft findings don’t mean we should stop this research. They mean we need to pursue it more carefully and thoughtfully.



The Bigger Picture: Where Should We Defend?



This discovery raises an important strategic question: Where should we focus our defenses?


Right now, our main security checkpoint is at DNA synthesis companies—the places where people order the genetic material they need for their research. This makes sense for several practical reasons. In the United States, there are only a handful of major companies doing this work, and they cooperate closely with government security agencies. It’s a natural chokepoint where we can monitor what’s being created.


These companies have invested heavily in screening technology and biosecurity protocols. They train their staff to recognize suspicious orders. They have relationships with law enforcement and intelligence agencies. They’re already part of our biosecurity infrastructure.


Some experts argue we need to move our defenses earlier in the process—building security directly into the AI systems themselves. This could mean limiting what information AI can provide, restricting what designs it can generate, or building in safeguards that prevent the creation of dangerous molecules. It’s like putting a safety lock on a gun rather than just securing the ammunition.


But others point out that AI technology is already widespread and getting more accessible every day. Unlike DNA synthesis, which is concentrated in a few specialized companies, anyone with enough computing power can now train AI models. The software is often open-source, meaning the code is freely available. The knowledge of how to use these tools is spreading rapidly through academic papers, online tutorials, and scientific conferences.


As Clore put it: “You can’t put that genie back in the bottle.” If someone has the resources and knowledge to try to trick DNA companies into making a dangerous sequence, they probably also have the resources to train their own AI model. The cat is out of the bag, as they say.


This doesn’t mean we should give up on controlling AI systems—it just means we need multiple layers of defense. Think of it like home security: you want good locks on your doors, but you also want an alarm system, maybe cameras, and engaged neighbors who watch out for suspicious activity. No single measure is perfect, but together they create effective security.



What Makes This Research Different



What sets the Microsoft research apart is not just what they found, but how they handled it. In the world of cybersecurity, there’s an established practice: when you discover a vulnerability, you quietly alert the affected parties and give them time to fix it before going public. This is called “responsible disclosure.”


Microsoft applied this principle to biosecurity. They could have published their findings immediately and grabbed headlines. Instead, they recognized that announcing the vulnerability before fixing it could actually make the problem worse—potentially giving bad actors a roadmap for exploitation.


The team adapted techniques from cybersecurity and applied them to biology. They played both attacker and defender, using AI to find weaknesses and then figure out how to fix them. This kind of testing—trying to break your own systems to find vulnerabilities before someone else does—needs to become standard practice in biosecurity.


They also made careful ethical choices. All their testing was done entirely through computer simulation. They never actually produced any of the toxic proteins they designed. They didn’t want any perception that Microsoft was developing biological weapons, and they didn’t want to risk any accidental release of dangerous materials.


Furthermore, they’re not sharing all the details of their methods. The published paper describes what they found and how they fixed it, but some specifics about their techniques and which toxins they tested remain confidential. This is deliberate—they want to inform the scientific and security communities without creating a how-to manual for bad actors.



What Happens Next: The Road Ahead



Microsoft’s research does more than just expose a problem—it shows us a path forward. But that path requires sustained effort and commitment from multiple stakeholders.


First, we need constant vigilance. Security systems need regular testing and updates, just like your phone or computer. The patch that Microsoft helped develop is important, but it’s just the beginning. As AI capabilities continue to advance, our defenses must advance with them. This means ongoing investment in biosecurity research, regular testing of screening systems, and rapid response when new vulnerabilities are discovered.


Second, we need better teamwork. The success of the Paraphrase Project (as Microsoft called their 10-month collaboration) shows what’s possible when companies, government agencies, and researchers work together toward a common goal. This kind of cooperation needs to become routine, not exceptional. We need established channels of communication, pre-existing relationships built on trust, and clear protocols for handling emerging threats.


The DNA synthesis industry, despite being competitors in the marketplace, needs to continue sharing threat intelligence and best practices. Government agencies need to facilitate this cooperation while also providing oversight and support. Academic researchers need to stay engaged with practical biosecurity challenges, not just theoretical ones.


Third, we need smart regulations. Our current regulatory framework for biological research was designed for a different era—one where creating new biological materials required years of expertise and expensive laboratory equipment. AI is changing that calculus dramatically.


Policy makers need to understand these technologies well enough to regulate them effectively. This means investing in scientific advisors, holding hearings with experts, and crafting regulations that are flexible enough to adapt as technology evolves. The executive order on biological research safety issued in May 2025 is a start, but we need concrete recommendations and implementation plans.


Dean Ball from the Foundation for American Innovation emphasizes the urgency: This discovery “demonstrates the clear and urgent need for enhanced screening procedures coupled with a reliable enforcement and verification mechanism.” In other words, we need both better technology and better systems for ensuring that technology is actually used.


Fourth, we need built-in safeguards. New AI tools should have security features from the start, not added as an afterthought. When researchers develop new AI models for protein design, biosecurity should be a core consideration from day one, just like safety testing is a core part of developing new drugs.


Some in the AI research community are already working on this. They’re exploring ways to build models that simply refuse to generate dangerous outputs, or that flag concerning queries for human review. It’s technically challenging—you want to block malicious uses without hindering legitimate research—but it’s necessary work.
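
As a rough sketch of that idea, and nothing more, here is a minimal example. Everything in it is hypothetical: the keyword list, the review step, and the stand-in design function are invented, and simple keyword matching would be far too easy to evade in practice. It only shows the shape of a guardrail that routes worrying requests to people instead of answering them automatically.

# Hypothetical sketch of a "flag for human review" guardrail in front of a
# protein-design model. The keywords, the review step, and the model call are
# all invented for illustration; real safeguards would need to understand what
# a request is asking for, not just which words it contains.

REVIEW_KEYWORDS = {"ricin", "botulinum", "toxin", "neurotoxin"}

def needs_human_review(request: str) -> bool:
    """Crude triage: route any request mentioning a listed term to a reviewer."""
    text = request.lower()
    return any(keyword in text for keyword in REVIEW_KEYWORDS)

def run_protein_design_model(request: str) -> str:
    # Placeholder standing in for an actual protein-design model.
    return f"Generated candidate designs for: {request}"

def handle_design_request(request: str) -> str:
    if needs_human_review(request):
        # In a real system this would go to trained biosecurity reviewers,
        # since many flagged requests turn out to be legitimate research.
        return "Held for human review"
    return run_protein_design_model(request)

print(handle_design_request("Design a more stable variant of this neurotoxin"))
print(handle_design_request("Design an enzyme that breaks down PET plastic"))

Real safeguards would have to reason about intent and biology rather than vocabulary, which is part of why researchers describe this as technically challenging but necessary work.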



A Warning and A Blueprint



Eric Horvitz, who led the Microsoft research, emphasizes an important point: we can pursue innovation and safety at the same time. We don’t have to choose between advancing AI in medicine and protecting against misuse. We need both, and they can reinforce each other.


“By building guardrails, policies and technical defenses, we can help ensure that people and society benefit from AI’s promise while reducing the risk of harmful misuse,” he explains. This isn’t just nice rhetoric—the Paraphrase Project proved it’s achievable in practice.


The Microsoft team showed that when the right people work together with urgency and purpose, we can identify and fix security problems before they’re exploited. They found the vulnerability, developed a solution, and implemented it globally—all before making the discovery public. That’s the blueprint we need to follow for future challenges.


But it requires commitment. It requires resources. It requires people to take these threats seriously, even when they seem abstract or unlikely. It requires companies to invest in security even when there’s no immediate profit motive. It requires government agencies to move with unusual speed and flexibility. It requires all of us to stay informed and engaged.



The Bottom Line



Think of this research as a controlled test of the smoke detectors: Microsoft deliberately started a small, contained fire to see whether the alarms would go off. They found that many of them wouldn’t go off when they should. So they upgraded the system before a real fire could start.


That’s good news—we found the problem in time. But it’s also a warning: there will be more tests to come. As AI systems become more capable, they’ll find new ways to bypass our defenses. We need to stay ahead of that curve.


The uncomfortable truth is that as AI gets more powerful, we’ll face more of these challenges. Every major breakthrough in science comes with potential risks. The discovery of nuclear fission led to both power plants and weapons. The invention of the internet brought both unprecedented connectivity and new forms of crime. Genetic engineering offers miracle cures and potential dangers.


This is the nature of powerful technologies. The key is staying ahead of the risks while capturing the benefits. It’s not easy, but the Microsoft research shows it’s possible.


We’re entering an era where AI can help us cure diseases and save lives on an unprecedented scale. Researchers working in labs right now are using these tools to tackle problems that seemed unsolvable just a few years ago. Children being born today might grow up in a world where cancer is routinely curable, where genetic diseases can be corrected before symptoms appear, where new pandemics can be stopped in their tracks because we can design and deploy vaccines in weeks instead of years.


But we’re also entering an era where the same technology, in the wrong hands, could create new threats. A malicious actor with access to AI protein design tools and enough knowledge could potentially design biological weapons that are harder to detect and defend against than anything we’ve faced before.


The Microsoft research shows we have the capability to manage both realities—but only if we commit to doing the work. Only if we stay vigilant. Only if we cooperate across boundaries that usually divide us. Only if we invest in defenses as much as we invest in capabilities.


The race between AI-enabled threats and AI-enhanced defenses has begun. The encouraging news is that when we work together proactively, we can stay ahead. The Microsoft “Paraphrase Project” proves it’s possible. Now we need to make this kind of vigilance and collaboration the norm, not the exception.


This isn’t someone else’s problem. It’s not just for scientists or security professionals to worry about. The future of biological security—and the future of medical innovation—depends on all of us being informed, engaged, and supportive of the necessary investments in safety and security.


The good news? We caught this one in time. We found the vulnerability, fixed it (mostly), and made our systems stronger. That’s a win. But it’s just one battle in a longer war. The question now is: will we learn the lessons and apply them going forward?


Based on what Microsoft and their collaborators achieved, there’s reason for optimism. But optimism needs to be matched with action. The blueprint is there. Now we need to follow it.





Reflection Box



This report forces us to look at the fragile balance between progress and protection. We celebrate AI’s promise to revolutionize medicine, but we must also face the sobering reality of its potential misuse. True leadership in this new era will require vigilance, humility, and cooperation across borders and disciplines. The lesson here is clear: innovation without responsibility becomes a threat, but responsibility with innovation becomes a pathway to hope.


— Dr. Wil Rodríguez




🔔 Join the Conversation at TOCSIN Magazine

For more in-depth articles on technology, society, and the future, visit tocsinmag.com and become part of our community of critical thinkers and visionaries.
