- Bill Gates warned AI could help non-government groups develop bioterrorism weapons using open-source software
- He stressed AI poses significant global threats despite its societal benefits, urging careful governance
- Gates highlighted AI's impact on job markets, especially disrupting software development roles
Microsoft co-founder Bill Gates has warned that artificial intelligence (AI) could be used by non-government groups with access to open-source tools to develop bioterrorism weapons. In his annual letter, titled 'Optimism with footnotes' and released on Friday (Jan 9), Gates said that although AI will change society more than any other technology, it also carries considerable threats to the global population.
"In 2015, I gave a TED talk warning that the world was not ready to handle a pandemic. If we had prepared properly for the COVID-19 pandemic, the amount of human suffering would have been dramatically less," wrote Gates.
"Today, an even greater risk than a naturally caused pandemic is that a non-government group will use open source AI tools to design a bioterrorism weapon."
Gates said humanity needed to be deliberate about how AI is developed, governed and deployed, adding that there was no upper limit on how intelligent AI systems would become.
"I believe the advances will not plateau before exceeding human levels," said Gates.
Apart from the threat of bioweapons, Gates highlighted that the next big challenge will be disruption to the job market. He said the pace of AI's advance was already reshaping demand for roles in areas such as software development.
Threat Of Bioweapons
Biological weapons involve the use of bacteria, viruses, or toxins to cause disease or inflict harm on people, animals, and agriculture. This practice is categorised as biowarfare when conducted by nation-states and bioterrorism when orchestrated by non-state actors. Beyond these direct threats, the illicit trafficking of biological materials also represents a critical challenge for the future of humanity.
In October last year, Microsoft bioengineers exposed flaws in the safeguards intended to block non-state actors from developing bioweapons. In the study, published in Science, researchers selected 72 different proteins that are subject to legal controls, such as ricin -- a highly lethal plant-derived toxin.
Scientists used AI protein design tools to generate more than 70,000 DNA sequences encoding variants of these proteins, some of which were predicted to remain toxic. The researchers then asked four suppliers of the biosecurity screening software used by DNA synthesis labs to run the sequences through their systems.
"The tools failed to flag many of these sequences as problematic. Their performance varied widely. One tool flagged just 23 per cent of the sequences," the study highlighted.
The study's authors urged DNA vendors to adopt more stringent screening software, and AI companies to build additional safeguards into their protein design tools.