Last updated Nov 29, 2025
Chamath @ 00:43:44
Verdict: Inconclusive
science, health
As advanced biological design tools become widely accessible over the coming decades, the absolute number of malicious actors able to attempt catastrophic bio-attacks will increase substantially, creating a materially higher global bio-risk driven by scale rather than by a change in per-capita malice rate.
"you then have a law of large numbers problem"
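
The quoted remark is, at bottom, an arithmetic claim: if wider tool access enlarges the pool of people capable of attempting an attack, the expected number of attempts rises with that pool even when the fraction of malicious individuals stays fixed. The sketch below illustrates only that scaling logic; every number in it is a hypothetical placeholder, not an estimate of real-world risk.

```python
# Minimal sketch of the scale ("law of large numbers") argument in the quote.
# All values are hypothetical placeholders: the only point is that expected
# attempts grow with the size of the capable population even when the
# per-capita malice rate is held constant.

def expected_attempts(capable_population: int, per_capita_malice_rate: float) -> float:
    """Expected attempted attacks = size of capable pool x fixed per-capita rate."""
    return capable_population * per_capita_malice_rate

FIXED_RATE = 1e-6  # hypothetical constant per-capita rate of malicious intent

for pool in (10_000, 1_000_000, 100_000_000):  # wider tool access -> larger capable pool
    print(f"{pool:>11,} capable actors -> {expected_attempts(pool, FIXED_RATE):.2f} expected attempts")
```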
Explanation

Reasoning about time horizon

  • The prediction is explicitly about what will happen "over the coming decades." As of Nov 2025, only ~5 years have elapsed since the 2020 statement, so the forecast period is far from over.

What we do see by 2025

  • Advanced biological and AI-enabled design tools have clearly become more capable and somewhat more accessible:
    • AI-driven biodesign systems (e.g., the Moremi Bio agent) have been shown experimentally to generate large numbers of novel toxic proteins and small molecules, raising dual-use concerns and the prospect that such capabilities could become accessible to people with limited expertise. (arxiv.org)
    • A Microsoft-led study generated tens of thousands of toxic protein sequences that evaded existing DNA-synthesis screening, illustrating how modern AI tools can create dangerous biological designs at scale. (washingtonpost.com)
    • A 2023 review on AI and biological misuse finds that large language models and specialized biological design tools together could both lower barriers for non-experts and expand capabilities for sophisticated actors, potentially broadening the set of people who could attempt serious biological misuse. (arxiv.org)
    • The U.S. National Academies (2025) conclude existing AI-enabled biological tools can already design or redesign toxins, while stressing that physical production and technical limits still constrain their use for large-scale or pandemic-level attacks. (nationalacademies.org)

But key parts of the prediction remain unverified

  • Major policy and security assessments emphasize that, so far, AI’s effect on overall biological risk has been limited and mostly theoretical:
    • A 2024 CNAS report states that, although AI could greatly increase biocatastrophic risk in the future, its actual impact on biological risk to date has been marginal, and many worst‑case capabilities remain speculative. (cnas.org)
    • OpenAI’s 2024 evaluation of GPT‑4 for biological threat creation found at most a mild uplift in performance, not a dramatic enabling of non‑expert attackers. (openai.com)
    • A 2025 CSIS report frames AI‑enabled bioterrorism as a domain where barriers are expected to fall and risks may accelerate, but again treats this mainly as a forward‑looking concern requiring preparation, not as an already‑realized surge in catastrophic capability among many actors. (csis.org)
  • There is no public evidence by late 2025 of a measurable, large increase in the number of malicious actors actually attempting catastrophic bio‑attacks, nor of catastrophic attacks carried out using these new tools.

Conclusion

  • The premise of the prediction (that the tools are becoming more powerful and more accessible, and that experts worry this widens the pool of potential malicious users) is increasingly reflected in the literature and in policy debates.
  • However, the core claim—that this will materially raise global catastrophic biorisk via a substantially larger pool of malicious actors over the coming decades—is about a longer timeframe and depends on developments (capabilities, governance, actual attacker behavior) that have not yet fully unfolded.

Because the relevant decades-long period is far from complete and the most consequential part of the claim (a large realized increase in catastrophic bio‑risk from many new malicious actors) cannot yet be empirically assessed, the prediction is too early to call and is therefore inconclusive.