"And what we will do by racing to create a new FDA is destroying American innovation in the sector. And other countries will not slow down. They will beat us to the punch here."
The prediction is framed as a conditional: if the U.S. “creates a new FDA” for AI in the near term, then U.S. innovation will be destroyed and other countries will overtake it in AI.
Antecedent has not occurred.
- As of late 2025, the U.S. still has no single, FDA‑style federal AI regulator. Federal oversight remains decentralized and sector‑specific, with existing agencies (FTC, FDA, etc.) applying existing laws; no new, overarching AI regulator has been created.
- Legal and policy surveys explicitly note that “there is no single federal agency charged with regulating AI” and that the U.S. lacks an EU‑style comprehensive AI statute, instead relying on guidance (e.g., NIST AI Risk Management Framework), executive actions, and a patchwork of state laws. (en.wikipedia.org)
- The 2023 Biden Executive Order 14110 on AI created a coordinated federal approach and directed agencies to set standards, but did not establish a licensing regulator with FDA‑like powers, and it was rescinded in January 2025. (en.wikipedia.org)
- The U.S. AI Safety Institute, created under NIST, is a testing and standards body, not a broad regulatory agency. (en.wikipedia.org)
- The Trump administration’s 2025 AI Action Plan explicitly moves toward less federal AI regulation, not a new regulator. (lexmundi.com)
Because the U.S. has not, in fact, created an FDA‑like AI regulator in the 2023–2025 “near term,” the condition that triggers Sacks’s predicted outcome has not been met.
Current evidence on AI leadership doesn't test the counterfactual.
- Under the comparatively light regulatory regime actually in place, recent analyses (e.g., Stanford's 2025 AI Index) find that the U.S. remains the global leader in cutting‑edge AI models and private AI investment, though China is closing the gap. (axios.com)
- European assessments describe the U.S. as having “hegemonic power” in AI and warn that the EU risks becoming a “digital colony” relative to the U.S. and China, again indicating continued U.S. strength. (euronews.com)
However, these data only describe what happened without an FDA‑style AI regulator; they do not show what would have happened had such a regulator been created. Because the necessary trigger (creating a new FDA for AI in the near term) never materialized, the causal claim about its consequences cannot be empirically verified or falsified. The prediction's if‑then statement remains a counterfactual, and its accuracy is therefore inconclusive, not demonstrably right or wrong.