This will be beneficial only for political insiders who will basically be able to get their projects and their apps approved with a huge deadweight loss for the system, because innovation will completely slow down.
Since April 2023, several AI-focused public bodies have been created, but none cleanly matches the strong condition Sacks described (a centralized U.S.-style gatekeeper that must approve most AI models and apps before deployment), and the available evidence is too early and too noisy to show the "innovation will completely slow down" outcome he predicted.
Regulatory bodies that did emerge
- The U.S. created the AI Safety Institute within NIST in late 2023. Its mandate is to develop testing, evaluation methods, and safety guidelines and to collaborate with firms like OpenAI and Anthropic on pre‑ and post‑release model testing—not to license or approve models before they reach the market. Companies still launch models without formal government approval in most domains. (nist.gov)
- The EU established the European AI Office to implement and enforce the EU AI Act, especially for general‑purpose AI. It coordinates enforcement, issues codes of practice, and can sanction non‑compliance, but most high‑risk AI uses are handled via national market‑surveillance authorities and notified bodies, not a single centralized approvals agency. Key high‑risk provisions have only just started to phase in and, as of November 2025, many are delayed to 2027 to avoid harming competitiveness. (digital-strategy.ec.europa.eu)
- China’s Cyberspace Administration (CAC) did adopt binding generative‑AI regulations in August 2023 that require government approval before public deployment of many models, fitting Sacks’ scenario better than the U.S. or EU. Yet since then China has become second only to the U.S. in the number of large‑scale AI models, with over 40 approved and multiple heavily funded startups (e.g., Moonshot AI) emerging under this regime. (time.com)
Innovation outcomes to date
Globally, 2023–2025 saw an acceleration in AI activity: rapid revenue growth at firms like Anthropic, continuing waves of generative‑AI startup cohorts, and tens of billions of dollars invested into AI startups and tools. (reuters.com) In Europe, industry figures warn the AI Act could slow innovation, but even they generally emphasize that it is too early to know its long‑run impact. (verdict.co.uk)
Why the prediction is not yet testable
- The U.S. never created the kind of centralized licensing body Sacks was arguing against, so his conditional claim has not really been triggered in his primary jurisdiction.
- Where more centralized approval regimes do exist (notably China, and partially the EU for general‑purpose AI), they are relatively new, still evolving, or not fully in force, and are entangled with other factors (geopolitics, chip export controls, macroeconomy) that also affect innovation.
- Available evidence does not yet show that the overall rate or diversity of AI innovation has “completely” or “dramatically” slowed in those jurisdictions; at the same time, formal empirical work quantifying any chilling effect is only beginning.
Given these points, the prediction cannot be cleanly scored as right or wrong as of November 30, 2025; it remains inconclusive (too early to tell).