Last updated Nov 29, 2025
Chamath @ 01:07:23 · Inconclusive
Tags: ai, government
Over time, AI regulation will evolve into a set of domestic regulatory bodies in major jurisdictions (US, EU, Canada, Japan, China), analogous to the FDA/EMA, whose AI safety guardrails and standards will substantially overlap rather than diverge completely.
I think you need to have a domestic organization that protects us. And I think Europe will have their own, again, FDA versus EMA. Canada has its own, Japan has its own, China has its own. And they have a lot of overlap and a lot of commonality in the guardrails they use. And I think that's what's going to happen here.

View on YouTube
Explanation

The prediction is explicitly long-term ("over time") and concerns how AI regulation will evolve structurally, so as of November 30, 2025 there has not been enough time for the end state to emerge.

What we see so far:

  • The EU has adopted the AI Act with a centralized AI Office in the European Commission plus national competent authorities, a risk-based framework, and phased enforcement into 2026–2027, which is the closest thing to a dedicated AI regime akin to medicines regulation. However, implementation details and even potential watering‑down or delays are still being debated, so the structure is not yet settled. (euronews.com)
  • In the US, Biden’s 2023 Executive Order 14110 created a national AI governance program and an AI Safety Institute at NIST, but Trump rescinded that order in January 2025 and reoriented the institute (now CAISI) toward competitiveness and security, leaving no FDA‑like, independent AI regulator and making the long‑term institutional path highly uncertain. (en.wikipedia.org)
  • Canada’s proposed Artificial Intelligence and Data Act (AIDA), which would have created a federal AI regulator, stalled in committee and then died on the order paper when Parliament was prorogued in January 2025; as of early 2025, Canada has no binding, comprehensive federal AI law or dedicated AI authority. (dentons.com)
  • Japan continues to favor a soft‑law, principles‑based, risk‑based approach with guidelines rather than a horizontal, binding AI statute or a standalone AI regulator, even as it leads the G7 Hiroshima AI Process for global principles and a code of conduct. (csis.org)
  • China has binding, sectoral AI rules (e.g., the 2023 Interim Measures for generative AI) enforced by the Cyberspace Administration of China alongside several other ministries, but not a single medicines‑style AI regulator; its guardrails are tailored to domestic political and social stability goals. (loc.gov)

Convergence vs. divergence so far: There is emerging cross‑jurisdictional commonality at the level of principles and voluntary standards: the G7 Hiroshima AI Process produced shared Guiding Principles and an International Code of Conduct, now backed by an OECD reporting framework meant to promote interoperability of risk‑management practices across countries. (international.gc.ca) This supports the idea of overlapping "guardrails," but these are mostly non‑binding and sit on top of still‑divergent domestic legal structures.

Given that:

  • several of the named jurisdictions (US, Canada, Japan) do not yet have stable, FDA‑like AI regulators,
  • the EU and China have taken quite different hard‑law approaches, and
  • political shifts (e.g., in the US and EU) are actively reshaping AI governance in real time,

we cannot yet say whether the long‑run outcome will be a set of domestically anchored, EMA/FDA‑analog regulatory bodies with substantially overlapping standards, or a more fragmented landscape.

Therefore, the evidence to date is insufficient to judge the prediction as either right or wrong, and the appropriate classification is "inconclusive" (too early).