Last updated Nov 29, 2025
Broad, coordinated global regulation that effectively stops or tightly restricts AI model development and deployment worldwide will not occur; even if the U.S. imposes strong regulations on AI models, many other countries will not follow in lockstep, and advanced AI models will continue to be developed and exploited competitively outside the U.S.
If the US tries to regulate it, or the US tries to come in and stop the application of models in general or regulate models in general, you're certainly going to see those models continue to evolve and continue to be utilized in very powerful ways that are going to be advantageous to places outside the US. There's over 180 countries on Earth. They're not all going to regulate together... To try and get coordination around the software models that are being developed, I think, is pretty naive.
Explanation

Evidence since 2023 shows that no broad, tightly coordinated global regime has emerged to halt or uniformly constrain AI model development worldwide; instead, advanced models are being aggressively developed outside the U.S.

  1. No global, lockstep regulatory regime that “stops” AI development

    • The UN General Assembly has adopted global AI resolutions (e.g., the March 2024 resolution on “safe, secure and trustworthy” AI and later follow‑ups), but these are non‑binding norm‑setting texts, not an enforceable global regulatory regime that blocks model development. (reuters.com)
    • The 2023 Bletchley Declaration from the UK’s AI Safety Summit and the subsequent G7 Hiroshima Process likewise create high‑level principles among subsets of countries, not binding global restrictions, and they explicitly aim to unlock benefits while managing risk rather than pause frontier models. (gov.uk)
    • The Council of Europe’s 2024 Framework Convention on AI is a real treaty, but it has a limited membership (mostly European and a few partners) and focuses on aligning AI with human rights and rule of law, not forbidding advanced AI systems or stopping their development globally. (en.wikipedia.org)
    • The EU AI Act is binding but only within the EU and is risk‑based: it phases in obligations and bans only certain “unacceptable risk” uses while explicitly permitting broad AI R&D and deployment under conditions. It is repeatedly described as a regional framework, not a global moratorium. (en.wikipedia.org)
    • Meanwhile, major powers diverge in approach: China has its own generative‑AI rules; the U.S. used an executive order in 2023 to direct federal AI oversight and then, under a new administration, has been rolling back or refocusing some safety structures (e.g., rebranding the AI Safety Institute and emphasizing competitiveness). Together this points to fragmented, nationally driven regulation, not a single coordinated global clampdown. (en.wikipedia.org)
  2. Continued, competitive development of advanced AI outside the U.S.

    • Chinese firms (Alibaba, ByteDance, DeepSeek, Baidu, Huawei, MiniMax, etc.) have continued to train and release cutting‑edge large language models, including many open‑weight or open‑source variants (Qwen 2.5, Qwen3, DeepSeek‑V3/R1). Studies now show China leading the world in downloads of “open” AI models, indicating vigorous non‑U.S. development. (ft.com)
    • To work around U.S. chip export controls, Chinese tech giants are training frontier models like Qwen and Doubao in overseas data centers in Southeast Asia using Nvidia GPUs, demonstrating that even strong U.S. export and safety policies are not preventing competitive model development abroad. (reuters.com)
    • In Europe, Mistral AI has become a leading non‑U.S. frontier‑model developer, repeatedly releasing powerful open‑weight and reasoning models and attracting multi‑billion‑dollar funding, explicitly positioning itself as a European answer to OpenAI. (en.wikipedia.org)
    • Globally, the list of large language models now includes many non‑U.S. or widely open‑licensed systems (DeepSeek, Qwen, Mistral, MiniMax, etc.), confirming that frontier‑level AI capabilities are not confined to U.S. firms and are not being universally throttled by harmonized regulation. (en.wikipedia.org)

Taken together, the world has moved toward a patchwork of national and regional AI rules plus soft global norms, while powerful models proliferate and are strategically developed (and sometimes routed around U.S. controls) outside the U.S. That matches Friedberg's claim that a fully coordinated global regulatory stop on AI was unlikely, and that models would continue to evolve and be exploited competitively in many jurisdictions even if the U.S. imposed strong regulations.