Last updated Nov 29, 2025

E124: AutoGPT's massive potential and risk, AI regulation, Bob Lee/SF update

Fri, 14 Apr 2023 08:39:00 +0000
Chamath @ 00:20:12 · Inconclusive
ai
AI agents will not be able to fully replace good human judgment for several decades (i.e., not before the 2040s–2050s).
I think that humans have judgment, and I think it's going to take decades for agents to replace good judgment.
Explanation

As of November 30, 2025, we are far from the 2040s–2050s window that Chamath is talking about. The claim is explicitly about what will not happen until several decades from the podcast date (April 2023). Since we have not yet reached those decades, we cannot evaluate whether AI agents will or will not fully replace good human judgment by that time. Therefore, the prediction's truth value cannot currently be determined; the verdict is inconclusive (too early to judge).

Chamath @ 00:12:27 · Inconclusive
venture, ai
Given the impact of generative AI on company formation efficiency, a $1B venture fund will be oversized; for roughly the next 3–4 years from April 2023, an appropriately sized fund for new investments would be on the order of $50M deployed over that four-year period.
Look, fund four for me was $1 billion. Does that make sense?... For the next 3 or 4 years, no. The right number may actually be $50 million invested over the next four years.
Explanation

The prediction is explicitly about the next 3–4 years from April 2023, i.e., approximately April 2026–April 2027. As of today (November 30, 2025), only about 2.6 years of that window have elapsed, so the full period Chamath was talking about has not yet played out.

Available evidence is mixed and not decisive either way:

  • Venture markets have clearly shifted away from the 2020–2021 mega‑bubble: reporting in 2023–2024 shows harder fundraising, smaller checks, and underperformance/strain at large multi‑stage and late‑stage funds, with commentary that big VC funds have been underperforming smaller ones and are struggling to raise and deploy capital efficiently. (institutionalinvestor.com) This is directionally consistent with the idea that very large funds can be oversized in a tougher environment.
  • At the same time, large funds still dominate total capital raised across private markets: one fundraising review finds that funds larger than $1B accounted for about 81% of total capital raised in 2024, indicating that LPs continue to back very large vehicles despite headwinds. (aranca.com) And within AI specifically, major AI‑focused funds have been raised in the high hundreds of millions (e.g., Radical Ventures’ nearly $800M AI fund in 2024), which is close to the $1B scale Chamath argued would be oversized. (ft.com)
  • The average U.S. venture fund size fell to around $105M in early 2024, with fewer billion‑dollar-plus funds closing, and press coverage explicitly framing this as a “moment” for smaller and mid‑sized funds—even while giants like a16z still raise multi‑billion vehicles. (wsj.com) That suggests partial support for the idea that smaller funds may be better matched to the current opportunity set, but it does not validate a specific “right” number like $50M, nor prove that $1B is categorically too large.
  • Chamath himself has since emphasized pulling back from traditional VC fundraising and largely investing his own sizable balance sheet, criticizing the large‑fund VC model as misaligned—behavior that is consistent with his skepticism about big funds, but not by itself proof that his numerical sizing call was correct. (institutionalinvestor.com)

Because (1) the time horizon he specified has not yet finished, and (2) there is no clear, agreed‑upon metric that would already let us declare that $1B vehicles are definitively “oversized” or that ~$50M over four years is empirically “right” for new-company formation in a generative‑AI era, this prediction cannot yet be judged as right or wrong. It is therefore best classified as inconclusive (too early).
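
As context for why the sizing claim is hard to score, here is a quick, purely illustrative Python sketch of the deployment paces the two numbers imply. The four-year window is Chamath's own framing; applying the same window to the $1B fund is an assumption made only for comparison.

```python
# Illustrative comparison of annual deployment pace under the two fund sizes
# Chamath contrasts. Assumption (not from the episode): the $1B fund is also
# deployed over the same four-year window he applies to the ~$50M figure.

big_fund = 1_000_000_000    # "fund four for me was $1 billion"
small_fund = 50_000_000     # "the right number may actually be $50 million"
years = 4                   # "invested over the next four years"

big_pace = big_fund / years      # ~$250M deployed per year
small_pace = small_fund / years  # ~$12.5M deployed per year

print(f"$1B over {years} years:  ${big_pace / 1e6:.0f}M per year")
print(f"$50M over {years} years: ${small_pace / 1e6:.1f}M per year")
print(f"Implied reduction in deployment pace: {big_pace / small_pace:.0f}x")
```

A roughly 20x smaller deployment pace is one concrete way to see how aggressive the claim is, and why, without an agreed benchmark for the "right" fund size, it resists a clean right-or-wrong score today.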

Chamath @ 00:21:57 · Inconclusive
ai, tech
Large, sales- and marketing-heavy enterprise software organizations will begin to be materially cannibalized by AI-agent-based competitors, making their traditional go-to-market and sales motions unnecessary over the coming years (no exact year given, but framed as near- to medium-term).
So I think it's just a matter of time until we start to cannibalize these extremely expensive, ossified, large organizations that have relied on a very complicated go-to-market and sales and marketing motion. I don't think you need it anymore. In a world of agents and AutoGPTs.
Explanation

As of late 2025, there is clear movement toward AI-agent-based software, but not clear evidence that large, sales- and marketing-heavy enterprise software vendors have been materially cannibalized or that their traditional go-to-market motions have become unnecessary.

  1. Large incumbent enterprise vendors remain dominant and sales-led. Salesforce is still the world’s largest enterprise applications vendor, generating tens of billions in annual revenue and maintaining a workforce of over 76,000 employees, with tens of thousands in sales and support roles, indicating that a classic sales-intensive GTM model remains central to its business. (en.wikipedia.org)

  2. Incumbents are adding AI agents, not being displaced by agent-native competitors. Salesforce and other major enterprise software firms (e.g., Snowflake, ServiceNow) have launched their own AI agent platforms such as Agentforce and similar offerings, folding agents into their existing products and sales channels rather than being replaced by external AutoGPT-style competitors. (investors.com)

  3. AI agent startups are growing but have not obviously cannibalized the large vendors. Startups like Sierra (customer-service agents), Decagon (AI support agents), Docket (AI sales engineer/seller), and Alta (AI SDR and GTM agents) have raised substantial capital and are scaling, with Sierra reportedly reaching a ~$10B valuation and >$100M ARR. But these numbers are still small relative to incumbents’ revenue bases, and coverage frames them as new competitors operating alongside — not yet hollowing out — giants like Salesforce. (sfchronicle.com)

  4. Industry analyses describe agentic AI as emergent and largely ahead of its full impact. Gartner estimates that only about 5% of enterprise software includes AI agents today, with a forecast of 40% by 2026, explicitly portraying the agent transition as still in its early phases. Articles on “agentic AI” stress that many enterprises are not yet ready to rely heavily on autonomous agents and are focusing on narrow, supervised deployments, again implying that a broad structural upheaval has not yet played out. (blockchain.news)

  5. AI is automating internal roles, but that’s not the same as external cannibalization of vendors or their GTM. Some companies, including Salesforce and Klarna, have reduced or reshaped support headcount by deploying AI agents, showing internal labor substitution. But this is different from AI-agent-based competitors making entire enterprise vendors or their sales organizations obsolete; the core vendors are, in fact, the ones deploying these agents. (businessinsider.com)

Because Chamath did not give a specific end date beyond a vague “matter of time” / “over the coming years,” and we are only about 2.5 years past the prediction, the medium-term horizon he seemed to imply has not clearly expired. At the same time, available evidence shows early competitive pressure and experimentation rather than the kind of large-scale cannibalization and obviation of traditional enterprise sales motions he envisioned.

Given the combination of (a) clear partial movement toward agents, but (b) ongoing dominance and sales-led GTM of incumbents, and (c) the absence of a firm deadline in the prediction, the fairest current verdict is that it is too early to definitively judge whether his forecast will ultimately prove right or wrong.

Within less than one year from April 2023, one-person teams using AI agents (e.g., AutoGPT-like systems) will be able to reconstruct full-stack equivalents of major enterprise software stacks, demonstrating viable end-to-end replacements.
You actually want to arm the rebels, and arming the rebels, to use the Tobi Lutke analogy here, would mean to seed hundreds of one-person teams and just say, go and build this entire stack all over again using a bunch of agents. Yeah. And I think recursively you'll get to that answer in less than a year.
Explanation

Within a year of April 2023 (i.e., by April 2024), AutoGPT‑style agents were still experimental and unreliable, not robust tools for rebuilding complex enterprise software.

AutoGPT itself was widely covered in 2023 as an autonomous agent framework, but contemporaneous reporting emphasized severe limitations: it frequently got stuck in loops, hallucinated, and was explicitly described as not production‑ready and better suited as a sandbox or proof‑of‑concept rather than something to run real systems on. (en.wikipedia.org) Broader surveys of agentic AI note that, even as of 2025, there are few real‑world applications of AI agents, and controlled studies found that such agents generally fail to complete the majority of tasks in simulated software‑company settings—indicating that robust, autonomous end‑to‑end software engineering was still out of reach, not something hundreds of solo teams had already accomplished a year after April 2023. (en.wikipedia.org)

The strongest real example in this direction is Devin, an "AI software engineer" announced in March 2024. Devin can autonomously complete some software tasks and build small applications, but reporting and benchmarks portray it as a tool that can handle portions of development work with non‑trivial failure rates, not as a system that single‑handedly rebuilds full, Salesforce‑ or SAP‑scale enterprise stacks for solo developers. (en.wikipedia.org) In parallel, while there are case studies of one developer using ChatGPT‑style tools to build and scale relatively simple SaaS products quickly, those are niche applications, not full‑fledged replacements for major enterprise platforms. (vocal.media)

Finally, industry anecdotes through 2023–2024 still describe companies struggling even to clone existing Salesforce orgs or to replace Salesforce with internal tools, often requiring teams and consultants rather than "hundreds of one‑person teams" succeeding with agentic AI. (reddit.com) Given the lack of any documented examples of solo, agent‑driven rebuilds of major enterprise software stacks by April 2024—and the consistent evidence that autonomous agents remained immature at that time—the prediction that this would be achieved "in less than a year" from April 2023 did not come true.

ai, tech
Text-to-video / AI VFX tools like Runway will reach visual quality comparable to The Mandalorian TV show within approximately two years of April 2023 (by around April 2025); the gap between AI-generated storyboards and final production-quality output will substantially close within about 30 months of April 2023 (by around October 2025).
And I said, hey, when would this reach the level that The Mandalorian TV show is? And he said within two years... The difference between the storyboards and the output is closing in the next 30 months, I would say.
Explanation

By late 2025, text‑to‑video and AI VFX tools have improved dramatically, but they still fall short of consistently matching something like Disney’s The Mandalorian in full production quality and reliability.

  1. State of text‑to‑video around and after April 2025

    • OpenAI’s Sora and successors can produce highly photorealistic, cinematic short clips (tens of seconds up to about a minute), but reviewers and OpenAI’s own system card note ongoing problems with complex physics, causality, and fine details such as faces and crowded scenes. External assessments explicitly say it is impressive but not perfect and that it will be a long time before text‑to‑video threatens traditional filmmaking. (en.wikipedia.org)
    • Runway’s Gen‑4 model (released March 2025) generates only 5–10‑second clips at 720p and is documented to suffer from motion artifacts, physics issues, and character inconsistency across separately generated clips. Reviews and Runway’s own positioning emphasize its use for previsualization, concept work, and short clips, not for end‑to‑end TV‑episode‑length production. (en.wikipedia.org)
    • Survey and benchmark work on text‑to‑video models in 2024–2025 highlights that even top models (Runway, Sora, Kling, etc.) still exhibit complex distortions, unnatural actions, and other artifacts that make high‑end, long‑form production challenging. (arxiv.org)
  2. Industry and critical judgment vs. Mandalorian‑level production

    • Hands‑on reviews of Sora after public release conclude that, despite its strengths, it is not ready for mainstream video production or commercial storytelling, especially for longer, controlled narratives; it performs best for relatively simple short clips and still fails unpredictably on complex prompts. (theverge.com)
    • Major studio partnerships (e.g., Lionsgate and AMC Networks with Runway) explicitly frame AI video as a tool for storyboarding, pre‑vis, background elements, and post‑production assistance, not as a direct replacement for full VFX pipelines on big‑budget shows. (theguardian.com)
    • A 2024 analysis of Runway’s film‑studio collaboration notes that while AI video can now create realistic moving imagery quickly, making an entire feature film with AI alone is still not expected in the near term; the tech is mainly used for segments and shorts. (ispr.info)
    • Large, high‑profile productions that leaned heavily on AI‑enhanced visuals, such as the Wizard of Oz adaptation for the Las Vegas Sphere, have drawn mixed or negative criticism specifically for the AI‑generated imagery quality, suggesting that fully AI‑driven visuals are still not consistently at top‑tier blockbuster TV/film standards. (en.wikipedia.org)
    • VFX‑community discussions in 2025 repeatedly point out persistent problems with long‑scene consistency, reproducibility, and character control in AI video tools, reinforcing that current systems are not yet a drop‑in replacement for traditional VFX workflows on dialogue‑heavy, multi‑shot scenes like those common in The Mandalorian. (reddit.com)
  3. Gap between AI storyboards / pre‑vis and final output

    • There has been clear progress toward closing the gap Jason talked about: tools like Runway Gen‑4, Open‑Sora 2.0 and other modern models are explicitly used for cinematic previsualization and concept development, and research systems such as CineVision integrate scripts with real‑time AI pre‑vis to accelerate storyboard creation and director–cinematographer collaboration. (en.wikipedia.org)
    • However, these tools are still primarily positioned as aids at the previz / ideation stage; the last mile from AI animatic to final, broadcast‑quality VFX is still handled largely with traditional techniques. Industry commentary and technical evaluations do not support the view that, by October 2025, the gap has substantially closed in the sense that AI can routinely generate final shots at Mandalorian‑like quality with the required control and length.

Conclusion: By the April 2025 two‑year mark (and even by late 2025), text‑to‑video and AI VFX systems can produce striking short clips and are heavily used for pre‑visualization, but they are not yet at a level where their typical, controllable output is comparable to the overall visual standard of a show like The Mandalorian, nor has the storyboard‑to‑final‑shot gap substantially closed in mainstream production pipelines. Given that the time windows in the prediction have now passed and the core claim has not materialized, the prediction is best judged as wrong, albeit directionally pointing toward real (but insufficient) progress.

ai, tech
Within the near term (implicit few-year horizon from April 2023), AI tools will enable hobbyists and amateurs to input a screenplay and automatically generate reasonably good-looking animated movies, but achieving full theatrical-quality, fully AI-generated motion pictures will take significantly longer than two years from April 2023 (i.e., not before April 2025).
So yeah, in theory, you should be able to train the model where you just give it a screenplay and it outputs essentially an animated movie... So yeah, I think we're close to it now... So yeah, I think we're pretty close for, let's call it hobbyists or amateurs, to be able to create pretty nice looking movies using these types of tools. But again, I think there's a jump to get to the point where you're just altogether replacing.
Explanation

Evidence up to November 30, 2025 lines up well with Sacks’s two-part claim.

  1. Hobbyists/amateurs generating reasonably good-looking animated movies from a script (near term)
  • By 2024–2025, multiple consumer-accessible tools let individuals turn scripts or text prompts into multi‑scene animated videos:
    • LTX Studio (released Feb 2024) explicitly markets itself as an AI “movie maker” that turns scripts or text prompts into characters, scenes, storyboards and video sequences, effectively automating much of the pipeline an amateur would need to create a short film. (en.wikipedia.org)
    • Research systems such as MovieFactory (2023) and AniMaker (2025) demonstrate end‑to‑end automatic movie/animation generation from text, including story expansion, shot planning, video clips and audio/voiceover, specifically aiming at "fully automated" movie creation from natural‑language input. (arxiv.org)
    • Mass‑market text‑to‑video models — Runway’s Gen series, Luma’s Dream Machine, Google’s Veo 2, and OpenAI’s Sora (wider release Dec 2024) — all allow non‑experts to produce visually impressive clips from text with no traditional animation skills required. (en.wikipedia.org)
  • Independent creators have used these tools to release AI‑animated shorts (e.g., The Lost Father, made with Runway Gen‑2 and other off‑the‑shelf tools) that are publicly showcased as “animated AI short films,” visually coherent and cinematic by hobbyist standards. (soorai.com)
    Taken together, this supports Sacks’s expectation that within a few years of April 2023, amateurs could feed in a script and get “pretty nice looking” animated movies out of AI-driven pipelines.
  2. Fully AI‑generated, theatrical‑quality motion pictures taking longer than two years (i.e., not by April 2025)
  • There are feature‑length works created entirely or almost entirely with generative AI within that two‑year window:
    • Window Seat and later DreadClub: Vampire’s Verdict (87‑minute animated feature released on Prime Video in July 2024) were made entirely by a single filmmaker using AI for visuals, performances, sound, music and even some editing, on a ~$405 budget. (en.wikipedia.org)
    • The Death of Film (856‑hour experimental project described as “the first feature‑length film fully made from generative artificial intelligence”) likewise shows that fully AI‑generated long‑form films exist by early 2025, but it is explicitly framed as an experimental art piece, not mainstream theatrical fare. (en.wikipedia.org)
  • However, these projects are micro‑budget or avant‑garde experiments, not studio‑scale, broadly marketed theatrical releases comparable in polish and resources to major animated features. They’re generally discussed as proofs of concept and milestones, not as replacements for big‑budget studio productions.
  • By contrast, the first widely publicized studio‑backed, mostly‑AI animated feature aimed at global theaters, Critterz, is targeting a Cannes debut and theatrical rollout in 2026, and even that project is described as “mostly AI‑generated” / “AI‑assisted,” still relying on human writers, voice actors and artist sketches rather than being a fully automated pipeline. (theverge.com)
  • The fact that the flagship industry example of an AI‑driven theatrical animated feature is scheduled after 2025, and is still human‑assisted rather than purely AI‑generated, indicates that a generally accepted, fully AI‑produced, big‑studio‑quality theatrical movie had not arrived by April 2025. The state of the art in mid‑2025 is short, high‑fidelity clips and indie/experimental features, not full Pixar‑ or Disney‑level films created end‑to‑end by AI.

Why classify this as “right” rather than “ambiguous”?

  • Sacks’s key distinction was between:
    1. Hobbyist‑grade “pretty nice looking” animations from a script – which clearly exist by 2024–2025 via tools like LTX Studio, Sora, Dream Machine, Runway, etc., plus numerous AI‑animated shorts. (en.wikipedia.org)
    2. “Altogether replacing” traditional production with fully AI‑generated, theatrical‑quality films in under two years – which, judging by industry behavior and the 2026 timetable for the first high‑profile AI‑heavy animated feature, has not happened. (theverge.com)
  • While some fully AI features did appear earlier than many expected, they are low‑budget/experimental and not yet what the industry or audiences would generally label “theatrical‑quality” replacements for studio animation.

Given those facts, the spirit of Sacks’s prediction — rapid empowerment of amateurs, but a significantly slower path to mainstream, fully AI‑generated theatrical films — is borne out by events through April 2025 and beyond. Hence the verdict: right.

ai, tech, venture
Fully replacing entire functional teams in startups (e.g., full sales, marketing, or engineering teams) with AI systems will not be possible for at least several years after April 2023; it will not occur within mere months.
I think we're still a ways away from startups being able to replace entire teams of people... Well, it's in the years I think for sure, we don't know how many years.
Explanation

Evidence through late 2025 indicates that startups have not, in general, been able to fully and reliably replace entire functional teams (sales, marketing, engineering) with autonomous AI systems, and that such capability remains years away, consistent with Sacks’s April 2023 prediction.

Key points:

  1. AI agents still struggle with real jobs. A 2025 benchmark from the Center for AI Safety and Scale AI tested frontier agents on 240 real freelance projects (software, design, etc.). Even the best agent automated only about 1–2.5% of the economic value, with frequent failures on complex, long-running tasks—hardly a drop‑in replacement for full human teams. (techbrew.com)

  2. Industry surveys show low adoption of fully autonomous agents. Gartner-related reporting in 2025 finds that only ~15% of IT leaders are even exploring fully autonomous agents; most organizations prefer supervised or “copilot” style AI, and only a small minority expect AI to replace human workers outright in the next 2–4 years. (techradar.com) This indicates that, even 2+ years after April 2023, most firms cannot or will not run whole functions purely with AI.

  3. Leaders in AI infrastructure say full automation is very hard. Databricks CEO Ali Ghodsi emphasizes that people underestimate how difficult it is to completely automate tasks; human oversight is still required even where AI agents are used for customer service and HR workflows. (businessinsider.com) OpenAI cofounder Andrej Karpathy likewise argued in 2025 that current agents are not yet effective and may take around a decade to truly work as hoped. (businessinsider.com) These are strong validations of Sacks’s view that team‑level replacement was not just “months” away.

  4. Where AI does “replace a team,” it is narrow and recent, not a broad 2023 reality. Blog‑style case studies in mid‑2025 describe a company that “replaced an entire sales team with 12 AI agents,” but this is: (a) a single, highly specific sales function; (b) reported more than two years after Sacks spoke; and (c) still framed around automating repetitive pipeline work rather than running the whole company independently. (agilegrowthlabs.com) This timing aligns with his “years, not months” claim rather than contradicting it.

  5. ‘AI‑only startups’ exist, but only as very small, tool‑stack businesses and again appear years later. A 2025 article describes an “AI‑only startup” with no employees where a solo founder uses tools like ChatGPT, AutoGPT, Midjourney, Zapier, etc., for product, marketing, support, and analytics. (medium.com) These are micro‑businesses operated by a founder orchestrating tools—not credible evidence that, by late 2023, typical startups could drop entire sales/marketing/engineering teams and hand them over to fully autonomous AI.

  6. Enterprise and analyst consensus still emphasizes hybrid human–AI teams. Multiple 2024–2025 analyses of agentic AI stress that agents will automate parts of jobs and routine decisions, but not remove humans from the loop in the near term; many agentic AI projects are being canceled or overhyped (“agent washing”), underscoring how far reality lags the marketing. (reuters.com)

Taken together, the data show that: (a) within the first “few months” after April 2023, full functional‑team replacement was clearly not achieved; and (b) even by late 2025, such replacement is rare, experimental, and unreliable. That matches Sacks’s assertion that this capability was years away rather than months, so his prediction is best judged as right.

ai, tech
If AI capabilities continue improving on a roughly 48–72 hour cycle, then by roughly six months after April 14, 2023 (i.e., by mid-October 2023), the effective progress in AI will be comparable to 10–12 years of progress at traditional technology innovation rates.
And this is a perfect example where when you start to compound technology at the rate of 24 hours or 48 hours, which we've never really had to acknowledge, most people's brains break and they don't understand what six months from now looks like. And six months from now, when you're compounding at 48 or 72 hours is like 10 to 12 years in other technology solutions.
Explanation

Evidence cuts both ways, and there is no objective yardstick for “10–12 years of progress” or a well-defined “traditional innovation rate,” so the claim can’t be cleanly verified or falsified.

Why it might look roughly right (in spirit):

  • Between April and October 2023, there was a flood of new frontier and open(-ish) models and tooling: Anthropic's Claude 2 (July 2023), Meta's Llama 2 (July 2023), Alibaba's Tongyi/Qwen (beta April, public release September 2023), IBM's Granite models (announced September 2023), and many similar efforts, all riding the wave started by GPT‑4 in March 2023. (en.wikipedia.org)
  • 2023 is widely described as an "AI boom," with large language models and generative AI becoming central in science and tech news. (en.wikipedia.org)
  • A major policy analyst reviewing 2023 AI developments said it felt like "10 years worth of news" compressed into one year, reflecting a broadly shared sense of unusually fast change. (csis.org) This supports Chamath's qualitative intuition that progress in AI that year felt vastly compressed in time.

Why it might be overstated or wrong in detail:

  • Chamath framed this as "compounding at 48 or 72 hours," implying extraordinarily fast, near-continuous leaps in capability. Contemporary quantitative work on language-model progress instead finds that, over 2012–2023, the compute needed to hit a given performance level halves roughly every 5–14 months (median ~8 months), not every 2–3 days. (arxiv.org) And by 2024, multiple analyses described generative AI as entering a more incremental phase, with plateauing qualitative gains and underwhelming real‑world usefulness relative to the hype, suggesting that 2023 did not trigger a sustained, runaway 48–72‑hour compounding regime. (wired.com) A back-of-envelope comparison of these cadences is sketched below.
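
The following sketch compares the cadences, assuming (purely for illustration) that progress scales with the number of improvement cycles and using the ~8-month median cited above as a stand-in "conventional" cycle length; neither assumption comes from the episode.

```python
# Back-of-envelope check of the "six months = 10-12 years" framing.
# Assumptions (illustrative, not from the episode): progress is proportional
# to the number of improvement cycles, and the ~8-month median halving time
# cited above serves as the stand-in "conventional" cycle length.

fast_cycle_days = (2.0, 3.0)      # the 48- and 72-hour cadences, in days
baseline_cycle_days = 8 * 30.4    # ~8 months, in days

# Speedup implied by Chamath's own numbers: 10-12 years packed into 6 months.
claimed_speedup = (10 / 0.5, 12 / 0.5)  # 20x to 24x

# Speedup implied by taking the 48-72 hour cadence literally.
literal_speedup = tuple(baseline_cycle_days / d for d in fast_cycle_days)

print(f"Claimed speedup (10-12 years in 6 months): "
      f"{claimed_speedup[0]:.0f}x to {claimed_speedup[1]:.0f}x")
print(f"Literal 48-72h cadence vs ~8-month cycles: "
      f"{min(literal_speedup):.0f}x to {max(literal_speedup):.0f}x")
```

The two framings disagree by roughly a factor of four to six, which is consistent with treating the statement as rhetorical emphasis rather than a precise quantitative forecast.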

Because (1) the prediction is partly rhetorical, (2) the 48–72‑hour compounding premise is not supported by quantitative data, yet (3) many observers still describe 2023’s AI changes as compressed into a period that felt like many years of ordinary progress, the claim cannot be judged clearly right or wrong using available evidence; it remains ambiguous.

Sacks @ 01:01:24 · Inconclusive
ai, government, economy
If the United States creates an FDA-style regulatory body for AI in the near term (before clear technical standards exist), then U.S. innovation in AI will slow substantially and other countries that do not impose equivalent constraints will advance their AI capabilities faster than the U.S. and surpass it in AI leadership.
And what we will do by racing to create a new FDA is destroying American innovation in the sector. And other countries will not slow down. They will beat us to the punch here.
Explanation

The prediction is framed as a conditional: if the U.S. “creates a new FDA” for AI in the near term, then U.S. innovation will be destroyed and other countries will overtake it in AI.

  1. Antecedent has not occurred.

    • As of late 2025, the U.S. still has no single, FDA‑style federal AI regulator. Federal oversight is decentralized and sector‑specific, with many agencies (FTC, FDA, etc.) applying existing laws rather than a new, overarching AI regulator.
    • Legal and policy surveys explicitly note that “there is no single federal agency charged with regulating AI” and that the U.S. lacks an EU‑style comprehensive AI statute, instead relying on guidance (e.g., NIST AI Risk Management Framework), executive actions, and a patchwork of state laws. (en.wikipedia.org)
    • The 2023 Biden Executive Order 14110 on AI created a coordinated federal approach and directed agencies to set standards, but did not establish a licensing regulator with FDA‑like powers, and it was rescinded in January 2025. (en.wikipedia.org)
    • The U.S. AI Safety Institute created under NIST is a testing and standards body, not a broad regulatory agency. (en.wikipedia.org)
    • The Trump administration’s 2025 AI Action Plan explicitly moves toward less federal AI regulation, not a new regulator. (lexmundi.com)

    Because the U.S. has not, in fact, created an FDA‑like AI regulator in the 2023–2025 “near term,” the condition that triggers Sacks’s predicted outcome has not been met.

  2. Current evidence on AI leadership doesn’t test the counterfactual.

    • Under the actual light‑to‑moderate regulatory regime, recent analyses (e.g., Stanford’s 2025 AI Index) find that the U.S. remains the global leader in cutting‑edge AI models and private AI investment, though China is closing the gap. (axios.com)
    • European assessments describe the U.S. as having “hegemonic power” in AI and warn that the EU risks becoming a “digital colony” relative to the U.S. and China, again indicating continued U.S. strength. (euronews.com)

However, these data only describe what happened without an FDA‑style AI regulator. They do not show what would have happened had such a regulator been created. Since the key condition in the prediction never materialized, the causal claim about its consequences cannot be empirically verified or falsified.

Because the necessary trigger (“creating a new FDA for AI” in the near term) did not occur, the prediction’s if‑then statement remains a counterfactual. Its accuracy is therefore inconclusive, not demonstrably right or wrong.

Chamath @ 01:07:23 · Inconclusive
ai, government
Over time, AI regulation will evolve into a set of domestic regulatory bodies in major jurisdictions (US, EU, Canada, Japan, China), analogous to FDA/EMA, whose AI safety guardrails and standards will significantly overlap and share substantial commonality rather than diverging completely.
I think you need to have a domestic organization that protects us. And I think Europe will have their own; again, FDA versus EMA. Canada has its own, Japan has its own, China has its own. And they have a lot of overlap and a lot of commonality in the guardrails they use. And I think that's what's going to happen here.
Explanation

The prediction is explicitly long term ("over time"), about how AI regulation will evolve structurally, so by November 30, 2025 there has not been enough time for the end-state to emerge.

What we see so far:

  • The EU has adopted the AI Act with a centralized AI Office in the European Commission plus national competent authorities, a risk-based framework, and phased enforcement into 2026–2027, which is the closest thing to a dedicated AI regime akin to medicines regulation. However, implementation details and even potential watering‑down or delays are still being debated, so the structure is not yet settled. (euronews.com)
  • In the US, Biden’s 2023 Executive Order 14110 created a national AI governance program and an AI Safety Institute at NIST, but Trump rescinded that order in January 2025 and reoriented the institute (now CAISI) toward competitiveness and security, leaving no FDA‑like, independent AI regulator and making the long‑term institutional path highly uncertain. (en.wikipedia.org)
  • Canada’s proposed Artificial Intelligence and Data Act (AIDA), which would have created a federal AI regulator, stalled in committee and then died on the order paper when Parliament was prorogued in January 2025; as of early 2025, Canada has no binding, comprehensive federal AI law or dedicated AI authority. (dentons.com)
  • Japan continues to favor a soft‑law, principles‑based, risk‑based approach with guidelines rather than a horizontal, binding AI statute or a standalone AI regulator, even as it leads the G7 Hiroshima AI Process for global principles and a code of conduct. (csis.org)
  • China has binding, sectoral AI rules (e.g., the 2023 Interim Measures for generative AI) enforced by the Cyberspace Administration of China alongside several other ministries, but not a single medicines‑style AI regulator; its guardrails are tailored to domestic political and social stability goals. (loc.gov)

Convergence vs. divergence so far: There is emerging cross‑jurisdictional commonality at the level of principles and voluntary standards: the G7 Hiroshima AI Process produced shared Guiding Principles and an International Code of Conduct, now backed by an OECD reporting framework meant to promote interoperability of risk‑management practices across countries. (international.gc.ca) This supports the idea of overlapping "guardrails," but these are mostly non‑binding and sit on top of still‑divergent domestic legal structures.

Given that:

  • several of the named jurisdictions (US, Canada, Japan) do not yet have stable, FDA‑like AI regulators,
  • the EU and China have taken quite different hard‑law approaches,
  • and political shifts (e.g., in the US and EU) are actively reshaping AI governance in real time, we cannot yet say whether the long‑run outcome will be a set of domestically anchored, EMA/FDA‑analog regulatory bodies with substantially overlapping standards, or a more fragmented landscape.

Therefore, the evidence to date is insufficient to judge the prediction as either right or wrong, and the appropriate classification is "inconclusive" (too early).

Sacks @ 01:07:43 · Inconclusive
ai, government, politics, economy
If a new centralized AI regulatory body is created to approve AI models/apps, the approval process will primarily advantage politically connected incumbents, and the overall rate of AI innovation (e.g., number and diversity of new entrants and products) will slow dramatically compared to the preceding permissionless-innovation period.
This will be beneficial only for political insiders who will basically be able to get their projects and their apps approved with a huge deadweight loss for the system, because innovation will completely slow down.
Explanation

Since April 2023, several AI-focused public bodies have been created, but none cleanly match the strong condition Sacks described (a centralized U.S.-style gatekeeper that must approve most AI models/apps before deployment), and the available data are too early and noisy to show the “innovation will completely slow down” outcome he predicted.

Regulatory bodies that did emerge

  • The U.S. created the AI Safety Institute within NIST in late 2023. Its mandate is to develop testing, evaluation methods, and safety guidelines and to collaborate with firms like OpenAI and Anthropic on pre‑ and post‑release model testing—not to license or approve models before they reach the market. Companies still launch models without formal government approval in most domains. (nist.gov)
  • The EU established the European AI Office to implement and enforce the EU AI Act, especially for general‑purpose AI. It coordinates enforcement, issues codes of practice, and can sanction non‑compliance, but most high‑risk AI uses are handled via national market‑surveillance authorities and notified bodies, not a single centralized approvals agency. Key high‑risk provisions have only just started to phase in and, as of November 2025, many are delayed to 2027 to avoid harming competitiveness. (digital-strategy.ec.europa.eu)
  • China’s Cyberspace Administration (CAC) did adopt binding generative‑AI regulations in August 2023 that require government approval before public deployment of many models, fitting Sacks’ scenario better than the U.S. or EU. Yet since then China has become second only to the U.S. in the number of large‑scale AI models, with over 40 approved and multiple heavily funded startups (e.g., Moonshot AI) emerging under this regime. (time.com)

Innovation outcomes to date

Globally, 2023–2025 saw an acceleration in AI activity: rapid revenue growth at firms like Anthropic, continuing waves of generative‑AI startup cohorts, and tens of billions of dollars invested into AI startups and tools. (reuters.com) In Europe, industry figures warn the AI Act could slow innovation, but even they generally emphasize that it's too early to know its long‑run impact. (verdict.co.uk)

Why the prediction is not yet testable

  1. The U.S. never created the kind of centralized licensing body Sacks was arguing against, so his conditional claim has not really been triggered in his primary jurisdiction.
  2. Where more centralized approval regimes do exist (notably China, and partially the EU for general‑purpose AI), they are relatively new, still evolving, or not fully in force, and are entangled with other factors (geopolitics, chip export controls, macroeconomy) that also affect innovation.
  3. Available evidence does not yet show that the overall rate or diversity of AI innovation has “completely” or “dramatically” slowed in those jurisdictions; at the same time, formal empirical work quantifying any chilling effect is only beginning.

Given these points, the prediction cannot be cleanly scored as right or wrong as of November 30, 2025; it remains inconclusive (too early to tell).

ai, government
As AI advances, law-enforcement agencies will adopt AI-based tools and copilots that make large-scale malicious activities such as mass phishing-site creation relatively easy to detect and counter, reducing the advantage that AI-empowered criminals might otherwise gain.
So there will be new tools that law enforcement will be able to use. And if somebody is creating phishing sites at scale, they're going to be probably pretty easy for law enforcement eyes to detect. So let's not forget that there'll be copilots written for our law enforcement authorities. They'll be able to use that to basically detect and fight crime.
Explanation

Evidence since April 2023 shows partial confirmation of Sacks’ prediction, but not clearly enough to call it definitively right or wrong.

1. Adoption of AI tools and “copilots” by defenders (including public sector)

  • Microsoft has rolled out Security Copilot, an AI assistant for security analysts that helps summarize incidents, analyze vulnerabilities, and investigate threats such as phishing, and expanded its availability in 2024; it’s explicitly marketed for security operations centers and can be (and is) used by public-sector and government customers. (kelo.com)
  • Microsoft is now promoting “agentic AI” via Security Copilot as a way to automate and speed up cyber‑defense workflows, emphasizing reduced response times and more effective handling of complex threats. (news.microsoft.com)
  • Interpol’s innovation lab in Singapore uses AI systems to monitor millions of cyberattacks daily and to counter deepfake scams, sextortion, and phishing schemes, showing that an international law‑enforcement body is already using AI at scale for exactly these kinds of online threats. (ft.com)
  • India has launched specialized training for “cyber commandos” to detect and neutralize AI‑driven cyberthreats, including automated phishing attacks, indicating that police units are being equipped and trained to confront AI‑enabled scams with advanced technical tools. (timesofindia.indiatimes.com)
  • India’s Vastav AI deepfake‑detection platform is offered free to law‑enforcement and government agencies to combat AI‑generated media used in fraud and disinformation, further illustrating deployment of AI tools on the defensive side. (en.wikipedia.org)

Taken together, these examples support the first part of the prediction: law‑enforcement and closely allied public‑sector security teams are adopting AI‑based tools, some of which are directly branded and function as “copilots,” to assist in detecting and investigating online crime, including phishing and related scams.

2. Has this made large‑scale phishing “relatively easy” to detect and reduced criminals’ advantage?
The available data cut against a clear "yes":

  • Europol’s 2025 Serious and Organized Crime Threat Assessment warns that AI is turbocharging organized crime, enabling more scalable scams, deepfakes, and online fraud, and stresses how AI makes criminals “harder to detect,” rather than reporting that detection has become easy. (reuters.com)
  • In the U.S., FBI Internet‑crime data show that overall losses from online crime are still rising sharply, and reporting from states like Indiana attributes a big part of the surge in effective phishing, romance scams, and impersonation fraud to criminals’ use of AI, not to successful neutralization by law enforcement. (axios.com)
  • Google and others are deploying powerful AI‑based scam and phishing detectors (e.g., Google Messages’ AI scam detection on billions of texts), but these are primarily consumer‑ and provider‑side defenses, and even they are framed as responses to rapidly escalating AI‑enabled scams, not as having turned phishing into a solved or trivially detectable problem. (wired.com)
  • Major law‑enforcement operations like Operation First Light 2024 coordinated global crackdowns on phishing, investment, and romance scams, seizing hundreds of millions of dollars, but public reporting does not show that AI‑based copilots have made mass phishing “easy” to detect in a way that clearly shifts the overall balance of advantage away from criminals; instead, such operations are presented as partial successes in an expanding problem space. (en.wikipedia.org)

Overall, both sides are arming themselves with AI: law‑enforcement and defenders are indeed adopting AI tools and copilot‑like systems, but organized criminals are simultaneously leveraging AI to scale and sophisticate phishing and other fraud. Leading agencies (Europol, FBI) currently emphasize the increasing threat from AI‑enabled crime, not that AI has made mass phishing straightforward to detect or has clearly reduced criminals’ net advantage.

Because one core component of the prediction (widespread adoption of AI tools/copilots by law enforcement) is happening, while the stronger claim (that this makes large‑scale phishing “relatively easy” to detect and reduces AI‑criminals’ advantage) is not clearly supported and arguably contradicted by threat‑assessments and loss data, the overall forecast cannot be cleanly scored as simply right or wrong. The situation is mixed and depends heavily on how one interprets “relatively easy” and “reducing the advantage,” so the most accurate verdict is ambiguous.

ai, politics
Broad, coordinated global regulation that effectively stops or tightly restricts AI model development and deployment worldwide will not occur; even if the U.S. imposes strong regulations on AI models, many other countries will not follow in lockstep, and advanced AI models will continue to be developed and exploited competitively outside the U.S.
If the US tries to regulate it or the US tries to come in and stop the application of models in general or regulate models in general, you're certainly going to see those models continue to evolve and continue to be utilized in very powerful ways that are going to be advantageous to places outside the US. There's over 180 countries on Earth. They're not all going to regulate together... To try and get coordination around the software models that are being developed, I think, is pretty naive.
Explanation

Evidence since 2023 shows that no broad, tightly coordinated global regime has emerged that halts or uniformly constrains AI model development worldwide, and advanced models are in fact being aggressively developed outside the U.S.

  1. No global, lockstep regulatory regime that “stops” AI development

    • The UN General Assembly has adopted global AI resolutions (e.g., the March 2024 resolution on “safe, secure and trustworthy” AI and later follow‑ups), but these are non‑binding norm‑setting texts, not an enforceable global regulatory regime that blocks model development. (reuters.com)
    • The 2023 Bletchley Declaration from the UK’s AI Safety Summit and the subsequent G7 Hiroshima Process likewise create high‑level principles among subsets of countries, not binding global restrictions, and they explicitly aim to unlock benefits while managing risk rather than pause frontier models. (gov.uk)
    • The Council of Europe’s 2024 Framework Convention on AI is a real treaty, but it has a limited membership (mostly European and a few partners) and focuses on aligning AI with human rights and rule of law, not forbidding advanced AI systems or stopping their development globally. (en.wikipedia.org)
    • The EU AI Act is binding but only within the EU and is risk‑based: it phases in obligations and bans only certain “unacceptable risk” uses while explicitly permitting broad AI R&D and deployment under conditions. It is repeatedly described as a regional framework, not a global moratorium. (en.wikipedia.org)
    • Meanwhile, major powers diverge in approach: China has its own generative‑AI rules; the U.S. used an executive order in 2023 to direct federal AI oversight and then, under a new administration, has been rolling back or refocusing some safety structures (e.g., rebranding the AI Safety Institute and emphasizing competitiveness). Together this points to fragmented, nationally driven regulation, not a single coordinated global clampdown. (en.wikipedia.org)
  2. Continued, competitive development of advanced AI outside the U.S.

    • Chinese firms (Alibaba, ByteDance, DeepSeek, Baidu, Huawei, MiniMax, etc.) have continued to train and release cutting‑edge large language models, including many open‑weight or open‑source variants (Qwen 2.5, Qwen3, DeepSeek‑V3/R1). Studies now show China leading the world in downloads of “open” AI models, indicating vigorous non‑U.S. development. (ft.com)
    • To work around U.S. chip export controls, Chinese tech giants are training frontier models like Qwen and Doubao in overseas data centers in Southeast Asia using Nvidia GPUs, demonstrating that even strong U.S. export and safety policies are not preventing competitive model development abroad. (reuters.com)
    • In Europe, Mistral AI has become a leading non‑U.S. frontier‑model developer, repeatedly releasing powerful open‑weight and reasoning models and attracting multi‑billion‑dollar funding, explicitly positioning itself as a European answer to OpenAI. (en.wikipedia.org)
    • Globally, the list of large language models now includes many non‑U.S. or widely open‑licensed systems (DeepSeek, Qwen, Mistral, MiniMax, etc.), confirming that frontier‑level AI capabilities are not confined to U.S. firms and are not being universally throttled by harmonized regulation. (en.wikipedia.org)

Taken together, the world has moved toward patchwork national and regional AI rules plus soft global norms, while powerful models proliferate and are strategically developed (and sometimes routed around U.S. controls) outside the U.S. That matches Friedberg’s claim that a fully coordinated global regulatory stop on AI was unlikely, and that models would continue to evolve and be exploited competitively in many jurisdictions even if the U.S. tried to impose strong regulations.