Last updated Nov 29, 2025

Biggest LBO Ever, SPAC 2.0, Open Source AI Models, State AI Regulation Frenzy

Fri, 03 Oct 2025 16:39:00 +0000
Chamath @ 00:05:34 · Inconclusive
markets, ai, tech
If EA, under its new private ownership, successfully (1) cleans up its operating expense model, (2) adopts next‑generation AI tools, and (3) finds ways to distribute its games outside the Xbox/PlayStation gatekeepers, then Electronic Arts’ equity value will grow into the hundreds of billions of dollars (substantially above the $55B take‑private valuation) over the subsequent years.
If you do those things, this is a multi hundred billion dollar asset. And in that I think it could be just an enormous win.
Explanation

Chamath’s claim is a conditional, long‑term prediction: if the new private owners of EA (1) streamline operating expenses, (2) successfully adopt next‑gen AI tools, and (3) break some dependency on Xbox/PlayStation distribution, then EA could become a “multi‑hundred‑billion‑dollar” asset, far above the ~$55B take‑private valuation.

As of November 30, 2025:

  • EA has only recently agreed to a $55B leveraged buyout led by Saudi Arabia’s Public Investment Fund, Silver Lake, and Affinity Partners; the deal is described as the largest take‑private/LBO ever. (ft.com)
  • The transaction is not yet closed; reports state it is expected to complete in the first half of 2026 / early fiscal 2027, pending shareholder and regulatory approvals. (barrons.com)
  • Because the company is only in the process of going private, there is no observable market‑based equity valuation under new ownership, and “the subsequent years” Chamath refers to have not occurred.

Given that (a) the key operational and strategic changes he conditions the prediction on cannot reasonably be evaluated yet, and (b) the multi‑year horizon for reaching “hundreds of billions” in value has not elapsed, there is no way to determine whether this forecast is right or wrong at this time.

Therefore the status of the prediction is inconclusive (too early to tell).

Chamath @ 00:15:58 · Inconclusive
markets, economy
Because of the massive inflow of capital into private equity, risk‑adjusted returns in the broad private equity asset class will trend down to roughly zero excess return over the next several years, similar to what has already happened in venture capital and hedge funds.
when you see that kind of graph… the returns go to zero. And so we've seen this in venture capital. We've seen this in hedge funds, and we're now going to see this in private equity.
Explanation

The prediction is explicitly about what will happen "over the next several years" (multi‑year horizon) to risk‑adjusted excess returns in the broad private equity asset class. The podcast was released on October 3, 2025 and today is November 30, 2025, so not even two months have elapsed.

Private equity performance is only observable with significant lag (funds are valued quarterly or annually, and risk‑adjusted excess returns versus public benchmarks can only be assessed reliably over multi‑year periods). Given both the short time that has passed and the inherently long evaluation window for this kind of claim, there is nowhere near enough data yet to determine whether broad PE excess returns have in fact trended toward zero.

Because the stated timeframe (“next several years”) has not come close to expiring, the accuracy of this prediction cannot yet be judged.

Chamath @ 00:16:36 · Inconclusive
markets, economy
Over the coming years, investor capital will increasingly leave the broad private equity category and become concentrated in a small number of top‑performing private equity firms (such as Silver Lake), while a significant portion of the capital will shift into private credit, creating a major speculative bubble in private credit.
I think what's going to happen is that the money is going to come out of private equity, and it's going to get concentrated into the few companies that know what they're doing… Where does the money go? The money's already leaked into private credit, which is the next big bubble that's building.
Explanation

Not enough time has passed to judge this multi‑year prediction definitively, although early data partly support the described trends.

1. Capital concentration in a few top PE firms

  • Industry data for 2020–2024 show that the six largest private‑equity firms raised about 60% of total PE fundraising, indicating strong concentration of new capital in a relatively small group of large managers. (visualcapitalist.com)
  • Megafunds (>$5B) accounted for more than half of all PE capital raised in 2024, with large vehicles such as EQT X (~$23.7B) and Silver Lake Partners VII (~$20.5B) closing successfully even as overall fundraising slowed. (businessday.ng)
  • At the same time, total global PE fundraising has declined in recent years, hitting multi‑year lows and reflecting tighter capital availability for the broader universe of PE managers. (businessday.ng)
    Taken together, this is directionally consistent with Chamath’s claim that money would move away from the broad PE universe and become more concentrated in a handful of top firms.

2. Shift into private credit

  • Private credit assets under management have grown rapidly to around $1.7 trillion globally, more than triple the 2015 level, and private credit has become a major source of deal funding and corporate refinancing. Large asset managers and sovereign wealth funds are actively scaling private‑credit platforms (e.g., BlackRock’s proposed acquisition of HPS, TPG’s acquisition of Angelo Gordon, Abu Dhabi and Korea sovereign funds seeding or expanding private‑credit strategies). (pwc.com)
  • News on Blue Owl and other managers highlights both the sector’s growth and emerging stresses (illiquidity concerns, fund‑structure backlash, some borrower defaults), but there is no clear consensus yet that a fully fledged “bubble” has formed, let alone burst. (reuters.com)

3. Why the prediction is still inconclusive

  • Chamath framed this as a process happening “over the coming years.” As of November 30, 2025, we are only about two months past the October 3, 2025 podcast date, so the full multi‑year trajectory he described has not yet played out.
  • The direction of the trends (capital concentration in top PE firms and strong capital flows into private credit) matches current evidence, but the core evaluable claim—that private credit becomes “the next big bubble” (i.e., a speculative excess that later unwinds)—cannot yet be confirmed or falsified. The market has grown fast and drawn concern, but there has not been a decisive boom‑and‑bust event that would clearly validate or invalidate the “big bubble” characterization.

Because the structural shift he describes is multi‑year and the bubble aspect has not yet clearly materialized or burst, the fairest assessment as of late 2025 is inconclusive (too early) rather than definitively right or wrong.

Chamath @ 00:24:05 · Inconclusive
markets
In the next iteration of SPAC structures (what he calls "Raptor 3"), SPACs will evolve into vehicles where the sponsor pre‑arranges a large, flexible pool of common equity capital (on the order of $1–3B) from the outset, so that de‑SPAC transactions function as fully pre‑baked IPOs with no redemption risk and all money rolling over as common stock.
I think that the future may be just prognosticating and guessing… I think the Raptor three will look like where somebody, a sponsor like me, rolls everything up into one thing so that it's already pre-wired from the beginning… so that it's a totally pre-baked IPO at a very fair price. I think that I think that that's what the Raptor three version of a Spac will look like.
Explanation

Based on current information, there isn’t enough evidence yet to say whether Chamath’s prediction has come true.

  • In the Oct 3, 2025 episode, he explicitly frames this as a future “Raptor 3” SPAC structure: a sponsor would pre‑wire $1–3B of flexible common equity capital so that the de‑SPAC is a “totally pre‑baked IPO” with no redemption risk and all money rolling into common stock. (podscripts.co)
  • Subsequent coverage in October 2025 (e.g., ABMedia / 8V summaries of his remarks) describes Raptor 2 as the current SPAC 2.0 structure and says Raptor 3 is still being prepared, with Chamath planning to raise $1–3B in a pre‑structured pool so companies can list at a fair price. These articles present Raptor 3 as an upcoming design, not something that has already been launched or adopted. (abmedia.io)
  • The available write‑ups on “SPAC 2.0 (Raptor 2/3)” discuss Raptor 3 conceptually as the next evolution of SPACs that would integrate capital sources and pre‑configure the deal, again framing it as a proposal rather than a live market structure. (note.com)
  • Broader news and filings as of November 30, 2025 do not show any completed SPACs (by Chamath or the wider market) that clearly match his described Raptor 3 pattern—i.e., a standardised, widely used structure where sponsors universally pre‑arrange multi‑billion‑dollar common equity pools eliminating redemption risk. The only references we find to “Raptor 3” are explanatory or forward‑looking, not reports of executed deals. (podscripts.co)
  • Only about two months have passed between the prediction (Oct 3, 2025) and the current date (Nov 30, 2025). Launching a new SPAC vehicle, raising $1–3B, completing a de‑SPAC under a novel structure, and having it recognized as the “next iteration” of the market generally takes longer than this.

Given that Raptor 3 is still in the planning / proposal stage and there has not yet been time for the SPAC market to adopt or reject this structure, the correct status of the prediction today is “inconclusive (too early)”, not clearly right or wrong.

Chamath @ 00:40:41 · Inconclusive
ai, tech
Consumer video‑generation apps like the Sora‑based "Slop" app will improve rapidly so that within about 1–2 years from this October 2025 discussion, their quality and usability will be "legitimately excellent" for mainstream users, significantly better than the current version.
The thing that I keep in mind whenever I try these apps for the first time is today is the worst it'll ever be. It only gets better from here. And so if you look at the starting point, it won't take but a year where this thing I think, or maybe two years where this thing is legitimately excellent.
Explanation

The prediction’s horizon is 1–2 years from the early‑October 2025 discussion, i.e., roughly between October 2026 and October 2027 for consumer video‑generation apps like a Sora‑based “Slop” app to become “legitimately excellent” for mainstream users.

Today is November 30, 2025, less than two months after the prediction was made, so only roughly 8–16% of the 1–2 year forecast window has elapsed. We are still well before the earliest point at which Chamath expected the prediction to be realized, so it is not yet possible to determine whether mainstream‑quality, widely usable consumer video‑generation apps will meet his standard within that timeframe.

Because the forecast period has not yet ended, the correctness of the prediction cannot be evaluated at this time.

Chamath @ 00:51:45 · Inconclusive
ai, economy, climate
Absent major new solutions (such as cross‑subsidies or widespread behind‑the‑meter storage), average retail electricity prices will roughly double over the five years following this October 2025 conversation, driven largely by AI‑related data center demand.
what this energy CEO told me is, look, the next five years are baked and if we don't find some compelling solves… electricity rates will double in the next five years.
Explanation

The prediction specifies a five‑year horizon starting from the October 2025 conversation, i.e., roughly until October 2030. As of the current date (November 30, 2025), only about two months have elapsed, so there is not yet enough elapsed time to assess whether average retail electricity prices will have doubled over that full five‑year period. Therefore, the prediction cannot currently be judged right or wrong.

Chamath @ 00:52:20 · Inconclusive
economy
Absent major new solutions to electricity supply or cost structure, average consumer electricity rates in the United States will be roughly twice their current level within five years of this 2025 discussion (by around 2030).
if we don't find some compelling solves, electricity rates will double in the next five years.
Explanation

The prediction specifies a five‑year horizon from the October 3, 2025 discussion, i.e., roughly until October 3, 2030. Today is November 30, 2025, so less than two months, let alone five years, have passed. Because the end date of the forecast period has not yet been reached, we cannot determine whether average U.S. consumer electricity rates will have doubled by that time; the prediction is therefore too early to evaluate.

ai, economy, climate
Between 2025 and 2035, deployment and use of AI systems will drive a very large increase in electrical power demand in the United States (materially above current levels attributable to non‑AI uses).
there's no question that AI is going to create a huge need for power over the next 5 or 10 years.
Explanation

As of November 30, 2025, it is far too early to definitively judge a 5–10 year prediction about U.S. power demand driven by AI.

What we can say based on current evidence:

  • Multiple analyses (e.g., from the International Energy Agency and major U.S. utilities) project that rapidly growing data center and AI workloads will significantly increase electricity demand through the late 2020s and early 2030s, with some U.S. regions revising long‑term load forecasts sharply upward and explicitly citing AI/data centers as a key driver.
  • However, these are projections covering much of the 2025–2035 period; they do not yet constitute retrospective evidence that a “very large increase” has actually occurred over the full window the prediction refers to.
  • Only ~2 months have elapsed since the October 3, 2025 podcast release, and we are only at the very beginning of the stated 5–10 year horizon (2025–2035). Even if early signs and forecasts strongly support the prediction’s plausibility, it cannot yet be confirmed or falsified in outcome terms.

Because the time window has not come close to ending and the claim is explicitly about what will happen over the next 5 or 10 years, the correct status today is “inconclusive (too early)”, not “right” or “wrong.”

In the United States, additional electricity generation capacity added to meet AI‑driven demand over the next five years (through ~2030) will be predominantly natural gas–fired, while over the subsequent 5–10 years (through ~2035) nuclear power will become a major component of new capacity serving that demand.
I think on a 5 to 10 year time frame, the answer is probably nuclear, or at least that's a big part of it. But nuclear takes at least five years. Within the next five years, it's probably gas, natural gas.
Explanation

The prediction concerns (1) the mix of new U.S. electricity generation capacity built to serve AI demand over the next five years (roughly 2025–2030) and (2) the mix over the subsequent 5–10 years (roughly 2030–2035). As of the current date (November 30, 2025), only about two months of the first five‑year window have elapsed, and none of the 2030–2035 window has begun. It is therefore too early to determine whether AI‑driven capacity additions through ~2030 will be predominantly natural‑gas‑fired, or whether nuclear will become a major component in the 2030–2035 period. Early policy moves or project announcements cannot conclusively establish the realized generation mix across those full future windows, so the prediction cannot yet be judged.

Sacks @ 01:12:35 · Inconclusive
ai, government, politics
To comply with Colorado SB 24-205, commercial AI model developers serving Colorado will implement additional fairness or DEI-oriented constraints that suppress or alter outputs which could create disparate impact on protected classes, resulting in so‑called 'woke AI' behavior in deployed systems within that jurisdiction.
The only way that I see for model developers to comply with this law is to build in a new DEI layer into the models, to basically somehow prevent models from giving outputs that might have a disparate impact on protected groups. So we're back to woke AI again, and I think that's the whole point of this Colorado law.
Explanation

As of November 30, 2025, Colorado’s SB 24-205 (the Colorado Artificial Intelligence Act) has not yet gone into effect, and its implementation has been formally delayed to June 30, 2026. Multiple legal summaries note that lawmakers pushed the effective date from February 1, 2026 to June 30, 2026 during an August 2025 special session, precisely to reconsider and possibly revise the law’s obligations before they bite. (orrick.com)

The law, in its current form, imposes a duty of reasonable care to avoid “algorithmic discrimination” on developers and deployers of high‑risk AI systems used in consequential decisions (employment, credit, housing, healthcare, etc.). Compliance is framed around risk‑management programs, impact assessments, transparency, and disclosures, with a rebuttable presumption of reasonable care if those process requirements are met. (hrlawwatch.com) It does not explicitly require building a separate DEI or fairness “layer” that censors or reshapes all model outputs.

Importantly, the statute’s stringent provisions apply mainly to high‑risk decision systems, not to general‑purpose chatbots. Commentaries underline that generative AI systems like ChatGPT are generally out of scope unless they are actually used to make consequential decisions, in which case they are regulated only in that specific use context. (wsgr.com)

Public guidance for businesses preparing for the Colorado law focuses on governance—risk assessments, documentation, bias testing, consumer notice, and AG reporting—and does not show evidence that major model developers have rolled out Colorado‑specific DEI layers or a distinct “woke AI” behavior profile to comply with SB 24‑205. (seyfarth.com) The article quoting Sacks about a looming “woke AI” patchwork is commentary on what he thinks will happen, not documentation that such changes have actually been implemented. (meritalk.com)

Because (1) the law is not yet in force, (2) its final form may still change before June 30, 2026, and (3) there is no concrete evidence that developers have already adopted the specific compliance strategy Sacks predicted (a new DEI layer that suppresses outputs causing disparate impact, leading to jurisdiction‑specific “woke AI”), it is too early to determine whether his prediction will ultimately be right or wrong. Therefore the correct status is “inconclusive (too early).”

Chamath @ 01:13:50 · Inconclusive
ai, economy
If the current trend toward divergent AI regulations in all 50 U.S. states persists (with no federal preemption) over the coming years, the U.S. AI industry as a whole will fail to generate significant net positive economic output and will fall far short of its potential contribution to national productivity and GDP.
If you have 50 sets of rules, what you will have are some conservative versions of AI. You'll have some progressive leaning versions of laws. These 50 series of laws will essentially just render this industry impotent and incapable of maximizing itself, and actually doing what's necessary to drive productivity and GDP on behalf of the country... Can you imagine? Instead of two sets of rules, you have 50. I think you know what the economic consequences will be. You'll render this entire category incapable of being able to generate any positive economic output.
Explanation

It’s too early to evaluate this prediction.

Chamath’s statement is explicitly conditional and long‑term: “If” the trend toward 50 divergent state AI regulatory regimes persists over the coming years, then the U.S. AI industry will be rendered “incapable” of generating positive economic output or maximizing its contribution to productivity and GDP. That involves (a) a structural regulatory outcome that hasn’t stabilized yet and (b) macroeconomic effects that would take multiple years to measure.

Regulatory landscape as of late 2025

  • Several U.S. states (e.g., Colorado, California, Tennessee, Utah) have passed or are advancing AI‑related or automated decision‑making laws, but they are still in early phases and often sector‑specific or focused on transparency, risk management, or specific use cases (like hiring, consumer protections, or deepfakes).
  • At the federal level, there have been ongoing efforts (e.g., NIST’s AI Risk Management Framework, executive‑branch actions, and multiple congressional proposals), but no settled, comprehensive national AI regulatory regime with clear long‑term preemption of state laws has fully taken effect. (This is consistent with broad coverage in major news and policy analyses through late 2025.)
  • Because these laws are nascent and many are not yet fully implemented or enforced at scale, the long‑run interaction between state and federal AI rules is unresolved.

Economic outcomes not yet observable

  • The claim that the U.S. AI industry will be “render[ed]…incapable of being able to generate any positive economic output” is extremely strong: it implies either negligible or net‑negative economic contribution in the aggregate.
  • As of November 30, 2025, U.S. AI companies are still attracting substantial investment, filing patents, deploying models, and generating revenue across sectors (cloud providers, model labs, enterprise software, etc.). Measuring their net contribution to national productivity and GDP—and whether it is “far short of potential” because of state‑by‑state regulation—would require:
    • Several years of data on AI adoption and productivity across industries.
    • Clear attribution separating the effect of regulatory fragmentation from other factors (business cycles, interest rates, global competition, etc.).
  • That kind of causal, macro‑level assessment simply cannot be made only ~2 months after the October 3, 2025 podcast date, and no credible economic studies yet isolate the effect of 50‑state AI rule divergence on overall U.S. GDP.

Because:

  1. The regulatory condition (“50 sets of rules” persisting without meaningful federal harmonization) has not clearly materialized or failed yet, and
  2. The economic consequence (industry‑wide impotence and failure to generate positive net output / major productivity gains) requires years of data and is not currently measurable,

the prediction cannot yet be judged as right or wrong. The appropriate classification is therefore “inconclusive (too early)”.

Sacks @ 01:20:40 · Inconclusive
politics, ai, government
Over the next few years, Democratic‑leaning U.S. states will enact and enforce AI regulations targeting 'algorithmic discrimination,' which will cause major AI providers to deploy state‑specific versions of their models with DEI‑oriented constraints ('woke AI') for use in those jurisdictions.
what you're going to see is that the blue states will drive this ban on quote unquote, algorithmic discrimination, which will lead to DEI being promoted in models, which is what the Biden administration wanted. You will see the return of woke AI at the state level.
Explanation

As of November 30, 2025, it is too early to determine whether Sacks’s multi‑year prediction is right or wrong.

  1. Timeframe not yet elapsed. The prediction explicitly concerns what will happen “over the next few years.” Only about two months have passed since the October 3, 2025 podcast. Key state AI laws that could test this prediction—like Colorado’s Artificial Intelligence Act—do not begin substantive effect until 2026 and have even had their effective dates pushed back, so their real-world impact on model behavior has not yet materialized. (infographics.bclplaw.marketing)

  2. Blue / Democratic‑leaning states are advancing AI rules around algorithmic discrimination. Colorado’s AI Act (enacted 2024) is the leading example: it creates a comprehensive framework for “high‑risk” AI systems and expressly aims to prevent algorithmic discrimination in consequential decisions (employment, housing, credit, etc.). (infographics.bclplaw.marketing) New York and California have also pushed AI legislation and enforcement approaches that emphasize bias, civil rights, and consumer protection in automated decision‑making and hiring tools. (aitechtrend.com) This supports the first part of the forecast—that blue states would drive regulation framed around “algorithmic discrimination.”

  3. No evidence yet of state‑specific “woke AI” model variants. The crucial second part of the prediction is that such state laws would force major AI providers to deploy state‑specific, DEI‑constrained model versions (“return of woke AI at the state level”). Public reporting on how companies are planning for Colorado-, California-, and New York‑style rules shows emphasis on documentation, risk‑management programs, transparency, and impact assessments, but does not report OpenAI, Google, Meta, or others shipping separate “Colorado‑only” or “California‑only” model forks with uniquely constrained outputs. (lathropgpm.com) In practice, commentators note that large companies often respond to the strictest state rules by raising their baseline standard nationwide (as happened with privacy and emissions rules), rather than building many jurisdiction‑specific products. (govfacts.org) That cuts against the mechanism Sacks predicts, but the relevant laws aren’t yet in full force, so this might still change.

  4. Regulatory and political environment is still in flux. There is an active fight over whether states can keep or expand these rules: a bipartisan coalition of state attorneys general has urged Congress not to preempt state AI regulations, while the Trump administration and allied groups have attacked certain state regimes as creating “woke” AI and are pushing for federal preemption or a single national approval regime. (reuters.com) Because this preemption battle is unresolved, it remains unclear whether blue‑state rules will ultimately be strong enough to compel the kind of state‑specific model behavior Sacks describes.

Given (a) the explicitly multi‑year horizon, (b) the fact that the most relevant state laws have not yet been implemented or enforced at scale, and (c) the lack of current evidence of state‑specific “woke AI” model variants, the prediction cannot yet be judged as clearly right or clearly wrong. The appropriate classification at this time is inconclusive (too early).

politics, ai, government
During the Trump administration beginning in 2025, the White House will publicly support, and a growing number of Republican legislators will eventually back, federal legislation to preempt state‑level AI regulations and establish a single national AI regulatory standard.
So the feds preempted that. And I think we should do the same thing on AI. That's what the president basically said in his speech. So I think the administration ultimately will support this. And I think I think more Republicans will come on board as they realize what the blue states are doing here is not helpful for conservatives.
Explanation

Evidence from 2025 shows that the dynamic Sacks described has clearly materialized, even though Congress has not yet enacted a comprehensive AI law:

  • White House support for preempting state AI laws. President Trump’s signature budget reconciliation package, the One Big Beautiful Bill Act (H.R. 1), was driven by the White House and framed as his core second‑term agenda. The House‑passed version explicitly contained a 10‑year moratorium on “state‑level enforcement of any law or regulation regulating artificial intelligence (AI),” i.e., a federal preemption of state AI rules built into Trump’s megabill. (en.wikipedia.org) Separately, reporting shows Trump personally asked Congress to add a provision blocking state AI laws to the National Defense Authorization Act, with plans to sue and defund states that passed such laws. (reuters.com) A leaked draft executive order, “Eliminating State Law Obstruction of National AI Policy,” would direct DOJ to challenge state AI statutes and declares the administration’s goal of a “minimally burdensome national standard — not 50 discordant state ones,” further confirming White House backing for a single, nationally determined AI regime. (arstechnica.com)

  • Growing Republican legislative backing. The 10‑year moratorium was written by House Republicans into Trump’s flagship reconciliation bill and passed the House on a near‑party‑line vote, meaning a large share of GOP representatives voted to strip states of AI‑regulatory authority. (en.wikipedia.org) Coverage of the provision describes Republicans as arguing it is needed to avoid a “patchwork” of state rules and to give Congress time to pass a comprehensive federal AI framework — precisely the “single national standard” rationale Sacks anticipated. (notus.org) In the Senate, Republican leadership initially advanced versions tying federal AI funds to states refraining from AI regulation, and only later, after major backlash from state officials of both parties, did the Senate vote 99–1 to strip the AI‑preemption language from Trump’s megabill. (reuters.com) That sequence (House passage, then partial Republican retreat in the Senate) shows that substantial numbers of GOP lawmakers did in fact get “on board” with preemptive federal AI legislation before political costs forced a climb‑down.

  • National‑standard framing vs. actual law on the books. Reuters and other outlets note that tech firms like OpenAI, Google, Meta, and Andreessen Horowitz have been pressing for national AI standards to replace state rules, and that Trump’s efforts to block state laws through the budget bill and NDAA align with that push — even though, as of late November 2025, Congress still has not enacted an overarching federal AI regulatory framework. (reuters.com) Instead, the White House and allied Republicans have repeatedly tried to (a) stop states from regulating AI and (b) position federal authority as the locus of future AI rules, which matches Sacks’s substance: preempting blue‑state AI regulation in favor of a single, federally controlled standard.

Because Sacks’s prediction was about who would support what kind of policy (White House + an expanding bloc of Republicans backing federal preemption and a unified national approach), not about the ultimate success of those bills, the core elements he forecast have already occurred. The specific preemption provisions have been beaten back so far, and no comprehensive national AI statute exists yet, but his political prediction about Trump’s administration and Republican legislators lining up behind federal preemption of state AI rules is borne out by the 2025 record.