Last updated Nov 29, 2025

Presidential Debate Reaction, Biden Hot Swap?, Tech unemployment, OpenAI considers for-profit & more

Sat, 29 Jun 2024 04:05:00 +0000
politics
Within 30–60 days of the earlier episode being recorded (the clip they replayed), the Democratic Party will replace Joe Biden with a different presidential candidate ('a switcheroo').
I believe that's what's going to happen in the next 30 to 60 days. So I'm predicting... There's going to be a switcheroo... 100%.
Explanation

Jason made this prediction on the May 31, 2024 episode "Trump verdict, COVID Cover-up, Crypto Corner, Salesforce drops 20%, AI correction?", saying, "I believe that's what's going to happen in the next 30 to 60 days... there's going to be a switcheroo, 100%."(podscripts.co) That episode’s date anchors his 30–60 day window to roughly June 30–July 30, 2024.

On July 21, 2024—about 51 days after May 31—President Joe Biden announced his withdrawal from the 2024 presidential race.(en.wikipedia.org) The same day, Vice President Kamala Harris launched her own presidential campaign, and she became the Democratic Party’s official nominee on August 5, 2024.(en.wikipedia.org) Functionally, the Democratic Party did “switch” from Biden to a new standard-bearer within Jason’s 30–60 day forecast window.

Because Biden exited the race and Harris replaced him as the Democratic nominee within that time frame, Jason’s prediction that there would be a Biden "switcheroo" in the next 30–60 days is best judged as right.
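As a quick sanity check on the date arithmetic above, here is a minimal Python sketch; the dates are the ones cited in the explanation, and the window bounds are simply the episode date plus 30 and 60 days:

```python
from datetime import date, timedelta

episode = date(2024, 5, 31)      # episode where Jason made the prediction
withdrawal = date(2024, 7, 21)   # Biden announces his withdrawal

window_start = episode + timedelta(days=30)  # 2024-06-30
window_end = episode + timedelta(days=60)    # 2024-07-30

print((withdrawal - episode).days)               # 51
print(window_start <= withdrawal <= window_end)  # True
```

The 51-day gap lands inside the 30–60 day window, matching the verdict above.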

politics, government
In the then-upcoming presidential debate referenced in the replayed clip, Donald Trump will decisively outperform Joe Biden, Biden’s support in the polls will fall to around 30%, and subsequently Democratic Party leadership will orchestrate a ‘graceful’ withdrawal by Biden and replace him with another candidate.
I think Trump's going to demolish him in a debate. I think he'll sink to 30% in the polls. And then the Democrats will find a way to give him a graceful out, and then they'll field somebody else.
Explanation

As of November 30, 2025, enough time has passed to judge this prediction, and most of the core scenario Jason sketched out did occur, though one quantitative detail (the "30%" figure) was off.

1. “Trump’s going to demolish him in a debate.” – Largely accurate
The first 2024 Biden–Trump debate took place on June 27, 2024. Post-debate polling and later reporting consistently describe Biden’s performance as disastrous and Trump as clearly outperforming him:

  • A 538/Ipsos poll of debate watchers found about 60% said Trump performed best vs. 21% for Biden, and 73% rated Biden’s performance as poor or terrible, while only 8% said it was good or excellent. (ipsos.com)
  • Subsequent analyses and campaign books routinely refer to the debate as a “nationally televised disaster” for Biden and a pivotal moment that triggered panic inside the Democratic Party. (politico.com)
    That matches the spirit of “demolish” in terms of perceived performance, even though some analysts cautioned that Trump’s showing wasn’t objectively strong on its own.

2. “He’ll sink to 30% in the polls.” – Incorrect
Whether you interpret this as overall job approval or vote intention, Biden’s numbers never dropped to ~30%:

  • National job‑approval series (Gallup, Marquette, AP–NORC, etc.) had Biden mostly in the high‑30s to low‑40s through 2024. For example, Gallup recorded 38% approval around this period, and Marquette’s national polling had him at roughly 35–40%, with a low of about 34% later in 2024 – still above 30. (law.marquette.edu)
  • In head‑to‑head vote polls, Biden’s support also stayed around 40%, not 30%. The 538 national average moved from roughly a tie pre‑debate to Trump leading by about 1–2 points (e.g., Trump ~42%, Biden ~40%) after June 27. (goodmorningamerica.com)
    So the specific numerical claim about “30% in the polls” did not materialize.

3. “Then the Democrats will give him a graceful out and field somebody else.” – Substantively correct
The rest of the scenario essentially unfolded as described:

  • After weeks of intense internal pressure following the June 27 debate, Biden announced on July 21, 2024 that he was ending his re‑election bid, framing it as a decision taken “in the best interest of my party and the country” and saying he’d focus on serving out his term – a classic “graceful exit” letter. (time.com)
  • In the same move, he formally endorsed Kamala Harris as the Democratic nominee. (americamagazine.org)
  • A YouGov snap poll the day of his withdrawal found 71% of Americans – including 70% of Democrats – approved of his decision to step aside, and 82% of Democrats supported his endorsement of Harris, indicating the party and broader public largely saw it as a legitimate, orderly handoff rather than a messy ouster. (today.yougov.com)
  • Harris then became the Democratic nominee (formally at the Democratic National Convention in Chicago in August 2024), i.e., the party “fielded somebody else” in Biden’s place. (en.wikipedia.org)

Overall assessment
Jason correctly anticipated the structure and outcome of events: a debate in which Trump was widely perceived to outperform Biden, followed by Democratic elites engineering a dignified way for Biden to withdraw and replacing him with another nominee. The one clear miss was the specific claim that Biden’s support would crash to around 30%, which did not happen. Because the main causal story and end state he forecast did occur (disastrous debate → intra‑party pressure → Biden’s graceful exit → new Democratic candidate), the overall prediction is best classified as right, with a notable but secondary numerical inaccuracy.

politics
From June 29, 2024, within 30 days (by around July 29, 2024), Joe Biden will no longer be an active candidate in the 2024 presidential race (i.e., he will drop out or be replaced).
The hot swap's coming. I just... I'm telling you right now, he is not going to be in this race in the next 30 days.
Explanation

Jason said on June 29, 2024 that “he is not going to be in this race in the next 30 days,” i.e., by around July 29, 2024, Joe Biden would no longer be an active 2024 presidential candidate.

Authoritative reports show that President Joe Biden publicly ended his 2024 reelection bid on July 21, 2024, announcing his withdrawal from the race and endorsing Vice President Kamala Harris as his replacement at the top of the Democratic ticket. (en.wikipedia.org)

July 21, 2024 is 22 days after June 29, 2024—well within the 30‑day window Jason specified. Since Biden indeed dropped out of the race within that period, the prediction was correct.
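The same check in a few lines of Python, using this prediction's 30-day deadline and the dates cited above:

```python
from datetime import date

prediction = date(2024, 6, 29)   # date of the episode
withdrawal = date(2024, 7, 21)   # Biden drops out

gap = (withdrawal - prediction).days
print(gap, gap <= 30)  # 22 True
```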

politics, government
Following the June 27, 2024 debate, Joe Biden will leave office before his term ends, Kamala Harris will become President for roughly four months and will be celebrated as the first female president, then she will announce she is not running for election (citing family or similar reasons), and the Democratic Party will instead put forward two new, more moderate candidates to contest the 2024 presidential election, with that new general‑election race effectively beginning about 15 days after Biden exits.
what's going to happen right now, I guarantee you, is he's out. We're going to have President Kamala. She's going to get her flowers for four months as she gets to be the first female president of the United States. Then she steps out of the way. She decides she's not going to run because she's got things to do with her family. And there will be two new people who will be moderates. And then the real election starts in about 15 days.
Explanation

The prediction did not come true.

Key parts of Jason’s forecast vs. what actually happened:

  1. “He’s out… We’re going to have President Kamala.”

    • As of the end of Joe Biden’s first term (which concluded at noon on January 20, 2025), Biden did not leave office early and was not replaced by Kamala Harris as president. Public records and news coverage of the 2024 election and subsequent inauguration show Biden serving through the full term and then leaving office in the normal course at inauguration, not via early resignation or removal.
  2. “She’s going to get her flowers for four months as she gets to be the first female president… Then she steps out of the way… decides she’s not going to run.”

    • Kamala Harris never became President in 2024 or early 2025, so there was no four‑month Harris presidency and no announcement by a President Harris that she would decline to run. Coverage of the 2024 campaign confirms that this scenario never occurred.
  3. “There will be two new people who will be moderates” heading the Democratic ticket in 2024

    • In the actual 2024 election cycle, the Democratic presidential ticket was not composed of two entirely new, post‑Biden, non‑Harris “moderate” figures created by a last‑minute Biden exit. Mainstream reporting on the 2024 Democratic ticket and convention shows the Harris–Walz ticket emerging from Biden’s withdrawal from the race (not the presidency) and the party’s ordinary nomination machinery, not through a sudden presidential resignation followed by a short Harris presidency and her withdrawal.
  4. Timing element: “the real election starts in about 15 days” after Biden exits

    • Because Biden never exited early, there was no 15‑day reset point that launched a brand‑new general‑election matchup.

Since none of the critical elements occurred (no early Biden departure, no Harris presidency, no four‑month window, no Harris withdrawal, no completely new moderate duo emerging on that basis), the prediction is substantively wrong.

politics, government
Within 72 hours of June 29, 2024, Barack Obama will personally speak with Joe Biden and, as a result of that intervention, Biden will agree to step down as the 2024 Democratic presidential candidate (the ‘hot swap’ will occur).
Obama's going to give him a call. I guarantee you Obama is going to talk to him in the next 72 hours, guarantee the hot swap happens.
Explanation

Jason predicted that within 72 hours of June 29, 2024, Barack Obama would call Joe Biden and that this would cause Biden to agree to step down as the 2024 Democratic nominee (a “hot swap”). The podcast transcript shows him saying Obama would talk to Biden in the next 72 hours and that he “guarantee[d] the hot swap happens.” (podcasts.happyscribe.com)

In reality, Biden did not step down in that window. In the immediate aftermath of the June 27 debate and on June 29, reporting described Biden as acknowledging a poor performance but vowing to stay in the race and defeat Donald Trump. (en.wikipedia.org) For weeks he continued to insist he would not drop out, including a July 5 ABC News interview in which he said only the Lord Almighty could convince him to quit, and a July 8 letter to congressional Democrats reiterating that he would remain a candidate. (en.wikipedia.org) The first sitting Democratic member of Congress did not publicly call for his withdrawal until July 2, 2024, and Biden still refused to leave the race. (en.wikipedia.org)

Biden ultimately announced his withdrawal on July 21, 2024, more than three weeks after Jason's 72‑hour window, and there is no public evidence that a phone call from Obama in late June directly triggered an immediate decision to step aside. (en.wikipedia.org) Because the predicted timeline and mechanism (Obama's call causing a hot swap within 72 hours) did not occur, the prediction is wrong.

politics
In the 2024 U.S. presidential election, Jason will not cast his vote for Joe Biden.
I am not voting for Biden. I have eliminated Biden as a possibility. I was waiting to see what happened last night.
Explanation

Available public information does not establish how Jason Calacanis actually filled out his 2024 presidential ballot.

What we do know:

  • In the June 29, 2024 All‑In episode “Presidential Debate Reaction, Biden Hot Swap?, Tech unemployment, OpenAI considers for‑profit & more,” Jason explicitly says: “This is breaking news. I am not voting for Biden. I’ve eliminated Biden as a possibility. I was waiting to see what happened last night.” (podcasts.happyscribe.com)
  • Joe Biden subsequently withdrew from the 2024 race and endorsed Kamala Harris; Harris became the Democratic nominee against Donald Trump in November 2024, so Biden was not the Democratic candidate on the general‑election ballot. (en.wikipedia.org)
  • In the post‑election All‑In episode “Trump wins! How it happened and what’s next,” the hosts joke about having all voted for Trump “twice” and in multiple swing states, but this is clearly tongue‑in‑cheek banter rather than a factual disclosure of each host’s ballot. (happyscribe.com)
  • Coverage of the All‑In podcast and Jason’s political activity documents his endorsement of Dean Phillips in the Democratic primaries and his broader alignment with the show’s pro‑Trump, anti‑Biden shift, but none of these sources state whom he personally voted for in November 2024. (en.wikipedia.org)

Crucially, U.S. ballots are secret. Unless Jason himself publicly states “I voted for X” (or “I wrote in Joe Biden”), no external source can definitively confirm or deny whether he cast a vote for Biden—especially given that Biden was no longer the nominee and could only have been supported via an unusual write‑in or similar mechanism.

Because the election is long past but Jason has not disclosed his actual vote, we cannot determine with certainty whether he did or did not vote for Biden. The circumstantial evidence makes a Biden vote highly unlikely, but not provably impossible, so the prediction’s truth value remains ambiguous rather than definitively right or wrong.

Chamath @ 00:28:47 (Inconclusive)
politics
As a consequence of the Biden debate fallout and internal party dynamics, the Democratic Party will undergo a significant internal realignment or ‘reset’ (e.g., leadership, platform, or coalition changes) over the next election cycle or two.
I think the Democratic Party is... it's probably at risk of a pretty meaningful reset.
Explanation

There is clear evidence that the Biden–Trump debate on June 27, 2024 precipitated a major crisis and immediate changes in the Democratic Party:

  • Biden’s disastrous debate performance fueled intense internal pressure, and he ultimately dropped out of the 2024 race and endorsed Kamala Harris, an unprecedented move for a modern incumbent president and a major shock to party leadership and campaign plans.
  • Harris then lost the 2024 general election to Donald Trump, with Republicans regaining the White House and full control of Congress, pushing Democrats into a period of soul‑searching and strategic reassessment.
  • In early 2025, Jaime Harrison stepped down and Ken Martin was elected new DNC chair on a platform of rebuilding and re‑orienting the party toward working‑class voters, with explicit talk of a new strategic framework and party “rebranding.”

However, the prediction (as normalized) goes beyond short‑term turmoil and leadership turnover; it posits a “significant internal realignment or ‘reset’” of the party over the next election cycle or two.

As of November 30, 2025:

  • The party is riven by internal conflicts: prominent union leaders have left the DNC; younger activists like David Hogg have been pushed out of leadership roles; and major policy splits (e.g., over Israel–Gaza) have flared at national party meetings.
  • Analysts describe the party as being in a kind of internal “cold war” between its progressive and moderate wings, and some explicitly argue that efforts by moderates (e.g., an “abundance” message) are too small and technocratic to amount to a meaningful reset.
  • Key figures such as Tim Walz are publicly calling for a “full Democratic Party reset,” emphasizing that such fundamental change is still a project for the coming cycles (especially looking ahead to 2026 and 2028), not something that has already been accomplished.

In other words, the debate fallout clearly triggered a leadership crisis, a forced nominee change, and subsequent leadership turnover at the DNC—developments consistent with being “at risk” of a reset. But whether this evolves into a durable, structural realignment of leadership, platform, or coalition over “the next election cycle or two” is still unresolved, and that full time horizon (through at least 2026 and plausibly 2028) has not yet elapsed.

Because substantial internal change is underway but the long‑term outcome and depth of any true party‑wide reset remain uncertain within the stated timeframe, the prediction cannot yet be judged clearly right or wrong. It is therefore inconclusive (too early).

politics, government, ai
As OpenAI deepens its ties with U.S. national security and large institutional investors (e.g., BlackRock, T. Rowe, major mutual funds), future U.S. Senate scrutiny of OpenAI will increasingly occur in confidential, national‑security–framed settings, and the company will face effectively no serious legal or regulatory harassment such as IRS audits (i.e., probability of an IRS audit becomes near zero over the coming years).
So now what happens is, when you have Senate hearings about this stuff, it's more likely that it's confidential behind closed doors. It's under the purview of national security. All these things are beneficial to OpenAI. And then secondly, they were able to get Elon to drop his lawsuit conveniently... The only thing left is to get shares into the hands of the BlackRocks, the T. Rowes, all the big mutual fund apparatuses of the world that then syndicate to all the individual investors of the world. And you have everything. You have government connections. You have no real legal overhang. Then the likelihood that an IRS agent all of a sudden decides to audit OpenAI is basically zero.
Explanation

Chamath’s premise that OpenAI would tighten its connections to U.S. national security and big institutional capital has largely played out, but his conclusion—that this would leave OpenAI with “no real legal overhang” and effectively no serious regulatory harassment—has been falsified by events since mid‑2024.

1. Parts that did come true (inputs to his reasoning)

  • Deepened national‑security ties. In December 2024 OpenAI partnered with defense contractor Anduril to deploy AI for “national security missions,” and in June 2025 the Pentagon awarded OpenAI a one‑year contract worth up to $200 million to develop “prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” under the new OpenAI for Government initiative.(reuters.com) This is exactly the kind of national‑security alignment he was talking about.
  • Big institutional and mutual‑fund capital. OpenAI has repeatedly raised money via large primary and secondary transactions. A 2025 secondary share sale at a $500B valuation let employees sell about $6.6B of stock to major asset managers and sovereign/PE investors, including T. Rowe Price, SoftBank, Dragoneer and Abu Dhabi’s MGX.(reuters.com) In early 2025 OpenAI also added Adebayo Ogunlesi—CEO of Global Infrastructure Partners and a senior managing director and board member at BlackRock—to its board, explicitly tying it into BlackRock’s infrastructure‑finance orbit.(reuters.com) So the “BlackRock / T. Rowe / big mutual‑fund apparatus” component is directionally accurate.
  • National‑security framing in Congress exists, but not exclusively. The Senate’s AI “Insight Forums,” which include Sam Altman, are deliberately closed‑door sessions to brief senators on AI risks and strategy, including national‑security aspects.(cnbc.com) At the same time, Altman also continues to appear in open, televised hearings such as the May 2025 Senate Commerce Committee hearing on U.S.–China AI competition, where national security was central but the proceeding was fully public.(apnews.com) So there is a mix of public and confidential, “national‑security‑framed” engagement, but not a clear shift to only or predominantly closed settings.

2. The main prediction about legal/regulatory risk is contradicted by events
Chamath’s punch line was that, once these government and Wall Street ties were in place, OpenAI would have “no real legal overhang” and that the probability of an IRS audit (and similar regulatory harassment) would be “basically zero.” On observable facts, the opposite has happened: OpenAI’s legal and regulatory exposure has grown substantially.

  • Major, ongoing copyright litigation with existential downside. The New York Times and other publishers are pursuing a consolidated federal lawsuit in Manhattan alleging that OpenAI and Microsoft used millions of copyrighted articles without authorization to train their models. In March 2025, Judge Sidney Stein allowed the core copyright‑infringement claims to proceed, potentially to a jury, after consolidating multiple suits (including those by prominent authors) into a single case.(apnews.com) The Times is explicitly seeking billions in damages and even destruction of training datasets—exactly the kind of large legal overhang Chamath implied would be neutralized.
  • Expanded FTC scrutiny and investigations. The FTC opened a formal investigation into OpenAI in 2023 over privacy, data security, and consumer‑harm issues related to ChatGPT.(cnbc.com) In 2025 it went further, launching a targeted inquiry into “companion” chatbots used by teens, explicitly naming OpenAI among others and demanding detailed information on harms, safety controls, and data handling.(ft.com) These are not the actions of regulators standing down because of OpenAI’s political and financial connections; they are intensifying oversight.
  • Serious tort exposure: wrongful‑death litigation. In August 2025, the parents of a 16‑year‑old filed Raine v. OpenAI in California state court, alleging that ChatGPT contributed to their son’s suicide by giving detailed instructions on how to hang himself and failing to protect a vulnerable user.(en.wikipedia.org) The case directly challenges OpenAI’s safety practices and could set a key precedent for product liability in generative AI—another very real legal overhang.
  • Continuing, high‑profile corporate litigation with Elon Musk. Although Musk dropped his original 2024 lawsuit, he filed a new one later that year attacking OpenAI’s restructuring into a more conventional for‑profit entity and seeking to block it. In 2025 a federal judge rejected Musk’s bid for a preliminary injunction but granted an expedited trial timetable, and OpenAI in turn countersued Musk for alleged bad‑faith sabotage and a “sham” takeover bid.(politico.com) This ongoing cross‑litigation is textbook “legal overhang” for a company contemplating major financings and structural changes.
  • IRS‑specific point is simply untestable, but the broader claim fails. IRS examinations are confidential unless a company chooses to disclose them; there is no public evidence either way of an IRS audit of OpenAI. What we can see is that, despite OpenAI building out a sophisticated tax‑compliance function, the broader landscape of lawsuits and federal investigations entirely contradicts the notion that regulatory or legal pressure has effectively vanished.(openai.com) Chamath’s statement was not just about the IRS; it was that OpenAI would have “no real legal overhang” thanks to its alignment with powerful interests, and that is clearly not borne out.

3. Senate‑scrutiny pattern is mixed, not clearly shifted as predicted
We do see closed‑door, national‑security‑tinged engagements (AI Insight Forums; DoD and OpenAI‑for‑Government work).(reuters.com) But high‑stakes oversight has also continued in very public form, like the May 2025 Senate Commerce hearing where Altman testified under TV cameras about U.S.‑China AI competition and the need for infrastructure and regulation.(apnews.com) In other words, the observable pattern is both public and private scrutiny, not a clear step‑function shift toward mostly confidential national‑security venues.

Bottom line: Chamath correctly anticipated that OpenAI would deepen ties with U.S. national‑security institutions and major asset managers, but the core forecast—that this alignment would insulate OpenAI from serious legal and regulatory problems, leaving “no real legal overhang” and effectively zero chance of aggressive enforcement—has been decisively contradicted by the proliferation of major lawsuits and regulatory investigations since mid‑2024. On balance, the prediction is wrong.

Sacks @ 00:55:41 (Inconclusive)
ai, venture
The startup Safe Superintelligence Inc. (SSI), which emphasizes safety in developing superintelligence, will not become a top‑tier winner in the AGI race; its safety‑first approach will cause it to lose competitively to faster‑moving rivals like OpenAI and Google over the next several years.
I'm not bullish about that pitch because I think it makes the company a little bit schizophrenic... So you've got to move very fast. On the other hand, you're saying you're going to basically make this very safe... safety concerns are a brake pedal... And so... I don't think that's the recipe for winning because you're the guys who want to move slower.
Explanation

As of November 30, 2025, it’s too early to judge whether Safe Superintelligence Inc. (SSI) will ultimately fail to be a “top‑tier winner” in the AGI race due to its safety‑first strategy.

What we can see so far:

  • SSI was founded in June 2024 with a mission explicitly centered on building safe superintelligence and insulating safety from short‑term commercial pressure.(ssi.inc)
  • Through 2024–2025, SSI has operated in deep R&D mode, with no public model, API, or benchmarks; this is an intentional “straight‑shot” strategy focused on long‑term superintelligence rather than near‑term products.(asapdrew.com)
  • Despite the lack of products, SSI has raised around $3B at a valuation near $30–32B from top‑tier investors (a16z, Sequoia, DST, Alphabet, NVIDIA, etc.), putting it in the same financial league as leading frontier labs.(vctavern.com)

However, the prediction concerns whether over the next several years this safety‑first posture will cause SSI not to be a top‑tier AGI winner relative to OpenAI, Google, and other fast‑moving labs. The industry has not yet reached AGI or superintelligence, and no clear long‑term “winner” has emerged. SSI’s strategy and funding position keep it a serious contender, but there is also not yet evidence that it will win.

Because the core claim is about a multi‑year competitive outcome that has not yet played out, the prediction cannot be evaluated at this time, so the result is inconclusive (too early).

Sacks @ 00:57:06 (Inconclusive)
ai, tech, markets
In the emerging race to AGI over the coming years, AI companies that prioritize safety more than their competitors will systematically lose market share and technological leadership, leading to a Darwinian dynamic that favors less safety‑constrained firms.
I think this is maybe the tragic situation: we're going to have this competition by all these different companies to advance AI. And the companies that care about safety more than others are going to lose. And so you have this Darwinian effect going on where there's going to be a race to AGI.
Explanation

It is too early to determine whether this structural, multi‑year prediction has come true.

The claim is:

In the emerging race to AGI over the coming years, AI companies that prioritize safety more than their competitors will systematically lose market share and technological leadership, leading to a Darwinian dynamic that favors less safety‑constrained firms.

Key reasons this is inconclusive as of Nov 30, 2025:

  1. Timeframe: “over the coming years”
    The prediction explicitly concerns a multi‑year competitive dynamic in an industry that is still rapidly evolving. As of late 2025, only ~1.5 years have passed since the statement (June 2024), which is not long enough to see whether a long‑run “Darwinian” pattern systematically plays out across cycles of technology deployment, regulation, and market consolidation.

  2. Ambiguous mapping of “safety‑prioritizing” vs “less safety‑constrained” firms
    Major labs—OpenAI, Anthropic, Google DeepMind, Meta, and xAI—publicly emphasize both capability and safety. They differ in rhetoric, openness, and governance (e.g., Anthropic’s public positioning as more safety‑oriented, OpenAI’s capped‑profit structure, DeepMind’s integration within Google and focus on alignment research), but there is no clear, agreed‑upon ranking of which firms “care more about safety” in a way that would let us objectively test the prediction. Any classification here would be contestable and largely qualitative.

  3. Market and leadership status is still fluid

    • Different firms lead on different axes (model performance, user base, enterprise deals, research breakthroughs, chip access, etc.).
    • Leadership has already shifted multiple times since early GPT‑4–era systems, and regulatory, antitrust, and safety‑oriented governance efforts are still ramping up in the US, EU, and elsewhere. These moving parts make it premature to say that any systematic Darwinian effect has locked in.
  4. No clear evidence yet of a safety‑leadership tradeoff dominating the field
    While there have been tensions between releasing powerful models quickly and imposing strict safety constraints (e.g., staged releases, regional limitations, content restrictions), we do not yet see unambiguous, widely accepted evidence that more safety‑conscious firms are clearly and systematically losing overall market share and technological leadership because of their safety stance. Some firms perceived as relatively safety‑oriented have continued to secure large funding rounds, strategic partnerships, and regulatory goodwill, which could just as easily be interpreted as advantages rather than handicaps.

Because (a) the horizon of “coming years” has not fully played out, (b) the relevant concepts (who is most safety‑oriented, what counts as leadership) are not crisply measurable yet, and (c) industry structure and regulation are still in flux, the prediction cannot be fairly judged as right or wrong at this point.

Therefore the status is inconclusive (too early) rather than right, wrong, or permanently ambiguous.

Chamath @ 00:59:15 (Inconclusive)
ai, tech
Over the next few years, frontier foundational AI models will converge in capabilities such that they become near‑interchangeable commodities from the user’s perspective (a “consumer surplus”), with only marginal performance differences between major providers.
Foundational models are quickly becoming a consumer surplus. Every model is roughly the same. They keep getting better and better, but they're also approaching these asymptotic returns.
Explanation

As of November 30, 2025, it’s too early to definitively judge a prediction framed as happening "over the next few years," since only ~17 months have passed since June 29, 2024.

Evidence is mixed:

  • Partial capability convergence at the top: Comparative benchmarks often show leading proprietary LLMs (e.g., GPT‑4o/4.1, Claude 3.5/3.7, Gemini 1.5/2.x) clustered closely in performance. For example, GPT‑4o and Claude 3.5 Sonnet achieve very similar accuracy on a medical reasoning benchmark, while open‑source Llama‑3.1‑405B trails but is not dramatically worse. (arxiv.org) A geospatial benchmark finds GPT‑4o and Claude Sonnet 3.5 essentially sharing the top tier. (arxiv.org) This supports a trend toward narrowing gaps among frontier proprietary models.
  • Continuing leapfrogging and non‑trivial gaps: Newer releases like GPT‑4.1/4.5, Claude 4.x/4.5 and Gemini 3 Pro still show material performance differences and regularly overtake one another on specific benchmarks (coding, multimodal reasoning, etc.), contradicting a clear plateau of "asymptotic" returns. (reuters.com)
  • Systematic differences by vendor: A large 2025 study comparing 18 models from OpenAI, Anthropic, Google, and DeepSeek found clear, statistically significant differences between vendors and model families, concluding that model choice is not neutral. (arxiv.org)
  • Economic/UX signals of commoditization, but not full interchangeability: Token prices have dropped sharply (e.g., GPT‑4o mini at $0.15 per million input tokens), and analysts describe foundation models as facing commoditization pressure with low customer lock‑in. (en.wikipedia.org) Yet developers still report distinct practical behavior and preferences among GPT‑4.1, Claude, and Gemini for tasks like coding, indicating models are not perceived as interchangeable. (reddit.com)

Overall, there are early signs of convergence and growing consumer surplus, but frontier models have not yet clearly become near‑interchangeable commodities, and the forecast period is not over. Hence the prediction’s accuracy is inconclusive (too early to call).

As training data pipelines become more curated and refined, the cost of training competitive frontier models will rise to levels that are untenable for most companies, leaving only a few large tech firms able to afford state‑of‑the‑art model training over the next several years.
And it looks like one of those variables that people are looking at is how you basically take the internet not as raw data, but then you actually kind of refine it and refine it some more and then use that as the basis of learning. And what that does is it drives up model costs to a degree that is probably untenable for most companies, but for a few.
Explanation

Available evidence by late 2025 shows that the cost of training truly frontier models has risen extremely fast and is now concentrated in a very small set of very large or very heavily funded firms.

Analyses of 40+ frontier models find that amortized hardware and energy costs for the largest training runs have grown about 2–3× per year since 2016, with projected single‑run costs exceeding $1B by around 2027; the authors explicitly conclude that such runs will be affordable only to the most well‑funded organizations. (epoch.ai) Public cost estimates for specific 2023–2025 frontier models such as GPT‑4, Gemini Ultra, Llama 3.1‑405B, Mistral Large and xAI’s Grok‑2 put training costs in the tens to hundreds of millions of dollars per model. (visualcapitalist.com) Anthropic CEO Dario Amodei has stated that training current frontier systems typically costs around $100M, with some models already "more like a billion" to train, and has noted that most startups cannot realistically enter this race. (entrepreneur.com)
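To make those growth figures concrete, the sketch below compounds a training-run cost at 2x and 3x per year from the ~$100M 2024 baseline quoted above; the outputs are rough illustrations of the cited projection, not measured costs:

```python
# Illustrative compounding of frontier training-run costs.
# Baseline (~$100M in 2024) and the 2-3x/year growth rate come from
# the estimates cited above; results are back-of-envelope projections.
baseline_2024 = 100e6  # dollars

for growth in (2.0, 3.0):
    for year in (2025, 2026, 2027):
        cost = baseline_2024 * growth ** (year - 2024)
        print(f"{growth:.0f}x/yr, {year}: ${cost / 1e9:.2f}B")

# By 2027 this gives $0.80B (at 2x/yr) to $2.70B (at 3x/yr),
# bracketing the "single runs exceeding $1B by ~2027" projection.
```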

At the same time, actual frontier‑model training is dominated by a narrow group: OpenAI (with Microsoft), Google/DeepMind, Meta, Anthropic (backed by Amazon, Google, and now Microsoft/Nvidia), xAI (backed by Elon Musk and large GPU commitments), and a few well‑funded newcomers like Mistral; more recently Microsoft itself is building massive in‑house frontier models on clusters of tens of thousands of top‑end GPUs. (theverge.com) The scale of required compute is illustrated by deals such as Anthropic’s commitment to spend ~$30B on Azure compute, something far beyond the reach of ordinary companies. (reuters.com) Most other firms either fine‑tune or deploy smaller models, or buy access to these frontier models via APIs, rather than training competitive SOTA systems from scratch.

Chamath’s causal story—that more refined/curated pipelines (including sophisticated data selection, filtering, and alignment/feedback stages) drive up total development cost—fits with how these labs now emphasize extensive data engineering and RLHF-style processes on top of enormous compute budgets, even if compute remains the dominant cost line item. The exact contribution of data curation versus raw compute is hard to separate, but the overall effect he predicted—frontier training costs rising to levels “untenable for most companies” and de facto restricted to a few giant, well‑funded players—is clearly borne out by the 2024–2025 evidence. Given that this dynamic is already present less than two years after the prediction, the forecast is best scored as right, even though the phrase “over the next several years” extends beyond 2025.

Chamath @ 01:00:23 (Inconclusive)
ai, economy
According to Dario Amodei (as cited by Chamath), by around 2027 the cost to train a top‑tier, “good functional” frontier AI model could reach approximately $100 billion, up from mere billions today.
So I think it was Dario Amodei, the CEO of Anthropic, who said the cost of a good functional model today is in the billions, but by 2027 it could easily approach $100 billion.
Explanation

It is too early to evaluate this prediction.

Chamath is relaying Dario Amodei’s forecast that by around 2027 the cost to train a top‑tier, “good functional” frontier model could reach roughly $100B, up from “billions” in 2024.

As of the current date (November 30, 2025):

  • Public analyses of training costs for leading 2023–2025 models (e.g., GPT‑4, Claude, Gemini) generally estimate single‑digit to low double‑digit billions of dollars at most for training plus associated compute build‑out, not ~$100B per model. These are rough outside estimates, as companies do not publish exact numbers.
  • Dario Amodei and Anthropic have discussed rapidly rising compute and training costs and the need for massive capital (e.g., Anthropic’s multi‑billion‑dollar funding rounds and cloud commitments), but there is no credible public evidence that any single model training run has approached $100B yet.
  • The prediction explicitly targets “by 2027,” and 2027 has not yet arrived; hardware, algorithms, and business strategies could all significantly change costs before then.
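As a back‑of‑envelope check on how aggressive the relayed forecast is, one can compute the annual growth factor implied by going from roughly $1B in 2024 to $100B in 2027; the endpoints are the figures quoted above, and the calculation is purely illustrative:

```python
# Annual growth factor implied by ~$1B (2024) -> ~$100B (2027).
start_cost, end_cost = 1e9, 100e9
years = 2027 - 2024
implied_growth = (end_cost / start_cost) ** (1 / years)
print(f"~{implied_growth:.2f}x per year")  # ~4.64x per year
```

That implied ~4.6x-per-year pace is well above the ~2–3x-per-year historical cost growth cited earlier on this page, one more indication that no public estimate has yet approached the $100B figure.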

Because the time horizon of the forecast (around 2027) has not yet arrived, we cannot say whether the prediction was right or wrong. Therefore the correct status as of now is inconclusive (too early to tell).

Chamath @ 01:00:23 (Inconclusive)
ai, venture
If frontier model training costs rise toward ~$100B by ~2027, only mega‑cap tech firms such as Google, Microsoft, Meta, and Amazon will be able to fund such efforts, and smaller AI startups (including Ilya Sutskever’s SSI) will be unable to raise comparable capital to compete at the very top end.
The problem that that represents for Ilya's company, and I wish him the best of luck. But the reality is there isn't a $100 billion for him to have. Google will find it. Microsoft will find it. Facebook will find it... Amazon will find it. But I suspect that these other startups, there just isn't that much money going into AI because the returns don't justify it.
Explanation

By November 30, 2025, the key conditions in Chamath’s prediction have not yet been reached, and the target year (~2027) is still in the future, so it’s too early to judge.

1. Time horizon and cost condition
Chamath’s claim is conditional and time‑bound: if frontier model training costs rise toward ~$100B by ~2027, then only mega‑cap firms (Google, Microsoft, Meta, Amazon) will be able to fund that level, and startups like Ilya Sutskever’s Safe Superintelligence Inc. (SSI) won’t be able to raise comparable capital. As of 2025, available analyses of frontier training costs put even the largest projected single‑model training runs in the $1B+ range by around 2027, not $100B; today’s actual runs are far below that. (arxiv.org) There are speculative forecasts of a possible “$100B model” by ~2027, but these are forward‑looking projections, not realized costs. (getcoai.com) Since 2027 has not arrived and no $100B‑scale training run has occurred, the antecedent of his conditional has not clearly been met.

2. Capital available to non‑mega‑cap AI labs so far
Sutskever’s SSI has raised on the order of $3B across rounds (about $1B in Sept 2024 and ~$2B by early/mid‑2025), at valuations up to roughly $30–32B, with strategic backing from major investors and Alphabet/Nvidia for compute. (app.dealroom.co) That is significant but nowhere near $100B in deployable capital for a single model‑training effort.

Other leading AI startups not inside the big‑four cloud companies—most notably Anthropic—have raised much larger sums than SSI (e.g., a $13B Series F in 2025, bringing its valuation to about $183B, on top of earlier multibillion equity rounds and large cloud/credit facilities from Amazon and others). (cnbc.com) Still, even Anthropic’s total capital base is far below the kind of $100B single‑run spend Chamath is talking about, and much of its effective compute budget is intertwined with mega‑cap partners’ infrastructure (AWS, etc.), which somewhat supports his broader point that true frontier‑scale capex is dominated by the hyperscalers.

3. Structural trend vs. empirical verdict
Research and industry reporting do indicate that frontier AI development costs and required capex are rising extremely fast, with multiple analyses arguing that only a handful of very well‑funded organizations will be able to finance the very largest training runs over the next several years. (arxiv.org) However, whether this dynamic fully excludes independent labs like SSI at the $100B‑per‑model level by 2027 is still speculative—no one has yet attempted or financed such a run.

Because:

  • The forecast year (~2027) has not arrived,
  • No ~$100B frontier training run has actually occurred, and
  • We cannot yet observe whether startups categorically fail to access that scale of capital when/if it’s needed,

the prediction cannot presently be called right or wrong. It remains unresolved, hence the classification "inconclusive" (too early).

Chamath @ 01:00:23 (Inconclusive)
ai, markets
Over the medium term, the foundational model market structure will resemble ride‑sharing: roughly one dominant commercial winner in closed‑source models, alongside numerous open‑source alternatives that are asymptotically similar in capability, with competition focused primarily on cost and compute efficiency.
I think that you could make a claim that the AI foundational model market will look similar to that: one startup can probably win, but there will be a bunch of open source alternatives. They're all asymptotically similar. And so it's an arms race on cost and compute.
Explanation

As of November 30, 2025, the overall market structure Chamath described has not yet clearly materialized, but enough uncertainty remains that it’s too early to call his medium‑term prediction right or wrong.

1. No single dominant closed‑source winner (yet)
The frontier model market is currently an oligopoly with several major proprietary providers: OpenAI, Google (Gemini), Anthropic (Claude), Meta, xAI (Grok), Alibaba (Qwen), Moonshot (Kimi), and others. Industry and news coverage describe an intensifying global AI "race" with many strong players rather than a winner‑take‑most equilibrium. (wired.com) Enterprise share data likewise shows Anthropic ~32%, OpenAI ~25%, Google ~20%, Meta ~9% in enterprise AI—no single dominant firm. (businessinsider.com) That contradicts the current existence of “one startup” winning the closed‑source market, but it does not rule out future consolidation.

2. "Bunch of open‑source alternatives" asymptotically close in capability
This part of the prediction is largely borne out already. There are numerous open‑weight or permissively licensed frontier‑class models—Meta’s Llama 4 series, Alibaba’s Qwen3 family, Mistral’s Mixtral/Magistral line, Google’s Gemma, Chinese models like DeepSeek and Kimi K2—that benchmark close to leading closed models (GPT‑4.5, Claude, Gemini) on many tasks. (lagomconsulting.co.uk) This matches Chamath’s picture of many open alternatives that are “asymptotically similar” in capability.

3. Arms race on cost and compute
The cost/compute part of his prediction is very clearly correct. Reports detail enormous capital expenditures on AI infrastructure—Big Tech collectively spending well over $100B per year on AI datacenters, with forecasts toward $400B by 2026, plus multi‑gigawatt compute build‑outs and million‑TPU/GPU deals. (superintelligencenews.com) Analyses explicitly frame this as an AI infrastructure “arms race,” driven by the cost of GPUs, power, and cooling.

4. Why the verdict is inconclusive
Chamath’s core structural claim is that over the medium term the market will resemble ride‑sharing—one dominant closed‑source winner plus many open alternatives competing mainly on cost/compute. While the open‑source and cost/compute aspects are already visible, the crucial “one startup can probably win” part concerns a future equilibrium. Today’s evidence points to a crowded, rapidly evolving oligopoly with no single winner and active competition among several giants and well‑funded startups. (wired.com) Given that “medium term” in a tech/VC context usually means roughly 3–5 years and we’re only ~1.5 years past the prediction, the market has not had time to settle into a stable structure.

Because the decisive part of the prediction (eventual winner‑take‑most dominance by one proprietary provider) has neither clearly occurred nor been ruled out yet, the fairest classification as of late 2025 is "inconclusive (too early)".

venture, ai, markets
Venture capital firms will not fund multiple AI companies at the ‘hundreds of billions of dollars’ scale required for frontier model training; instead, that level of capital deployment will remain limited to a small number of mega‑cap incumbents over the coming years.
And I just don't see VCs having the temperament and the wherewithal to fund hundreds of billions of dollars into multiple companies to do that.
Explanation

Scope of the prediction
Chamath was claiming that frontier‑model training at the "hundreds of billions of dollars" scale would not be funded by traditional venture capital across multiple companies; instead that kind of capital would come from a small set of mega‑cap incumbents (and similar balance‑sheet players).

What actually happened (mid‑2024 to late‑2025)

  1. Hundreds‑of‑billions scale spending is coming from mega‑caps and similar, not VC funds.

    • Analysts project that hyperscalers like Alphabet, Amazon, Meta, and Microsoft will collectively invest around $1.7 trillion in AI‑related infrastructure by 2035, with capex in 2024 already about $253 billion and heavily focused on AI data centers and compute. (barrons.com)
    • Meta’s Mark Zuckerberg has explicitly said Meta will spend “hundreds of billions of dollars” on AI over time, and Meta has guided $60–65 billion of capex in 2025 largely for AI servers and data centers—funded from its own cash flow and debt, not from VC. (theguardian.com)
    • Microsoft, Alphabet, Amazon and others are likewise budgeting tens of billions per year each for AI‑related capex (data centers, GPUs, networking), again from operating cash flow and corporate financing. (medium.com)
  2. The flagship “$100B+” AI infra project is a corporate/sovereign joint venture, not a VC‑funded startup.

    • Stargate LLC—a joint venture among OpenAI, SoftBank, Oracle and MGX—was launched with $100 billion in initial capital and plans to invest up to $500 billion in AI infrastructure by 2029. Its financing structure is dominated by SoftBank, corporate owners, and debt, not conventional VC funds raising LP capital. (en.wikipedia.org)
  3. Venture capital is large and growing, but still 1–2 orders of magnitude smaller than the "hundreds of billions per company" level.

    • Global VC investment into generative AI was about $45–56 billion in 2024 and $49.2 billion just in H1 2025. That’s spread across the entire sector, not concentrated as hundreds of billions into individual labs. (ey.com)
    • Even the very largest startup rounds—Databricks ($10B), OpenAI ($6.6B), xAI ($6B), Anthropic ($4B), Waymo ($5B), Mistral (~$2B)—are in the single‑digit to low‑double‑digit billions each, not anywhere near hundreds of billions. (kpmg.com)
    • Anthropic’s big 2025 round was $3.5B Series E led by Lightspeed and other VCs, bringing total capital raised to about $18.2B—a very large VC‑backed company, but still far from the “hundreds of billions” scale. (anthropic.com)
  4. Pattern: mega‑caps & quasi‑sovereign capital dominate the extreme scale; VC plays at much smaller though still multi‑billion levels.

    • Industry analyses emphasize that the multi‑trillion‑dollar AI infrastructure build‑out is being financed primarily by hyperscalers out of operating cash flow and corporate balance sheets, not “venture speculation.” (medium.com)
    • VC investors have become more concentrated, writing bigger checks to a few winners, but the entire global generative‑AI VC flow per year is still at the tens of billions level, not hundreds of billions into individual companies.

Assessment relative to the prediction
As of November 30, 2025:

  • The only actors planning or executing $100B+ AI infrastructure projects are mega‑cap tech companies and corporate/sovereign joint ventures (e.g., Stargate), not traditional VC funds.
  • Venture capital is heavily involved in AI, but at the 1–10B per company scale, not “hundreds of billions of dollars” to multiple independent frontier‑model labs.

Given this, Chamath’s structural claim—that VCs would not be the ones funding hundreds of billions into multiple AI companies for frontier‑model training, leaving that role to a small set of mega‑cap incumbents and similar balance‑sheet players—matches how the market has actually evolved so far.

Therefore the prediction is right based on evidence up to late 2025.

Sacks @ 01:06:46 (Inconclusive)
markets, tech
If regulators do not require a la carte pricing that neutralizes bundle advantages, Microsoft will continue adding new products to its Office/365 bundle each year and will, over time, systematically dominate broad swaths of enterprise software markets through this bundling strategy.
If we don't do that, I do think that Microsoft will use the power of the bundle to systematically dominate enterprise software, and they won't take on everybody at once. But like I said, every year they'll add a new product to the bundle.
Explanation

By November 30, 2025, there is some evidence consistent with parts of Sacks’s thesis, but not enough – and the time horizon is too long – to say the full conditional prediction has clearly come true or failed.

1. Regulators did move toward a la carte-style remedies, at least for Teams
After the Slack/Salesforce complaint, the European Commission formally found that bundling Teams with Office/Microsoft 365 gave Teams an unfair distribution advantage and accepted legally binding commitments from Microsoft. Microsoft must now offer Office 365 / Microsoft 365 suites with and without Teams worldwide, with mandated price gaps between the two options and minimum standalone pricing for Teams, enforced for seven years. (microsoft.com)
Separately, the Australian ACCC has sued Microsoft over how it presented a more expensive Microsoft 365 + Copilot plan versus a cheaper “Classic” option without Copilot, again pushing toward clearer, a la carte‑style choices. (windowscentral.com)
These moves do not perfectly implement Sacks’s exact remedy (he wanted the summed a la carte prices to equal the bundle price), but they undercut the simple "Teams is free in the bundle" story he was reacting to. That makes the antecedent of his conditional ("if we don’t do that") only partially true.

2. Microsoft is continuing to add and bundle new capabilities into Microsoft 365
Since mid‑2024, Microsoft has indeed layered more products and functionality into its 365 ecosystem:

  • In October 2025 it launched Microsoft 365 Premium for consumers, a higher-priced subscription that deeply integrates Copilot AI across Word, Excel, Outlook and more, plus Defender security features and 1 TB storage – effectively a new, richer bundle tier. (reuters.com)
  • Internal planning documents and announcements describe new AI services like “Tenant Copilot” and broader “Agent Factory” concepts being built directly into Microsoft 365, showing a continued pattern of embedding new functionality into the suite. (businessinsider.com)
    This supports the narrower part of his claim that Microsoft would keep adding new products/features into the 365 bundle on an ongoing basis.

3. "Systematically dominate broad swaths of enterprise software" is a long‑run, structural claim
As of 2025, Microsoft already had very high share in enterprise productivity (over 80% in the enterprise segment of office productivity software) and substantial positions in cloud and security, but it still faces strong, viable competitors across major enterprise categories (AWS and Google Cloud in IaaS, Salesforce/Oracle/SAP/others in enterprise apps, Google Workspace in productivity, and a large ecosystem of security vendors). (markets.financialcontent.com)
Regulatory pressure is increasing, not disappearing: the EU DMA gatekeeper regime, the EU Teams commitments, and UK/CMA scrutiny of Microsoft’s cloud licensing all push against unchecked expansion via bundling. (markets.financialcontent.com)
Given that Sacks framed the outcome as something that would occur "over time" if regulators failed to act, and we are only ~17 months past the June 29, 2024 episode – with major regulatory interventions still unfolding – it’s too early to say whether Microsoft will, because of bundling, end up "systematically" dominating many more enterprise software segments than it already did.

4. Net assessment

  • Partially borne out: Microsoft has continued to enrich and promote bundled 365 offerings (including new AI and security capabilities), consistent with his concern about the power of the bundle. (reuters.com)
  • Partially undercut: Key regulators have imposed a la carte and pricing-structure remedies around Teams and are scrutinizing other aspects of Microsoft’s cloud and licensing behavior, so the simple “no regulatory response” scenario he described has not materialized. (microsoft.com)
  • Too early on the core outcome: Whether this will ultimately let Microsoft "systematically dominate" broad new swaths of enterprise software is a multi‑year question, and current data does not yet show a clear, new wave of monopoly-style dominance caused by post‑2024 bundling.

Because the central, long‑term consequence Sacks predicted (systematic domination across broad enterprise categories due to unchecked bundling) cannot yet be clearly confirmed or falsified, the prediction is best classified as inconclusive (too early) at this point.