Last updated Nov 29, 2025

E133: Market melt-up, IPO update, AI startups overheat, Reddit revolts & more with Brad Gerstner

Fri, 16 Jun 2023 20:55:00 +0000
Tags: economy, markets
From mid‑2023 forward, interest rates will remain elevated (above the level investors "want") for an extended period rather than being cut soon, and the equity market's bottoming process will be nearly complete by mid‑2023.
What have I said. Like a broken record. Rates are going to be higher than you want and they're going to be around for longer than you like. And now Powell is basically telling you the same thing. So. We're almost at the end of I think the bottoming though I don't agree with Druckenmiller I think he's wrong.
Explanation

Chamath was essentially right on both components of this prediction.

1. "Rates will be higher than you want and around for longer than you like"
• At the time of the episode (June 16, 2023), the Fed funds target range was 5.00–5.25%.
• The Fed raised again in July 2023 to 5.25–5.50%, the highest in more than 22 years, and then held that peak from July 26, 2023 through July 31, 2024. (ycharts.com)
• Only starting in September 2024 did the Fed begin cutting, ultimately reducing the range by 100 bps over the final three meetings of 2024 to 4.25–4.50%, and then further to a 3.75–4.00% range by October 29, 2025. (federalreserve.gov)
• In late 2023 and early 2024, futures markets and many economists were expecting earlier and steeper cuts (often starting mid‑2024 and taking rates much closer to 3%), expectations that proved too optimistic. (spglobal.com)

Net effect: policy stayed at or near a restrictive 5.25–5.50% for about a year after his comment and remains well above pre‑2022 levels even by late 2025, clearly fitting "higher than you want" and "longer than you like" relative to prevailing market hopes.
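As a concrete check on "longer than you like," the short sketch below computes how long the Fed held that peak. The hold-start date comes from the text above; the first-cut date of September 18, 2024 (the FOMC's initial 50 bp cut) is an added detail, not stated above.

```python
from datetime import date

# How long the Fed held the 5.25-5.50% peak before its first cut.
hold_start = date(2023, 7, 26)  # last hike of the cycle (date cited above)
first_cut = date(2024, 9, 18)   # first cut of the easing cycle (added detail)

hold_days = (first_cut - hold_start).days
print(f"Peak held for {hold_days} days (~{hold_days / 30.44:.1f} months)")
# -> Peak held for 420 days (~13.8 months)
```

Measured from the episode itself (June 16, 2023), the first cut came roughly 15 months later.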

2. "We're almost at the end of ... the bottoming"
• The S&P 500's bear‑market low is broadly dated to October 12, 2022, around 3,577, after a ~25% drop from its January 2022 high. (seekingalpha.com)
• By June 8–9, 2023—just days before the podcast—the S&P 500 had risen more than 20% above that October low, and major outlets and market commentators marked this as the start or confirmation of a new bull market. (upi.com)
• After mid‑2023, the index went on to reach new all‑time highs (e.g., January 19, 2024) and, while it experienced later corrections (like a 10% pullback into March 2025), none of these moves revisited or undercut the October 2022 lows—they were drawdowns within an ongoing bull market. (fool.com)

So, when Chamath said in mid‑June 2023 that the bottoming process was nearly done, the major U.S. equity market had in fact already put in its cyclical low eight months earlier and was transitioning into a sustained bull run. No subsequent lower low has contradicted that view.

Taken together, the interest‑rate path and market behavior since mid‑2023 align strongly with his call that (a) rates would stay uncomfortably high for an extended period and (b) the bottoming in equities was essentially complete by then. Hence the prediction is best judged as right.

Tags: economy
The U.S. (and global) economy will not experience a "hard landing" recession in Q4 2023 because China will undertake trillions of dollars of stimulus, preventing such an outcome.
And the reason there's not going to be a hard landing is you just saw China today basically say we're going to start to rip in trillions of dollars. They're going to stimulate the economy. You can't have a hard landing when China's printing trillions of dollars. It's not possible.
Explanation

Economic data show that the core outcome Chamath predicted did occur: the United States and the global economy did not experience a hard‑landing recession in Q4 2023.

In macroeconomics, a hard landing typically refers to a rapid shift from growth into a sharp slowdown or recession, usually following monetary tightening; it is contrasted with a soft landing, where inflation is reduced without triggering a recession.(quickonomics.com)

For the U.S.:

  • Real GDP grew at a 3.3% annualized rate in Q4 2023, with full‑year 2023 growth at about 2.5%, and commentators noted that the economy had clearly skirted the widely expected recession.(cnbc.com)
  • The NBER's business‑cycle chronology and the associated FRED recession indicator show no U.S. recession after the brief COVID downturn ending in April 2020; the indicator is zero (expansion) through and beyond Q4 2023.(nber.org)
    These facts are inconsistent with any reasonable definition of a hard‑landing recession in late 2023.

For the global economy:

  • The IMF estimates global growth at about 3.2% in 2023, and describes global activity as surprisingly resilient, with inflation falling and no global recession, framing the outlook in soft‑landing terms rather than collapse.(mediacenter.imf.org)
    Again, this is far from a global hard‑landing recession in Q4 2023.

Where Chamath's reasoning is weaker is the causal story: China did roll out and discuss various stimulus and support measures, but many analysts throughout 2023–24 emphasized that there was no single massive 'bazooka' package, and that policy easing was relatively limited and piecemeal compared with past multi‑trillion‑yuan waves.(thinkchina.sg) The avoidance of a hard landing in the U.S. and globally is more commonly attributed to domestic factors (like U.S. fiscal support, resilient consumption, and central‑bank policy) than to Chinese stimulus alone.

Nonetheless, the prediction as normalized (that there would not be a hard‑landing recession in Q4 2023) was borne out by the data, so it is best classified as right, even though the cited mechanism (China 'printing trillions' making a hard landing impossible) is debatable.

Chamath @ 00:20:43 · Inconclusive
Tags: economy, markets
For the remainder of the 2020s (through about 2030), both interest rates and inflation will remain "sticky" at relatively elevated levels, rather than returning quickly to the near‑zero‑rate, very‑low‑inflation environment of the 2010s.
And so you just have to get prepared for rates just being sticky and inflation being sticky. And I think that that's probably the most reasonable base case for the rest of the decade.
Explanation

It's too early to definitively judge a prediction that explicitly covers "the rest of the decade" (i.e., roughly 2023–2030) when we are only in late 2025.

What the prediction said
Chamath predicted that for the rest of the decade:

  • Interest rates would be "sticky" at higher levels, not quickly reverting to the near‑zero regime of the 2010s.
  • Inflation would also be "sticky" (i.e., persistently elevated rather than snapping back to ~2% and staying there easily).

Where we are by late 2025 (high level, based on current macro data):

  • Policy rates (e.g., the U.S. federal funds rate) remain well above the near‑zero levels of the 2010s, even after peaks in 2023–24 and some subsequent adjustments. They have not returned to 0–1% territory.
  • Inflation, after spiking in 2021–22, has come down from its highs but has shown episodes of persistence and has not clearly locked into a stable, low‑and‑forgotten 2010s-style regime. Central banks continue to emphasize vigilance and data dependence.

This partially aligns with the early phase of his prediction (rates and inflation have indeed been higher and more persistent than in the 2010s through 2025), but:

  • The prediction's timeframe explicitly runs through the end of the decade.
  • Future paths for rates and inflation (2026–2030) remain uncertain and could still either validate or contradict his "sticky for the rest of the decade" view.

Because the core claim is about a multi‑year regime extending well beyond 2025, and we haven't observed the full period yet, the fairest classification is "inconclusive (too early)" rather than right or wrong at this point.

Tags: AI, markets
Within roughly six months from June 2023, Google will deploy AI capabilities that surpass ChatGPT‑4 in practice, and once integrated into its search/ad products this will increase advertisers' cost per click (CPC) due to improved targeting and user understanding.
I think Google's not as far behind closed AI as we think... I think Google's going to be ChatGPT four. I'm saying it right here, right now. I think they're going to beat them, because I think that they're better at indexing all this information and understanding it than anybody on the planet, and they have the largest ad network. If they get this done in the next six months, I think it's going to increase the cost per click because they're going to know so much about each user.
Explanation

The prediction had two linked parts, both within roughly six months of mid‑June 2023 (so by about mid‑December 2023):

  1. Google would deploy AI capabilities that in practice surpass ChatGPT‑4.
    • Through late 2023, Bard was powered mainly by PaLM 2. Independent evaluations in 2023 consistently found GPT‑4 ahead of Bard/PaLM 2. For example, GPT‑4 significantly outperformed Google Bard on the Japanese radiology board exam and on the 2023 Japanese national dentist exam.(link.springer.com)
    • A technical and journalistic comparison from 2023 likewise judged ChatGPT‑4 superior overall to Bard, despite Bard being faster and free.(arstechnica.com)
    • Google's next‑gen Gemini models were unveiled in December 2023. Google/DeepMind claimed Gemini Ultra outperformed GPT‑4 on several benchmarks, but Ultra was not actually available to users until "early 2024"; in December 2023 only Gemini Pro was accessible via Bard and APIs.(time.com)
    Because Gemini Ultra (the model claimed to beat GPT‑4) was not deployed to end users within the six‑month window, and available Google models during that window were generally rated behind GPT‑4 in independent tests, this part of the prediction did not come true on the stated timeline.

  2. Once integrated into Google’s search/ad products, this would increase advertisers’ cost per click (CPC) due to improved targeting and user understanding.
    • Google's generative‑AI search (Search Generative Experience, SGE) began rolling out in limited "Search Labs" preview in mid‑2023 and then expanded to more users and countries later in 2023, but this was still clearly labeled experimental.(blog.google)
    • Large‑scale AI Overviews with ads and later the more AI‑centric AI Mode in Search were rolled out more fully only in 2024–2025, with reports in 2025 that AI‑powered search with ads was boosting usage and monetization.(reuters.com)
    • Alphabet's own monetization metrics show that during 2023, cost‑per‑click did not experience the kind of clear AI‑driven jump the prediction describes. YCharts' compiled data on Alphabet's CPC change indicates year‑over‑year CPC was -7% in Q1 2023, -3% in Q2 2023, +4% in Q3 2023, and 0% in Q4 2023.(ycharts.com) At best, there was a mild uptick by Q3, but not a sustained or clearly AI‑caused CPC increase tied to generative‑AI search and ad integration within that six‑month window.

Because (a) Google did not have a widely deployed model that clearly surpassed GPT‑4 in practice within ~six months of June 2023, and (b) there is no evidence that AI integration into search/ads during that period drove a notable CPC increase of the type described, the combined prediction is best classified as wrong on its own terms and timeline.

Tags: AI, tech, economy
By roughly December 2024 (18 months after June 2023), the compute cost required to train a large language model equivalent in capability to GPT-4 will fall from roughly $400 million to approximately $5–10 million, implying about a 40–80x reduction in training cost.
So something that costs like we said, OpenAI spent $400 million training models for for GPT four. If they spent $400 million in the last couple of years, you could probably assume that doing the same training exercise could be done for 5 to $10 million. 18 months from now to generate the same model. That's a, you know, 100 x cost reduction.
Explanation

Evidence as of late 2024–2025 indicates that at least one lab did in fact train a GPT‑4‑class model for roughly the $5–10M range, within the predicted 18‑month window, even though Friedberg's baseline and implied reduction factor were overstated.

Baseline: GPT‑4 training cost

  • OpenAI has not published exact figures, but multiple analyses and Sam Altman's own comments put GPT‑4's total training cost (including more than just raw GPU rental) in the tens to low hundreds of millions of dollars. Altman has said it cost "more than $100 million"; independent cost models (AI Index / Epoch AI) estimate the compute portion of the final training run at about $40–80M, with total development costs near $100M. (en.wikipedia.org)
    Friedberg's $400M starting point is likely an overestimate, so his implied "100x reduction" is numerically off, but the starting cost was still extremely large.

By late 2024: a GPT‑4‑class model for ≈$5–6M

  • Chinese startup DeepSeek released its V3 model with a technical report dated 27 December 2024. An in‑depth account in Communications of the ACM notes that DeepSeek's own report claimed 2.788M H800 GPU‑hours, at about $2/hour, for a training compute cost of ≈$5.6M. (cacm.acm.org)
  • That same CACM piece emphasizes that this was remarkable precisely because prior frontier LLMs like GPT‑4 were believed to cost on the order of $100M+ to train, with industry expectations in the $100M–$1B range for top models. (cacm.acm.org)
  • Independent benchmark comparisons generally place DeepSeek‑V3 at or near GPT‑4 / GPT‑4o level: it matches or slightly exceeds GPT‑4 on many reasoning and coding benchmarks (e.g., MMLU and HumanEval) and is described as "rivaling" or "challenging" GPT‑4o/Claude 3.5 on aggregate performance, though GPT‑4o still wins some English‑centric tasks. (datastudios.org)
    That is strong evidence that a GPT‑4‑class model was trained for roughly $5–6M of compute by late 2024 (the arithmetic is reproduced in the sketch below).
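The sketch multiplies the reported GPU‑hours by DeepSeek's assumed $2/hour H800 rental rate and compares the result against several published GPT‑4 cost baselines. All inputs are the self‑reported or estimated figures from the sources cited above, so treat this as a back‑of‑the‑envelope check, not audited accounting.

```python
# Reproduce DeepSeek-V3's self-reported training compute cost and the
# implied reduction versus several published GPT-4 cost baselines.

GPU_HOURS = 2_788_000    # H800 GPU-hours from DeepSeek's V3 report
RATE_USD_PER_HOUR = 2.0  # DeepSeek's assumed H800 rental price

training_cost = GPU_HOURS * RATE_USD_PER_HOUR
print(f"DeepSeek-V3 compute cost: ${training_cost / 1e6:.2f}M")  # ~$5.58M

# The reduction factor depends heavily on which GPT-4 baseline you pick.
baselines = {
    "Epoch AI low estimate ($40M)": 40e6,
    "Epoch AI high estimate ($80M)": 80e6,
    "Altman's 'more than $100M'": 100e6,
    "Friedberg's $400M figure": 400e6,
}
for label, baseline in baselines.items():
    print(f"{label}: ~{baseline / training_cost:.0f}x cheaper")
```

Run as‑is, this prints a compute cost of about $5.58M and reduction factors of roughly 7x, 14x, 18x, and 72x, which matches the "7–20×" realized‑reduction range discussed below and shows that only Friedberg's inflated $400M baseline lands in his claimed 40–80× band.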

Context: frontier costs overall moved up, not down

  • Broad industry data show that the cost of training frontier models has been rising, not collapsing: a 2024–25 cost study finds amortized training costs for the most compute‑intensive models growing about 2.4× per year since 2016, with GPT‑4's final‑run compute around $40M and Google's Gemini Ultra near $30M, and projections of $1B+ runs by 2027. (ar5iv.org)
  • AI Index‑based summaries and related reporting put training costs for later frontier models like Gemini Ultra, Llama 3.1‑405B, Grok‑2, and Llama 4 in the $100M–$300M+ range, and the Wall Street Journal reports GPT‑5 (Orion) training runs costing up to $500M each. (visualcapitalist.com)
    So the typical frontier training run did not fall to $5–10M; if anything, it became more expensive.

Why this still counts as the prediction being essentially right

  • Friedberg's core quantitative claim was that "18 months from now" it would be possible to generate a GPT‑4‑level model for about $5–10M of training compute, instead of the roughly hundreds of millions previously spent. DeepSeek‑V3 provides a concrete, widely reported example of exactly that: a GPT‑4‑class model trained for about $5.6M by December 2024.
  • However, important caveats are:
    • His starting figure (~$400M) overshoots the best current estimates for GPT‑4's training cost, so the actual realized reduction is more like 7–20×, not 40–80×, depending on which baseline you use. (ar5iv.org)
    • The industry‑wide trend in frontier training costs is upward; DeepSeek appears as a notable but exceptional outlier, leveraging architectural choices (Mixture‑of‑Experts, 8‑bit training, etc.) and cheaper H800 chips to hit that cost point.

Given that a GPT‑4‑class model was in fact trained within roughly the predicted timeframe for about $5–6M of compute (squarely inside Friedberg's $5–10M band), even though his baseline and generalization about a 40–80× industry‑wide cost collapse were inaccurate, the prediction is best classified as "right" with significant nuance.

Tags: AI, tech, economy
Between roughly December 2024 and June 2026 (18–36 months after June 2023), the cost of a given large AI model training run that would cost $100 million in mid-2023 will decline to about $1 million, i.e., around a 100x reduction in training cost over that period.
Play this out. $100 million of capital spent training today is $1 million spent doing training in 18 months. Yeah, three years, 1818 to 36 months. Somewhere in that time frame is likely the time frame.
Explanation

As of 30 November 2025, the prediction window (≈Dec 2024–Jun 2026) is still open, so it is too early to say definitively whether it will end up correct. That said, available evidence mostly points against a 100× cost drop so far.

1. Frontier training costs have not fallen 100×

Public estimates for training top models show rising (or at best slightly reduced) dollar costs for frontier-scale training runs, not a collapse from ~$100M to ~$1M:

  • The 2025 AI Index (summarized by Visual Capitalist) puts GPT‑4 (2023) at about $79M, Gemini 1.0 Ultra (2024) at ~$192M, Llama 3.1‑405B (2024) at ~$170M, Mistral Large (2024) at ~$41M, and Grok‑2 (2024) at ~$107M for a single training run. These are all still in the tens to hundreds of millions of dollars range, not near $1M. (visualcapitalist.com)
  • A detailed cost model from Epoch AI finds that the amortized cost to train the most compute‑intensive frontier models has grown ~2.4× per year since 2016, with GPT‑4 and Gemini dominated by tens of millions in accelerator and staff costs. It projects >$1B training runs by around 2027 if trends continue. (arxiv.org)
  • Other 2025 summaries similarly estimate GPT‑4‑class training in 2025 at roughly $50M–$200M per run, even after efficiency gains, far from a 100× cost collapse. (localaimaster.com)

Given that GPT‑4‑type runs were already >$100M in 2023, current public estimates for similar‑capability training runs remain well above $1M; in some cases they are higher than 2023 figures.

2. Some individual models claim big savings, but not clearly 100× and not uncontested

  • DeepSeek‑V3 / R1 (China): DeepSeek claims it trained V3 for about $5.6M in GPU‑rental‑equivalent cost using ~2,048 H800s over ~2.8M GPU‑hours, with performance comparable to GPT‑4‑class systems. (en.wikipedia.org) That's roughly a 10–20× reduction vs GPT‑4‑style estimates (>$100M), but:
    • The figure explicitly excludes prior R&D, ablations, and infrastructure, and has been criticized as misleading; some analyses argue that total capital outlay may be orders of magnitude higher. (reddit.com)
    • Even if you take the $5–6M figure at face value, it is still short of the 100× drop (to ~$1M) the prediction specified.
  • MiniMax M1 (Shanghai): MiniMax has claimed that its M1 model, reported as competitive with top frontier models, was trained for about $534,700, nearly 200× cheaper than some >$100M GPT‑4/4o cost estimates. (ainvest.com) However, these numbers are self‑reported, not independently audited, and it's unclear how directly comparable the task, scale, and quality are to the 2023 $100M reference runs.

These examples show pockets of large efficiency gains and at least claimed 20–200× reductions for specific architectures and vendors. But they are:

  • Not yet clearly established as industry‑wide norms.
  • Often measured on different architectures and training setups rather than "the exact same run that cost $100M in 2023 now costs $1M."

3. Direct same‑model comparisons suggest slower cost decline

Where we do have like‑for‑like comparisons, the cost reductions are large but not on the 100× / 18–36‑month schedule Friedberg predicted:

  • Andrej Karpathy's 2024–25 reproduction of GPT‑2 on a single 8×H100 node brought the training cost down to about $672, versus earlier estimates of tens of thousands of dollars for the original 2019 training, a ~90× reduction over roughly five years, not 100× in 1.5–3 years. (tomshardware.com)

This suggests that per‑FLOP or per‑model cost can fall substantially over time, but the observed pace so far is much slower than "100× cheaper in 18–36 months" for a fixed large model.
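To make that pace comparison concrete, the sketch below converts each total cost reduction into an implied per‑year decline factor, assuming a constant exponential rate (a simplifying assumption of this note, not a claim from the cited sources):

```python
# Annualized cost-decline factor implied by a total reduction over a period,
# assuming the decline compounds at a constant exponential rate.
def annualized_factor(total_reduction: float, months: float) -> float:
    return total_reduction ** (12.0 / months)

scenarios = {
    "Prediction: 100x in 18 months": (100.0, 18.0),
    "Prediction: 100x in 36 months": (100.0, 36.0),
    "Observed: ~90x for GPT-2 over ~5 years": (90.0, 60.0),
}
for label, (reduction, months) in scenarios.items():
    print(f"{label}: ~{annualized_factor(reduction, months):.1f}x cheaper per year")
# -> ~21.5x, ~4.6x, and ~2.5x per year respectively
```

Even on the generous 36‑month reading, costs for a fixed model would need to fall about 4.6× per year, nearly twice the ~2.5×/year pace implied by the GPT‑2 example, and that is against an industry backdrop where Epoch estimates frontier training spend has been rising ~2.4× per year.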


Why this is labeled inconclusive rather than wrong

  • Friedberg's window runs to June 2026, about seven months after the current date (Nov 2025), so the prediction period is not over yet.
  • So far, public data on major frontier models shows no broad 100× drop in training cost for a run that would have cost ~$100M in mid‑2023; if anything, headline frontier runs are as expensive or more expensive. (visualcapitalist.com)
  • There are emerging claims (MiniMax, DeepSeek) that individual models with near‑frontier capability were trained at 10–200× lower marginal compute cost, but these are either below the 100× threshold (DeepSeek at ~$5–6M) or not yet well verified (MiniMax's ~$0.5M claim).

Given the remaining time in the window and the lack of clear, widely accepted evidence that a $100M‑class 2023 run is now reproducible for ~$1M, the fairest judgment today is "inconclusive (too early)", with current trends pointing against the prediction ultimately being met without a dramatic further breakthrough in the next several months.

At some point after this episode (June 2023), the All-In Podcast will host Mayor Francis Suarez as a guest on the show.
and Frances. Mayor Frances is now in the race, so I guess we'll have him on the pod.
Explanation

Jason predicted that sometime after June 16, 2023, the All-In Podcast would have Miami Mayor Francis Suarez on as a guest. On June 3, 2025, an episode titled "Miami Mayor Francis Suarez: The Recipe for Creating America's Happiest City | All-In Live from Miami" was released on the official "All-In with Chamath, Jason, Sacks & Friedberg" feed, which explicitly states "The Besties welcome Mayor Francis Suarez" and lists him as the guest alongside the four regular hosts.(speakai.co) Because this appearance occurred after the June 2023 episode where the prediction was made and clearly counts as hosting him on the show, the prediction came true.