Last updated Nov 29, 2025

E153: In conversation with Jared Kushner: Israel-Hamas War, paths forward, macro picture, AI

Nov 11, 2023
economy
By the time of the November 2024 U.S. presidential election, the U.S. economy will not be in a recession and will instead be in a "reasonable" (i.e., non-recessionary, soft-landing-type) condition.
Now, if we were going to go into a November election where we were going to be in a recession, that's very bad for Biden. But sort of the tea leaves, for whatever it's worth, all the predictions, all the predictive markets show that we're going to be in a reasonable place.
Explanation

Evidence around the November 5, 2024 U.S. presidential election shows the U.S. economy was not in a recession and was broadly viewed as achieving or nearing a soft landing.

  • No official recession by late 2024: The National Bureau of Economic Research’s Business Cycle Dating Committee lists the most recent peak in U.S. economic activity as February 2020 and the trough as April 2020, with no subsequent peak/trough—and therefore no new recession—dated as of late 2025. (nber.org) That implies the economy was still in the expansion phase through the 2024 election.

  • Continued positive GDP growth in 2024: BEA data show real GDP grew 1.6% in Q1 2024, 3.0% in Q2, and about 2.8–3.1% in Q3 2024, all clearly positive growth rates. (apps.bea.gov) For the full year 2024, real GDP increased 2.8%, only slightly below 2023’s 2.9%, indicating a moderate but solid expansion rather than contraction. (apps.bea.gov)

  • Labor market still relatively strong: The unemployment rate was 4.1% in October 2024 and had ranged between 4.0% and 4.3% since May—low by historical standards. (bls.gov) In November 2024, the economy added about 227,000 jobs and unemployment was roughly 4.2%, consistent with a cooling but not collapsing labor market. (ft.com)

  • Contemporary characterization as a soft landing / reasonable conditions: Near the election, multiple mainstream analyses described the U.S. as having likely pulled off or being on the cusp of a "soft landing"—inflation moving down while unemployment stayed relatively low and growth remained positive. (theguardian.com) These descriptions match the podcast’s notion of a “reasonable” non‑recessionary environment.

Given that: (1) no recession had begun by November 2024 on the standard NBER chronology, and (2) GDP growth and employment data show a slowing but still expanding economy that many economists labeled a soft landing, Chamath’s prediction that the U.S. would be in a "reasonable" (non‑recessionary, soft‑landing‑type) condition by the November 2024 election is best classified as right.

economy
U.S. inflation will continue to decline over the subsequent 6–12 months after November 2023, moving the economy into a materially better ("pretty decent") inflation environment than at the time of this recording.
We know that inflation is falling. It's going to fall even more. The second thing, Nick, the third chart here is you can see that now validated in these ten year breakevens... what it shows is the ten year break evens are also telling us, okay guys, we're going to be in a pretty decent place.
Explanation

At the time of the podcast in mid‑November 2023, U.S. inflation was on a downward trend but still elevated: October 2023 CPI was 3.2% year‑over‑year with core around 4%, and the Fed’s preferred core PCE gauge was about 3.2% in November. (jpmorgan.com) Over the following year:

  • The disinflation path was bumpy: headline CPI ticked back up to about 3.5% by March–April 2024, but core PCE fell from 3.2% in November 2023 to 2.6% by May 2024, its lowest level since 2021. (cnbc.com)
  • By late 2024, inflation was clearly lower than at the time of the prediction: headline CPI was about 2.7% and core CPI 3.3% in November 2024, while headline PCE was 2.4% and core PCE 2.8%. (cnbc.com)
  • Average annual CPI inflation fell from about 4.1% in 2023 to about 2.9% in 2024, a material improvement in the overall inflation environment. (officialdata.org)
  • Market‑based inflation expectations, reflected in the 10‑year breakeven rate, hovered around 2.3% through 2024, consistent with a “pretty decent” medium‑term inflation outlook (the breakeven calculation is sketched below). (ycharts.com)

Taken together, inflation ended up meaningfully lower and more benign 6–12 months out than when he spoke, even though it did not decline monotonically, so the spirit of Chamath’s prediction was essentially correct.
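
For reference, the 10‑year breakeven cited above is a derived quantity: the nominal 10‑year Treasury yield minus the 10‑year TIPS (real) yield at the same maturity. A minimal sketch of that calculation, using illustrative placeholder yields rather than actual market quotes:

```python
# Minimal sketch: deriving the 10-year breakeven inflation rate.
# The breakeven is the nominal Treasury yield minus the TIPS (real)
# yield at the same maturity. Both yields below are illustrative
# placeholders, not actual market quotes.
nominal_10y_yield = 4.40  # percent, hypothetical nominal 10-year yield
tips_10y_yield = 2.10     # percent, hypothetical 10-year TIPS yield

breakeven_10y = nominal_10y_yield - tips_10y_yield
print(f"10-year breakeven inflation: {breakeven_10y:.2f}%")  # -> 2.30%
```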

markets, economy
As interest rates start to be cut in 2024, U.S. equity markets will perform well, with a broad positive move in equities driven by rate cuts and large cash balances moving out of money market funds.
So that's a really positive sign, which is that money will need to find a home once rates drop... And then, as you said Friedberg, the last part of this is now you introduce rate cuts, and that's a real accelerant. Now, more than likely, I think what that means is that markets are set up to do pretty well equity markets specifically.
Explanation

Summary:

  • The core of Chamath’s prediction — rate cuts starting in 2024 coinciding with U.S. equities doing “pretty well” in a broadly positive way — did come true.
  • However, his suggested mechanism of large cash balances moving out of money market funds is not strongly supported by the data; money market assets actually grew to record levels.

1. Did rate cuts start in 2024?
Yes. After holding rates steady for most of 2024, the Federal Reserve began cutting in September 2024 and delivered three cuts by December, totaling 1 percentage point, bringing the federal funds target range down to 4.25–4.50%. (broadway.bank)
This matches his timing: cuts began in 2024, not later.
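
As a quick arithmetic check, a minimal sketch (the 50/25/25 bps split across the September, November, and December 2024 meetings is the commonly reported sequence; only the 1‑point total is stated above):

```python
# Minimal sketch: verifying that the 2024 cuts sum to 1 percentage point
# and bring the 5.25-5.50% target range down to 4.25-4.50%.
cuts_bps = [50, 25, 25]      # Sep, Nov, Dec 2024 cuts, in basis points
start_range = (5.25, 5.50)   # target range before the cutting cycle

total_bps = sum(cuts_bps)    # 100 bps = 1 percentage point
end_range = tuple(round(r - total_bps / 100, 2) for r in start_range)
print(total_bps, end_range)  # -> 100 (4.25, 4.5)
```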

2. Did U.S. equity markets “do pretty well” around that period?
Yes, decisively.

  • The S&P 500 delivered a total return of about 25% in 2024, following a strong 2023. (broadway.bank)
  • The S&P 500 Equal Weight Total Return index (a better gauge of breadth than the cap‑weighted index) returned about 13% in 2024. (ycharts.com)
  • The Russell 2000 small‑cap index had a 2024 total return of ~11.5%, also solidly positive. (en.wikipedia.org)

Taken together, large caps, equal‑weight large caps, and small caps all had double‑digit total returns in 2024. That supports his claim that equity markets were set up to “do pretty well.”

3. Was the move “broad” or narrowly driven?
This part is more mixed but still broadly supportive:

  • One analysis notes the S&P 500’s 25% 2024 return was heavily driven by AI‑linked mega‑caps (Nvidia and other “Magnificent 7” names), which contributed a disproportionate share of gains. (rbcwealthmanagement.com)
  • At the same time, 7 of 11 S&P 500 sectors delivered double‑digit returns in 2024, and equal‑weight and small‑cap indices posted solid positive years, indicating gains were not confined only to a tiny corner of the market. (broadway.bank)

So while leadership was concentrated, performance was still broadly positive across much of the equity universe, which is directionally consistent with his “broad positive move” language.

4. Did “large cash balances” really leave money market funds to drive this?
Evidence here contradicts his specific mechanism:

  • U.S. mutual fund data from the Investment Company Institute show total money market mutual fund assets rose, not fell: roughly $6.0T in January 2024 vs. $6.88T in January 2025, and similar year‑over‑year increases in April and August. (ici.org)
  • A Barron’s article reported record money‑market fund assets around $7.0T as of early March 2025, underscoring that investors continued to keep very large balances in cash‑like vehicles despite rate cuts. (barrons.com)

This suggests there was not a massive, visible exodus of cash out of money market funds; if anything, balances stayed high or grew, even as equities performed well. Some re‑risking certainly took place (e.g., big inflows to equity ETFs in 2024–2025), but it does not look like the kind of large, clean rotation out of money markets that his quote implies. (reuters.com)

Conclusion:

  • Correct: Rate cuts began in 2024, and U.S. equities overall had very strong performance with positive returns across large‑cap, equal‑weight, and small‑cap indices. This validates the main directional prediction that “equity markets…do pretty well” as the cutting cycle began.
  • Partially incorrect: The justification that this would be driven by “money…need[ing] to find a home once rates drop” via large outflows from money market funds did not materialize; money market assets remained elevated or hit records.

Netting those points, the prediction is best classified as right on its main outcome (strong, broadly positive equity performance as cuts started in 2024), albeit with a flawed explanation of the underlying flows.

economy
By mid-2024, the Federal Reserve will have begun cutting interest rates, U.S. inflation will largely be past its peak and no longer a major problem, and the U.S. economy will achieve a soft landing (i.e., avoid a recession while growth slows).
I think what it's kind of saying is inflation is very much in the rear view mirror. Rates are going to get cut by the middle part of the year. The economy looks like it's going to be a soft landing.
Explanation

Chamath’s prediction bundled three related claims for conditions by mid‑2024: (1) Fed rate cuts would have begun, (2) inflation would be largely behind us, and (3) the U.S. would achieve a soft landing (slowing growth without recession).

1. Fed rate‑cut timing (clearly wrong)
The Federal Reserve held the federal funds target range at 5.25–5.50% at its June 12, 2024 meeting, indicating no cuts yet that year. (wolfstreet.com) It again left rates unchanged at 5.25–5.50% at the July 31, 2024 meeting, only hinting that a cut might come as soon as September. (en.people.cn) The first actual cut of this cycle came later, on September 18, 2024, when the FOMC reduced the target range by 50 bps to 4.75–5.00%. (cnbc.com) So by mid‑2024 (around June), the Fed had not begun cutting rates, contradicting his explicit timeline that “rates are going to get cut by the middle part of the year.”

2. Inflation mostly in the “rear‑view mirror” (broadly right)
By mid‑2024, inflation had fallen sharply from its 2022 peak (~9% CPI) and was running in the mid‑2% to low‑3% range year‑over‑year:

  • June 2024 PCE inflation (the Fed’s preferred gauge) was 2.5% YoY, with core PCE at 2.6%. (equiti.com)
  • July 2024 CPI fell below 3% for the first time since early 2021: headline CPI 2.9% YoY and core CPI 3.2%. (ft.com)
    While inflation was still modestly above the Fed’s 2% target, it was far below its peak and broadly seen as having cooled substantially. In that sense, describing inflation as largely in the “rear‑view mirror” is a reasonable qualitative characterization.

3. Soft landing / no recession (so far, essentially right)
The NBER’s Business Cycle Dating Committee continues to list February 2020 as the most recent peak and April 2020 as the most recent trough, with no new recession dated after the pandemic downturn. (nber.org) Forecasts and official commentary for 2024–2025 describe positive but slowing growth rather than a contraction: the OECD, for example, estimates U.S. real GDP growth at 2.8% in 2024 and a slower but still positive 1.8% in 2025. (investopedia.com) As of late 2025, U.S. officials explicitly state that the overall economy has avoided recession, despite sector‑specific weakness. (reuters.com) This is broadly consistent with a “soft landing” rather than a hard recession.

Overall judgment
Although Chamath was directionally right that inflation would cool and that the U.S. could avoid a recession, his prediction hinged on a concrete timing call that the Fed would start cutting rates by mid‑2024. In reality, the first cut didn’t occur until September 18, 2024, well after that window. (cnbc.com) Because this key, time‑specific component was clearly wrong, the combined prediction is best classified as wrong, even though some qualitative aspects (cooling inflation and a soft landing) turned out broadly accurate.

Chamath @ 01:35:35
Inconclusive
ai, venture
Over the coming years, advances in AI tooling will enable a very large number (on the order of millions or more) of one- to two-person teams to create and operate companies, leading to a startup landscape dominated by many very small firms rather than a smaller number of large, labor‑intensive startups.
If you think about a world where there's a million little companies or 50 million companies or 500 million companies that exist because they're one and two person teams that can build stuff that seems pretty reasonable and logical as the outcome.
Explanation

As of November 30, 2025, it is too early to determine whether the startup landscape will be dominated by millions of one‑ to two‑person AI‑enabled companies.

Evidence for the trend:

  • Generative AI tools (e.g., OpenAI’s GPT models, GitHub Copilot, Claude, etc.) have significantly reduced the cost and time required for solo founders to build software products, content, and workflows, and there is extensive reporting on “AI solopreneurs,” micro‑SaaS, and one‑person AI startups scaling to meaningful revenue. These trends strongly support the direction of Chamath’s claim, but they mostly consist of qualitative case studies and early‑stage numbers rather than hard, economy‑wide measurements.
  • Startup tooling, infrastructure, and distribution (serverless platforms, no‑code/low‑code tools, app stores, SaaS marketplaces) have continued to lower fixed costs and headcount needs. Combined with AI coding and content tools, a single person can now build what used to require small teams, again supporting the plausibility of many more very small firms.

Evidence against calling it “already correct”:

  • There is no robust 2023–2025 data showing that the overall startup landscape is now dominated by one‑ to two‑person companies or that we are anywhere near “a million to hundreds of millions” such firms created because of AI. Public statistics on new business formation, venture‑backed startups, and high‑growth tech companies still show a mix of sizes; many of the headline AI companies (e.g., OpenAI, Anthropic, Inflection, xAI, major frontier‑model labs) are large, capital‑intensive, and labor‑intensive organizations.
  • The prediction’s timescale is “over the coming years,” and the claim is structural: that the typical or dominant startup form becomes tiny AI‑leveraged teams. Structural shifts of that magnitude generally take longer than the roughly two‑year window since the prediction to be clearly confirmed.

Because:

  • The mechanisms Chamath describes (AI enabling very small teams) are clearly emerging and supported by current evidence, but
  • We lack definitive, quantitative proof that the startup ecosystem is already dominated by millions of 1–2 person AI‑tooled companies, and
  • The prediction is explicitly about a multi‑year future endpoint rather than 2024–2025 specifically,

…the fairest assessment as of late 2025 is that the prediction’s correctness is inconclusive rather than clearly right or wrong yet.

Chamath @ 01:36:35
Inconclusive
venture, tech, ai
Over the medium to long term, the traditional venture capital role and firm structure will be largely replaced by more automated, algorithmic systems that allocate many small investments (e.g., $100k–$500k) against specified objectives, with only later‑stage large checks remaining as a more conventional process.
I think there's a reasonable case to make that it doesn't exist. It's more of an automated system of capital against objectives.
Explanation

As of November 2025 (about two years after the podcast), traditional venture capital firms and roles clearly still dominate funding and decision‑making, so Chamath’s end‑state (traditional VC largely replaced by automated small‑check allocators) has not materialized yet. Major human‑run partnerships like Khosla Ventures and Menlo Ventures continue to raise multi‑billion‑dollar funds and operate with conventional GP/LP structures, and capital is consolidating into a smaller number of large, brand‑name firms rather than shifting to algorithmic platforms. (en.wikipedia.org)

At the same time, there is strong evidence of a directional move toward data‑ and AI‑driven VC, consistent with the spirit of his thesis but far short of replacement. The 2025 Data‑Driven VC Landscape maps 235 data‑driven VC firms and describes rapid growth in the use of AI, agents, and automation across sourcing, screening, and portfolio work, yet emphasizes that the future is "augmented, not automated": 94% of surveyed VCs expect a human‑in‑the‑loop model, not fully algorithmic “quant” VC. (datadrivenvc.io) Other analyses note that while many firms want to be more data‑driven and Gartner predicted that most executive reviews would be informed by AI by 2025, only a small fraction of firms were truly data‑driven as of 2023, and current commentary still talks about AI primarily as a support tool. (forbes.com) A few explicitly algorithmic funds (e.g., Rebel Fund, Correlation Ventures, Follow[the]Seed) exist but remain niche relative to the overall VC asset base. (rebelfund.vc)

Because Chamath framed this as a medium‑ to long‑term structural shift, and we are only ~2 years out, it is too early to say definitively whether traditional VC will eventually be “largely replaced” by automated, small‑check allocators. The current evidence shows meaningful movement toward more algorithmic and data‑driven practices, but not the kind of wholesale replacement he described. Therefore, the prediction is best classified as inconclusive (too early) rather than clearly right or wrong.

Chamath @ 01:39:52
Inconclusive
ai, tech
The AI application ecosystem will ultimately resemble the open web rather than a tightly controlled app‑store model, with open and widely accessible models and tools prevailing over a single proprietary platform.
I think the reality is it's going to end up as the open web.
Explanation

Chamath’s claim is explicitly about the ultimate structure of the AI application ecosystem (“it’s going to end up as the open web”), so it’s a long‑run prediction. As of November 2025, only ~2 years have passed, and the market is still evolving rapidly in both directions.

Evidence for an “open web / open models” trajectory

  • Major firms now release powerful open‑weight models that anyone can download and deploy on their own infrastructure. Meta’s Llama 3 and Llama 3.1 (including a 405B-parameter model) are distributed as open weights and made available across multiple clouds, explicitly positioned as broadly usable building blocks rather than a single locked-down platform. (arstechnica.com)
  • Companies like Mistral AI, along with many others, have released numerous open‑weight models under permissive licenses (e.g., Apache 2.0) that can be self‑hosted and freely built upon. (en.wikipedia.org)
  • Hugging Face’s hub surpassed 1 million hosted models in 2024, functioning as a decentralized repository where anyone can publish, fork, and run models, much closer to an “open web” than a single proprietary store. (arstechnica.com)
  • A 2025 MIT–Hugging Face study shows open models are now downloaded at massive scale globally, with Chinese and US ecosystems both heavily using open‑source or open‑weight models, underscoring how central open access has become to AI development. (arynews.tv)

Evidence for an “app‑store / controlled platform” trajectory

  • OpenAI launched the GPT Store in January 2024, a curated marketplace for custom GPTs built on its proprietary models, with review, policy compliance, and centralized distribution—explicitly likened to an app store. (en.wikipedia.org)
  • In 2025 OpenAI expanded this into a broader app‑store‑like platform for ChatGPT, with an SDK, branded third‑party apps (e.g., Spotify, Zillow), monetization and in‑chat commerce, and a ranked app directory—very similar in structure and control to mobile app stores. (businessinsider.com)
  • Microsoft is also rolling out agent/app stores around Copilot, such as the Microsoft 365 Copilot agent store and the Microsoft Security Store, where businesses browse and deploy AI agents and SaaS integrations from within Microsoft’s ecosystem. (theverge.com)

Why the prediction is still unresolved

  • Both models coexist at large scale today: open‑weight models and decentralized hosting look like an open web, while GPT‑style stores and enterprise agent marketplaces look like tightly controlled app stores.
  • The ongoing acceleration of open‑source/open‑weight models and tooling suggests real momentum toward an open ecosystem, but proprietary platforms with store‑like economics are simultaneously consolidating distribution and user attention.
  • Because the claim is about where things “end up,” and current evidence supports a hybrid landscape with no clear, stable equilibrium yet, it is too early to say whether the ecosystem will ultimately resemble the open web more than a controlled app‑store model.

Given this, the correct classification is “inconclusive (too early)”, not clearly right or wrong at this point.

Chamath @ 01:41:27
Inconclusive
ai, venture, economy
As AI models and tooling become pervasive, the tech and startup economy will shift toward many more small companies and materially fewer extremely large, dominant companies (“ginormous outcomes”) than in the previous tech cycle.
So I think what that means economically is there's just going to be a lot more small companies and a lot fewer of these ginormous outcomes.
Explanation

As of late 2025, the AI-driven tech cycle is still in its early years, so the long‑run structure of outcomes (many small vs. few huge companies) cannot be reliably judged.

Evidence so far is mixed and often points in the opposite direction of Chamath’s prediction:

  • There are very many AI startups and small companies: one analysis estimates over 212,000 active AI companies globally, including ~62,000 AI‑related startups, with ~2,000 new AI companies funded per year from 2013–2024. (startus-insights.com)
  • AI startups have attracted an enormous share of total venture capital (about $97B of $209B U.S. startup funding in 2024, nearly half of all VC dollars), and AI accounts for roughly half of new unicorns, leading to 245 AI unicorns worldwide. (entrepreneur.com) This supports the “lots of companies” side of the claim.
  • However, capital is highly concentrated in a small number of very large players: in 2025, $118B of AI‑related funding had been deployed by mid‑August, and just eight companies accounted for $73B (62%) of that total, including a $40B raise by OpenAI (the share calculation is sketched after this list). (news.crunchbase.com) Valuation tables show a handful of mega‑unicorns—OpenAI (~$157B), Databricks (~$62B), Anthropic (~$60B), xAI (~$50B), CoreWeave (~$23B)—dominating the top tier. (altindex.com)
  • VCs and analysts observing the AI boom describe intensifying power‑law dynamics, where a small set of AI companies is expected to capture outsized returns while many others fail. (businessinsider.com) Major research and investment notes also argue that AI’s high fixed costs (compute, data, talent, regulation) structurally favor large incumbents and a few foundation‑model platforms more than in previous tech transitions. (goldmansachs.com)
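
As a quick check on the concentration figure in the list above, a minimal sketch of the share calculation using the cited totals:

```python
# Minimal sketch: share of 2025 AI funding (through mid-August)
# captured by the eight largest raises, using the cited totals.
total_funding_bn = 118  # $B of AI-related funding by mid-August 2025
top8_funding_bn = 73    # $B raised by the eight largest companies

share = top8_funding_bn / total_funding_bn
print(f"Top-8 share: {share:.0%}")  # -> 62%
```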

Taken together, the data show:

  • Yes, many AI startups and small firms exist and are being funded.
  • At the same time, funding, infrastructure, and market power are extremely concentrated in a small number of giant companies and platforms.

Chamath’s prediction was about the eventual shape of the AI era compared with the last tech cycle (mobile/cloud), which played out over a decade or more. Generative‑AI platforms only began scaling commercially around late 2022; by November 2025 we have only 2–3 years of real data, and the market is still changing rapidly (e.g., leadership shifts between OpenAI, Anthropic, Google, etc.). (globenewswire.com) That is too short a window to know whether today’s concentration persists or gives way to a broader base of smaller, sustainable companies over the full cycle.

Because the relevant “cycle” has not run its course and current evidence is compatible with either future (continued concentration or later fragmentation into many smaller winners), the prediction’s ultimate accuracy cannot yet be determined, so the result is inconclusive (too early).

ai, tech, science
Within the next hardware generation or so (on the order of a few years from 2023), training new foundation or near‑foundation AI models will commonly take on the order of weeks rather than many months.
So to your point, this thing is going to be like people will be training models in weeks.
Explanation

Evidence on training times is mixed and depends heavily on what counts as a “foundation or near‑foundation” model and what “commonly” means.

1. Large frontier models still often take on the order of a few months

  • A 2024 survey table of major models estimates wall‑clock training times of about 95 days for GPT‑4 and 100 days for Gemini, i.e., roughly 3+ months, even on very large GPU/TPU clusters. These are canonical frontier‑level foundation models. (aimodels.fyi)
  • Meta’s open frontier‑scale model Llama‑3.1 405B reportedly used over 16,000 H100 GPUs and around 30.84 million GPU‑hours. Given that GPU count, this implies a wall‑clock time of roughly 80 days (~11–12 weeks), again on the order of several months rather than just a few weeks (the conversion is sketched after this list). (venturebeat.com)
  • A 2025 overview article notes that training large LLMs like GPT‑4 still requires vast energy and that training “can take weeks or months,” not clearly showing a decisive shift away from multi‑month runs for the biggest models. (livescience.com)
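
As a sanity check on the Llama‑3.1 405B figure, the reported totals convert to wall‑clock time directly; a minimal sketch, assuming all GPUs run concurrently for the entire job:

```python
# Minimal sketch: converting reported GPU-hours into approximate
# wall-clock training time, assuming every GPU runs concurrently
# for the full duration of the job.
gpu_hours = 30.84e6  # reported total GPU-hours for Llama-3.1 405B
num_gpus = 16_000    # reported H100 count (order of magnitude)

wall_clock_days = gpu_hours / num_gpus / 24
print(f"~{wall_clock_days:.0f} days (~{wall_clock_days / 7:.0f} weeks)")
# -> ~80 days (~11 weeks)
```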

These data suggest that for the largest, GPT‑4/5‑class and Gemini‑class models, training remains in the multi‑month regime, not clearly reduced to just a few weeks.

2. Many sizable models do train in weeks – but that was already true before 2023

  • The same training‑cost table lists GPT‑3 at ~15 days, Llama‑1 at ~21 days, and Llama‑2 at ~35 days, all trained on large A100/V100 clusters. (aimodels.fyi)
  • Newer, mid‑scale models also hit short times: e.g., the 1.5‑Pints LLM (a from‑scratch pre‑train, not just a tiny toy model) was trained in 9 days while matching or beating other small LLMs, explicitly advertised as “pretraining in days, not months.” (arxiv.org)
  • Numerous domain‑specific foundation models (e.g., in medical and retinal imaging) emphasize orders‑of‑magnitude compute reductions and training costs under roughly $100, implicitly corresponding to runs well under a month on modest clusters. (arxiv.org)

So training substantial but not frontier‑scale models in one to several weeks is already feasible and demonstrated, but that was true around GPT‑3 / early LLaMA as well, i.e., before the 2023 podcast date.

3. Why the prediction is hard to score cleanly

  • The normalized prediction says that within the next hardware generation (~a few years), training new foundation / near‑foundation models will commonly take weeks rather than many months.
  • In practice, the largest and most important frontier models in 2024–2025 still cluster around ~2–3+ months of training, not a clear collapse to four weeks or less. (aimodels.fyi)
  • At the same time, a substantial number of non‑frontier but still serious foundation‑style models now train in under a few weeks, yet this was already happening (GPT‑3, LLaMA‑1) before the prediction. The mix of weeks vs. months for different scales and use cases looks broadly continuous from 2020–2025 rather than showing a decisive industry‑wide transition.
  • Terms like “near‑foundation” and “commonly” are not precisely defined: if one restricts to GPT‑4/5‑class frontier models, the prediction looks mostly wrong; if one includes a broad class of mid‑sized open models and sector‑specific foundations, it looks plausibly right but not meaningfully new.

Because:

  • There is no sharp, widely‑acknowledged shift from “many months” to “weeks” for the top foundation models, yet
  • Weeks‑scale training is demonstrably common for many significant models, but already was pre‑2023, and
  • The claim’s key qualifiers ("commonly", "near‑foundation", "many months") are too vague to pin down a clear pass/fail,

the outcome cannot be determined cleanly as simply right or wrong. Hence the prediction is best classified as ambiguous.

Elon Musk will open‑source the Grok AI model, making it freely accessible for broad developer use.
whether it's Llama or Mistral or even grok, when Elon open sources it, it's going to allow people to have access to these tools, basically for free.
Explanation

Evidence shows that Elon Musk’s xAI did in fact open‑source the Grok model in a way that broadly matches Chamath’s prediction.

  1. Grok‑1 open‑sourced (Apache 2.0)
    In March 2024, xAI released the base model weights and network architecture of Grok‑1 on GitHub. TechCrunch reports that xAI "open sourced the base model of Grok" and that it is a 314B-parameter Mixture‑of‑Experts model. The release is under the Apache 2.0 license, which is a permissive open‑source license that allows commercial use and redistribution. (techcrunch.com)
    This makes Grok‑1 freely downloadable and usable by developers, which matches the spirit of "allow people to have access to these tools, basically for free." (A minimal download sketch follows this list.)

  2. Later Grok versions also opened, though with more restrictions
    In August 2025, Musk announced that Grok 2.5 was made "open source" and released its model weights on Hugging Face. Multiple reports confirm that the weights for Grok 2.5 were published, with Musk stating on X that “The @xAI Grok 2.5 model … is now open source,” and promising Grok 3 would follow. (techcrunch.com)
    However, unlike Grok‑1’s Apache license, Grok 2.5 uses a custom, more restrictive non‑commercial/community license, meaning it is not as fully permissive as classic open source. (beebom.com)

  3. Assessment vs prediction
    Chamath’s forecast was that Musk would open‑source Grok so that people could access and use it essentially for free. By March 2024, a few months after the November 2023 podcast, xAI did release Grok‑1’s weights and architecture under a standard open‑source license, and later released Grok 2.5 weights as well. These moves provided broad, no‑cost access for developers, notwithstanding some licensing nuances for newer versions.
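
For illustration, a minimal sketch of what free developer access looks like in practice: fetching files from the released weights repository with the standard huggingface_hub client. The repo id xai-org/grok-1 matches the Grok‑1 release; the allow_patterns filter is a hypothetical choice to pull only small metadata files rather than the full ~300 GB checkpoint:

```python
# Minimal sketch: pulling files from an open-weight model repo on
# Hugging Face. The allow_patterns filter below is an illustrative
# choice to fetch only small config/tokenizer files, not the full
# ~300 GB Grok-1 checkpoint.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="xai-org/grok-1",                 # Grok-1 release repo
    allow_patterns=["*.json", "tokenizer*"],  # metadata-only dry run
)
print(f"Files downloaded to: {local_dir}")
```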

Given that:

  • Grok‑1 was released with full model weights and architecture under Apache 2.0 (widely accepted as open source), and
  • This enabled broad, free developer use,

the core prediction that Elon would open‑source Grok and make it freely accessible to developers is fulfilled in substance.

ai, tech
OpenAI (or comparable leading AI providers) will add video as a supported modality for their models after text, images, and audio, enabling multimodal input/output that includes video in the foreseeable future.
The other thing is multimodal. I mean, so they're really stressing the idea of combining text with photos. I guess videos will eventually come later.
Explanation

Sacks predicted that OpenAI (or similarly leading AI labs) would eventually extend multimodality beyond text, images, and audio to include video.

That has happened:

  • OpenAI developed Sora, a text‑to‑video model capable of generating realistic short videos from text prompts. OpenAI first previewed Sora’s capabilities in February 2024, then moved it out of research preview and launched it publicly on December 9, 2024 as a product for ChatGPT Plus/Pro users, enabling users to generate up to 1080p, 20‑second videos. (openai.com)
  • The Sora interface explicitly supports prompting with text, images, and videos, and can extend or remix existing clips, making video both an input and an output modality within OpenAI’s ecosystem—exactly the kind of next‑step multimodality Sacks anticipated. (openai.com)
  • OpenAI has since released Sora 2 (September 30, 2025), which adds synchronized dialogue and sound effects, further cementing video (with audio) as a core modality alongside text and images. (openai.com)
  • Beyond OpenAI, other top labs followed the same trajectory: Google integrated its Veo video‑generation models (Veo 2, later Veo 3) into Gemini/Gemini Advanced, letting users generate short, high‑resolution videos from text (and sometimes images), which confirms the broader industry move Sacks was pointing to. (blog.google)

Because OpenAI and comparable leading providers did, in fact, add video as a supported modality within roughly 1–2 years of the November 2023 podcast, the prediction is right.