Last updated Nov 29, 2025

GPT-4o launches, Glue demo, Ohalo breakthrough, Druck's Argentina bet, did Google kill Perplexity?

Fri, 17 May 2024 20:23:00 +0000
Chamath @ 00:21:50 · Inconclusive
ai, tech, venture
Within roughly two years of May 2024, the basic building blocks of AI models and capabilities will have standardized across the industry, and only after that point (i.e., starting around 2026) will the dominant, large-scale AI application companies (“the Facebook of AI”) be created; as of May 2024, that dominant consumer AI application company does not yet exist.
So I think where we are is probably within two years of where the basic building blocks are standardized. And then I think the real businesses get built. So I will maintain my perspective here, which is the quote unquote Facebook of AI has yet to be created.
Explanation

Chamath’s statement is documented in the episode transcript, where he says that within about two years the basic AI building blocks will be standardized, after which the real businesses will be built, and that the Facebook of AI has not yet been created as of May 2024. (podscripts.co)

As of today (30 November 2025), less than two years have elapsed since May 2024, so his core timeline (standardization by roughly May 2026, and dominant application companies arising only after that) has not yet reached its decision point. Architecturally, major players are converging on broadly similar multimodal, transformer-based, agentic models (e.g., OpenAI’s GPT‑5, Google’s Gemini 2.x/3, Meta’s Llama 4 family), but the ecosystem is still evolving rapidly, and there is no widely agreed industry standard “finished state” to judge against his forecast. (en.wikipedia.org)

There are some signs that cut against his narrative: by 2025, OpenAI/ChatGPT already looks like a dominant consumer AI application platform, with hundreds of millions of weekly active users, top positions in app-store downloads, and extensive cultural and economic impact, making it a plausible candidate for the kind of winner he had in mind. (en.wikipedia.org) But because his prediction ties the emergence of the eventual dominant AI application companies specifically to a post‑2026 period, and we have not yet reached that phase, we cannot definitively say whether his overall forecast will prove right or wrong. Therefore, the prediction is best classified as inconclusive (too early to judge).

Jason @ 00:23:21 · Inconclusive
ai, tech, economy
AI foundation models will continue to become roughly 90% cheaper to run each year, implying that over any two-year period they will be approximately 99% cheaper and better than models from two years prior.
these new models are so much more efficient that you actually can throw the old model in the garbage, because it's so inefficient, and these are now becoming about 90% cheaper every year, which means every two years these things are gonna be 99% cheaper and better.
Explanation

Only about 1.5 years have passed since the May 17, 2024 prediction, which explicitly talks about an ongoing trend of models becoming ~90% cheaper per year and being ~99% cheaper over any two‑year span. The first full two‑year window (to compare “models two years prior”) would end around May 2026, which hasn’t occurred yet.

We do see substantial price drops and efficiency gains around the time of, and shortly after, the prediction:

  • Between the original GPT‑4 (March 2023) at $30 per million input tokens for the 8k context model and GPT‑4‑Turbo (April 2024) at $10 per million, OpenAI cut the per‑token price by about 3× in a bit over a year, a ~67% cost reduction but not a full 90%. (help.openai.com)
  • GPT‑4o, launched May 2024, was announced as delivering GPT‑4‑level intelligence at half the cost and twice the speed of GPT‑4‑Turbo, a further ~2× drop in price at similar or better quality, but again not a clean 10× year‑over‑year step. (investopedia.com)
  • GPT‑4o mini (July 2024) is priced at $0.15 per million input tokens and $0.60 per million output tokens—roughly two orders of magnitude cheaper than the original GPT‑4 pricing for frontier‑level models—and OpenAI describes it as “an order of magnitude more affordable than previous frontier models” while often outperforming GPT‑4 in chat preferences. (openai.com)
  • In 2025, newer models like GPT‑4.1 and o3‑mini are described as cheaper than their immediate predecessors (e.g., GPT‑4.1 about 26% cheaper than GPT‑4o; o3‑mini about 63% cheaper than o1‑mini), continuing the downward trend but at increments far short of a consistent 10× drop each year. (theverge.com)

However, the prediction is about future sustained rates (“will continue to become about 90% cheaper every year … every two years these things are gonna be 99% cheaper and better”), not just the one‑off transition up to GPT‑4o. We have:

  • Less than the two years he explicitly uses as the benchmark window.
  • No standardized, industry‑wide metric for “cost to run” across all “AI foundation models,” since provider prices mix model quality, context length, and product strategy.

Because (a) the forecast period he specifies has not fully elapsed, and (b) existing data, while showing large cost reductions, is not sufficient to confirm or falsify a continuing ~90%‑per‑year drop, the status of this prediction as of November 30, 2025 is inconclusive (too early).
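The compounding claim and the cited price points can be sanity-checked with a few lines of Python. The helper function is illustrative, and the dollar figures are the per-million-input-token prices quoted above:

```python
# A 90%/year price decline leaves 10% of the price each year,
# so after two years 0.10 * 0.10 = 0.01 of the original price
# remains -- i.e., a 99% drop, as Jason's arithmetic implies.
def remaining_fraction(years, annual_drop=0.90):
    return (1 - annual_drop) ** years

assert abs(remaining_fraction(2) - 0.01) < 1e-9  # 99% cheaper over two years

# Observed drops cited above (USD per million input tokens):
gpt4_mar2023 = 30.00        # original GPT-4, 8k context
gpt4_turbo_apr2024 = 10.00  # GPT-4-Turbo
gpt4o_mini_jul2024 = 0.15   # GPT-4o mini

# GPT-4 -> GPT-4-Turbo over ~13 months: a ~67% drop, short of 90%/year.
print(1 - gpt4_turbo_apr2024 / gpt4_mar2023)   # ~0.667

# GPT-4 -> GPT-4o mini over ~16 months: a ~99.5% drop -- but across
# different model tiers, which is why the comparison is fuzzy.
print(1 - gpt4o_mini_jul2024 / gpt4_mar2023)   # ~0.995
```

The check makes the ambiguity concrete: within a single model line the annual decline falls short of 90%, while cross-tier comparisons overshoot it, so the verdict hinges on which comparison one accepts.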

ai, tech
OpenAI’s long‑term strategy for ChatGPT/GPT access will be to make core individual usage effectively free or near‑free, and to monetize primarily by charging for multi‑user or collaborative (“multiplayer”) versions or features.
OpenAI ... on a strategic level, is going to make all this free or close to free, and maybe just charge for a multiplayer version. That seems to be where it's heading.
Explanation

OpenAI has indeed pushed more powerful models and tools into the free tier (for example, making GPT‑4o and several advanced capabilities available to ChatGPT Free users), which aligns partially with the “make it free or close to free” part of the prediction. (openai.com)

However, the company still clearly relies on large numbers of individual paid subscribers via Plus and Pro plans: as of July 2025 around 35 million users (about 5% of weekly actives) were paying $20 or $200 per month for these individual tiers, and OpenAI internally projects around 220 million paying ChatGPT subscribers by 2030. (cio.economictimes.indiatimes.com) OpenAI’s own pricing page in late 2025 still positions Free as limited, with substantial additional reasoning power, usage, memory, and tools reserved for individual Plus and Pro users at $20 and $200 per month, respectively. (openai.com)

While OpenAI also sells multi‑user “Team,” “Enterprise,” and other business offerings that emphasize collaboration, these exist alongside, not instead of, a major consumer subscription business, and the firm is also exploring new revenue streams such as shopping and advertising features. (openai.com) Given that individual subscriptions remain central to both current revenue and long‑term projections, the specific prediction that OpenAI would “maybe just charge for a multiplayer version” does not match the observable strategy as of late 2025.

Sacks @ 00:23:21 · Inconclusive
tech, ai
Over time, OpenAI will shift its primary business model away from direct B2C subscriptions toward B2B monetization, mainly by charging for and monetizing applications and services built on top of its models.
Well, I think you're assuming there that the long term business model of OpenAI is in B2C subscriptions. And I think that's probably the least attractive business model they have available to them... So I suspect they're going to move in more of a B2B direction over time, because that's where the real money is. And probably the way they do that is by monetizing all the apps that are built on top of it.
Explanation

As of 30 November 2025, OpenAI’s current revenue mix still appears to be dominated by subscriptions to ChatGPT rather than by a B2B app‑platform model, but the company is clearly pushing harder into B2B, so it’s too early to judge the long‑term claim.

Key evidence:

  • Multiple analyses report that the majority of OpenAI’s revenue comes from ChatGPT subscriptions (Plus, Team, Enterprise). One strategy analysis notes that product subscriptions (ChatGPT Plus/Team/Enterprise) make up the bulk of revenue, while API usage is a smaller portion. (deepstarstrategic.com) An essay summarizing reporting from The Information similarly estimates that over 70% of OpenAI’s revenue comes from premium ChatGPT access. (johncandeto.com) A Financial Times piece from mid‑2025 says OpenAI’s ARR nearly doubled to $10B, driven largely by demand for ChatGPT subscriptions, again underscoring subscription‑centric economics. (ft.com) Reuters, citing The Information, also frames OpenAI’s long‑term projections in terms of hundreds of millions of paying ChatGPT subscribers by 2030, positioning ChatGPT as one of the world’s largest subscription services. (reuters.com) Collectively, this suggests that, so far, the primary business model has not moved away from subscription access to ChatGPT.
  • At the same time, OpenAI is clearly intensifying its B2B focus: it has a dedicated "Growth & GTM Strategy – B2B Monetization" team tasked with shaping commercial strategy across B2B channels, including pricing, packaging, and enterprise GTM. (openai.com) There are also large‑scale enterprise and platform deals (e.g., PwC‑sold enterprise offerings, Intuit’s multiyear strategic partnership, and a $100M+ agents/AI‑services partnership with Databricks) that are explicitly about monetizing business use and embedding OpenAI models into other companies’ products. (johncandeto.com) These moves support the directional part of the prediction (“move in more of a B2B direction”).
  • Regarding “monetizing all the apps that are built on top of it”: OpenAI launched the GPT Store in January 2024, allowing third parties to create and publish custom GPTs and introducing a builder revenue program where creators are paid based on engagement. (openai.com) This is indeed a platform for apps built on top of OpenAI models, but available reporting focuses on it as an additional ecosystem and creator‑payout mechanism; there is no strong evidence that OpenAI’s main revenue now comes from taking a cut of third‑party apps, as opposed to continuing to monetize via ChatGPT subscriptions (consumer and business tiers) and direct API/enterprise contracts.

Putting this together:

  • Parts that are directionally supported: OpenAI has moved more into B2B (enterprise tiers, big corporate partnerships, dedicated B2B monetization org). That aligns with the prediction’s general direction of a stronger B2B focus.
  • Parts not yet observed: Available data indicates that ChatGPT subscriptions (a mix of B2C and seat‑based B2B) still account for the majority of revenue, so we do not yet see a clear shift of the primary business model away from subscription access. Nor is there evidence that revenue from monetizing third‑party apps built on top of OpenAI (via the GPT Store, etc.) has become the main business model.
  • Time horizon: The prediction is explicitly framed as a long‑term, “over time” shift. Given that only ~18 months have passed since the May 2024 podcast and OpenAI is still rapidly evolving, the current evidence doesn’t conclusively validate or falsify a long‑run business‑model change.

Because (1) the core business is still subscription‑centric today, but (2) OpenAI is clearly moving more deeply into B2B and platform plays, and (3) the prediction is explicitly long‑term, the fairest judgment as of late 2025 is that it’s too early to tell whether Sacks’s full prediction will ultimately be correct.

venture, ai
Because of GPT‑4o and similar advances in foundation models, most of the R&D invested over the last 2–5 years into building proprietary conversational/virtual customer support agents (especially work focused on core conversational ability and speed) will be rendered obsolete, and new customer support agent startups built on the latest models from 2024 onward will have a structural advantage over those earlier efforts.
there are a whole bunch of startups out there that were creating virtual customer support agents... I think their product roadmaps just became obsolete... my guess is that hundreds of millions of dollars of R&D just went out the window. And probably this is the best time to be creating a customer support agent company if you're doing it two years ago, five years ago. Your work is just like been obsolete.
Explanation

Evidence since GPT‑4o’s May 2024 launch shows that frontier foundation models have rapidly become the core of many new and existing customer‑support agents, but there isn’t clear, measurable proof that most of the R&D from the previous 2–5 years is now obsolete or that new 2024‑era startups have a durable structural advantage.

What clearly did happen

  • GPT‑4o and GPT‑4o mini were explicitly positioned and priced for high‑volume automation use cases like customer‑service agents; OpenAI and industry coverage highlight customer‑support automation as a prime application, with lower costs and better latency than earlier models. (en.wikipedia.org) Case‑study style pieces describe GPT‑4o cutting support costs and handling large fractions of customer inquiries for companies like Octopus Energy or Lunar Bank, reinforcing that these models are now central to modern support stacks. (ptolemay.com)
  • Major CX platforms such as Zendesk quickly rolled GPT‑4o into production. Zendesk announced in May 2024 that it rolled out GPT‑4o across its AI offering within 24 hours to improve speed and quality for bots and agents. (zendesk.com) By March 2025, Zendesk was piloting a new class of agentic AI agents powered by OpenAI models (including GPT‑4o), explicitly moving away from older intent‑based bots toward generative, reasoning‑based agents. (openai.com) This supports the direction of Sacks’s claim: older, rule/intent‑driven conversational tech is being superseded by LLM‑centric agents.
  • Many new tools and indie products for “AI support teams” launched in 2024–2025 are built directly on GPT‑4o, Claude, Gemini, etc., without proprietary base models, and claim high automation rates (e.g., founders reporting ~80% of tickets auto‑handled using GPT‑4o‑plus‑other‑LLMs pipelines). (reddit.com) This shows that new entrants can reach strong performance quickly by standing on top of the latest foundation models, consistent with the idea that the bar for proprietary conversational tech has been raised.

Why the prediction can’t be cleanly scored as right or wrong

  • Zendesk and similar incumbents are not being structurally displaced by 2024‑born startups; they are themselves integrating GPT‑4o and newer models, leveraging extensive pre‑existing R&D in routing, workflows, data infrastructure, UI, and distribution. Their own case study emphasizes that they can benchmark and deploy new models (including OpenAI’s reasoning models) in under 24 hours, specifically to reuse their existing platform rather than scrap it. (openai.com) That’s very different from “your work is obsolete” in the sense of wasted R&D.
  • Industry‑level data still show that end‑to‑end AI agents handle a minority of total customer‑service volume; human agents and traditional systems remain the backbone. For example, Gartner’s projection that only about 10% of customer‑service interactions will be fully automated by AI by 2026 implies that earlier investments in knowledge bases, workflows, and tooling continue to matter significantly alongside new LLM agents. (linkedin.com)
  • There is no public, quantitative evidence that “hundreds of millions of dollars” of R&D from 2019–2022 virtual‑agent startups has literally “gone out the window.” Many such companies have simply swapped or layered in GPT‑4/4o (or competitors) under the hood rather than discarding their products entirely; public announcements tend to frame this as an upgrade, not a write‑off.
  • Some large enterprises have moved away from proprietary access to OpenAI’s models toward custom or open‑source stacks for cost, latency, or privacy reasons. A prominent example is AT&T, which initially used ChatGPT/GPT‑4 to analyze customer‑service call summaries, then replaced it with a hybrid system of distilled and open‑source models (including Llama 70B), cutting costs to about 35% of the ChatGPT‑based solution while retaining most of the accuracy. (businessinsider.com) This suggests that, even post‑GPT‑4o, there is still strategic value in investing in custom modeling rather than that R&D being per se obsolete.

Net assessment

  • Directionally, Sacks’s intuition—that frontier models like GPT‑4o would make it much easier for new teams to build high‑quality, fast conversational agents and would undermine older, intent‑based/hand‑rolled conversational tech—has been borne out in practice.
  • However, the strong form of the claim (“most” 2–5‑year R&D in virtual agents is now obsolete; new GPT‑4o‑era startups have a structural advantage over earlier efforts) is not something we can confirm or falsify from available data: incumbents have generally adapted by integrating the same models, and there’s no systematic evidence that earlier R&D has been rendered worthless or that pre‑2022 startups are being displaced en masse.

Because the observable facts support the general trend but do not let us decisively judge the extent (“most,” “hundreds of millions… out the window,” “structural advantage”), the prediction is best classified as ambiguous rather than clearly right or wrong.

ai, tech
For application developers who build features that overlap too much with what frontier AI models are learning to do natively, those overlapping features will be rendered obsolete within a few months of major model upgrades.
If you're an app developer, the key thing to understand is where does model innovation end and your innovation begin? Because if you get that wrong, you'll end up doing a bunch of stuff that the model will just obsolete in a few months.
Explanation

Evidence since mid‑2024 supports Sacks’s conditional claim that app features which closely overlap with frontier models tend to be wiped out within months of major upgrades.

Even before GPT‑4o, this pattern was visible: when ChatGPT added native PDF upload and Q&A in late 2023, TechCrunch reported that a single feature change made the core offering of multiple PDF‑wrapper startups redundant, sharply undercutting businesses like PDF.ai, ChatPDF, AskYourPDF and similar tools almost overnight. Later analysis of these events describes how those wrappers saw significant usage drops once ChatGPT could do the same job natively. (techcrunch.com)

After GPT‑4o’s May 2024 release and its follow‑on upgrades, OpenAI rapidly folded more capabilities directly into the base models: real‑time multimodal chat, advanced voice agents, cheaper small models like GPT‑4o‑mini, and then native image generation that replaced DALL‑E 3 inside ChatGPT. Each wave reduced the gap between what a generic model could do out‑of‑the‑box and what many specialized “AI apps” were selling as standalone products. (en.wikipedia.org)

Market analyses now describe almost exactly what Sacks warned about. A 2025 review from Market Clarity estimates that for thin “wrapper” startups, the next GPT iteration creates a high (70–80%) chance of serious impact, with the startup’s differentiating features typically absorbed into the platform within 6–12 months of a major model release. (mktclarity.com) Another Market Clarity piece documents how previous OpenAI launches immediately threatened successful wrappers, and notes Sam Altman’s explicit warning that companies which merely wrap GPT‑4 will be “steamrolled” once those features are shipped natively. (mktclarity.com)

Founder and investor commentary in 2024–25 ties this directly to GPT‑4o and its peers. A widely shared LinkedIn analysis notes that each time a major LLM update lands—specifically naming GPT‑4o, Claude 3.5, and Gemini 2—dozens of niche products like generic summarizers, copilots, and research helpers become unnecessary because the base models now perform those tasks better and more cheaply. (linkedin.com) Another investor piece describes many “ChatGPT wrapper” founders openly telling VCs they must raise funding before OpenAI ships their entire product as a first‑party feature, underscoring that platform obsolescence is already a lived risk, not just a theoretical one. (linkedin.com)

Plenty of application‑layer companies are still alive—but the ones surviving are those that add proprietary data, deep workflow integration, or distribution advantages that go beyond what the raw models provide. That is precisely the boundary Sacks highlighted between model innovation and app‑layer innovation. Given the documented cases where overlapping wrappers were effectively made obsolete within a few months of new GPT‑4o‑class or similar releases, his prediction is borne out by what has actually happened in the market.

In the AI era, companies that build internally useful but non‑core AI systems (such as customer support agents like Klarna’s) will increasingly choose to open-source those systems rather than maintain them privately, leading to a stronger push toward open-source infrastructure in AI than in prior software markets.
That's why I think the incentive for these folks is going to be to push this stuff into the open source... Release it in the open source, guys. Let the rest of the community take it over so that it's available to everybody else. Otherwise you're going to be stuck supporting it... So I also think the incentive to just push towards open source in this market, if you will, is so much more meaningful than any other market.
Explanation

Available evidence up to November 2025 does not show the pattern Chamath described.

  1. Companies like Klarna have not open‑sourced their internal customer‑support agents. Klarna’s widely covered AI assistant (used for two‑thirds of its customer‑service chats and doing the work of ~700 FTEs) is an internal, OpenAI‑powered deployment with no indication that the system or its orchestration code has been released as open source. It remains a proprietary capability used to drive profit improvement for Klarna, not a community project.

    • Evidence: Klarna’s own press and OpenAI’s case study describe the AI assistant as a Klarna product powered by OpenAI, with no mention of open‑sourcing the system or its components. (klarna.com)
  2. The dominant trend for customer‑service / internal “agent” systems is proprietary SaaS, not enterprises open‑sourcing their own agents.

    • Major CX and contact‑center vendors — Intercom (Fin 2), NICE CXone Mpower, Cisco’s Webex AI Agent, Zendesk’s Resolution Platform, Oracle’s AI agents for sales, and others — are all sold as proprietary products, not open‑sourced internal tools handed to the community. (en.wikipedia.org)
    • Telecom and large‑enterprise use cases similarly do not match the prediction. AT&T, for example, switched from ChatGPT to a hybrid system built on open‑source models (H2O.ai + Meta’s Llama 70B) to analyze customer‑service calls, but this is just internal use of open‑source components; AT&T has not open‑sourced its own end‑to‑end system. (businessinsider.com)
  3. Where open source is strong, it is mostly from AI vendors and research groups, not ordinary enterprises pushing out their non‑core agents.

    • Open‑weight / open‑source models from Meta (Llama 3.x), DeepSeek, AI2’s OLMo, etc., plus tooling like Langtrace, form a vibrant open ecosystem, but these are purpose‑built open projects by AI labs and dev‑tool startups, not companies dumping internal customer‑support agents onto GitHub. (en.wikipedia.org)
    • There are open‑source customer‑support platforms with AI agents (e.g., Chatwoot with its “Captain” AI agent), but these were conceived as open‑source products from the outset. They are not examples of mainstream enterprises open‑sourcing their previously internal, non‑core AI systems to offload maintenance. (github.com)
  4. The broader structure of the AI market contradicts the claim that incentives here push toward open source more than in prior software markets.

    • Analyses of open‑source AI note a mixed landscape: some major models and tools are open or open‑weight, but many key components (frontier models, data, and cloud platforms) remain closed and are used to reinforce proprietary ecosystems. (en.wikipedia.org)
    • Historically, previous software eras (e.g., operating systems, web servers, databases) also had massive open‑source pushes (Linux, Apache, MySQL, PostgreSQL). Current AI looks at best comparable in openness and, in core model access, often more closed, not clearly “more meaningfully” open‑source than any other market.

Given these points, the specific prediction — that companies building internal, non‑core AI systems like customer‑support agents will naturally choose to open‑source them, driving a uniquely strong open‑source infrastructure wave in AI — has not materialized by late 2025 and is contradicted by how most enterprises are actually behaving. Therefore, it is best classified as wrong.

ai, tech
Future AI models used within Glue (and similar enterprise chat tools) will become capable of automatically inferring which internal data source or repository to query, without users having to explicitly specify it, making AI assistance more seamless than the May 2024 state.
in the future it's going to figure it out on its own. So it's going to become more and more seamless, but.
Explanation

By late 2025, several enterprise chat/assistant products are doing essentially what Sacks predicted: models automatically route to the relevant internal data sources without users having to name a specific repository each time.

  1. Glue itself has moved in this direction. Glue AI now:

    • Sits in every conversation and "leverages context from your workspace" (threads, attachments) and web data when you simply @‑mention it and ask a question, rather than you pointing it to a particular thread or file each time.
    • Is described as using context from "threads, attachments, and connected apps (with permission)" to answer questions, implying it can pull from whatever connected sources are relevant rather than requiring the user to specify which one on each query. (docs.glue.ai)
  2. Glue added MCP-based integrations that let the model pick the right external tool. In 2025 Glue shipped MCP support and a curated directory of MCP servers for apps like Linear, Asana, Notion, Sentry, Stripe, Vercel, etc. In use, you can just ask Glue AI in a thread to do things like "create a Linear issue" or "look up data from an internal system," and the system invokes the appropriate MCP server/tool behind the scenes rather than you manually choosing an API endpoint. (docs.glue.ai)

  3. Anthropic’s Claude Enterprise Search is a clear realization of the prediction in a “similar enterprise chat tool.” Claude’s 2025 Microsoft 365 connector and Enterprise Search feature are explicitly designed so that one question to Claude can search across a company’s connected data sources (SharePoint, OneDrive, Outlook, Teams, and other apps) in a single shot. Anthropic’s own example: ask about your company’s remote‑work policy and Claude automatically pulls from HR documents in SharePoint, email discussions in Outlook, and team guidelines from various sources into one answer. Users no longer have to say "now search SharePoint" vs. "now search Outlook"—the model infers which repositories to query based on the question. (anthropic.com)

  4. Broader ecosystem trend confirms the direction. Microsoft 365 Copilot’s Researcher agent similarly leans on Microsoft Graph to draw from meetings, emails, chats, and SharePoint files, with users mainly expressing what they want rather than formulating per‑repository queries—evidence that mainstream enterprise assistants are converging on automatic source selection. (techcommunity.microsoft.com)

Given these shipped capabilities in Glue and closely related enterprise assistants like Claude for Enterprise and Microsoft 365 Copilot, the world has clearly moved toward exactly what Sacks described: AI models in enterprise chat that mostly “figure out on their own” which internal data sources to hit, making assistance more seamless than it was in May 2024. Therefore, the prediction is best classified as right.

ai, tech
By the time a GPT‑5‑class model is available, enterprise chat tools like Glue will be able to support effectively “promptless” AI behavior, where the AI autonomously interjects into conversations when it detects it has relevant, helpful information, without being explicitly summoned by a user.
That's where I want to go with it is I call that prompt list, which is I want the AI just to chime in when it determines that it has relevant information and can help the team, even if it hasn't been summoned yet. But we need some model improvement for that, frankly. I mean, we'll be able to get there by GPT five, but that's totally where this is headed.
Explanation

GPT‑5, a “GPT‑5‑class” multimodal foundation model from OpenAI, was released on August 7, 2025 and is now widely available via ChatGPT and Microsoft Copilot, satisfying the timeline condition “by the time a GPT‑5‑class model is available.” (en.wikipedia.org)

By that point, at least some enterprise chat environments do support effectively “promptless” AI behavior very close to what Sacks described:

  • Doti Autopilot for Slack is an AI assistant specifically designed to sit in Slack channels and listen in the background. Its documentation states that Autopilot:

    • “monitors messages in enabled channels,”
    • “detects valid questions,”
    • “replies automatically when it can confidently help,” and
    • does so “without requiring the user to tag @doti,” adding that “users might not even know Doti is there – and yet their questions get answered within seconds.” (help.doti.ai)
      This matches the prediction’s idea of an AI that “just [chimes] in” with relevant, helpful information in a team chat without being explicitly summoned.
  • Slack’s own AI/agent framework describes AI agents in Slack as “autonomous, proactive applications” that can operate inside channels and DMs, analyze conversational context, and take actions or respond based on automated triggers rather than only direct prompts. (docs.slack.dev) This supports the claim that the enterprise chat stack is moving toward embedded, always‑on agents.

Glue itself, the product Sacks co‑founded, has shipped extensive agentic AI features—MCP integration so Glue AI can act across external tools, multi‑step tool use, and AI Rules/Suggestion Rules that let it propose thread names and recipients and run complex actions once a user asks for help. (docs.glue.ai) However, the publicly documented behavior still appears to be user‑initiated (via @‑mentions, AI sessions, or explicit actions) rather than a fully autonomous “chime‑in on any conversation” mode.

Taken literally as a prediction about Glue’s own feature set, the claim is not yet clearly fulfilled. But as normalized—“enterprise chat tools like Glue will be able to support effectively promptless AI behavior by the GPT‑5 era”—the ecosystem now includes Slack‑based tools (e.g., Doti Autopilot) that deliver exactly this: background agents in enterprise chat that autonomously interject when they detect they can help, by or before the GPT‑5 timeframe. On that broader reading, the prediction has effectively come true.

tech, climate, economy
Ohalo’s “boosted breeding” technology will be applied across nearly every major crop worldwide, leading to higher yields and cheaper seed production, which in turn will lower food prices while improving sustainability metrics (less land, water, and energy per unit of food) compared to current agriculture.
So we're going to be applying this boosted technology that we've discovered across nearly every major crop worldwide. It'll both increase yield, but it will also have a massive impact on the ability to actually deliver seed and help farmers and make food prices lower. Is it improve sustainability? No, it's actually cheaper. So higher yield, lower cost. View on YouTube
Explanation

As of November 30, 2025, there is no evidence that Ohalo’s Boosted Breeding technology has been applied across nearly every major crop worldwide, or that it has had a visible, system‑wide impact on global food prices and sustainability metrics. However, the time horizon for such a sweeping prediction is inherently long, so it is too early to call it definitively right or wrong.

Key points:

  1. Current scope of deployment is narrow, not “nearly every major crop worldwide.”

    • Ohalo’s own materials describe Boosted Breeding as a platform that can be applied to nearly every crop on Earth, but they highlight active programs mainly in potatoes, strawberries, and some other specialty or pilot crops, not a broad, global rollout across all major cereals and oilseeds.(ohalo.com)
    • The May 23, 2024 press release and subsequent coverage emphasize that the technology has been discovered and that Ohalo is implementing Boosted Breeding to drive breakthroughs across a “multitude of crops” with partners and plans to expand, which indicates early rollout and R&D rather than mature, global adoption.(prnewswire.com)
    • Concrete commercial efforts mentioned publicly include Boosted Potato (true potato seed and value‑added traits) and the Ohalo Strawberry Consortium launched in 2025, again pointing to specific crops and regions rather than most major crops worldwide.(ohalo.com)
  2. No evidence that the technology has materially lowered global food prices.

    • Global food prices over 2024–2025 have been driven mainly by macro factors like weather, energy, and geopolitical shocks. The FAO Food Price Index in 2025 is volatile and in many months higher year‑on‑year than in 2024, even if well below the 2022 peak.(tradingeconomics.com)
    • There is no attribution in FAO, Reuters, or other major analyses of food prices to Ohalo’s technology, which is still at an early‑adoption stage; any acreage under Boosted varieties is far too small (and geographically limited) to measurably affect global commodity prices.
  3. Sustainability and yield claims remain mostly at the trial / promise stage.

    • Ohalo reports large yield gains (50–100%+ in early trials) and theoretical sustainability benefits (less land, water, and inputs per unit of food), but these are framed as potential or early‑trial outcomes, not as measured, global‑scale impacts across major crops.(ohalo.com)
    • Job postings and materials emphasize field trials, early adopter farmers, and development of agronomic protocols for true seed systems, again indicating that the technology is in scale‑up and validation rather than mature, industry‑wide deployment.(jobs.valorcapitalgroup.com)
  4. Why this is “inconclusive” rather than “wrong.”

    • The prediction did not specify a time frame. Transforming breeding systems and seed supply chains across nearly every major crop worldwide would, in realistic agricultural timelines, likely take well over a decade. Only ~18 months have passed since the podcast (May 17, 2024), which is far too short to judge such a structural, global claim.
    • What we can say confidently is that the predicted end state has not yet materialized by late 2025; what we cannot say is that it will not materialize over the longer term. Hence, the correct classification is that it is too early to evaluate definitively.

So, the prediction is unfulfilled so far, but given the inherently long adoption cycle in global agriculture and the lack of a stated deadline, it is best categorized as inconclusive (too early) rather than clearly right or wrong.

techclimate
Compared with conventional crop varieties, boosted‑breeding crops from Ohalo will require less water, less land, and less energy per unit of output to achieve their higher yields.
Do you need more water. … Less water, less land, less energy? View on YouTube
Explanation

As of November 30, 2025, there is not enough independent, quantitative evidence to say whether Ohalo’s boosted‑breeding crops actually use less water, land, and energy per unit of output in real‑world farming.

What we do know:

  • Ohalo and David Friedberg report very large yield gains (often 50–100%+ in early potato trials) from Boosted Breeding, which would mechanically reduce land required per unit of output if input use per acre stayed similar. These claims are based on company trials and press materials, not yet large-scale independent field data. (ohalo.com)
  • Friedberg has explicitly claimed that the technology will enable “more food per acre using less water, less land, less fertilizer per unit output,” but this is framed as the potential impact of the platform, not as results from long‑term commercial deployment. (freethink.com)
  • Ohalo is still early in commercialization: potatoes and strawberries are in programs and consortia, and an almond variety (FruitionOne) with a projected ~50% reduction in water per almond is only in trial orchards, with first commercial deliveries targeted for 2027. (ohalo.com)
  • Regulatory milestones (e.g., USDA APHIS review of an Ohalo potato line) and initial product offerings show the tech is moving toward market, but they do not provide measured data on water, energy, or land use per ton of output relative to conventional varieties. (agtechnavigator.com)
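The land arithmetic in the first bullet can be sketched directly. The 50% and 100% yield gains below are the company-reported early-trial figures quoted above, not independent data, and the sketch assumes per-acre inputs stay constant:

```python
# Back-of-envelope only: maps the company-reported per-acre yield gains
# (50% and 100%, from Ohalo's early trials as quoted above) to land needed
# per unit of output, assuming inputs per acre stay constant.
for yield_gain in (0.50, 1.00):
    land_per_unit = 1 / (1 + yield_gain)   # conventional baseline = 1.0
    reduction = 1 - land_per_unit
    print(f"+{yield_gain:.0%} yield -> {reduction:.0%} less land per unit of output")
```

A 50% yield gain cuts land per unit of output by about a third, and a 100% gain halves it; this is why the land claim follows mechanically from higher yields, while the water and energy claims require separate measurement.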

Because the public record so far is dominated by company‑reported trials and forward‑looking statements, and there are no peer‑reviewed or large‑scale independent studies quantifying water, land, and energy use per unit yield for Ohalo’s boosted crops versus conventional varieties, Friedberg’s multi‑part efficiency prediction cannot yet be judged as clearly correct or incorrect. It remains too early to call.

scienceclimate
Once Ohalo’s boosted‑breeding crops reach commercial trials and deployment, fertilizer required per pound of harvested output will decrease significantly compared with current crop systems, due to improved efficiency despite higher biomass.
Fertilizer usually scales with biomass. But these sorts of systems should be more efficient. So fertilizer use per pound produced should go down significantly. As we get to commercial trials with all this stuff. View on YouTube
Explanation

As of November 30, 2025, there is not enough real‑world data to determine whether Ohalo’s Boosted Breeding crops actually use significantly less fertilizer per pound of harvested output in commercial settings.

Key points:

  1. Stage of deployment: After unveiling Boosted Breeding in May 2024, Ohalo reported large yield gains (50–100%+) in early internal trials, but described the platform as moving toward commercial trials and looking for partners, not yet in broad commercial deployment.(agtechnavigator.com) A June 2024 Freethink piece explicitly says Ohalo is preparing for commercial trials and seeking partners.(freethink.com) A 2025 job posting for a "Field Technology Development & Agronomy Lead" to support "early adopter farmers" with true potato seed further indicates they are still building out field systems and protocols rather than operating at mature commercial scale.(s32.getro.com)

  2. Claims vs. measured fertilizer data: Friedberg and Ohalo repeatedly claim that Boosted Breeding will enable more food per acre with less fertilizer per unit output, echoing the prediction.(freethink.com) However, public materials (press releases, interviews, product pages) focus on yield, disease resistance, true seed, and reduced pesticide use; they do not provide quantified fertilizer‑input data (e.g., N/P/K rates or fertilizer per ton of yield) from commercial or large‑scale field trials.(prnewswire.com) No independent agronomic studies or multi‑season commercial trial results reporting fertilizer use per unit yield for Boosted crops appear in the literature or trade press.

  3. Timeline for full commercial products: The only clearly dated commercial roadmap item with regulatory detail, the FruitionOne self‑fertile Nonpareil almond variety, has trial orchards underway with first commercial deliveries not expected until 2027, again suggesting the broader product portfolio is still pre‑mass‑deployment.(prnewswire.com)

Because:

  • Boosted Breeding crops are only in early adopter / preparatory commercial‑trial phases, and
  • There is no published evidence yet on actual fertilizer applied per pound of harvested crop in those trials or deployments,

we cannot say whether fertilizer intensity has in fact fallen "significantly" as predicted. The prediction is therefore too early to evaluate, not clearly right or wrong at this time.

Chamath @ 01:20:35 (Inconclusive)
ai, tech, economy
Within approximately 5–10 years from May 2024, typical company organizational structures and workflows will be transformed by AI and related tools to such an extent that they will be barely recognizable compared to how companies operate today (much smaller headcount relative to output, far lower opex, heavy automation/delegation).
The way that companies will work in five and ten years, I don’t think guys, any of us are going to recognize what it’s going to look like. View on YouTube
Explanation

The prediction concerns how companies will operate 5–10 years after May 2024 (i.e., roughly 2029–2034). As of the current date, 2025-11-30, we are only about 1.5 years past the prediction date, well before the start of the forecast window (2029). There is no way yet to evaluate whether organizational structures and workflows will, by then, be “barely recognizable” compared to today. Therefore, the correctness of this prediction cannot currently be determined.

Jason @ 01:21:22 (Inconclusive)
ai, economy
Over the coming years of the AI-driven productivity boom (implicitly the next several years after 2024), overall unemployment in the U.S. will remain low rather than rising significantly, with the primary adjustment being the creation of many more (often smaller) companies rather than widespread joblessness.
The unemployment rate is going to stay very low. We’re just going to have more companies. View on YouTube
Explanation

As of late 2025, the prediction looks directionally consistent with the data so far, but the stated time horizon (“over the coming years of the AI-driven productivity boom”) clearly extends beyond the ~18 months since May 2024, so it’s too early to call it definitively right or wrong.

On unemployment: The U.S. unemployment rate did rise modestly from around 3.8% in late 2023 to an average of about 4.2% in Q4 2024, according to the Bureau of Labor Statistics, but remained in what economists generally consider a historically low range. (bls.gov) By September–October 2025, estimates put unemployment around 4.3–4.4%, a four‑year high but still low compared with past recessions and still among the lowest rates in the G7. (ft.com) Weekly jobless claims in November 2025 are at a seven‑month low, indicating layoffs remain subdued even as hiring slows. (reuters.com) So there has been no surge into high unemployment or “widespread joblessness” so far.

On “more companies”: New business formation has remained unusually strong. The U.S. saw about 5.21 million business applications in 2024, after a record 5.48 million in 2023, both far above the ~3.5 million pre‑pandemic level. (finder.com) Through 2025 to date, applications are slightly up relative to the same period in 2024, suggesting continued elevated entrepreneurial activity. (finder.com) This supports the idea that adjustment so far has included a large number of (often small) new firms rather than a collapse in employment.

However, because the claim was about the entire AI‑driven boom over “the coming years,” and structural effects of AI on labor markets can take several years to fully manifest, the available data window is too short to conclusively validate or falsify the long‑run prediction. Hence the classification: inconclusive (too early).

ai, tech, venture
A new type of company will emerge (and become relevant at scale) that specializes in clearing/licensing publisher content rights for AI-answer products like Google’s AI Overviews, acting as an intermediary between content creators and AI platforms.
There’s going to need to be a new company that clears this content so that Google can do answers like this. View on YouTube
Explanation

Evidence since mid‑2024 shows the emergence, and growing scale, of specialized intermediaries that clear or license publisher/creator content for AI systems, including answer/search products:

  • The long‑standing Copyright Clearance Center (CCC) created new AI‑specific collective licenses in 2024–2025 that let organizations ingest and use large corpora of copyrighted works in AI systems via a central license, and then expanded this to an AI Systems Training License for models whose outputs are used externally—i.e., commercial, customer‑facing generative AI products. CCC acts as an intermediary between thousands of publishers and AI developers, and reports strong publisher uptake. (grokipedia.com)
  • In the U.K., the Copyright Licensing Agency (CLA), Authors’ Licensing and Collecting Society (ALCS), and Publishers’ Licensing Services (PLS) are rolling out a collective licensing framework for AI training and retrieval‑augmented generation (RAG), explicitly designed so AI developers can get permission to use a wide range of text‑based published works through a single license, with CLA administering fees and distributions to rightsholders. This is exactly an intermediary model for AI training/RAG at national scale. (ailawandpolicy.com)
  • A Bloomberg Law report describes a wave of “AI licensing startups” that aggregate and license copyrighted works for AI, including Created by Humans (authors and book publishers licensing to AI systems), Narrativ (voice actors’ likenesses), and ProRata, whose tools decompose AI outputs and route compensation back to contributors. These are explicitly characterized as "middlemen startups" aggregating many rightsholders into a single licensing package. (news.bloomberglaw.com)
  • Separate coverage of ProRata AI shows it has enrolled 400+ (later 500+) publications—among them major brands like the Financial Times, Axel Springer titles, The Atlantic, Fortune, Vox Media, The Boston Globe, and others—into a 50/50 revenue‑share model. ProRata analyzes generative‑AI outputs, attributes them back to source publishers, and distributes revenue accordingly, functioning as a rights‑clearing and payment intermediary between publishers and AI services. (axios.com)
  • ProRata also operates Gist.ai, an AI‑powered search/answer engine that uses this licensed publisher content to generate responses, a concrete example of an AI‑answer product built on top of a specialized rights‑clearing intermediary rather than purely on unlicensed web scraping. (digiday.com)

While Google specifically has not publicly outsourced AI Overviews licensing to a single neutral clearinghouse, the broader substance of the prediction—that a new class of intermediaries, whose core business is clearing/licensing publisher and creator content for AI answer/search and other generative uses, would emerge and reach meaningful scale—has clearly materialized by late 2025. The combination of large, AI‑specific collective licenses (CCC, CLA/ALCS/PLS) and scaled startups like ProRata/Created by Humans matches the predicted pattern closely enough to count this as right.

Chamath @ 01:24:39 (Inconclusive)
ai, venture, markets
Isomorphic Labs, Google’s drug-discovery/biotech AI unit, will ultimately be worth on the order of multiple hundreds of billions of dollars in enterprise value (i.e., comparable to a standalone multi-hundred-billion-dollar company) once its business is mature.
Between last week’s announcement of Isomorphic Labs, which let’s be honest, that’s a that’s just a multi hundred billion dollar company. View on YouTube
Explanation

As of November 30, 2025, Isomorphic Labs is still an Alphabet-controlled AI drug‑discovery subsidiary, not an independently listed or otherwise separately valued company, so there is no observable enterprise value on the order of hundreds of billions of dollars to confirm or falsify Chamath’s long‑run claim. The company remains in the R&D and early‑commercialization phase: it has struck multibillion‑dollar potential collaboration deals with Eli Lilly and Novartis, together worth up to about $3 billion in milestones and royalties, but these are contingent and long‑dated rather than mature cash flows. (prnewswire.com) In March 2025, Isomorphic raised $600 million in its first external funding round (one of the largest UK AI‑biotech raises), yet the round did not disclose a valuation, and industry reporting only notes that Alphabet remains the majority owner. (prnewswire.com) The company expects its first AI‑designed drugs to enter human trials around the end of 2025, underscoring that the business is still far from a “mature” steady‑state where ultimate enterprise value could be credibly assessed; drug development timelines are typically 10–15 years. (cnbc.com) Because the prediction concerns Isomorphic’s eventual value once its business is fully mature, and the company has not yet reached anything close to that stage, there is currently insufficient evidence to judge the forecast as either correct or incorrect.

Sacks @ 01:34:33 (Inconclusive)
markets
Alphabet/Google’s market capitalization will reach approximately $5 trillion at some point in the future, despite earlier AI missteps, driven in part by successfully copying and integrating AI features into search and other products.
Same thing here with Google. They completely screwed up AI. They invented the transformer. Completely missed LLMs…Doesn’t matter…they’re probably going to become a $5 trillion company now. View on YouTube
Explanation

As of the latest available data (close of trading on 2025-11-28), Alphabet Inc. (GOOGL) has a market capitalization of about $2.94 trillion, well below the $5 trillion level mentioned in the prediction.

However, the prediction is explicitly open-ended: Alphabet/Google will become a "$5 trillion company" at some unspecified point in the future, with no time horizon given. Since there is still ample time for Alphabet’s market cap to reach ~$5T, the prediction cannot yet be judged as definitively right or wrong. We only know that it has not come true as of 2025-11-30, not that it never will.
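As a rough illustration of the remaining gap (the market-cap figures come from the paragraph above; the 15%/yr growth rate is an assumption for the sketch, not a forecast):

```python
import math

# Figures from the text above: ~$2.94T market cap (close of 2025-11-28)
# versus the predicted $5T level. The 15%/yr growth rate is an
# illustrative assumption, not a forecast.
current_cap_t = 2.94
target_cap_t = 5.0

appreciation_needed = target_cap_t / current_cap_t - 1
assumed_annual_growth = 0.15
years_needed = math.log(target_cap_t / current_cap_t) / math.log(1 + assumed_annual_growth)

print(f"Appreciation needed: {appreciation_needed:.0%}")  # ~70%
print(f"Years at 15%/yr: {years_needed:.1f}")             # ~3.8
```

Even under that fairly generous growth assumption, the $5T level sits several years out, which is consistent with treating the open-ended prediction as too early to judge.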

Therefore, the appropriate status is inconclusive (too early) rather than right or wrong.

Sacks @ 01:32:51 (Inconclusive)
tech, ai
Gemini-powered AI Overviews/one-box style results will become the dominant interface for Google Search, effectively displacing the traditional ‘10 blue links’ paradigm as the primary way users consume search results.
It’s very clear now that Gemini powered one box is the future of Google search. People just want the answer. I think that this feature is going to eat the rest of Google’s search. View on YouTube
Explanation

As of November 2025, Gemini-powered AI Overviews are clearly a major part of Google Search, but it’s not yet possible to say they have definitively displaced the traditional “10 blue links” as the primary way users consume results, and the prediction had no explicit time horizon.

Evidence that AI Overviews are becoming central:

  • Google’s own posts and earnings calls describe AI Overviews as a major new layer in Search, rolled out to over 100 countries and reaching well over a billion users monthly by early–mid 2025. (blog.google)
  • Multiple independent studies show AI Overviews now appear in a substantial share of queries. Semrush/Datos found they appeared in ~13% of U.S. desktop searches as of March 2025, up from 6.5% in January. (searchengineland.com) Ahrefs data from June 2025 estimated AI Overviews on about 12.8% of all Google searches, and a later Ahrefs-based analysis in late 2025 put their presence at roughly 20–21% of all queries across 146 million results, especially for long, informational, question-style searches. (ahrefs.com)
  • One smaller 8,000-keyword study (Advanced Web Ranking, summarized by an agency blog) reported AI Overviews on ~60% of U.S. queries and called them the “dominant experience in Google’s interface,” showing how, at least in some sampled keyword sets, the AI box now visually dominates the SERP. (xponent21.com)

Evidence that the “10 blue links” paradigm has not been fully displaced:

  • Even in the more aggressive studies, AI Overviews appear above traditional results; they do not fully remove organic listings from the default “All” tab. For a large fraction—and likely a majority—of total search volume, users still see and can scroll through standard link lists, especially on commercial, navigational, local, and news queries where AI Overviews are relatively rare. (digitalinformationworld.com)
  • Google’s experimental AI-only “AI Mode,” which truly replaces the link list with an AI summary, is opt-in and paywalled for Google One AI Premium subscribers, not the default interface for the mass user base. (reuters.com)
  • A Google public-policy VP explicitly stated in September 2025 that “we’re not going to abandon” the 10-blue-links model and framed AI summaries as an additional, not replacement, experience—even while acknowledging that users are increasingly asking for contextual AI answers. (edexlive.com)
  • Other analyses note that, as of 2025, the main Google Search experience is still a familiar list of links with AI Overviews added on top, and that AI Mode remains a separate, optional tab. (paadiatech.com)

Interpretation:

  • Sacks’ directional claim—that Gemini-powered one-box/AI answers are the future of Google Search and will take a large bite out of traditional link-based behavior—is strongly supported by current trends: AI Overviews are widely deployed, visually dominant on many informational SERPs, and associated with declining organic click-through rates.
  • But his stronger version—that this experience will "eat the rest of Google’s search" and become the clearly dominant, primary interface displacing the 10 blue links paradigm—has not yet fully materialized as of late 2025. Traditional results remain core, AI-only mode is niche, and measured AI Overview coverage (especially in large-scale datasets) is still well below 100% of queries.

Because the prediction is framed about the long-term “future” of Google Search rather than by a specific date, and current evidence shows a strong but incomplete shift rather than a clear, completed displacement, the fairest assessment as of November 30, 2025 is “inconclusive (too early)” rather than definitively right or wrong.

ai, tech, markets
Perplexity.ai, as an independent search/answer engine, will struggle severely or fail competitively now that Google has copied its core product approach; its most viable positive outcome is to be acquired rather than to succeed at large scale as a standalone competitor.
They got caught completely flat footed. And they’ve now, I guess, caught up by copying perplexity and sucks for perplexity. I think they’re kind of screwed now unless they get over an acquisition deal. View on YouTube
Explanation

Evidence since May 2024 shows Perplexity has grown rapidly and remained independent despite Google rolling out AI Overviews, rather than being “kind of screwed” or pushed into an acquisition.

Key points:

  • Strong funding and soaring valuation: Perplexity was valued just over $1 billion in April 2024, but after Google’s AI Overviews launch it went on to raise a $500 million round in June 2025 at a $14 billion valuation, and by September 2025 its valuation was reported around $20 billion—hardly the profile of a company failing competitively. (en.wikipedia.org)

  • Rapid user and usage growth: Analyses report Perplexity handling around 30 million queries per day (roughly 780 million queries in the month of May 2025) with more than 20% month‑over‑month growth and over 22 million active users, giving it about a 6.2% share of the AI search segment. (en.wikipedia.org) This is growth after Google’s AI Overviews became a mainstream feature in search.

  • Still independent; acquisition only speculative: Multiple reports in mid‑2025 describe Apple internally discussing a potential bid for Perplexity, but both Apple and Perplexity declined to confirm any deal, and coverage characterizes talks as early and uncertain. Perplexity’s CEO has explicitly said he does not want to sell to Big Tech and wants the company to remain independent. (macrumors.com) As of late 2025, there is no credible report that Perplexity has been acquired.

  • Behavior of a confident competitor, not a distressed target: Perplexity has launched major new products (e.g., the Comet AI browser) and even made bold proposals to merge with TikTok U.S. and to buy Google Chrome, positioning itself as an acquirer or strategic partner rather than a company forced into a sale. (en.wikipedia.org)

  • Google competition did not wipe it out: Google’s AI Overviews now reaches over 1.5 billion monthly users worldwide, embedding generative AI deeply into Google Search. (mlq.ai) Despite this, Perplexity has continued to raise large rounds, sign major partnerships, and grow usage, contradicting the idea that Google copying its approach would effectively kill its prospects.

Given that (1) Perplexity has not been acquired, (2) it is expanding and highly valued, and (3) it remains a notable independent competitor in AI search, the prediction that it was “screwed” and that acquisition was its main viable outcome has not come true as of November 30, 2025.