Sacks @ 00:26:10 (ventureai): Ambiguous
Because of GPT‑4o and similar advances in foundation models, most of the R&D invested over the last 2–5 years into building proprietary conversational/virtual customer support agents (especially work focused on core conversational ability and speed) will be rendered obsolete, and new customer support agent startups built on the latest models from 2024 onward will have a structural advantage over those earlier efforts.
"There are a whole bunch of startups out there that were creating virtual customer support agents... I think their product roadmaps just became obsolete... my guess is that hundreds of millions of dollars of R&D just went out the window. And probably this is the best time to be creating a customer support agent company. If you're doing it two years ago, five years ago, your work is just, like, been obsolete."
Explanation
Evidence since GPT‑4o’s May 2024 launch shows that frontier foundation models have rapidly become the core of many new and existing customer‑support agents, but there isn’t clear, measurable proof that most of the R&D from the previous 2–5 years is now obsolete or that new 2024‑era startups have a durable structural advantage.
What clearly did happen
- GPT‑4o and GPT‑4o mini were explicitly positioned and priced for high‑volume automation use cases like customer‑service agents; OpenAI and industry coverage highlight customer‑support automation as a prime application, with lower costs and better latency than earlier models. (en.wikipedia.org) Case‑study‑style pieces describe GPT‑4o cutting support costs and handling large fractions of customer inquiries for companies like Octopus Energy and Lunar Bank, reinforcing that these models are now central to modern support stacks. (ptolemay.com)
- Major CX platforms such as Zendesk quickly rolled GPT‑4o into production. Zendesk announced in May 2024 that it rolled out GPT‑4o across its AI offering within 24 hours to improve speed and quality for bots and agents. (zendesk.com) By March 2025, Zendesk was piloting a new class of agentic AI agents powered by OpenAI models (including GPT‑4o), explicitly moving away from older intent‑based bots toward generative, reasoning‑based agents. (openai.com) This supports the direction of Sacks’s claim: older, rule/intent‑driven conversational tech is being superseded by LLM‑centric agents.
- Many new tools and indie products for “AI support teams” launched in 2024–2025 are built directly on GPT‑4o, Claude, Gemini, etc., without proprietary base models, and claim high automation rates (e.g., founders reporting roughly 80% of tickets auto‑handled by pipelines that combine GPT‑4o with other LLMs). (reddit.com) This shows that new entrants can reach strong performance quickly by standing on top of the latest foundation models, consistent with the idea that the bar for proprietary conversational tech has been raised.
Why the prediction can’t be cleanly scored as right or wrong
- Zendesk and similar incumbents are not being structurally displaced by 2024‑born startups; they are themselves integrating GPT‑4o and newer models, leveraging extensive pre‑existing R&D in routing, workflows, data infrastructure, UI, and distribution. Their own case study emphasizes that they can benchmark and deploy new models (including OpenAI’s reasoning models) in under 24 hours, specifically to reuse their existing platform rather than scrap it. (openai.com) That’s very different from “your work is obsolete” in the sense of wasted R&D.
- Industry‑level data still show that end‑to‑end AI agents handle a minority of total customer‑service volume; human agents and traditional systems remain the backbone. For example, Gartner’s projection that only about 10% of customer‑service interactions will be fully automated by AI by 2026 implies that earlier investments in knowledge bases, workflows, and tooling continue to matter significantly alongside new LLM agents. (linkedin.com)
- There is no public, quantitative evidence that “hundreds of millions of dollars” of R&D from 2019–2022 virtual‑agent startups has literally “gone out the window.” Many such companies have swapped or layered in GPT‑4/4o (or competitors) under the hood rather than discarding their products entirely, and public announcements tend to frame this as an upgrade, not a write‑off.
- Some large enterprises have moved away from proprietary access to OpenAI’s models toward custom or open‑source stacks for cost, latency, or privacy reasons. A prominent example is AT&T, which initially used ChatGPT/GPT‑4 to analyze customer‑service call summaries, then replaced it with a hybrid system of distilled and open‑source models (including Llama 70B), cutting costs to about 35% of the ChatGPT‑based solution while retaining most of the accuracy. (businessinsider.com) This suggests that, even post‑GPT‑4o, there is still strategic value in investing in custom modeling rather than that R&D being per se obsolete.
Net assessment
- Directionally, Sacks’s intuition—that frontier models like GPT‑4o would make it much easier for new teams to build high‑quality, fast conversational agents and would undermine older, intent‑based/hand‑rolled conversational tech—has been borne out in practice.
- However, the strong form of the claim (“most” 2–5‑year R&D in virtual agents is now obsolete; new GPT‑4o‑era startups have a structural advantage over earlier efforts) is not something we can confirm or falsify from available data: incumbents have generally adapted by integrating the same models, and there’s no systematic evidence that earlier R&D has been rendered worthless or that pre‑2022 startups are being displaced en masse.
Because the observable facts support the general trend but do not let us decisively judge the extent (“most,” “hundreds of millions… out the window,” “structural advantage”), the prediction is best classified as ambiguous rather than clearly right or wrong.