Last updated Nov 29, 2025

E167: Nvidia smashes earnings (again), Google's Woke AI disaster, Groq's LPU breakthrough & more

Fri, 23 Feb 2024 19:48:00 +0000
Chamath @ 00:16:33 · Inconclusive
markets, tech
Nvidia's revenue will continue to grow at large scale (i.e., remain at or above its early-2024 run-rate and not materially contract) for approximately the next 2–3 years from February 2024.
I think the revenue scale will continue for like the next 2 or 3 years probably for Nvidia.
Explanation

Chamath’s claim was that Nvidia’s revenue scale would continue for “the next 2 or 3 years” from February 2024, i.e., roughly through February 2026–February 2027, meaning no material contraction from its then-current (very elevated) run‑rate.

As of late 2025, all available data show Nvidia’s revenue has grown dramatically rather than contracted:

  • In Q4 FY2024 (quarter ended January 28, 2024, just before the prediction), Nvidia reported revenue of $22.1B.
  • In Q1 FY2025, revenue rose to $26.0B, up 18% sequentially and 262% year‑over‑year.
  • By Q2 FY2025, revenue hit $30.0B.
  • By Q4 FY2025 (quarter ended January 26, 2025), quarterly revenue climbed further to $39.3B, with full‑year FY2025 revenue of $130.5B, up 114% year‑over‑year. (nvidianews.nvidia.com)
  • In Q1 FY2026, revenue reached $44.06B, another record, despite export‑control headwinds to China. (investopedia.com)
  • In Q3 FY2026, revenue surged again to about $57.0B for the quarter, up 62% year‑over‑year and 22% sequentially, driven mainly by data center/AI sales. (tomshardware.com)

So far, Nvidia’s revenue has not only maintained but far exceeded its early‑2024 run‑rate, validating the direction of Chamath’s thesis up to now.
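As a quick sanity check, the growth figures quoted above can be recomputed directly from the quarterly revenues listed in this section (a minimal sketch; all values are the $B figures cited above, nothing external):

```python
# NVIDIA quarterly revenue in $B, taken from the figures listed above.
quarterly_revenue = {
    "Q4 FY2024": 22.1,   # quarter ended Jan 28, 2024; pre-prediction baseline
    "Q1 FY2025": 26.0,
    "Q2 FY2025": 30.0,
    "Q4 FY2025": 39.3,
    "Q1 FY2026": 44.06,
    "Q3 FY2026": 57.0,
}

baseline = quarterly_revenue["Q4 FY2024"]

# Sequential growth from the baseline into Q1 FY2025 (the text says 18%).
q1_growth = (quarterly_revenue["Q1 FY2025"] / baseline - 1) * 100
print(f"Q1 FY2025 sequential growth: {q1_growth:.0f}%")  # ≈ 18%

# How far the latest cited quarter sits above the early-2024 run-rate.
multiple = quarterly_revenue["Q3 FY2026"] / baseline
print(f"Q3 FY2026 vs. early-2024 baseline: {multiple:.1f}x")  # ≈ 2.6x
```

The roughly 2.6x multiple over the pre-prediction quarter is why the text can say the run-rate was "far exceeded" rather than merely maintained.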

However, the full 2–3 year horizon has not yet elapsed as of November 30, 2025. To conclusively judge the prediction, we’d need to see whether this high revenue scale persists at least through roughly February 2026 (the 2‑year mark) and ideally toward 2027. Since that period isn’t over, the prediction cannot yet be definitively scored as right or wrong, only as on track so far.

Therefore, the correct classification is “inconclusive (too early)”, even though current data are strongly supportive so far.

Sacks @ 00:21:40 · Inconclusive
markets, tech, ai
Consensus Wall Street analyst expectations as of February 2024 are that Nvidia will still hold roughly "60-something percent" of the GPU market used for AI workloads five years later (around 2029).
The Wall Street analysts I've been listening to think that in five years they're still going to have 60-something percent market share, so they're going to have a substantial percentage of this market or this TAM.
Explanation

This prediction is explicitly about Nvidia’s market share in AI GPUs five years after February 2024—i.e., around 2029. As of November 30, 2025, we are only partway through the forecast horizon. There is no data yet on what Nvidia’s AI GPU market share will be in 2029, and current dominance or trends in 2024–2025 cannot definitively confirm or falsify a statement about 2029. Therefore, it is too early to judge whether this prediction is right or wrong.

Sacks @ 00:25:35 · Inconclusive
ai, tech
The AI-driven wave of new applications (both consumer and enterprise) powered by cloud data centers and GPUs will continue to expand for at least the next decade from 2024, i.e., through at least 2034.
I mean, we're just at the beginning of a wave that's probably going to last at least a decade.
Explanation

As of November 30, 2025, we are less than two years into the 10‑year window (2024–2034) specified by the prediction, so it is not yet possible to determine whether the duration claim (“at least a decade”) is correct.

Empirically, the AI‑driven infrastructure and application boom is still very much underway:

  • Global data center GPU and AI infrastructure spending is growing rapidly and is forecast to keep expanding through at least 2030, driven by AI/ML workloads, cloud services, and GPUs/accelerators. (globenewswire.com)
  • Major cloud service providers (AWS, Microsoft, Google, Meta, Oracle, Tencent, Alibaba, Baidu) are dramatically increasing CapEx, with projections of $420B+ in 2025 and $520B in 2026, largely to fund GPU servers and AI data centers. (iconnect007.com)
  • Data center investments overall surged to about $455B in 2024 and are expected to continue rising, with hyperscalers’ CapEx heavily driven by AI infrastructure. (ciodive.com)
  • Server and AI accelerator forecasts show strong growth at least into 2026, with high‑end GPU server volumes and ASIC accelerators both expanding to serve AI applications. (investing.com)

These data points support the near‑term part of the thesis that an AI‑driven GPU/cloud data center wave is underway and still expanding. However, the prediction is about this wave continuing through at least 2034. Long‑range market forecasts out to ~2030 are positive but inherently uncertain and do not conclusively validate a full decade‑long boom.

Because the end date (2034) is still almost nine years away and no clear evidence yet contradicts or confirms sustained growth over that whole period, the correct evaluation at this time is “inconclusive (too early)”.

tech, markets
If Groq ("grok with a Q") is able to produce its LPU chips at scale, its market capitalization will increase substantially from its early-2024 level of roughly "a billion something" (i.e., it will become worth significantly more than a low-single-digit billion dollars over the following years).
there's a lot of market cap for Groq to gain by just being able to produce these things at scale.
Explanation

Evidence strongly indicates that Chamath’s conditional prediction has come true as of late 2025.

  1. Starting point (“a billion something”)

    • Groq was valued at about $1.1B after its 2021 Series C round, which is consistent with Chamath’s early‑2024 description of it being worth “a billion‑something.” (econotimes.com)
    • In August 2024, Groq raised $640M in a Series D round at a $2.8B valuation, confirming that as it entered the mid‑2020s it was still in the low‑single‑digit billions. (reuters.com)
  2. Scaling LPUs (“produce these things at scale”)

    • Following that funding, Groq stated plans to deploy more than 108,000 Language Processing Units (LPUs) by the end of Q1 2025, with chips manufactured by GlobalFoundries—clear evidence of an aggressive scale‑up in production and deployment capacity. (economictimes.indiatimes.com)
    • In February 2025, Saudi Arabia committed $1.5B to procure Groq’s AI chips, expected to drive around $500M in 2025 revenue, further indicating that Groq could actually ship substantial volumes of its LPUs into real deployments. (reuters.com)
    • By late 2025, Groq was planning a major global data‑center expansion specifically to meet surging demand for its high‑speed AI inference hardware, again pointing to practical, at‑scale production and deployment rather than just prototype volumes. (winbuzzer.com)
  3. Resulting market‑cap / valuation increase

    • In August 2024, the company’s valuation rose to $2.8B with the Series D. (reuters.com)
    • In 2025 Groq then pursued additional large funding rounds. Reports in July 2025 described talks to raise funding at about a $6B valuation, nearly doubling the prior year’s $2.8B figure. (reuters.com)
    • In September 2025, Groq closed a $750M round that more than doubled its valuation to $6.9B in just over a year, with multiple outlets confirming the $6.9B figure. (reuters.com)
  4. Assessment against the prediction

    • Condition: If Groq could “produce these things [LPUs] at scale.” — The large planned and actual deployments (100k+ LPUs), major chip procurement deals, and global data‑center build‑out strongly indicate that Groq did advance to large‑scale production and deployment of its LPUs. (economictimes.indiatimes.com)
    • Outcome: From a roughly $1–2B range in the early‑2020s to about $6.9B by September 2025, Groq’s private valuation increased several‑fold, clearly “substantially” above a low‑single‑digit‑billion baseline within the “following years” after the February 2024 podcast.

Given that both the if condition (scaling LPUs) and the then outcome (substantial valuation/market‑cap increase) are satisfied by late 2025, Chamath’s prediction is best judged as right.
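The valuation milestones cited above imply the following multiples (a quick recomputation from the $B figures in this section, nothing external):

```python
# Groq valuation milestones in $B, taken from the rounds cited above.
valuations = [
    ("2021 Series C", 1.1),
    ("Aug 2024 Series D", 2.8),
    ("Sep 2025 round", 6.9),
]

start_label, start = valuations[0]
end_label, end = valuations[-1]

# Overall multiple from the "billion-something" baseline to late 2025.
print(f"{start_label} -> {end_label}: {end / start:.1f}x")  # ≈ 6.3x

# Step-by-step multiples between successive rounds.
for (_, prev), (label, curr) in zip(valuations, valuations[1:]):
    print(f"  {label}: {curr / prev:.2f}x over the prior round")
```

A roughly 6.3x increase from the baseline, with each round more than doubling the prior valuation, is what supports reading "substantially" as satisfied here.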

tech, ai
Following the public backlash to Gemini’s rollout (Feb 2024), Google will not remove ideological bias from the model but will instead adjust it to be less obvious and more subtle, rather than changing the underlying values or objectives.
Now, I think what's going to happen now is in light of this, the reaction to the rollout is, do I think they're going to get rid of bias? No, they're going to make it more subtle... I think they're simply going to dial down the bias to be less obvious.
Explanation

Available evidence suggests Google did not remove Gemini’s underlying ideological lean but instead adjusted how it surfaces, aligning with Sacks’s prediction.

  • In Google’s own post‑mortem on the February 2024 image‑generation fiasco, leadership said the feature had “missed the mark” because tuning for diversity and caution overcorrected, and promised to improve it while explicitly reaffirming goals of broad representation and safety rather than abandoning those goals.(blog.google)
  • When Google re‑enabled people image generation with Imagen 3 in August 2024, it emphasized new mitigations to make outputs more “fair,” training on captions to improve the variety and diversity of concepts and filtering data for “safety” and “fairness,” indicating continued value alignment rather than a neutrality reset.(techcrunch.com)
  • Independent 2025 research measuring political orientation across languages finds Gemini still exhibits a clear liberal/left‑wing tendency, often stronger than ChatGPT’s, meaning the system remains ideologically non‑neutral well after the backlash.(arxiv.org)
  • A separate 2025 case study on Gemini 2.0 Flash reports ongoing systematic moderation and gender/content biases (some reduced, some shifted), consistent with retuning guardrails rather than eliminating value‑laden behavior.(arxiv.org)
  • External testing and user reports describe Gemini as one of the most heavily censored models—frequently refusing politically sensitive prompts and showing asymmetric treatment of different political targets—indicating that overtly skewed outputs were largely replaced by stricter, selectively applied refusals and subtler bias rather than a genuine removal of ideology.(twitter.com)

Because Gemini’s observable behavior after the controversy still reflects a consistent ideological tilt, but with more careful tuning and heavier use of safety filters instead of obviously skewed generations, Sacks’s forecast—that Google would “dial down” rather than remove the bias—is best characterized as right.

General-purpose AI Q&A services (such as Google Gemini) will only be commercially successful if they implement user-level personalization of values and filtering; if they do not, over time a significant share of users will abandon these products after encountering outputs that conflict with their objectives.
I think the only way that a company like Google or others that are trying to create a general purpose, knowledge Q&A type service are going to be successful is if they enable some degree of personalization... If they don't allow this, eventually everyone will come across some search result or some output that they will say doesn't meet their objectives... they're going to say, I don't want to use this product anymore.
Explanation

Major general‑purpose AI Q&A services have become commercially successful without offering the kind of explicit user‑level values and filtering personalization Friedberg described.

  1. Strong commercial success of non‑personalized LLMs

    • OpenAI’s ChatGPT has on the order of 700–800 million weekly active users in 2025, processes billions of messages per day, and is generating revenue at a run rate in the tens of billions of dollars. (business-standard.com)
    • Google’s Gemini app has also grown rapidly, surpassing 450M+ monthly active users by mid‑2025 and then around 650M MAUs by Q3 2025. (timesofindia.indiatimes.com)
      These are clear cases of large‑scale commercial success.
  2. What personalization actually exists

    • ChatGPT offers Custom Instructions and “Memory mode,” letting users set preferences for tone, level of detail, language, and some persistent context across chats—essentially style and task personalization. (tomsguide.com)
    • Google Gemini similarly added Personal Context / memory so it can remember user details and preferences to tailor answers, plus features like Temporary Chats. (theverge.com)
      These are convenience and style controls, not user‑selectable moral or political value systems or per‑user safety/"wokeness" levels.
  3. Core values and filtering remain centrally set, not user‑personalized

    • Anthropic’s Claude is governed by a fixed “Constitutional AI” framework: a single set of principles (e.g., human‑rights‑inspired rules) that define model behavior, explicitly not per‑user moral personalization. (anthropic.com)
    • Mainstream U.S. chatbots—including ChatGPT, Gemini, Copilot, and Claude—still enforce unified safety and content policies around politics, hate speech, and harmful instructions. Regulatory and political pressure (e.g., U.S. state investigations into alleged political bias) has pushed toward tighter, centralized controls, not optional user‑controlled value dials. (theverge.com)
      In other words, the industry did not move to broad user‑level value/filter personalization as a prerequisite for success.
  4. Users do encounter objectionable or misaligned outputs—yet overall usage grows

    • Google’s Gemini image‑generation controversy in February 2024 is a clear example where outputs (racially anachronistic historical images, odd moral answers) offended and angered users; Sundar Pichai publicly acknowledged Gemini’s responses had "offended our users" and showed bias, and Google temporarily paused people‑image generation. (wusf.org)
    • Anecdotally, some users say they “don’t use Gemini anymore” or complain that Gemini or Grok became too censored or too "woke" and claim to have dropped them. (reddit.com)
    • However, despite repeated controversies, both ChatGPT and Gemini show rapidly rising user numbers and revenues, not broad abandonment. (business-standard.com) Surveys of AI attitudes show growing concern and mistrust around AI, but mainly about jobs, misinformation, and general unease—not documented mass churn specifically because outputs clashed with individual values. (theverge.com)
  5. Net assessment vs. the prediction

    • Friedberg’s strong claim was that the only way such Q&A services would be successful is by enabling user‑level value/filter personalization, and that if they didn’t, users encountering misaligned outputs would broadly say "I don’t want to use this product anymore."
    • In reality, by late 2025, the leading systems:
      • Have limited, mostly stylistic/context personalization, not user‑selectable moral or ideological filters.
      • Have faced real backlash over perceived bias and "wokeness," and some individual users have indeed quit.
      • Yet are commercially thriving at very large scale, with usage and revenue still growing.

Because general‑purpose AI Q&A products have become widely and commercially successful without the kind of user‑level value/filter personalization he described, and because there is no evidence that a "significant share" of users has abandoned them for that specific reason, the core conditional claim is contradicted by observed outcomes. Hence the prediction is best classified as wrong, even if it correctly anticipated that alignment controversies would matter for some users and spur demand for more personalization features.

ai, markets
Due to dissatisfaction with heavily filtered/bias-tuned large models, multiple competing AI models will proliferate and open‑source models will gain a significant competitive advantage ('win') in the market over more tightly controlled proprietary models.
And so it is actually an opportunity for many models to proliferate, for open source, to win.
Explanation

The prediction had two parts: (1) many competing models, including open source, would proliferate, and (2) open‑source models would gain a significant competitive advantage and effectively ‘win’ over tightly controlled proprietary models.

The first part is clearly happening: surveys and market analyses show that more than half of organizations now use open‑source AI tools or models alongside proprietary ones, and open‑source ecosystems like Llama, Qwen, Mistral, and DeepSeek are a major driver of innovation and cost savings.(mckinsey.com)

However, the second, stronger claim that open source would win in the market has not materialized by late 2025. A recent study (Nagle & Yue) finds open models account for only about 20% of usage and roughly 4% of AI‑market revenue, despite similar performance and much lower operating cost, indicating that revenue and overall usage are still dominated by closed models.(itpro.com) Enterprise data from Menlo Ventures and related analyses show proprietary providers like Anthropic, OpenAI, and Google now command the vast majority of enterprise LLM workloads, with Anthropic at ~32%, OpenAI at ~25%, and Google at ~20%, while open‑source models (largely Llama and a few others) serve only about 13% of corporate workloads, a share that has recently declined.(finance.yahoo.com) Commentaries on the 2025 enterprise landscape explicitly note that open‑source adoption has plateaued or fallen in production use, with organizations sticking to closed models due to performance gaps, integration complexity, and compliance concerns, even though they recognize open source’s cost advantages.(itpro.com)

In short, while dissatisfaction with heavily filtered models did help spur a diverse, vibrant open‑source ecosystem, the market outcome so far is a hybrid world where closed models dominate spend and mission‑critical workloads; open source has grown and is important, but it has not secured the clear competitive win the prediction envisioned.

ai
In response to criticism of Gemini, Google will not materially change the ideological framing of the model but will instead make the biased behaviors less overt and harder to detect, rather than eliminating them.
what I'm afraid of is that what Google will do is not change the underlying ideology that this AI model has been trained with, but rather they'll dial it down to the point where they're harder to call out. And so the energy will just be more subtle.
Explanation

Evidence cuts both ways and doesn’t let us decisively say whether Google merely hid the prior ideological tilt or substantially changed it.

On the “prediction seems right” side:

  • In Google’s first public response after the Gemini image backlash (Feb 22–23, 2024), leadership said the people‑image feature had been over‑calibrated toward diversity and historical “inaccuracies,” and they paused people‑generation while working on an “improved version.” They explicitly framed the problem as tuning and calibration, not as abandoning the underlying goal of diverse representation.(blog.google)
  • Statements from Gemini product leads stressed that image generation was designed to “reflect our global user base” and that they would continue to do this for general prompts, while just “further tun[ing]” historical contexts. That is very close to what Sacks described: keeping the same normative objective but making the obviously biased behaviors less blatant.(cnbc.com)
  • Early external criticism from former Googlers portrayed Gemini’s controversial outputs as a straightforward reflection of an internally “woke” culture, implying the ideological framing behind the model was intact even after the public outcry.(benzinga.com)

On the “prediction seems wrong” side:

  • A July 2024 comparative preprint found that Gemini’s political answers tended to be more centrist overall than those of several competing models, which suggests at least some deliberate re‑balancing toward the middle rather than simply hiding an unchanged ideology.(preprints.org)
  • Multiple 2025 academic audits measuring political slant across leading LLMs report that Gemini does exhibit liberal / left‑leaning tendencies, but often with a smaller or more mixed bias than some rivals, and that its behavior can be steered closer to neutrality with appropriate prompts.(news.stanford.edu)
  • A 2025 benchmark based on Anthropic’s “evenhandedness” metric rated Gemini 2.5 Pro as the most even‑handed model tested (97% score), which strongly indicates Google has invested in more symmetric political handling rather than merely obscuring one side’s bias.(aionx.co)

Because we cannot see Google’s internal training objectives or value‑alignment instructions, and external tests give mixed but improving pictures (still some left‑leaning tendencies, yet increasingly neutral by formal metrics), we can’t definitively say whether the “underlying ideology” remained the same and was just made subtler, or whether Google materially shifted its alignment toward genuine neutrality. The core of Sacks’s prediction concerns Google’s intentions and ideology, which are not directly observable from public data. That makes the truth of the prediction fundamentally indeterminate, so the fairest verdict is ambiguous.

Chamath @ 01:07:20 · Inconclusive
ai, markets
If Google were to massively license high‑quality training data (on the order of tens of billions of dollars per year) and position itself as the most reliable, truth‑focused AI provider, this strategy could support Google achieving a market valuation on the order of $10 trillion in the future.
We are going to be the truth tellers in this new world of AI. So when everybody else hallucinates, you can trust Google to tell you the truth. That is a $10 trillion company, right?
Explanation

Chamath’s statement is both conditional and open‑ended in time: he says that if Google positions itself as the trusted, non‑hallucinating “truth‑teller” in AI and pays on the order of tens of billions of dollars per year to license high‑quality training data, then that would support Google being a ~$10T company.

As of November 30, 2025:

  1. Alphabet’s market value is far below $10T but rising. Alphabet/Google’s market cap is around $3.8–3.9T and approaching $4T, not $10T. (companiesmarketcap.com)
  2. The specific precondition (massive data‑licensing spend) hasn’t happened. Public reporting does not show Google spending “tens of billions per year” specifically on training data licensing. Instead, the largest "tens of billions" deals around Google are others (e.g., Anthropic) committing tens of billions to buy Google’s TPU compute capacity, not Google paying for data. (techradar.com)
  3. Google’s AI branding is about ‘responsible’ and ‘for everyone,’ not uniquely a ‘truth‑teller’ moat. Google heavily markets “responsible AI” and AI “built for everyone,” but there is no clear, widely recognized shift where Google is uniquely perceived as the truth‑focused, non‑hallucinating AI provider in the sense Chamath described. (toolify.ai)
  4. No time horizon was specified. Chamath did not say Google would reach $10T by 2025 or any concrete date—only that such a strategy could support a $10T valuation at some point in the future.

Because (a) the strategy he described has not actually been implemented as stated, and (b) there is no explicit deadline by which the $10T outcome must occur, we cannot evaluate whether his conditional claim about that strategy’s ultimate valuation impact is correct or incorrect. We can only say Alphabet is currently far short of $10T while following a somewhat different AI strategy.

Therefore, the prediction is inconclusive at this time.

Within a few months after February 2024, the quality of LLMs will improve so quickly that people will look back on early‑2024 LLM behavior (and this discussion) as primitive and notably bad, analogous to how 1996 web search compares to later eras.
I think we're like in the 1996 era of LLMs, and in a couple of months, the pace, things are changing. I think we're all going to kind of be looking at these days and looking at these pods and being like, man, remember how crazy those things were at the beginning and how bad they were?
Explanation

Friedberg expected that within a few months of February 2024, LLM quality would advance so fast that early‑2024 models would soon look primitive and notably bad, like 1996 web search.

In reality, the main releases in the next few months—Google’s Gemini 1.5 (announced Feb 15, 2024), Anthropic’s Claude 3 family on March 4, 2024, and OpenAI’s GPT‑4o on May 13, 2024—were meaningful but incremental improvements over GPT‑4, Claude 2.x, and Gemini 1.0, not an order‑of‑magnitude jump. Claude 3 and later 3.5 Sonnet slightly outperform GPT‑4 on many benchmarks, while GPT‑4o raises MMLU from about 86.5 to 88.7 and improves multimodal and voice capabilities, but these are refinements rather than a shift that makes earlier models look ‘bad’.(en.wikipedia.org) Even by late 2025, industry coverage characterizes ChatGPT’s evolution through GPT‑4o, GPT‑4.5, and GPT‑4.1 as steady, incremental progress with ongoing issues like hallucinations, rather than a single transformative break point.(techradar.com) Older models such as GPT‑4 and Claude 3 remain in active use and are still viewed as strong systems, not as obviously obsolete in the way mid‑1990s web search is compared with later generations.(help.openai.com)

Thus, both the short timeline (‘a couple of months’) and the implied scale of quality change did not materialize as predicted.

tech, ai, markets
If Google continues on its current Gemini/product-ideology path, it will lose a meaningful number of users/consumers to competing AI/search products that offer a better experience; competitive market forces will correct for Google’s missteps rather than allowing it to retain dominance with an inferior product.
I actually think to your point, Google could be going down the wrong path here in a way that they will lose users and lose consumers, and someone else will be there eagerly to sweep up with a better product. I don't think that the market is going to fail us on this one.
Explanation

Available evidence to late 2025 shows no meaningful user exodus from Google Search or its broader consumer ecosystem due to the Gemini/“woke AI” controversies, and Google has maintained – even strengthened – its dominance.

  1. Gemini “woke” backlash happened, but Google course‑corrected rather than simply “continuing on the same path.” After the February 2024 uproar over racially inaccurate historical images and other politically skewed outputs, Google paused Gemini’s ability to generate images of people and publicly acknowledged it had “got it wrong,” promising and implementing fixes. Later controversies (e.g., on how Gemini discussed pedophilia) added criticism but were again met with adjustment and public contrition, not a hardline insistence on the original behavior. (forbes.com) This weakens the antecedent of Friedberg’s conditional claim (“if Google continues on this path”).

  2. Search users have not shifted in large numbers to competitors; Google’s market share remains ~90%. As of May 2025, StatCounter‑summarized data shows Google at roughly 89–90% global search share, with Bing at ~4% and other search engines in the low single digits. This is the first time in over a decade Google even dipped slightly below 90%, but it remains overwhelmingly dominant in both global and U.S. search. (en.wikipedia.org) There is no sign of a mass user migration that would constitute a “meaningful” loss of users in the sense Friedberg implied (i.e., enough to seriously erode dominance).

  3. Google’s search and ads businesses are growing, not shrinking, despite AI competition. Alphabet’s 2024–2025 earnings show double‑digit revenue growth and strong search/ads performance: Q4 2024 and multiple 2025 quarters report Search & Other ad revenue up roughly 10–15% year‑over‑year, with total advertising revenue also growing solidly. (cnbc.com) A Nasdaq analysis of Alphabet’s “$100 billion quarter” notes that Search still holds about 90% of global search traffic and that AI‑driven search features are supporting, not undermining, ad revenue. (nasdaq.com) This is inconsistent with a story where market forces are punishing Google by pulling users and ad dollars away.

  4. Google’s AI features are seeing massive adoption rather than abandonment. AI Overviews in Search reached over 1.5 billion monthly users by Q1 2025, and Google reports hundreds of millions of active Gemini users, with rapid growth. (cnbc.com) That suggests users are engaging with Google’s AI‑infused search rather than fleeing it for rivals.

  5. Rivals like Perplexity and Bing are growing, but at a scale that doesn’t yet threaten Google’s dominance. Perplexity processed about 780 million queries in May 2025 (~30 million per day) and has raised large sums at a high valuation, but that volume is tiny compared to Google’s billions of daily searches. (en.wikipedia.org) Bing’s global share remains only a few percent. (en.wikipedia.org) These gains are meaningful for startups and Microsoft but not yet “market correction” on the scale Friedberg described.

  6. Financial markets view Alphabet as an AI and search winner, not a company being disciplined for an inferior product. Reporting from the FT, WSJ, Reuters, Barron’s, and others in late 2025 describes Alphabet adding over $1 trillion in market cap, nearing a $4 trillion valuation, and being treated as a leading AI beneficiary. Analysts emphasize its reinforced dominance in search and AI and its strong earnings and margins. (ft.com) That outcome is the opposite of what you’d expect if “the market [were] not going to fail us” and were punishing Google for keeping an inferior, ideologically skewed product.

Putting this together: while there was real user backlash and some incremental competitive pressure, there is no strong evidence that Google has lost a meaningful number of users/consumers in a way that seriously erodes its search dominance or that market forces have “corrected” Google’s position. Instead, Google remains the overwhelmingly dominant search provider with growing AI usage and strong financial performance. On the observable 2024–2025 data, Friedberg’s prediction has not materialized and is best scored as wrong rather than merely “too early” or ambiguous.

ai
No single company will be able to achieve a monopoly on training data or information control for AI models through paid content licensing, because the open internet already contains sufficient data to prevent effective monopolization and a resulting 'disinformation age' driven by one actor.
I think the open internet has enough data that there isn't going to be a monopoly on information by someone spending money for content from third parties. I think that there's enough in the open internet to give us all, kind of, you know, the security that we're not going to be monopolized away into some disinformation age.
Explanation

As of November 30, 2025, no single company has achieved anything close to a monopoly on AI training data or on informational control via paid content licensing.

On the data supply side, there is an active, multi‑firm race to license training data: OpenAI, Google, Meta, Microsoft, Apple, Amazon and others all buy datasets from publishers, stock-image libraries, and brokers, rather than one firm dominating the market. Reuters describes a "generative data gold rush" in which all of these companies are licensing archives and specialized content, not just a single player. (reuters.com) OpenAI has signed numerous media deals (Associated Press, Axel Springer, News Corp, Future, Vox, The Atlantic, Axios, etc.), with at least one major deal (News Corp) reported as exclusive for that publisher’s journalism, but most others are explicitly non‑exclusive and confined to particular archives. (apnews.com) This gives OpenAI advantages, but not a monopoly over all high‑value human text.

Crucially, the open internet and open corpora remain very large and widely used. Common Crawl alone provides hundreds of billions of web pages as a free, open corpus used in many LLMs. (commoncrawl.org) The Allen Institute’s Dolma dataset (≈3T tokens) is fully open and powers the OLMo family of models, with both data and code released. (allenai.org) Harvard’s public‑domain books corpus and open datasets like The Pile further expand freely available high‑quality text. (wired.com) Competitive open‑ or semi‑open models such as Meta’s Llama 3 and Databricks’ DBRX, trained heavily on public data, demonstrate that state‑of‑the‑art systems can still be built without relying on one firm’s licensed archives. (techcrunch.com)

Regulators and researchers do worry that large firms could use data and partnership "moats" to entrench power, and that crawler blocking by many news outlets (especially high‑factual ones) may skew future training data. (time.com) But these are warnings about potential concentration, not findings that a single actor already controls information.

Because Friedberg’s claim is forward‑looking ("there isn’t going to be a monopoly" and we won’t be "monopolized away into some disinformation age"), it cannot be definitively verified or falsified after only ~21 months. What can be said is that, so far, the world looks more like his scenario than its opposite: multiple powerful firms, robust open data sources, and no single company with effective monopoly control over training data or information. However, future legal, economic, or regulatory shifts could still change this, so the prediction’s ultimate truth value remains inconclusive (too early) rather than clearly right or wrong.

politicsconflict
If Transnistria (a Russian enclave in Moldova) formally seeks annexation by Russia and/or holds a referendum to join Russia following the February 2024 discussions, Western governments will treat this as evidence of broader Russian expansionist aims and the Ukraine conflict will significantly escalate as a result.
So now Transnistria is right there and could theoretically make a play to try and join Russia. Why do I think this is a big deal? Because if something like this happens, it could really expand the Ukraine war. The West is going to use this as evidence that Putin wants to invade multiple countries and invade, you know, a bunch of countries in Europe. And this could lead to a major escalation in the war.View on YouTube
Explanation

Available reporting shows that after the late‑February 2024 "congress of deputies" in Transnistria, local leaders appealed to Russia for “protection” from Moldovan economic and political pressure, but did not formally request annexation or announce a new referendum to join Russia. Multiple outlets note that such an annexation appeal had been widely rumored in advance, yet officials in Chișinău, Kyiv, and Tiraspol later explicitly denied that any annexation request was made; the adopted resolution instead asked Moscow for measures to protect Transnistria and complained of an “economic war” by Moldova. (regtrends.com)

Western governments did react warily to Transnistria’s February 2024 appeal: U.S. and European officials framed it as part of ongoing Russian pressure and hybrid tactics toward Moldova, not as a new, concrete step toward annexation. (theguardian.com) There is no clear evidence that the episode was treated as proof that Russia intended to “invade multiple countries in Europe,” nor that it triggered a discrete, major escalation in the Ukraine war beyond the conflict’s already‑ongoing dynamics.

Because the key trigger condition in the prediction (Transnistria formally seeking annexation and/or holding a new referendum to join Russia after the February 2024 discussions) has not occurred, the core claim about how the West would respond and how much the Ukraine war would escalate remains untested. We therefore cannot determine whether the predicted reaction would have been right or wrong, and the outcome is best classified as ambiguous.