Last updated Nov 29, 2025
Prediction
Chamath @ 01:02:17
Status: Inconclusive
Tags: ai, politics
Over time (within the next several years after February 2023), there will be multiple competing versions (on the order of 3–10) of major AI language tools, some of which will explicitly expose filter settings that reveal and allow users to choose among different political leanings of their outputs.
That is what you'll have because this is, I think we mentioned this before, but what will make all of these systems unique is what we call reinforcement learning... I think that eventually, Jason mentioned this before, but there'll be 3 or 4 or 5 or 10 competing versions of all of these tools. And some of these filters will actually show what the political leanings are, so that you may want to filter content out. That'll be your decision. I think all of these things will happen over time.
Explanation

As of late November 2025, the prediction is partially realized: one component is clearly true, while the other has not materialized, so it is neither confirmed nor falsified.

1. Multiple competing major LLM tools (3–10+) – already true
Since early 2023, the ecosystem has evolved to include several widely used, broadly capable language tools beyond OpenAI’s ChatGPT, such as Anthropic’s Claude, Google’s Gemini, xAI’s Grok, China’s DeepSeek chatbot, Proton’s Lumo, and others. These are distinct, general-purpose assistants offered by different companies and are direct competitors in the same market space. (en.wikipedia.org)
This clearly satisfies the “3–10 competing versions” part of Chamath’s statement.

2. User-facing filters that explicitly expose and let you choose political leanings – not (yet) realized
What we do see:

  • Mainstream vendors emphasize reducing political bias and steering models toward neutrality, not letting users dial a system toward the left or right. OpenAI and Anthropic publicly describe efforts and tools to measure and reduce political bias in ChatGPT and Claude, aiming for evenhandedness rather than user-selectable ideology. (theverge.com)
  • Governments (e.g., via Trump’s 2025 “woke AI” executive order) are pressuring suppliers of AI used by the US government to be ideologically neutral, again pushing away from openly configurable partisan modes. (washingtonpost.com)
  • There are ideologically branded separate chatbots such as TUSK’s Gippr AI, marketed explicitly as a conservative/right‑leaning alternative to ChatGPT, but they present a fixed perspective rather than a user-adjustable slider or filter in a mainstream tool. (htv10.tv)
  • External researchers routinely measure and publish the political orientation of LLMs (e.g., David Rozado’s work on “The political preferences of LLMs”), but that information lives in papers and articles, not as an in-product control for end users; a minimal sketch of how such an audit works follows this list. (cybernews.com)

What we do not see, in any major general-purpose assistant as of late 2025, is a built-in settings UI that clearly labels the model’s political stance and lets ordinary users choose “conservative / centrist / liberal” (or similar) output modes for the same base system. Instead, vendors explicitly avoid features that look like configurable political persuasion, due both to internal policy and regulatory scrutiny.

3. Timing: why this is “inconclusive (too early)” rather than “wrong”
Chamath’s wording was that “over time” and within the next “several years” after early 2023, there would be many competing tools and that “some of these filters will actually show what the political leanings are,” giving users that choice. We are only about 2.75 years past February 2023, which is within a reasonable reading of “several years” (often 3–5+).

  • The first part (many competing LLMs) is clearly correct already.
  • The second part (explicit, user-configurable political-lean filters in major tools) has not materialized yet, but given the loose timeframe (“over time”, “several years”) it is premature to say it definitively will not happen.

Because a core component of the prediction (exposed political-leaning filters) has not yet occurred, but the stated horizon has not clearly elapsed, the fairest scoring as of November 2025 is “inconclusive (too early)” rather than fully “right” or “wrong.”