Last updated Nov 29, 2025
Prediction
Friedberg
General-purpose AI Q&A services (such as Google Gemini) will only be commercially successful if they implement user-level personalization of values and filtering; if they do not, over time a significant share of users will abandon these products after encountering outputs that conflict with their objectives.
I think the only way that a company like Google or others that are trying to create a general purpose, knowledge Q&A type service are going to be successful is if they enable some degree of personalization... If they don't allow this, eventually everyone will come across some search result or some output that they will say doesn't meet their objectives... they're going to say, I don't want to use this product anymore.
Explanation

Major general‑purpose AI Q&A services have become commercially successful without offering the explicit user‑level personalization of values and filtering that Friedberg described.

  1. Strong commercial success without value‑level personalization

    • OpenAI’s ChatGPT reported on the order of 700–800 million weekly active users in 2025, processes billions of messages per day, and is generating revenue at an annualized run rate in the tens of billions of dollars. (business-standard.com)
    • Google’s Gemini app has also grown rapidly, surpassing 450 million monthly active users by mid‑2025 and reaching roughly 650 million by Q3 2025. (timesofindia.indiatimes.com)
      These are clear cases of large‑scale commercial success.
  2. What personalization actually exists

    • ChatGPT offers Custom Instructions and a "Memory" feature, letting users set preferences for tone, level of detail, and language, and carry some persistent context across chats; this is essentially style and task personalization. (tomsguide.com)
    • Google Gemini has similarly added Personal Context (memory) so it can remember user details and preferences to tailor answers, along with features like Temporary Chats. (theverge.com)
      These are convenience and style controls, not user‑selectable moral or political value systems or per‑user safety/"wokeness" levels.
  3. Core values and filtering remain centrally set, not user‑personalized

    • Anthropic’s Claude is governed by a fixed “Constitutional AI” framework: a single set of principles (e.g., human‑rights‑inspired rules) that defines model behavior and is explicitly not per‑user moral personalization. (anthropic.com)
    • Mainstream U.S. chatbots—including ChatGPT, Gemini, Copilot, and Claude—still enforce unified safety and content policies around politics, hate speech, and harmful instructions. Regulatory and political pressure (e.g., U.S. state investigations into alleged political bias) has pushed toward tighter, centralized controls, not optional user‑controlled value dials. (theverge.com)
      In other words, broad user‑level value/filter personalization has not been a prerequisite for these products’ success.
  4. Users do encounter objectionable or misaligned outputs, yet overall usage grows

    • Google’s Gemini image‑generation controversy in February 2024 is a clear example where outputs (racially anachronistic historical images, odd moral answers) offended and angered users; Sundar Pichai publicly acknowledged Gemini’s responses had "offended our users" and showed bias, and Google temporarily paused people‑image generation. (wusf.org)
    • Anecdotally, some users say they “don’t use Gemini anymore,” or complain that Gemini or Grok has become too censored or too "woke" and claim to have dropped those products. (reddit.com)
    • However, despite repeated controversies, both ChatGPT and Gemini show rapidly rising user numbers and revenues, not broad abandonment. (business-standard.com) Surveys of AI attitudes show growing concern and mistrust, mainly about jobs, misinformation, and general unease; they do not document mass churn driven specifically by outputs clashing with individual values. (theverge.com)
  5. Net assessment vs. the prediction

    • Friedberg’s strong claim was that the only way such Q&A services would succeed was by enabling user‑level value/filter personalization, and that if they did not, users encountering misaligned outputs would broadly say, "I don’t want to use this product anymore."
    • In reality, by late 2025, the leading systems:
      • Have limited, mostly stylistic and contextual personalization, not user‑selectable moral or ideological filters.
      • Have faced real backlash over perceived bias and "wokeness," and some individual users have indeed quit.
      • Yet are commercially thriving at very large scale, with usage and revenue still growing.

Because general‑purpose AI Q&A products have become widely adopted and commercially successful without the kind of user‑level value/filter personalization he described, and because there is no evidence that a "significant share" of users has abandoned them for that specific reason, the core conditional claim is contradicted by observed outcomes. Hence the prediction is best classified as wrong, even though it correctly anticipated that alignment controversies would matter for some users and would spur demand for more personalization features.