Last updated Nov 29, 2025
Tags: ai, politics, government
Over the coming years, AI systems will undergo a cycle similar to past media (print, radio, TV, internet) where: (1) there will be government censorship battles, stakeholder fights, and widespread claims of bias and untruth; and (2) this controversy phase will eventually subside, leading to a stable ecosystem of differentiated AI systems serving different audience preferences.
And just because we're calling it AI doesn't mean there's necessarily absolute truth in the world, as we all know, and there will be different opinions and different manifestations and different textures and colors coming out of these different AI systems that will give different consumers, different users, different audiences what they want. And those audiences will choose what they want. And in the intervening period, there will be censorship battles with government agencies. There will be stakeholders fighting. There will be claims of untruth. There will be claims of bias. You know, I think that all of this is very likely to pass in the same way that it has in the past, with just a very different manifestation of a new type of media.
Explanation

As of late November 2025, only the first part of Friedberg’s prediction can be meaningfully evaluated.

  1. Censorship battles, stakeholder fights, and bias/untruth claims are clearly happening.

    • Governments have moved to regulate and, in some cases, directly constrain AI outputs. The Cyberspace Administration of China requires generative AI services to produce content that upholds Communist Party ideology and to refuse politically sensitive topics, effectively enforcing systemic censorship over AI systems. (en.wikipedia.org)
    • The EU’s AI Act, in force since August 2024, includes bans and strict rules around manipulative uses of AI, deepfakes, and AI systems influencing democratic processes, triggering intense political debate over free expression and democratic risks. (en.wikipedia.org)
    • In the U.S., Biden’s 2023 Executive Order on “Safe, Secure, and Trustworthy” AI set a detailed federal governance agenda and was then rescinded by President Trump in January 2025, illustrating ongoing political conflict over how tightly AI should be constrained. (en.wikipedia.org)
    • Major AI companies (OpenAI, Anthropic, Meta, Midjourney, etc.) are embroiled in high‑profile lawsuits over training data, copyright, and alleged harms, reflecting intense stakeholder battles between tech firms, media, rights holders, and users. (businessinsider.com)
    • Studies and commentary have documented political and ideological bias in leading chatbots (e.g., ChatGPT vs. Gemini in the 2024 EU elections), and these findings feed public narratives that AI systems are “woke,” “propagandistic,” or otherwise untrustworthy. (arxiv.org)
      Collectively, this strongly supports his near‑term claim that AI would become a battleground over censorship, bias, and truth claims.
  2. Differentiated AI systems for different audiences are clearly emerging.

    • xAI’s Grok is explicitly marketed as not “woke” and has been tuned over 2023–2025 to give more right‑leaning, politically incorrect, and Musk‑aligned responses, making it a clear example of an AI system positioned for a particular ideological audience. (en.wikipedia.org)
    • Research documents that models like ChatGPT and Gemini respond differently to political questions, and separate projects market themselves as conservative or alternative chatbots, indicating a move toward audience‑specific AI products. (arxiv.org)
    • In China, domestic chatbots are constrained by state censorship rules and produce government‑aligned narratives, which is another form of audience‑specific (national/ideological) AI. (en.wikipedia.org)
      This supports his idea that there will be “different opinions and different manifestations” across AI systems serving different user groups.
  3. But the predicted end state, a stable and relatively controversy‑free ecosystem, has not yet arrived, and the stated time horizon ("over the coming years") is open‑ended.

    • The EU AI Act is still being fought over and even partially rolled back or delayed under heavy industry and political pressure, showing that regulatory consensus is not settled. (theverge.com)
    • Major lawsuits against AI companies continue to be filed and only some have reached partial resolution, while new settlements and trials (e.g., Anthropic’s large settlement with authors) are still reshaping norms. (en.wikipedia.org)
    • Political conflict over AI governance in the U.S. has intensified rather than cooled (e.g., the rescinding of EO 14110, ongoing legislative proposals such as the Generative AI Copyright Disclosure Act). (en.wikipedia.org)
    • Ongoing scandals over deepfakes and election interference in 2024–2025 keep AI at the center of public controversy rather than in a stable, background role. (isdglobal.org)

Friedberg’s prediction explicitly describes a cycle in which the current phase of battles and bias accusations will "very likely" pass and give way to a more stable, mature, media‑like equilibrium. Because we are only about 2.8 years out from the February 2023 prediction, with controversies still escalating, it is too early to say whether the full cycle will complete as he described.

Conclusion:

  • Part (1) of the prediction — that AI would spark censorship battles, stakeholder fights, and claims of bias/untruth — is strongly supported by events since 2023.
  • Part (2) — that these controversies will subside into a stable ecosystem of differentiated AI systems — cannot yet be confirmed or falsified. The regulatory, legal, and cultural environment around AI is still in flux.

Given that the core forecast concerns a multi‑year cycle and its eventual resolution, the fairest assessment as of November 2025 is “inconclusive (too early)” rather than right or wrong.