Last updated Nov 29, 2025

E116: Toxic out-of-control trains, regulators, and AI

Fri, 17 Feb 2023 08:55:00 +0000
The approximately $350,000 donated to Beast Philanthropy from this poker game will be used to provide food assistance to on the order of tens of thousands of people (i.e., at least 10,000 individuals) in the future.
So when people have food insecurity, these guys provide them food. And so this will help feed, I don't know, tens of thousands of people, I guess.
View on YouTube
Explanation

Public information confirms that (a) the episode’s charity plug for the poker game directed listeners to donate to Beast Philanthropy, alongside the Humane Society’s “Going Big for Beagles” campaign, but does not state the exact total raised or how that specific pot of money was used, and (b) Beast Philanthropy is a food‑focused nonprofit that channels donations into fighting food insecurity and related assistance.

Evidence:

  • The episode description for E116 lists Beast Philanthropy’s donation link as one of the two target charities of the poker event but gives no breakdown of the poker proceeds or their later impact. (podcasts.apple.com)
  • Beast Philanthropy (MrCharity, Inc.) reports multi‑million‑dollar annual revenue and expenses as a 501(c)(3) food‑bank charity, confirming it is a functioning, sizeable organization rather than a shell; however, its public financials and Form 990 filings do not trace specific incoming donations (like a single $350k gift) to specific outcome numbers. (projects.propublica.org)
  • The Beast Philanthropy homepage and donate page emphasize that donations are used to “fight food insecurity and provide food and assistance to people and communities in need,” and they publish aggregate outputs: over 55.8 million pounds of food, 46.5 million meals delivered, and 8.1 million individuals fed, demonstrating large‑scale food assistance. (beastphilanthropy.org)
  • At the same time, Beast Philanthropy also funds other non‑food projects (clean‑water wells, school tech, medical procedures, etc.) through the same nonprofit, so unrestricted donations are not guaranteed to be allocated exclusively to food. (en.wikipedia.org)
  • Typical U.S. food‑bank partners like Feeding America state that each donated dollar can provide on the order of 10 meals, implying that a $350,000 contribution to an efficient food‑distribution charity could plausibly translate into millions of meals and thus tens of thousands of people served, but this is an external benchmark, not Beast‑specific reporting. (nasdaq.com)

Putting this together: it is highly plausible on efficiency and scale grounds that $350,000 routed through Beast Philanthropy ultimately supported food assistance for at least tens of thousands of people. However, the charity’s public reports do not attribute outputs to that specific poker donation, nor do they publish a clear, organization‑specific “meals (or people) per donated dollar” figure that would let us rigorously verify the claim. Because enough time has passed but the impact of that particular donation cannot be isolated from available data, the prediction cannot be decisively marked correct or incorrect and is best classified as ambiguous.
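
As a rough sanity check on the scale argument above, the sketch below works through the implied arithmetic using the Feeding America benchmark cited in the evidence (~10 meals per donated dollar) plus an assumed, illustrative meals-per-person figure; neither number is Beast Philanthropy–specific reporting.

```python
# Back-of-the-envelope check of the "tens of thousands of people" claim.
# Assumptions (illustrative, not Beast Philanthropy-specific figures):
#   MEALS_PER_DOLLAR: the external Feeding America benchmark cited above (~10 meals/$).
#   MEALS_PER_PERSON: a guessed amount of food assistance delivered per person.

DONATION_USD = 350_000
MEALS_PER_DOLLAR = 10        # external benchmark, not Beast-specific
MEALS_PER_PERSON = 30        # hypothetical: roughly a month of daily meals per person

implied_meals = DONATION_USD * MEALS_PER_DOLLAR     # 3,500,000 meals
implied_people = implied_meals // MEALS_PER_PERSON  # 116,666 people

print(f"Implied meals:  {implied_meals:,}")
print(f"Implied people: {implied_people:,}")

# Even under much more conservative assumptions (2 meals per dollar,
# 50 meals per person), the implied reach is 14,000 people, which still
# clears the 10,000-person threshold in the prediction.
```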

Sacks @ 00:52:33
Inconclusive
health
As a result of the East Palestine, Ohio vinyl chloride train derailment and controlled burn, there will be a noticeably elevated incidence of cancer among the exposed population that becomes evident roughly 20 years after the incident (around 2043), implying that many residents will develop cancer attributable to this event.
I guess we'll know in 20 years when a lot of people come down with cancer.
View on YouTube
Explanation

The prediction is about long‑term cancer incidence (~20 years after the February 2023 East Palestine, Ohio derailment). As of the current date (November 30, 2025), just under three years have passed since the incident, far short of the 20‑year timeframe implied by the quote ("we'll know in 20 years when a lot of people come down with cancer").

Epidemiological studies of cancer risk from environmental exposures (like vinyl chloride) typically require long latency periods (often decades) before a statistically clear increase in cancer incidence can be attributed to a specific event in a population. Current public reporting and early health assessments after the February 3, 2023 Norfolk Southern derailment and controlled burn of vinyl chloride in East Palestine focus on short‑term symptoms (respiratory irritation, headaches, rashes) and environmental contamination monitoring, not on long‑term cancer outcomes, which cannot yet be observed.

Because the prediction explicitly concerns what will be evident around 2043, and we are still in 2025, it is too early to determine whether there will be a "noticeably elevated" cancer rate attributable to this event. Therefore, the correctness of the prediction cannot yet be evaluated.

politics, government, ai
Within approximately 3–5 years from February 17, 2023 (i.e., by between early 2026 and early 2028), AI ethics and bias will become an active agenda item for politicians and be the subject of government attention or action.
Here's an agenda item that politicians haven't gotten to yet, but I'm sure in three, four, five years they will. AI Ethics and Bias.
View on YouTube
Explanation

Jason predicted that within roughly 3–5 years from February 17, 2023, AI ethics and bias would become an active agenda item for politicians and the subject of government attention or action.

By late 2025, this has clearly already happened at multiple levels of government:

  • In October 2022 (even before the prediction), the White House released the Blueprint for an AI Bill of Rights, centered on protections against algorithmic discrimination, data privacy, and other civil-rights-focused AI principles, explicitly intended to guide policy and regulation. (bidenwhitehouse.archives.gov)
  • On October 30, 2023, President Biden signed Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of AI, directing federal agencies to address AI risks including civil rights harms, discrimination, and consumer protection, making AI ethics and bias a core federal policy agenda. (en.wikipedia.org)
  • The U.S. AI Safety Institute was created within NIST in November 2023, with an AI Safety Institute Consortium established in 2024, to operationalize safe and responsible AI practices for government and industry. (en.wikipedia.org)
  • The EU’s 2024 Artificial Intelligence Act establishes a comprehensive regulatory framework for AI, explicitly tied to protecting fundamental rights and addressing high‑risk and biased AI systems, showing another major political jurisdiction legislating around AI ethics and algorithmic bias. (en.wikipedia.org)
  • In the U.S., California’s AI bills (the 2024 Safe and Secure Innovation for Frontier AI Models Act, SB 1047, which was ultimately vetoed, and the 2025 Transparency in Frontier Artificial Intelligence Act, SB‑53, which was signed into law) focus on AI safety, catastrophic risk, and accountability, reflecting sustained legislative attention to responsible AI. (en.wikipedia.org)
  • Under President Trump in 2025, Executive Order 14179 and the Preventing Woke AI in the Federal Government order explicitly target perceived ideological and DEI‑related bias in AI systems, making AI bias a prominent partisan and policy issue. (en.wikipedia.org)

These actions involve presidents, national legislatures, and state lawmakers, and they squarely concern AI ethics, discrimination, and bias. Because such government attention and concrete policy measures arrived well within (and in fact earlier than) the 3–5‑year window Jason named, the prediction that AI ethics and bias would become an active political agenda item is best scored as right.

tech, ai
Google's Bard large language model product will include a feature that provides source citations for its answers.
Some of them do neeva notably is doing citations. And I think, I think Google's Bard is going to do citations as well.
View on YouTube
Explanation

Evidence shows that Google’s Bard (now Gemini) does include a built‑in mechanism for citing sources in its answers, even if it does so inconsistently.

  • Shortly after Bard’s rollout in March 2023, Google’s own FAQ (quoted by Search Engine Land) stated that “If Bard does directly quote at length from a webpage, it cites that page.” This makes clear that Bard has an explicit feature to attach citations when it quotes web content. (searchengineland.com)
  • Coverage of Bard’s 2023 updates notes that a citation feature debuted around May 2023, where Bard would show sources when its generated text closely matched online content, reinforcing that citations are an intentional product capability rather than an accidental side effect. (thelatch.com.au)
  • Later reviews of Bard describe “Citation of Sources” as one of its pros (while criticizing that it is inconsistent), confirming that the product, as used in practice, can and does provide source links in its answers. (ai-productreviews.com)

Jason’s prediction was that Google’s Bard LLM product “is going to do citations as well”—i.e., that it would include a citation feature. Since Bard indeed has a built‑in mechanism for citing sources in some responses, the prediction is judged right, even though Bard does not always or prominently show citations for every answer.

Chamath @ 01:02:17
Inconclusive
ai, politics
Over time (within the next several years after February 2023), there will be multiple competing versions (on the order of 3–10) of major AI language tools, some of which will explicitly expose filter settings that reveal and allow users to choose among different political leanings of their outputs.
That is what you'll have because this is I think we mentioned this before, but what will make all of these systems unique is what we call reinforcement learning... I think that eventually Jason mentioned this before, but there'll be 3 or 4 or 5 or 10 ten competing versions of all of these tools. And some of these filters will actually show what the political leanings are, so that you may want to filter content out. That'll be your decision. I think all of these things will happen over time.
View on YouTube
Explanation

As of November 30, 2025, the prediction is partially realized but not clearly confirmed or falsified.

1. Multiple competing major LLM tools (3–10+) – already true
Since early 2023, the ecosystem has evolved to include several widely used, broadly capable language tools beyond OpenAI’s ChatGPT, such as Anthropic’s Claude, Google’s Gemini, xAI’s Grok, China’s DeepSeek chatbot, Proton’s Lumo, and others. These are distinct, general-purpose assistants offered by different companies and are direct competitors in the same market space. (en.wikipedia.org)
This clearly satisfies the “3–10 competing versions” part of Chamath’s statement.

2. User-facing filters that explicitly expose and let you choose political leanings – not (yet) realized
What we do see:

  • Mainstream vendors emphasize reducing political bias and pushing toward neutrality, not letting users dial the system toward left or right. OpenAI and Anthropic publicly describe efforts and tools to measure and reduce political bias in ChatGPT and Claude, aiming for evenhandedness rather than user-selectable ideology. (theverge.com)
  • Governments (e.g., via Trump’s 2025 “woke AI” executive order) are pressuring suppliers of AI used by the US government to be ideologically neutral, again pushing away from openly configurable partisan modes. (washingtonpost.com)
  • There are ideologically branded separate chatbots such as TUSK’s Gippr AI, marketed explicitly as a conservative/right‑leaning alternative to ChatGPT, but they present a fixed perspective rather than a user-adjustable slider or filter in a mainstream tool. (htv10.tv)
  • External researchers routinely measure and publish the political orientation of LLMs (e.g., David Rozado’s work on “The political preferences of LLMs”), but that information lives in papers and articles, not as an in-product control for end users. (cybernews.com)

What we do not see, in any major general-purpose assistant as of late 2025, is a built-in settings UI that clearly labels the model’s political stance and lets ordinary users choose “conservative / centrist / liberal” (or similar) output modes for the same base system. Instead, vendors explicitly avoid features that look like configurable political persuasion, due both to internal policy and regulatory scrutiny.

3. Timing: why this is “inconclusive (too early)” rather than “wrong”
Chamath’s wording was that “over time” and within the next “several years” after early 2023, there would be many competing tools and that “some of these filters will actually show what the political leanings are,” giving users that choice. We are only about 2.75 years past February 2023, which is within a reasonable reading of “several years” (often 3–5+).

  • The first part (many competing LLMs) is clearly correct already.
  • The second part (explicit, user-configurable political-lean filters in major tools) has not materialized yet, but given the loose timeframe (“over time”, “several years”) it is premature to say it definitively will not happen.

Because a core component of the prediction (exposed political-leaning filters) has not yet occurred, but the stated horizon has not clearly elapsed, the fairest scoring as of November 2025 is “inconclusive (too early)” rather than fully “right” or “wrong.”

ai, politics, government
Over the coming years, AI systems will undergo a cycle similar to past media (print, radio, TV, internet) where: (1) there will be government censorship battles, stakeholder fights, and widespread claims of bias and untruth; and (2) this controversy phase will eventually subside, leading to a stable ecosystem of differentiated AI systems serving different audience preferences.
And just because we're calling it AI doesn't mean there's necessarily absolute truth in the world, as we all know, and that there will be different opinions and different manifestations and different textures and colors coming out of these different AI systems that will give different consumers, different users, different audiences, what they want. And those audiences will choose what they want. And in the intervening period, there will be censorship battles with government agencies. There will be stakeholders fighting. There will be claims of untruth. There will be names of claims of bias. You know, I think that all of this is is very likely to pass in the same way that it has in the past, with just a very different manifestation of a new type of media.
View on YouTube
Explanation

As of November 30, 2025, only the first part of Friedberg’s prediction can be meaningfully evaluated.

  1. Censorship battles, stakeholder fights, and bias/untruth claims are clearly happening.

    • Governments have moved to regulate and, in some cases, directly constrain AI outputs. China’s Cyberspace Administration requires generative AI content to uphold Communist Party ideology and refuse politically sensitive topics, effectively enforcing systemic censorship over AI systems. (en.wikipedia.org)
    • The EU’s AI Act, in force since August 2024, includes bans and strict rules around manipulative uses of AI, deepfakes, and AI systems influencing democratic processes, triggering intense political debate over free expression and democratic risks. (en.wikipedia.org)
    • In the U.S., Biden’s 2023 Executive Order on “Safe, Secure, and Trustworthy” AI set a detailed federal governance agenda and was then rescinded by President Trump in January 2025, illustrating ongoing political conflict over how tightly AI should be constrained. (en.wikipedia.org)
    • Major AI companies (OpenAI, Anthropic, Meta, Midjourney, etc.) are embroiled in high‑profile lawsuits over training data, copyright, and alleged harms, reflecting intense stakeholder battles between tech firms, media, rights holders, and users. (businessinsider.com)
    • Studies and commentary have documented political and ideological bias in leading chatbots (e.g., ChatGPT vs. Gemini in the 2024 EU elections), and these findings feed public narratives that AI systems are “woke,” “propagandistic,” or otherwise untrustworthy. (arxiv.org)
      Collectively, this strongly supports his near‑term claim that AI would become a battleground over censorship, bias, and truth claims.
  2. Differentiated AI systems for different audiences are clearly emerging.

    • xAI’s Grok is explicitly marketed as not “woke” and has been tuned over 2023–2025 to give more right‑leaning, politically incorrect, and Musk‑aligned responses, making it a clear example of an AI system positioned for a particular ideological audience. (en.wikipedia.org)
    • Research documents that models like ChatGPT and Gemini respond differently to political questions, and separate projects market themselves as conservative or alternative chatbots, indicating a move toward audience‑specific AI products. (arxiv.org)
    • In China, domestic chatbots are constrained by state censorship rules and produce government‑aligned narratives, which is another form of audience‑specific (national/ideological) AI. (en.wikipedia.org)
      This supports his idea that there will be “different opinions and different manifestations” across AI systems serving different user groups.
  3. But the end‑state of a stable, relatively controversy‑free ecosystem has not yet arrived, and the time horizon is "over the coming years."

    • The EU AI Act is still being fought over and even partially rolled back or delayed under heavy industry and political pressure, showing that regulatory consensus is not settled. (theverge.com)
    • Major lawsuits against AI companies continue to be filed and only some have reached partial resolution, while new settlements and trials (e.g., Anthropic’s large settlement with authors) are still reshaping norms. (en.wikipedia.org)
    • Political conflict over AI governance in the U.S. has intensified rather than cooled (e.g., the rescinding of EO 14110, ongoing legislative proposals such as the Generative AI Copyright Disclosure Act). (en.wikipedia.org)
    • Ongoing scandals over deepfakes and election interference in 2024–2025 keep AI at the center of public controversy rather than in a stable, background role. (isdglobal.org)

Because Friedberg’s prediction explicitly describes a cycle in which the current phase of battles and bias accusations will "very likely" pass and give way to a more stable, mature media‑like equilibrium, and because we are only about 2.8 years out from the February 2023 prediction with controversies still escalating, it is too early to say whether the full cycle will complete as he described.

Conclusion:

  • Part (1) of the prediction — that AI would spark censorship battles, stakeholder fights, and claims of bias/untruth — is strongly supported by events since 2023.
  • Part (2) — that these controversies will subside into a stable ecosystem of differentiated AI systems — cannot yet be confirmed or falsified. The regulatory, legal, and cultural environment around AI is still in flux.

Given that the core forecast concerns a multi‑year cycle and its eventual resolution, the fairest assessment as of November 2025 is “inconclusive (too early)” rather than right or wrong.

Over time, and faster than most observers in early 2023 expect, the capability to run large language models and build data-scraping–based alternatives will commoditize, enabling new competitors to emerge against the early incumbents (e.g., OpenAI, Google, Microsoft).
What I'm saying, what I'm arguing is that over time, the ability to run llms and the ability to scan, to scrape data, to generate a novel, you know, alternative to the ones that you guys are describing here is going to emerge faster than we realize there will be.
View on YouTube
Explanation

Friedberg predicted that the ability to run LLMs and build web‑data–based alternatives would commoditize faster than most early‑2023 observers expected, enabling many new competitors beyond OpenAI/Google/Microsoft. Evidence through late 2025 strongly supports this.

  1. Rapid commoditization of LLM capabilities
    • Hugging Face’s own 2023 retrospective calls 2023 the “year of open LLMs,” documenting a wave of open‑weight releases (LLaMA, Llama 2, StableLM, MPT, Falcon, Qwen, Mistral, Yi, etc.) and emphasizing that smaller models plus quantization made LLMs “really accessible to many more people.” (huggingface.co)
    • A July 2023 study already identified ~15,821 LLMs on Hugging Face, just months after ChatGPT’s breakout, showing extremely fast proliferation. (arxiv.org)
    • By July 2024, there were ~784,000 open‑weight models on Hugging Face, and by June 2025 more than 1.8 million models total on the platform—an ecosystem far beyond a handful of proprietary systems. (gretel.ai)

  2. Ability to run LLMs spread to commodity and edge devices
    • Google’s Pixel 9/9a phones include the on‑device Gemini Nano model for real‑time summarization, translation, and captioning, demonstrating LLM‑style inference running locally on consumer smartphones. (en.wikipedia.org)
    • Apple Intelligence, enabled by default starting with iOS 18.3 on Apple Silicon devices, provides systemwide generative features using small on‑device models; Apple also brings these AI capabilities to a $599 iPhone 16e, broadening access. (theverge.com)
    These shifts show that “the ability to run LLMs” is no longer confined to specialized data centers; it’s increasingly a standard capability of mass‑market hardware.

  3. New competitors vs the early incumbents
    • The 2023 Hugging Face review lists strong open models from Meta, Stability AI, Mosaic, Salesforce, Falcon (TII), Alibaba’s Qwen, Mistral, Yi, Deci, and others, many with open or open‑weight licenses and strong benchmark performance—direct alternatives to closed systems. (huggingface.co)
    • Databricks released DBRX in March 2024 as an open model that outperformed earlier open models from Meta and Mistral on a range of benchmarks, showing sophisticated models from data‑platform players, not just hyperscalers. (en.wikipedia.org)
    • Meta’s Llama 3 / 3.1 family (up to 405B parameters) is positioned as competitive with leading proprietary models like GPT‑4o and Claude 3.5 Sonnet, and is made widely downloadable and deployable across major clouds—further reducing dependence on a few closed providers. (theverge.com)
    • A 2025 survey of major AI companies lists Anthropic, Meta, DeepSeek, Databricks, Hugging Face, Amazon, Nvidia and others alongside OpenAI, Google and Microsoft as core generative‑AI players, reflecting a multi‑vendor landscape. (kiplinger.com)
    • Microsoft’s own Copilot ecosystem now integrates models from Anthropic and also supports Meta, xAI and DeepSeek models, explicitly moving beyond exclusive reliance on OpenAI. (reuters.com)

  4. Speed relative to early‑2023 expectations
    • By mid‑2023, there were already tens of thousands of LLMs on Hugging Face, and the 2023 retrospective notes that new open models were arriving monthly, then weekly, and eventually even daily, including strong entries from new players and from China. (arxiv.org)
    • Within roughly two years of the February 2023 prediction, high‑quality open models (e.g., Llama 3.1, DBRX, Mistral’s Mixtral) and on‑device models from Apple and Google made it routine for organizations and even individuals to run capable LLMs without depending solely on a small trio of incumbents.

Taken together, the explosive growth of open and downloadable models, their deployment on commodity and edge hardware, and the emergence of multiple strong LLM providers show that LLM capability has indeed commoditized, and it happened on a faster timeline than the early‑2023 narrative of a tightly concentrated, hyperscaler‑only future suggested. This matches Friedberg’s prediction well.
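
To make the “commoditization of the ability to run LLMs” concrete, here is a minimal sketch of local inference with an open‑weight model via the Hugging Face transformers library; the specific model ID (Qwen/Qwen2.5-0.5B-Instruct) and generation settings are illustrative assumptions chosen for small size, not a recommendation or part of the original claim.

```python
# Minimal sketch: running a small open-weight LLM locally with Hugging Face transformers.
# Assumes the `transformers` and `torch` packages are installed; the model ID below is
# an illustrative example of the openly downloadable models discussed above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # small enough to run on a laptop CPU

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "In one sentence, what does it mean for a technology to commoditize?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That a workflow like this runs on commodity hardware with freely downloadable weights is the practical sense in which the capability has commoditized since early 2023.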