Last updated Nov 29, 2025
Sacks @ 01:20:40
Verdict: Inconclusive
Tags: politics, AI, government
Over the next few years, Democratic‑leaning U.S. states will enact and enforce AI regulations targeting 'algorithmic discrimination,' which will cause major AI providers to deploy state‑specific versions of their models with DEI‑oriented constraints ('woke AI') for use in those jurisdictions.
"What you're going to see is that the blue states will drive this ban on, quote unquote, algorithmic discrimination, which will lead to DEI being promoted in models, which is what the Biden administration wanted. You will see the return of woke AI at the state level."
Explanation

As of November 30, 2025, it is too early to determine whether Sacks’s multi‑year prediction is right or wrong.

  1. Timeframe not yet elapsed. The prediction explicitly concerns what will happen "over the next few years." Only about two months have passed since the October 3, 2025 podcast. Key state AI laws that could test this prediction, such as Colorado's Artificial Intelligence Act, do not take substantive effect until 2026 and have even had their effective dates pushed back, so their real-world impact on model behavior has not yet materialized. (infographics.bclplaw.marketing)

  2. Blue / Democratic‑leaning states are advancing AI rules around algorithmic discrimination. Colorado’s AI Act (enacted 2024) is the leading example: it creates a comprehensive framework for “high‑risk” AI systems and expressly aims to prevent algorithmic discrimination in consequential decisions (employment, housing, credit, etc.). (infographics.bclplaw.marketing) New York and California have also pushed AI legislation and enforcement approaches that emphasize bias, civil rights, and consumer protection in automated decision‑making and hiring tools. (aitechtrend.com) This supports the first part of the forecast—that blue states would drive regulation framed around “algorithmic discrimination.”

  3. No evidence yet of state‑specific "woke AI" model variants. The crucial second part of the prediction is that such state laws would force major AI providers to deploy state‑specific, DEI‑constrained model versions (the "return of woke AI at the state level"). Public reporting on how companies are preparing for Colorado‑, California‑, and New York‑style rules emphasizes documentation, risk‑management programs, transparency, and impact assessments; it does not report OpenAI, Google, Meta, or others shipping separate "Colorado‑only" or "California‑only" model forks with uniquely constrained outputs. (lathropgpm.com) In practice, commentators note that large companies often respond to the strictest state rules by raising their baseline standard nationwide (as happened with privacy and emissions rules), rather than building many jurisdiction‑specific products. (govfacts.org) That cuts against the mechanism Sacks predicts, but the relevant laws are not yet in full force, so this could still change.

  4. Regulatory and political environment is still in flux. There is an active fight over whether states can keep or expand these rules: a bipartisan coalition of state attorneys general has urged Congress not to preempt state AI regulations, while the Trump administration and allied groups have attacked certain state regimes as creating “woke” AI and are pushing for federal preemption or a single national approval regime. (reuters.com) Because this preemption battle is unresolved, it remains unclear whether blue‑state rules will ultimately be strong enough to compel the kind of state‑specific model behavior Sacks describes.

Given (a) the explicitly multi‑year horizon, (b) the fact that the most relevant state laws have not yet been implemented or enforced at scale, and (c) the lack of current evidence of state‑specific “woke AI” model variants, the prediction cannot yet be judged as clearly right or clearly wrong. The appropriate classification at this time is inconclusive (too early).