The only way that I see for model developers to comply with this law is to build a new DEI layer into the models, to basically somehow prevent models from giving outputs that might have a disparate impact on protected groups. So we're back to woke AI again, and I think that's the whole point of this Colorado law.
As of November 30, 2025, Colorado’s SB 24-205 (the Colorado Artificial Intelligence Act) has not yet gone into effect. Multiple legal summaries note that during an August 2025 special session, lawmakers formally pushed the effective date from February 1, 2026 to June 30, 2026, precisely to reconsider and possibly revise the law’s obligations before they bite. (orrick.com)
In its current form, the law imposes on developers and deployers of high‑risk AI systems a duty of reasonable care to avoid “algorithmic discrimination” in consequential decisions (employment, credit, housing, healthcare, etc.). Compliance is framed around risk‑management programs, impact assessments, transparency, and disclosures, with a rebuttable presumption of reasonable care for those who meet these process requirements. (hrlawwatch.com) Nothing in the statute explicitly requires building a separate DEI or fairness “layer” that censors or reshapes all model outputs.
Importantly, the statute’s stringent provisions apply mainly to high‑risk decision systems, not to general‑purpose chatbots. Commentaries note that generative AI systems like ChatGPT are generally out of scope unless they are actually used to make consequential decisions, in which case they are regulated only in that specific use context. (wsgr.com)
Public guidance for businesses preparing for the Colorado law focuses on governance (risk assessments, documentation, bias testing, consumer notice, and reporting to the state attorney general) and shows no evidence that major model developers have rolled out Colorado‑specific DEI layers or a distinct “woke AI” behavior profile to comply with SB 24‑205. (seyfarth.com) The article quoting Sacks about a looming “woke AI” patchwork is commentary on what he predicts will happen, not documentation that such changes have actually been implemented. (meritalk.com)
Because (1) the law is not yet in force, (2) its final form may still change before June 30, 2026, and (3) there is no concrete evidence that developers have already adopted the specific compliance strategy Sacks predicted (a new DEI layer that suppresses outputs causing disparate impact, leading to jurisdiction‑specific “woke AI”), it is too early to determine whether his prediction will ultimately be right or wrong. Therefore the correct status is **“inconclusive (too early).”**