Last updated Nov 29, 2025
Major copyright lawsuits against generative AI systems (e.g., over training on code, images, or text) will result in settlements that create a mechanism for rightsholders to identify when their content was used and to claim ongoing monetary compensation, similar to YouTube’s Content ID system.
"That's what's going to happen here. There'll be a settlement where they are going to be able to claim their content."
Explanation

As of November 30, 2025, major copyright battles around generative AI have produced money damages and some licensing frameworks, but not the kind of Content ID–style system Jason predicted.

  • The largest generative‑AI copyright settlement to date is the $1.5B class‑action deal between authors/publishers and Anthropic. It creates a settlement fund that pays about $3,000 per copied book to rightsholders who file claims, compensating past use of pirated copies. It does not, however, create an ongoing system for detecting future use of individual works in training or outputs, nor a standing dashboard where rightsholders can register content and choose monetization options the way they can on YouTube.(reuters.com)
  • Other headline lawsuits against generative‑AI companies (e.g., the New York Times and other newspapers vs. OpenAI and Microsoft) remain active, with core copyright claims still moving toward trial, so none of them has yet produced a settlement, let alone one that creates such an infrastructure.(apnews.com)
  • Visual‑art cases (artists vs. Stability AI/Midjourney/DeviantArt; Getty Images vs. Stability AI) have produced partial dismissals and, in the UK Getty case, a ruling largely favoring Stability AI on core copyright issues; none has ended in a settlement mandating a universal identification and claims system.(insideglobaltech.com)
  • In music, record‑label lawsuits against the AI music startups Suno and Udio have recently been settled via licensing deals under which the startups can launch new, licensed models subject to download restrictions. These are standard catalog licenses with opt‑in by label artists; they do not create a generalized mechanism for all rightsholders to see when their works were used in training and to claim ongoing per‑use compensation across AI platforms.(reuters.com)
  • Separately from litigation, some companies have built limited compensation schemes: Shutterstock’s Contributor Fund shares revenue with contributors whose content is included in licensed training datasets, and Adobe trains Firefly on licensed and stock content while compensating contributors. These are voluntary business arrangements, not outcomes of major copyright settlements, and they still do not let any rightsholder upload a work, have uses of it in training or outputs detected across the industry, and choose to monetize or block those uses in a Content ID–like way.(submit.shutterstock.com)
  • Tools like Have I Been Trained? do let creators search certain public datasets such as LAION‑5B to see whether their images appeared in training data, but these are independent activist projects, not settlement‑mandated systems, and they offer opt‑outs and advocacy rather than integrated, ongoing monetary compensation.(makeuseof.com)

Because no major generative‑AI copyright settlement has yet produced a YouTube‑style, industry‑level system that (a) identifies when specific copyrighted works were used in training or outputs and (b) lets rightsholders claim ongoing compensation based on that usage, Jason’s specific prediction has not come true to date, even though some partial, narrower analogues (class‑action funds, catalog licenses, contributor funds) have emerged.