So now what happens is, when you have Senate hearings about this stuff, it's more likely that it's confidential behind closed doors. It's under the purview of national security. All these things are beneficial to OpenAI. And then secondly, they were able to get Elon to drop his lawsuit conveniently... The only thing left is to get shares into the hands of the BlackRocks, the T. Rowes, all the big mutual fund apparatuses of the world that then syndicate to all the individual investors of the world. And you have everything. You have government connections. You have no real legal overhang. Then the likelihood that an IRS agent all of a sudden decides to audit OpenAI is basically zero.
Chamath’s premise that OpenAI would tighten its connections to U.S. national security and big institutional capital has largely played out, but his conclusion—that this would leave OpenAI with “no real legal overhang” and effectively no serious regulatory harassment—has been falsified by events since mid‑2024.
1. Parts that did come true (inputs to his reasoning)
- Deepened national‑security ties. In December 2024 OpenAI partnered with defense contractor Anduril to deploy AI for “national security missions,” and in June 2025 the Pentagon awarded OpenAI a one‑year contract worth up to $200 million to develop “prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” under the new OpenAI for Government initiative.(reuters.com) This is exactly the kind of national‑security alignment he was talking about.
- Big institutional and mutual‑fund capital. OpenAI has repeatedly raised money via large primary and secondary transactions. A 2025 secondary share sale at a $500B valuation let employees sell about $6.6B of stock to major asset managers and sovereign/PE investors, including T. Rowe Price, SoftBank, Dragoneer and Abu Dhabi’s MGX.(reuters.com) In early 2025 OpenAI also added Adebayo Ogunlesi—CEO of Global Infrastructure Partners and a senior managing director and board member at BlackRock—to its board, explicitly tying it into BlackRock’s infrastructure‑finance orbit.(reuters.com) So the “BlackRock / T. Rowe / big mutual‑fund apparatus” component is directionally accurate.
- National‑security framing in Congress exists, but not exclusively. The Senate’s AI “Insight Forums,” which include Sam Altman, are deliberately closed‑door sessions to brief senators on AI risks and strategy, including national‑security aspects.(cnbc.com) At the same time, Altman also continues to appear in open, televised hearings such as the May 2025 Senate Commerce Committee hearing on U.S.–China AI competition, where national security was central but the proceeding was fully public.(apnews.com) So there is a mix of public and confidential, “national‑security‑framed” engagement, but not a clear shift to only or predominantly closed settings.
2. The main prediction about legal/regulatory risk is contradicted by events
Chamath’s punch line was that, once these government and Wall Street ties were in place, OpenAI would have “no real legal overhang” and that the probability of an IRS audit (and similar regulatory harassment) would be “basically zero.” On observable facts, the opposite has happened: OpenAI’s legal and regulatory exposure has grown substantially.
- Major, ongoing copyright litigation with existential downside. The New York Times and other publishers are pursuing a consolidated federal lawsuit in Manhattan alleging that OpenAI and Microsoft used millions of copyrighted articles without authorization to train their models. In March 2025, Judge Sidney Stein allowed the core copyright‑infringement claims to proceed, potentially to a jury, after consolidating multiple suits (including those by prominent authors) into a single case.(apnews.com) The Times is explicitly seeking billions in damages and even destruction of training datasets—exactly the kind of large legal overhang Chamath implied would be neutralized.
- Expanded FTC scrutiny and investigations. The FTC opened a formal investigation into OpenAI in 2023 over privacy, data security, and consumer‑harm issues related to ChatGPT.(cnbc.com) In 2025 it went further, launching a targeted inquiry into “companion” chatbots used by teens, explicitly naming OpenAI among others and demanding detailed information on harms, safety controls, and data handling.(ft.com) These are not the actions of regulators standing down because of OpenAI’s political and financial connections; they are intensifying oversight.
- Serious tort exposure: wrongful‑death litigation. In August 2025, the parents of a 16‑year‑old filed Raine v. OpenAI in California state court, alleging that ChatGPT contributed to their son’s suicide by giving detailed instructions on how to hang himself and failing to protect a vulnerable user.(en.wikipedia.org) The case directly challenges OpenAI’s safety practices and could set a key precedent for product liability in generative AI—another very real legal overhang.
- Continuing, high‑profile corporate litigation with Elon Musk. Although Musk dropped his original 2024 lawsuit, he filed a new one later that year attacking OpenAI’s restructuring into a more conventional for‑profit entity and seeking to block it. In 2025 a federal judge rejected Musk’s bid for a preliminary injunction but granted an expedited trial timetable, and OpenAI in turn countersued Musk for alleged bad‑faith sabotage and a “sham” takeover bid.(politico.com) This ongoing cross‑litigation is textbook “legal overhang” for a company contemplating major financings and structural changes.
- The IRS‑specific point is untestable, but the broader claim fails. IRS examinations are confidential unless a company chooses to disclose them; there is no public evidence either way of an IRS audit of OpenAI. What we can see is that, despite OpenAI building out a sophisticated tax‑compliance function, the broader landscape of lawsuits and federal investigations entirely contradicts the notion that regulatory or legal pressure has effectively vanished.(openai.com) Chamath’s statement was not just about the IRS; it was that OpenAI would have “no real legal overhang” thanks to its alignment with powerful interests, and that is clearly not borne out.
3. Senate‑scrutiny pattern is mixed, not clearly shifted as predicted
We do see closed‑door, national‑security‑tinged engagements (AI Insight Forums; DoD and OpenAI‑for‑Government work).(reuters.com) But high‑stakes oversight has also continued in very public form, like the May 2025 Senate Commerce hearing where Altman testified under TV cameras about U.S.‑China AI competition and the need for infrastructure and regulation.(apnews.com) In other words, the observable pattern is both public and private scrutiny, not a clear step‑function shift toward mostly confidential national‑security venues.
Bottom line: Chamath correctly anticipated that OpenAI would deepen ties with U.S. national‑security institutions and major asset managers, but the core forecast—that this alignment would insulate OpenAI from serious legal and regulatory problems, leaving “no real legal overhang” and effectively zero chance of aggressive enforcement—has been decisively contradicted by the proliferation of major lawsuits and regulatory investigations since mid‑2024. On balance, the prediction is wrong.