Last updated Nov 29, 2025
government, ai
Following the May 2024 resignation of OpenAI’s superalignment team leaders (including Ilya Sutskever and Jan Leike), one or more government regulators or oversight bodies will formally contact and interview these former employees about OpenAI’s technology and safety practices, leading to new, previously non-public information about OpenAI’s internal state-of-the-art models or safety processes becoming public (e.g., via hearings, reports, or leaks) within the next 1–2 years.
Friedberg: "My guess is there are certain regulatory people who are going to have interest in the fact that this team just left. They're going to make a phone call, they're going to ask this team to come in and have a conversation, and they're going to start to ask a lot of questions about what the state of technology is over there. And I suspect that some things are going to start to come out."
Explanation

Evidence since the May 2024 superalignment departures matches the core of Friedberg’s prediction: former OpenAI safety/superalignment staff have been formally questioned by government oversight bodies, and that has produced new public information about OpenAI’s frontier models and safety processes.

Key points:

  1. Former superalignment member testified before the U.S. Senate.
    In September 2024, the U.S. Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing at which former OpenAI employees testified about AGI risks. One of them, William Saunders, is a former OpenAI researcher who spent three years at the company, most recently on the superalignment team, before resigning in early 2024 over safety concerns.(techtarget.com) This matches the scenario Friedberg described: government overseers bringing in members of “this team” to ask detailed questions.

  2. His testimony revealed previously non‑public internal safety and model details.
    In that Senate hearing, Saunders described:

    • That AGI might be feasible in as little as three years, explicitly tying this to OpenAI’s recently announced o1 frontier model.
    • That OpenAI had not prioritized internal security, and that there were extended periods when vulnerabilities would have allowed employees to bypass access controls and steal its most advanced models, including GPT‑4.
    • That OpenAI had “repeatedly prioritized speed of deployment over rigor,” leaving serious safety gaps.(techtarget.com)
      These details about internal access‑control weaknesses, theft risk for GPT‑4–level systems, and the way safety reviews were traded off against deployment were not part of OpenAI’s prior public messaging; they emerged through sworn testimony in an official oversight setting.
  3. Regulatory / oversight pressure also surfaced internal policies that discouraged safety whistleblowing.
    In July 2024, lawyers for a group of OpenAI whistleblowers sent a detailed letter and formal complaint to the U.S. Securities and Exchange Commission (SEC) alleging that OpenAI’s employment, severance, and NDA practices illegally restricted employees from warning regulators about grave safety risks. The letter describes specific contract clauses (requiring employees to waive federal whistleblower compensation rights, requiring company permission before talking to regulators, and threatening “severe sanctions”) and notes that the SEC has responded to the complaint.(washingtonpost.com) That complaint and the surrounding coverage publicly exposed internal legal and safety‑governance practices that had not been disclosed before, and they exist precisely in the context of regulator oversight.

  4. All of this unfolded soon after the May 2024 superalignment resignations, within the 1–2 year window.
    Ilya Sutskever and Jan Leike announced their departures in mid‑May 2024, and OpenAI effectively disbanded the superalignment team shortly afterward.(ndtv.com) The SEC whistleblower complaint (July 2024) and Saunders’ Senate testimony (September 2024) both came within months of those departures and well inside Friedberg’s 1–2 year horizon.

  5. Nuance: leaders vs. team members.
    So far, there’s no public record that regulators have formally interviewed Sutskever or Leike themselves in hearings or regulatory reports. The clearest on‑the‑record interactions involve other members of the superalignment/safety ecosystem (e.g., Saunders, plus ex‑OpenAI governance staff like Daniel Kokotajlo in related advocacy and testimony).(openaifiles.org) Still, the spirit of Friedberg’s prediction was that once “this team” left, regulators/oversight would pull them in, ask hard questions about the state of the tech and safety, and “some things are going to start to come out.” That’s exactly what happened with Saunders’ Senate testimony and the SEC‑focused whistleblower actions.

Because government oversight bodies have indeed brought in former OpenAI superalignment/safety staff within the predicted timeframe, and because those proceedings have surfaced new, previously non‑public information about OpenAI’s cutting‑edge models and safety/security processes, the prediction is best classified as right (even if the most visible testimony so far has come from team members rather than the two most senior leaders).