We've got another decade of evolution here to make these things sustainable and have a high quality.
As of November 30, 2025, Jason’s prediction covers roughly a 10‑year horizon (from December 2024 to roughly the mid‑2030s). That means it can only be falsified early if the end state he describes has already been reached well before the decade is up.
Jason’s claim, normalized: it will take roughly another decade before fully AI‑generated software systems (i.e., systems whose codebases are largely or entirely produced by AI rather than by human authors) are sustainably deployable at high quality in heavily regulated, mission‑critical environments (e.g., medical devices, aviation, critical financial infrastructure, safety‑critical industrial control).
Current evidence from 2024–2025 points away from that end state having already been reached: AI‑generated code is widely used but remains error‑prone and security‑risky, and it is typically deployed with substantial human oversight and testing rather than as fully autonomous, end‑to‑end system generation.
Key signals:
- A Veracode study found that about 45% of AI‑generated code contained security flaws, with no clear security advantage in newer or larger models, underscoring that apparently production‑ready AI code often hides serious vulnerabilities. (techradar.com)
- An Aikido report found that AI‑generated code now accounts for nearly a quarter of production code and is already responsible for one in five major security breaches; 69% of developers and security professionals have seen serious vulnerabilities in AI‑written code, and only a small minority believe secure code can be produced without human oversight. (itpro.com)
- Large‑scale academic analyses of AI‑generated code on GitHub and controlled benchmarks show that while much of it is functional, it systematically introduces bugs and security vulnerabilities, including hard‑coded secrets and path traversal issues, and is overall more prone to high‑risk security problems than human‑written code. (arxiv.org)
- The emerging practice of "vibe coding" (heavy reliance on LLMs to write code from natural language) has been associated with concrete security failures; for example, one vibe‑coding startup’s generated apps had widespread vulnerabilities exposing personal data. Even advocates note that generative systems struggle with complex, multi‑file, safety‑critical software and pose serious maintainability and debugging challenges. (en.wikipedia.org)
- Regulators and insurers are tightening around AI rather than treating it as ready for unsupervised mission‑critical deployment: major insurers are moving to exclude or limit AI‑related risks due to the opaque and unpredictable behavior of large models, especially in high‑stakes contexts. (ft.com)
Taken together, these sources indicate that in 2024–2025, AI is a powerful assistant within software engineering pipelines, but not yet trusted to autonomously generate entire safety‑ or mission‑critical systems in regulated sectors without intensive human control, verification, and traditional assurance processes.
However, Jason’s claim is about the future pace of progress through the mid‑2030s, not about the exact state of 2025. We are only ~11 months into a 10‑year window. There is no evidence that his timeline is already violated (i.e., that we have already reached broadly accepted, fully AI‑generated, high‑quality mission‑critical systems in regulated environments), but it is also far too early to know whether his decade‑scale forecast will ultimately prove accurate.
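For scale, a rough back‑of‑the‑envelope calculation of how much of the forecast window has elapsed, assuming the window runs from December 2024 (when the prediction was made) to roughly December 2034 and treating it as exactly 120 months:

```latex
% Fraction of the ~10-year forecast window elapsed as of November 30, 2025
\frac{t_{\text{elapsed}}}{t_{\text{window}}}
  \approx \frac{11\ \text{months (Dec 2024 -- Nov 2025)}}{120\ \text{months}}
  \approx 0.09
```

In other words, only about a tenth of the forecast window has passed, which is why the evidence above can describe the present state without bearing much on the eventual outcome.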
Because the forecast’s target date lies roughly 9–10 years in the future and current evidence neither confirms nor contradicts the eventual outcome, the correct status as of November 30, 2025 is:
inconclusive (too early to tell), but not falsified so far.