because obviously we're going to solve every vertical AI problem in short order.
As of late November 2025 (a bit over three years after the October 2022 prediction), several of the cited “vertical” problems are indeed at or beyond human level (e.g., chess and Go via AlphaGo/AlphaZero and related systems; poker via Libratus/Pluribus, which reached superhuman performance against top professionals years before the prediction was made).(en.wikipedia.org) The claim, however, was that all major vertical AI problems, including self-driving and GPT-style language, would be effectively solved in short order.
Self-driving is clearly not “effectively solved.” GM shut down its Cruise robotaxi unit after years of heavy losses and a serious pedestrian-dragging incident, citing long timelines and high costs.(apnews.com) Waymo has issued large software recalls and remains under NHTSA investigation after multiple crashes with barriers and other low-impact incidents, and its service is still limited to specific cities and conditions rather than general, all-weather autonomy everywhere.(nypost.com) Tesla’s Robotaxi launch in Austin requires safety monitors and has triggered an NHTSA investigation into erratic driving behavior, while Full Self-Driving remains a supervised driver-assistance system under ongoing regulatory and legal scrutiny, not an unsupervised Level-4/5 system.(en.wikipedia.org) Collectively, this shows that one of the flagship verticals Jason pointed to, self-driving, remains far from broadly and reliably solved.
Other verticals he implicitly lumps in, such as image generation and GPT-style language models, have made large strides but still show clear unsolved issues. 2025 overviews of image generation describe the field as "mature" for many professional uses, yet different tools still trade off quality, control, speed, and prompt alignment rather than representing a single, fully solved capability.(superallen.org) Microsoft’s rollback of a DALL·E 3 variant in Bing after significant quality regressions underscores that even this narrow vertical continues to exhibit non-trivial failure modes.(theverge.com) Large language models in 2024–2025 still hallucinate and misreason often enough that entire research threads and benchmarking efforts are devoted to quantifying and mitigating these errors, including in law, science, code, and embodied agents, which is evidence that the "GPT-3-type" language problem is not effectively solved either.(academic.oup.com)
Because at least one of the central example domains (self-driving) remains clearly unsolved in practice, and the others (language models, image generation) still have widely acknowledged, fundamental shortcomings rather than being "done," the generalized claim that "we're going to solve every vertical AI problem in short order" has not come true within the few-year window he implied. The prediction is therefore best classified as wrong.