So there will be new tools that law enforcement will be able to use. And if somebody is creating phishing sites at scale, they're going to be probably pretty easy for law enforcement eyes to detect. So let's not forget that there'll be copilots written for our law enforcement authorities. They'll be able to use that to basically detect and fight crime.
Evidence since April 2023 partially confirms Sacks’ prediction, but not clearly enough to call it definitively right or wrong.
1. Adoption of AI tools and “copilots” by defenders (including public sector)
- Microsoft has rolled out Security Copilot, an AI assistant for security analysts that helps summarize incidents, analyze vulnerabilities, and investigate threats such as phishing, and expanded its availability in 2024; it is explicitly marketed for security operations centers and is used by public‑sector and government customers. (kelo.com)
- Microsoft is now promoting “agentic AI” via Security Copilot as a way to automate and speed up cyber‑defense workflows, emphasizing reduced response times and more effective handling of complex threats. (news.microsoft.com)
- Interpol’s innovation lab in Singapore uses AI systems to monitor millions of cyberattacks daily and to counter deepfake scams, sextortion, and phishing schemes, showing that an international law‑enforcement body is already using AI at scale for exactly these kinds of online threats. (ft.com)
- India has launched specialized training for “cyber commandos” to detect and neutralize AI‑driven cyberthreats, including automated phishing attacks, indicating that police units are being equipped and trained to confront AI‑enabled scams with advanced technical tools. (timesofindia.indiatimes.com)
- India’s Vastav AI deepfake‑detection platform is offered free to law‑enforcement and government agencies to combat AI‑generated media used in fraud and disinformation, further illustrating deployment of AI tools on the defensive side. (en.wikipedia.org)
Taken together, these examples support the first part of the prediction: law‑enforcement and closely allied public‑sector security teams are adopting AI‑based tools, some of which are directly branded and function as “copilots,” to assist in detecting and investigating online crime, including phishing and related scams.
2. Has this made large‑scale phishing “relatively easy” to detect and reduced criminals’ advantage?
The available data cut against a clear “yes”:
- Europol’s 2025 Serious and Organized Crime Threat Assessment warns that AI is turbocharging organized crime, enabling more scalable scams, deepfakes, and online fraud, and stresses how AI makes criminals “harder to detect,” rather than reporting that detection has become easy. (reuters.com)
- In the U.S., FBI Internet‑crime data show that overall losses from online crime are still rising sharply, and reporting from states like Indiana attributes a big part of the surge in effective phishing, romance scams, and impersonation fraud to criminals’ use of AI, not to successful neutralization by law enforcement. (axios.com)
- Google and others are deploying powerful AI‑based scam and phishing detectors (e.g., Google Messages’ AI scam detection on billions of texts), but these are primarily consumer‑ and provider‑side defenses, and even they are framed as responses to rapidly escalating AI‑enabled scams, not as having turned phishing into a solved or trivially detectable problem. (wired.com)
- Major law‑enforcement operations like Operation First Light 2024 coordinated global crackdowns on phishing, investment, and romance scams, seizing hundreds of millions of dollars, but public reporting does not show that AI‑based copilots have made mass phishing “easy” to detect in a way that clearly shifts the overall balance of advantage away from criminals; instead, such operations are presented as partial successes in an expanding problem space. (en.wikipedia.org)
Overall, both sides are arming themselves with AI: law‑enforcement agencies and defenders are indeed adopting AI tools and copilot‑like systems, but organized criminals are simultaneously leveraging AI to scale up and refine phishing and other fraud. Leading agencies (Europol, FBI) currently emphasize the growing threat from AI‑enabled crime, not that AI has made mass phishing straightforward to detect or has clearly reduced criminals’ net advantage.
One core component of the prediction (widespread adoption of AI tools and copilots by law enforcement) is clearly happening. The stronger claim (that this makes large‑scale phishing “relatively easy” to detect and reduces AI‑enabled criminals’ advantage) is not clearly supported, and is arguably contradicted by threat assessments and loss data. The forecast therefore cannot be cleanly scored as right or wrong; the outcome depends heavily on how one interprets “relatively easy” and “reducing the advantage,” so the most accurate verdict is ambiguous.