Opinion / Commentary

Security Awareness Training Was Built to Spot Bad Phishing — AI Has Made That Irrelevant

The FTC's $2.1 billion social media fraud figure is not a user education failure. It is evidence that the threat model security awareness training was designed for no longer exists. AI-generated fraud does not produce the observable cues our training teaches users to detect — and the industry needs to acknowledge this before it spends another decade on the wrong solution.

CipherWatch Editorial · Security Intelligence Platform
5 min read

The Federal Trade Commission’s finding that Americans lost $2.1 billion to social media scams in 2025 — a 47% increase in a single year — will produce a predictable set of responses from the security industry. More phishing simulations. Updated awareness modules. A refreshed top-ten list of red flags to look for. Employee re-certification requirements.

All of it will be wrong.

Not wrong in the sense that it produces no value. Wrong in the sense that it is solving a problem that is no longer the problem. The security awareness training industry was built on a specific and now-obsolete threat model: that phishing and social engineering attacks are detectable by observation. Badly spelled emails. Suspicious URLs. Urgent language. Requests that break normal process. The logic was coherent: make users more observant, train them to recognise the tells, reduce the hit rate.

That model assumed the attacker was making mistakes. AI has eliminated the mistakes.

What Changed

A romance scam conducted by a human operator across multiple simultaneous targets will eventually produce inconsistencies: the timeline slips, the story conflicts, the emotional manipulation becomes formulaic. A human scammer has limited capacity to personalise and limited ability to maintain perfect continuity across weeks of conversation.

An AI persona has no such constraints. It maintains perfect consistency, adapts its communication style to match the target’s vocabulary and register, incorporates real contextual details scraped from the target’s public social media history, and operates across unlimited simultaneous relationships. It does not get tired, does not make chronological errors, and does not repeat the same script.

The 312% increase in deepfake impersonation fraud documented in the FTC report represents the same dynamic applied to executive impersonation. The conventional tell for a voice phishing or video call fraud attempt was that it sounded wrong, or looked wrong, or the framing was off. An AI-generated deepfake trained on hours of authentic video and audio of a CEO does not look or sound wrong. It passes the amateur detection test that security awareness training prepared users to apply.

The $4,800 median individual loss in AI-enhanced investment scams is instructive. These are not naive users failing to notice obvious errors. These are people who were targeted with personalised, credible, contextually appropriate content — and who lost significant sums because nothing in their training equipped them to recognise it.

The Industry’s Inconvenient Position

Security awareness training is a substantial commercial industry. The annual market is estimated at north of $5 billion. Vendors have every incentive to position their products as the solution to the AI fraud problem rather than acknowledge that the problem has structurally outgrown their approach.

But the evidence does not support the claim. Security awareness training has been mainstream for two decades. Phishing simulation programmes are near-universal in enterprises with more than 500 employees. And yet the FTC numbers are moving in the wrong direction, and the attacks that succeed are not the crude, obviously malicious ones that training prepares users to identify. They are the ones training cannot prepare anyone for, because the observable signals have been removed.

This is not a reason to abandon user education entirely. It is a reason to be honest about what it can and cannot do, and to stop treating it as the primary defence.

What Security Actually Needs

The failure of observable-signal detection points toward controls that do not depend on human recognition. If the attacker can remove the tells, detection has to move from the communication to the transaction.

For financial fraud, this means out-of-band verification for any request above a defined threshold — not verification by the same channel the request arrived on, because that channel may be compromised. It means velocity controls on transfers that trigger review regardless of how credible the authorisation appeared. It means treating “unusual request, but communicated convincingly” as a reason to escalate, not a reason to proceed.
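In code terms the control is unglamorous, and deliberately indifferent to how persuasive the request was. The Python sketch below is illustrative only: the threshold, the window, and the confirm_out_of_band callback are hypothetical stand-ins for whatever a given organisation's actual policy and out-of-band channel are.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, List

# Hypothetical policy values, for illustration only.
OOB_THRESHOLD = 10_000          # transfers above this require out-of-band confirmation
VELOCITY_WINDOW = timedelta(hours=24)
VELOCITY_LIMIT = 3              # this many transfers inside the window forces review

@dataclass
class Transfer:
    amount: float
    timestamp: datetime

def authorise_transfer(
    request: Transfer,
    history: List[Transfer],
    confirm_out_of_band: Callable[[Transfer], bool],
) -> str:
    # Velocity control: review triggers on volume alone, regardless of how
    # credible the authorisation appeared.
    recent = [t for t in history if request.timestamp - t.timestamp <= VELOCITY_WINDOW]
    if len(recent) >= VELOCITY_LIMIT:
        return "escalate_for_review"

    # Out-of-band check: confirmation must travel a channel the request did
    # not arrive on (a registered phone number, not a reply to the same thread).
    if request.amount > OOB_THRESHOLD and not confirm_out_of_band(request):
        return "reject"

    return "approve"
```

The point of the structure is that nothing in it asks whether the request looked legitimate.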

For credential phishing, it means phishing-resistant authentication that removes the value of credential harvesting regardless of how convincing the phishing site is. Passkeys and hardware FIDO2 tokens defeat credential phishing not by making users better at recognising fake sites, but by making the credentials non-transferable between contexts. The attacker’s deepfake does not help if the credential cannot be replayed.
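The mechanism is worth a moment of detail, because it is the whole argument in miniature. During a passkey or FIDO2 assertion the browser itself records which origin requested it, and the relying party rejects anything that does not match. Below is a minimal, hypothetical sketch of that single check; EXPECTED_ORIGIN is a placeholder, and a production verifier also checks the challenge, signature, and rpIdHash per the W3C WebAuthn specification.

```python
import base64
import json

EXPECTED_ORIGIN = "https://bank.example"   # hypothetical relying-party origin

def _b64url_decode(data: str) -> bytes:
    # clientDataJSON arrives base64url-encoded, often without padding.
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def origin_is_bound(client_data_json_b64: str) -> bool:
    # The browser, not the user, populates the origin field. A phishing page
    # at a lookalike domain gets its own origin written here, so the signed
    # assertion it harvests can never verify at the real site.
    client_data = json.loads(_b64url_decode(client_data_json_b64))
    return (
        client_data.get("type") == "webauthn.get"
        and client_data.get("origin") == EXPECTED_ORIGIN
    )
```

No user judgment appears anywhere in that check, which is the point.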

For social engineering broadly, it means designing organisational processes so that no single employee interaction — however convincing — can authorise a high-impact action without structural controls that verify intent through an independent path.
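A sketch of what that means structurally, with hypothetical action names and a two-approver rule standing in for whatever a given organisation's policy actually requires:

```python
# Hypothetical high-impact actions; the real list is an organisational decision.
HIGH_IMPACT_ACTIONS = {"wire_transfer", "payroll_change", "vendor_bank_update"}

def can_execute(action: str, requester: str, approvals: set[str]) -> bool:
    if action not in HIGH_IMPACT_ACTIONS:
        return True
    # Two approvers, neither of whom is the requester, each confirming through
    # their own authenticated path rather than the email or call that carried
    # the original request. A single convincing interaction cannot satisfy this.
    independent = approvals - {requester}
    return len(independent) >= 2
```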

The Uncomfortable Conclusion

The FTC’s $2.1 billion figure is not evidence that users need better training. It is evidence that the design principle of “train the user to be the last line of defence” was always a fragile foundation — and AI has broken it.

Security awareness training has a role in a mature programme: it reduces attack surface, it builds a culture of reporting, it catches the unsophisticated attacks that still exist. But it is not a defence against AI-generated fraud, and treating it as one is not just ineffective — it is actively harmful if it substitutes for the structural controls that would actually work.