My organisation recently completed its annual phishing simulation campaign. The results came back and — as they do every year — a meaningful percentage of employees clicked the link. The report went to leadership. Leadership asked what we were going to do about it. I gave the same answer I give every year: we’re going to make them do more training.
I’ve been giving that answer for eight years. The click rate goes up a little, then comes down a little, then goes up again. It oscillates around roughly the same number no matter how good the training is. And yet the recommendation is always the same: more training. Better simulations. Gamified learning. Quarterly campaigns instead of annual ones.
I want to make a case that we are optimising the wrong variable.
What Awareness Training Actually Does
Security awareness training is built on a premise that’s rarely examined: that if employees knew better, they would behave better. This is a reasonable assumption about some human behaviour. It is a poor assumption about behaviour under the conditions employees actually work in.
The average knowledge worker receives dozens of emails a day. They’re context-switching constantly. They’re under deadline pressure. They have learned, correctly, that most links in their inbox are fine and that their job requires engaging with links. The mental model that security training tries to install — “pause, inspect the sender, hover the link, consider the context” — competes directly with the mental model that makes someone good at their job: process information quickly and act.
The research on this is not flattering to the training industry. Repeated simulated phishing campaigns do reduce click rates temporarily. They do not produce durable behaviour change. They do produce employees who are better at recognising training simulations. These are not the same thing.
The Architectural Question We’re Avoiding
Here is the question I never hear asked in security awareness discussions: what would have to be true about our environment for a single employee click to be inconsequential?
It’s not a rhetorical question. There are organisations where a phishing click is, at most, a minor incident — the credential is captured but cannot be used to access anything sensitive; the endpoint runs in an isolated environment; the blast radius is near zero. These organisations didn’t get there by training their employees harder. They got there by building environments where human error is contained rather than amplified.
In most organisations, a single phishing click can result in: credential theft that gives access to email and cloud storage; lateral movement across a flat network; privilege escalation to domain administrator; and organisation-wide ransomware deployment within 72 hours. We know this because it has happened, repeatedly, to organisations with mandatory annual security awareness training.
The training budget hasn’t solved this. The architecture is the problem.
Where I’d Rather Spend the Money
For the cost of a mid-market security awareness training platform — call it £150,000 a year for a large enterprise — you could fund roughly one and a half additional security engineers at a fully loaded cost of around £100,000 each. Those engineers could implement network segmentation that limits lateral movement, harden privileged access management so that phishing a standard user doesn’t hand over domain admin, build conditional access policies that prevent token replay, or deploy application control that stops credential-harvesting malware from executing.
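To make the first of those concrete: a single firewall rule can remove most of the lateral movement a phished workstation would otherwise enjoy. The fragment below is a minimal sketch in nftables syntax, assuming workstations sit in a 10.20.0.0/16 subnet (a made-up range for illustration) and that an inline firewall sees east–west traffic; a real deployment would carve out exceptions for management and patching hosts.

```
# Illustrative nftables fragment: client isolation for a workstation
# subnet. 10.20.0.0/16 is an assumed example range, not a real network.
table inet lateral {
    chain forward {
        type filter hook forward priority 0; policy accept;

        # Workstations may reach servers and the internet, but not each
        # other: drop east-west traffic inside the client subnet, so a
        # compromised endpoint cannot pivot to its neighbours.
        ip saddr 10.20.0.0/16 ip daddr 10.20.0.0/16 drop
    }
}
```

The point is not the specific tool — the same isolation can be enforced by switch private VLANs or cloud security groups — but that the control works regardless of whether anyone clicked.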
Any one of those projects does more to reduce the real-world impact of a phishing click than any training programme I’ve seen.
I’m not arguing that employees should be ignorant of phishing. I’m arguing that treating human behaviour as the primary risk-reduction lever is an architectural admission of defeat. It says: we have built an environment where the entire security posture rests on whether every employee makes the right decision, every time, under conditions hostile to careful decision-making.
The Compliance Problem
The honest reason we keep spending on awareness training is compliance. ISO 27001, Cyber Essentials Plus, NIST CSF, PCI DSS, SOC 2 — they all require demonstrable security awareness programmes. Auditors want to see completion rates and phishing simulation results. They are much less interested in whether you’ve implemented network segmentation or least-privilege access.
This is a genuine problem with how compliance frameworks have evolved. They’ve institutionalised a control that has weak evidence of effectiveness and made it a checkbox that organisations tick regardless of whether it’s the highest-value use of their security budget. The compliance requirement creates a floor on awareness training spending and a ceiling on scrutiny of whether it works.
What I’d Actually Keep
I’m not advocating for eliminating security awareness entirely. There are categories of security behaviour where training meaningfully moves the needle: password manager adoption, recognising social engineering in voice calls, understanding how to report suspicious activity correctly. These are discrete, teachable behaviours with measurable outcomes.
What I’d cut is the annual phishing simulation industrial complex — the expensive platforms, the league tables, the mandatory remedial training for employees who clicked. The evidence that these produce durable behaviour change is thin. The evidence that they make employees anxious and resentful about security is considerably stronger.
Build environments where clicks are less catastrophic. Teach people what they can actually learn. Stop treating human fallibility as the root cause of security failures in systems designed to amplify it.