Have you implemented, or are you considering implementing, an anti-phishing training program? If so, you’re likely very focused on end-user click rates and the tale they will tell with regard to your organization’s susceptibility to phishing attacks. But here’s a word of caution: Do not let your program live and die by click rates alone. Just as there is more to successful cybersecurity education than phishing tests, there is more to measuring program success than tracking end-user click rates.
The simple truth of phishing test click rates is that these metrics don’t provide a complete view of phishing-related risk. And here’s why.
One phishing email example is just that: one example
Each simulated attack you send is essentially a snapshot of susceptibility at a moment in time—and that susceptibility snapshot is very firmly linked to that specific test. Factors like how an email is structured, its level of sophistication, and whether its topic resonates with the users who receive it are fluid. These elements can all vary over time, which means that every different phishing test—like every active attack in the wild—is likely to tell a different story.
Click rates don’t necessarily reflect knowledge levels
Just because users happen to make the right decision on one phishing test doesn’t mean they did so because they spotted the threat and practiced active avoidance. Maybe someone else told them not to click. Maybe they were too busy to notice the email. Maybe the message didn’t resonate with them. Here we have those fluid factors again—and another reason why you can’t definitively count non-clicking users as knowledgeable users.
Click rates can be manipulated
Your gut reaction may be to disagree on this point—you don’t make your users click, after all; they do that all on their own (outside of the occasional oddity or back-end trickery that can happen with sandboxes and such). But take a step back. The reality is that those fluid factors (again!) absolutely can influence the likelihood of a click happening (or not happening). The relevance of a topic, the timing of a message, how difficult red flags are to spot, the level of personal detail… all of these elements affect click rates.
Phishing test administrators can consciously—and even unconsciously—make things more or less difficult for end users. They can even manipulate mock attacks in order to influence a data trend. This potential for manipulation is a primary reason why click rates on simulated phishing attacks should not be the sole source of truth when evaluating the success (or failure) of a security awareness training program.
Take it beyond the phish
There are a number of reasons why you should look past phishing tests and click/no-click data when evaluating vulnerabilities and gauging progress. Question-based knowledge assessments and training modules offer great opportunities to round out your cybersecurity education initiatives and get users thinking about the positive impact of good cyber hygiene beyond the inbox.
But just as you should seek to broaden your employees’ horizons, you should also think beyond your security awareness training tools when it comes to measuring success. Look to the security events that you already are (or should be) tracking, including metrics like the following:
You should start to see positive improvements with each of these measurements as your users progress through a well-rounded, effective program. However, the last item listed offers a particularly good sign that users are becoming more apt to check and evaluate the emails they receive on a day-to-day basis.
Click rates are certainly useful measurements within anti-phishing training programs. Just keep them in perspective. After all, it's overall improvement across the additional metrics noted above that will offer true indicators of advancing knowledge and give you the opportunity to gauge the ROI of your education efforts.