Why I’m done calling humans the weakest link

Cybersecurity has long suffered from a people problem, but not in the way we often hear about. For an industry built on enabling communication across the globe, via the internet and countless types of devices, many of us practitioners are remarkably bad at communicating with people.


A prime example is the phrase "humans are the weakest link", which is well known throughout our industry. This phrase implies that, if it were not for humans, our systems would be fully secure. Most worryingly, it projects the message to non-cybersecurity people that they are inferior to us. So not only does this phrase alienate our fellow workers, it is also, I firmly believe, unfair and completely misleading. The real issue in cybersecurity is not human error; it is the failure of technology, system design, and architecture to support real human behavior.

Despite years of awareness campaigns, data breaches linked to phishing and credential misuse continue to dominate incident reports and news headlines. And after each of these breaches, the vendors and experts commenting on it will reuse the phrase "humans are the weakest link", laying the blame not on any failure in the technology meant to protect us but on the person using the computer. Even if someone did get phished or fell victim to a malicious email, this should not prompt another round of finger-pointing. Instead, it should raise urgent questions about why so many of our systems still leave people so vulnerable.

Take phishing, for example. If a malicious email lands in an inbox and a staff member clicks it, the typical response is to blame the individual for not spotting the signs. But why did the email get through in the first place? Why did the email filters not stop it, sandboxing not isolate it, or threat detection not flag it? When these technical controls fail, the human does not become the "weakest link"; instead, they become the "last line of defense".

Much of the problem lies in the design of our digital systems. User interfaces are often unclear, inconsistent, or overly complex. Security warnings are written in language that only makes sense to IT professionals. Pop-ups offer binary choices with no explanation. Default settings prioritize convenience over safety, or worse, the monetization of personal data over security and privacy. These design flaws create a perfect storm. People are being asked to make security-critical decisions based on minimal information, when all they really want to do is get on with their actual work.

Worse still, as an industry we have trained people to ignore interruptions. Click fatigue is real. After years of clicking through cookie banners, software updates, and login prompts, people learn to click "allow", "accept", or "proceed" without reading the details. In that context, clicking on a phishing link is not a failure of common sense; it is a predictable consequence of poor design and over-reliance on user vigilance, which criminals actively exploit.

Adding to the challenge is our overconfidence in training. Many organizations roll out a couple of online awareness modules each year, typically during October for Cybersecurity Awareness Month, and assume that is enough to prepare staff for an evolving threat landscape. But expecting people to become cyber-aware through a handful of generic videos is deeply unrealistic. We do not train people to ride a bicycle or drive a car using e-learning alone, yet we expect office workers to defend against increasingly sophisticated attacks with little more than a compliance exercise that is often just a few minutes of video followed by a multiple-choice quiz.

This points to a wider issue in our approach to security. Rather than building safety into systems and processes, we too often push that responsibility and burden onto the human. We design tools that require people to behave like experts, then blame them when they fail. It is a backwards model. If a system is so fragile that a single mistaken click can bring down an entire network, then the problem is not the person, it is the system.

We need to shift our priorities. Security should not depend on perfect human behavior. Instead, it should be a product of good design, secure defaults, and resilient infrastructure. Tools should guide safe behavior without requiring technical knowledge. Threats should be identified and dealt with before they ever reach the user. And when something does go wrong, the response should be to improve the system, not punish the individual.

This means holding our technology to a higher standard. Why are phishing emails still getting through? Why do critical warnings still look like generic pop-ups? Why are people expected to manage multiple complex passwords when better authentication options exist? The answers to these questions point to a failure of the industry to prioritize usability, clarity, and robustness.

To be clear, this is not about abandoning awareness efforts altogether. But awareness should be one part of a broader, more thoughtful strategy. It should empower, not shame. It should acknowledge that mistakes are inevitable, and design systems resilient enough to absorb them. Crucially, it should treat staff as allies, not as scapegoats.

If we want better outcomes, we need to stop asking why people keep getting it wrong and start asking why the systems we build make it so easy to fail. The responsibility for secure behavior does not lie solely with the individual. It lies with the entire design of the digital environment they are working in. Until we address that, no amount of training or awareness will be enough.
