
What are the most common security mistakes employees make?

Practical guidance on the security mistakes employees most often make, for organisations that want to improve secure behaviour structurally.



Founder & Security Awareness Specialist · 2LRN4

International research shows that human action is the direct cause in more than 80 percent of data breaches: a click on a phishing email, a misaddressed attachment, an approved MFA prompt that turned out to come from an attacker rather than a colleague. That does not mean employees are clueless. It means attackers design well and that workload plays a role. Which mistakes occur most often in 2026, why do people make them, and what can your organisation do about it?

Why employees make mistakes, and why it is not about being clueless

A common reflex after an incident is to say the employee was "clueless", "should not have clicked" or "ought to have known better". That framing is wrong and demonstrably harmful in 2026. Modern attacks are no longer the Nigerian prince with typos. They are perfectly phrased, contextually appropriate messages, often in the real visual identity of a supplier or colleague, arriving at the wrong moment.

B.J. Fogg's behaviour model helps to see the real causes: behaviour arises from a combination of motivation, ability and a prompt. A rushed finance employee with ten open emails, a tired manager on a Friday evening, a new joiner still finding their way around unfamiliar systems: all are people who would make the right choice on a quieter day. The problem is not in their heads, it is in the combination of workload, design and context.

Recognising this changes how you look at mistakes: not as violations, but as signals that a process or training does not match reality. An organisation that maps its common mistakes learns faster where the programme needs reinforcement than any risk analysis could tell it.

The seven most common mistakes in 2026

A workable top seven, not as blame towards employees but as a map of where your programme needs attention:

  • Clicking on a suspicious message. Still the most common entry point. Since the rise of AI-generated phishing, even experienced employees can fall for it, not through inattention but because a request matches something they were already expecting.
  • Reusing passwords. A stolen password from a private-service breach is tried by criminals straight away on the business account. Without a password manager this is an almost unavoidable mistake.
  • Approving an unexpected MFA prompt. Push bombing is routine in 2026. An employee who at four in the morning gets a prompt and groggily taps "approve" hands the attacker, who already had the password, complete access.
  • Sharing data with the wrong recipient. An attachment with customer data to the wrong John, a Teams link shared externally by accident, a sensitive document in a public folder. Accidental sharing is a major source of breaches and is underestimated because it does not feel like a "hack".
  • Using shadow IT and unauthorised AI tools. An employee who quickly translates a document via a free online tool, or summarises a confidential report with an external AI service, unknowingly shares sensitive information with a third party.
  • Deleting suspicious messages instead of reporting them. An attacker benefits from silence. An employee who clicks away a doubtful message denies the security team the chance to warn other colleagues about the same campaign.
  • Leaving devices unattended and unlocked. At the office, on the train, in a café. A few minutes are enough for someone to read, copy or carry off sensitive data.

How context and workload cause mistakes

Almost all the mistakes in the top seven share something: they happen under pressure. An employee reading email calmly usually spots the signals. The same employee with ten minutes until a meeting, their mind still on another problem and a customer call coming in, clicks on the exact same message without thinking. Attention is scarce in a modern working day, and attackers know it.

A second factor is social expectation. A request from the boss, a friendly supplier you have known for years, a colleague in trouble: in all of these cases normal social instinct (helping, responding fast, not asking awkward questions) works against the security reflex. That is not cluelessness, that is being human.

A third factor is the absence of a clear alternative. An employee who does not know what safe behaviour looks like takes the path of least resistance. Train therefore not only what is wrong, but above all what the correct alternative is: call back on a known number, ask through another channel, use the approved service.

Why a blame culture makes things worse

When mistakes lead to being singled out, difficult conversations or even disciplinary measures, employees learn one thing: hide mistakes. Someone who clicks by accident and is then taken aside by their manager will not report the next suspicious email. Because what if it was not phishing, and they have drawn attention for nothing?

The result is an organisation that looks safe on paper (low click rate in simulations) but is actually blind. The attacker comes in through a real email nobody reports and spreads for days or weeks before anyone raises the alarm. Report rate, not click rate, is the real defensive indicator.
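The difference between the two indicators can be made concrete with a rough sketch. The record format below is a hypothetical example, not the export of any specific simulation tool:

```python
# Sketch: click rate vs report rate from phishing-simulation results.
# Each record is a hypothetical dict with boolean 'clicked'/'reported' keys.

def simulation_metrics(results):
    """Return click rate and report rate for a list of result records."""
    total = len(results)
    return {
        "click_rate": sum(r["clicked"] for r in results) / total,
        "report_rate": sum(r["reported"] for r in results) / total,
    }

# Two campaigns with the same low click rate. Only the second one
# gives the security team early warning of a real campaign.
quiet = [{"clicked": False, "reported": False}] * 9 + \
        [{"clicked": True, "reported": False}]
alert = [{"clicked": False, "reported": True}] * 9 + \
        [{"clicked": True, "reported": False}]

print(simulation_metrics(quiet))  # click_rate 0.1, report_rate 0.0
print(simulation_metrics(alert))  # click_rate 0.1, report_rate 0.9
```

On paper both campaigns look equally "safe"; in practice only the second organisation would hear about a real attack in time.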

A workable rule: a mistake is a learning moment, not a violation. Whoever clicks gets a short explanation ("these are the three signs you could have spotted") and gets on with their work. Whoever reports gets a thank-you. Consistent reporters are highlighted as role models. These three simple rules change culture more deeply over time than any training.

What awareness training can and cannot solve

Awareness training is not a silver bullet. What it does: it reduces the chance of mistakes where recognition helps (phishing, vishing, social engineering), it builds routines for the right response (verify, report), and it strengthens the culture in which reporting feels natural.

What training does not do: it does not fix processes that are structurally unsafe (a payment process without a four-eyes rule), it does not compensate for missing technology (no password manager, no report button in the mail client), and it does not overcome workload that is simply too high for attentive work. Relying only on training solves a third of the problem.

The right combination is therefore threefold: technology that makes mistakes harder (filters, MFA, password manager, passkeys), processes that catch mistakes (four-eyes rule, verification routines), and awareness training that teaches employees what to do when technology and process turn out not to be enough.

How to anchor this in an awareness programme

Analyse the mistakes that have actually occurred in your organisation in the past twelve months. Which incidents happened, what kind of mistake was at their root, and which departments keep coming back? That pattern tells you better than any generic top seven where your programme should focus.
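A minimal sketch of that analysis, assuming a simple incident log; the record fields and category names here are hypothetical examples:

```python
# Sketch: finding recurring mistake patterns in a year's incident log.
# The incident records below are invented for illustration.
from collections import Counter

incidents = [
    {"type": "wrong_recipient", "department": "finance"},
    {"type": "phishing_click", "department": "sales"},
    {"type": "wrong_recipient", "department": "finance"},
    {"type": "mfa_prompt_approved", "department": "it"},
    {"type": "wrong_recipient", "department": "hr"},
]

by_type = Counter(i["type"] for i in incidents)
by_department = Counter(i["department"] for i in incidents)

# The most frequent mistake type, and where it recurs, suggest
# which training modules the programme needs first.
print(by_type.most_common(1))        # [('wrong_recipient', 3)]
print(by_department.most_common(1))  # [('finance', 2)]
```

Even a spreadsheet-sized log analysed this way beats a generic top seven, because it reflects your own processes and departments.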

Translate that analysis into concrete training modules. An organisation with many wrong-recipient incidents has a different agenda than one with many push-bombing attempts. A hospital has a different set of mistakes than a logistics company, a municipality a different one than a law firm. Generic 'everything for everyone' content leaves this potential on the table.

And finally: make mistakes visible as a positive data point. A news item saying "this month finance quickly reported four suspicious emails, one of which turned out to be real CEO fraud" teaches more than ten generic posters. It confirms that reporting works and normalises that even experienced people come across something suspicious. With that you shift the conversation from "who makes mistakes" to "how do we keep each other safe", which is exactly where an awareness programme should be heading.

From explanation to action

See how 2LRN4 turns this topic into a workable programme with training, phishing simulation and management reporting.

View the training page



FAQ

Which mistake do employees make most often?

Clicking on a suspicious message remains the most common entry point in breaches. In 2026 AI-generated phishing emails are so convincing that even experienced employees can be caught. Right behind that come password reuse and approving unexpected MFA prompts.

Is an employee who clicks on phishing careless?

Usually not. Modern phishing plays on workload, authority and social instinct. A rushed employee facing a credible request does not recognise it, not through inattention but because the message does not stand out as a deviation.

How do I prevent employees from hiding mistakes?

By treating mistakes as learning moments, not violations. Whoever clicks gets a short explanation and moves on. Whoever reports gets a thank-you. No individual numbers to managers, no mandatory retraining. That is the only sustainable route to a high reporting culture.

What role do workload and context play?

A big one. Almost all common mistakes arise under time pressure or interrupted attention. A calm employee almost always spots a suspicious email. The same employee with ten open tasks clicks without pausing.

Does awareness training solve all mistakes?

No. Training works for mistakes where recognition helps (phishing, vishing, social engineering) and for building reporting routines. It does not fix unsafe processes and does not compensate for missing technology. Combine training with technology (MFA, password manager) and process (four-eyes rule).

What is accidental data sharing?

An attachment with customer data sent to the wrong colleague, a Teams link accidentally shared in public, a sensitive document in a public folder. It does not feel like a hack, but is a major source of breaches and often falls under GDPR reporting duties.

What is shadow IT, and why is it a risk?

Shadow IT is the use of unauthorised tools and services, such as a free online translator or an external AI service to summarise a confidential document. The risk is that sensitive data ends up with a third party without the organisation having control.

