Security Amnesia: When Habit Becomes a Vulnerability

Jun 17, 2025

This is a personal blog and all content herein is my personal opinion and not that of my employer.


Enjoying the content? If you value the time, effort, and resources invested in creating it, please consider supporting me on Ko-fi.


Introduction

After I shared the last post on Security Amnesia, a friend replied with a story that hit home:

“When I was younger, me and my sister typed up CAA exam papers for commercial pilots (dad was a CAA flight instructor). There’s a phenomenon called an ‘action slip’ – like reaching for the gearstick on the wrong side in a US rental car and ramming your knuckles against the door.”

“In MFA workflows, we do repetitive manoeuvres – open password manager, MFA, copy password, admin login, MFA again, open privileged tool, MFA again… it’s really easy to mess up.”

That idea – action slips – describes something we all experience in security work: when a well-rehearsed habit fires off in the wrong place, and suddenly you’ve made a mistake without even realising it.


What’s an Action Slip?

In cognitive psychology, an “action slip” happens when an automatic routine is triggered in the wrong context. Your brain tries to be helpful – it thinks you’re doing what you always do – but the environment has changed just enough to make that action inappropriate.

Classic examples:

  • Reaching for a manual gearstick in a left-hand drive automatic
  • Typing your phone PIN into an ATM
  • Saying “you too” when someone says “enjoy your meal”

These slips were famously studied by James Reason in his Generic Error-Modelling System (GEMS), part of the foundation for modern human error research.


Security Habits Set Us Up to Fail

Modern security workflows are repetitive and cognitively noisy. We:

  • MFA into systems several times a day
  • Copy/paste secrets between vaults and interfaces
  • Jump between prod and dev environments
  • Approve identity requests under time pressure

This makes action slips not just likely, but inevitable. And in security, small errors can have outsized consequences.


Real-World Action Slip Examples

Here are a few all-too-familiar ones:

🧠 1. Password pasted into the username field

You’re in a rush. You copy a password and paste it… into the username box.

Now your password is:

  • Probably logged
  • Tied to a known username
  • Maybe even exposed on a shared screen

👆 2. Approving MFA on autopilot

You get a push notification. You’re used to approving them.

But you weren’t logging in. An attacker was.

This fits a known form of confirmation bias: our brain expects the request to be valid because it usually is.

🔥 3. Logging into prod instead of staging

You think you’re testing something harmless, but you’re pointed at production.

Oops.

💬 4. Pasting secrets into the wrong window

You meant to paste into your password manager. You pasted into Teams. Or Zoom. Or Jira.

Suddenly that secret is stored, synced, maybe even searchable.

🔐 5. Saving credentials in your browser

You use a password manager… but reflexively hit “Save password” in Chrome.

Now a sensitive login lives in browser storage – outside your password manager’s controls and likely synced across devices.


Aviation Solved This. Why Haven’t We?

Pilots transitioning between aircraft types are explicitly trained for action slips. They expect them.

In cybersecurity? We expect users to flawlessly switch contexts, interfaces, and environments without error. When mistakes happen, we usually blame them.

But this isn’t user error. It’s design failure – something we’ve long known in Human Factors engineering.


What Can We Do?

Telling people to “pay more attention” doesn’t work. They’re not being careless – they’re being human.

We can:

✅ Design more human-aware interfaces

  • Clear visual differences between environments (e.g. red = prod)
  • Warnings if passwords are typed into the username field (see the sketch after this list)
  • Smart clipboard expiry or masking
  • Confirm dialogs with context (“Approving login to app-x in region-y”)
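
As a rough sketch of that second bullet – catching a password that lands in the username box – here’s one way a login page might react. The field ID, the heuristic, and the warning wording are all illustrative assumptions, not anyone’s actual implementation:

```typescript
// Heuristic warning when a pasted "username" looks like a secret.
// The field ID, length threshold, and warning UX are illustrative assumptions.

function looksLikePassword(value: string): boolean {
  if (value.length < 12 || value.includes("@") || /\s/.test(value)) {
    return false; // short strings, emails, and phrases are probably usernames
  }
  // Secrets tend to mix three or more character classes.
  const classes = [/[a-z]/, /[A-Z]/, /[0-9]/, /[^A-Za-z0-9]/]
    .filter((re) => re.test(value)).length;
  return classes >= 3;
}

const usernameField = document.querySelector<HTMLInputElement>("#username");

usernameField?.addEventListener("paste", (event) => {
  const pasted = (event as ClipboardEvent).clipboardData?.getData("text") ?? "";
  if (looksLikePassword(pasted)) {
    // Interrupt the slip at the moment it happens, with context rather than blame.
    event.preventDefault();
    alert("That looks like a password, not a username – check which field you're in.");
  }
});
```

The specific heuristic matters less than the behaviour: the interface notices the slip as it happens and responds with context, instead of silently logging whatever was pasted.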

🧩 Introduce the right kind of friction

  • Not more steps – smarter ones
  • Slight interface variability to disrupt autopilot
  • Prompt re-auth only when something meaningful changes (sketched below)

This aligns with concepts from usable security design – reducing mental load, not just increasing barriers.
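
To make that last bullet concrete, here’s a minimal sketch of context-aware step-up auth. The AuthContext shape and the injected requireMfa callback are assumptions for illustration, not a real library’s API:

```typescript
// Minimal sketch of "re-auth only when something meaningful changes".
// The AuthContext shape and the requireMfa callback are illustrative assumptions.

interface AuthContext {
  userId: string;
  environment: "dev" | "staging" | "prod";
  region: string;
  privilegeLevel: "read" | "admin";
}

function contextChangedMeaningfully(previous: AuthContext, next: AuthContext): boolean {
  // Only changes that raise risk should interrupt the user.
  return (
    previous.environment !== next.environment ||
    previous.region !== next.region ||
    previous.privilegeLevel !== next.privilegeLevel
  );
}

async function maybeStepUp(
  previous: AuthContext,
  next: AuthContext,
  requireMfa: (reason: string) => Promise<boolean>,
): Promise<boolean> {
  if (!contextChangedMeaningfully(previous, next)) {
    return true; // same risk context: don't interrupt a flow the user already proved
  }
  // Contextual prompt: spell out *what* is being approved, so it can't be rubber-stamped.
  return requireMfa(
    `Switching to ${next.environment} (${next.region}) as ${next.privilegeLevel}`,
  );
}
```

Because the prompt says what changed and why it matters, approving it becomes a decision rather than a reflex.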

👥 Treat human error as a system design flaw

  • Avoid logging sensitive inputs (see the sketch after this list)
  • Build workflows that discourage reuse of secrets
  • Design for reality, not ideal behaviour
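
For the first of those, one option is to redact at the logging layer rather than trusting every call site to remember. A minimal sketch, with the key list and logger shape as assumptions:

```typescript
// Illustrative log-redaction wrapper; the key list and logger shape are assumptions.

const SENSITIVE_KEYS = ["password", "secret", "token", "apikey", "authorization"];

function redact(fields: Record<string, unknown>): Record<string, unknown> {
  const safe: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(fields)) {
    const sensitive = SENSITIVE_KEYS.some((k) => key.toLowerCase().includes(k));
    safe[key] = sensitive ? "[REDACTED]" : value;
  }
  return safe;
}

function logEvent(message: string, fields: Record<string, unknown>): void {
  // Redaction lives in one place, so no individual call site has to remember it.
  console.log(JSON.stringify({ message, ...redact(fields) }));
}

// Example: the password never reaches the log sink, even if a caller forgets.
logEvent("login attempt", { username: "alice", password: "hunter2" });
```

Key-based redaction won’t catch a secret pasted into the wrong field on its own, but it centralises the rule and can be extended with value-based heuristics like the earlier sketch.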

Anticipating Critique

“Isn’t this just user error?”

Yes – and no.

Yes, it’s humans making mistakes. But no, it’s not their fault. It’s our job as system designers to account for predictable human behaviour under cognitive load.

“This isn’t a new insight.”

Right again – aviation, healthcare, and industrial safety have studied this for years. That’s exactly why it’s worth revisiting in cybersecurity, where we rarely apply those lessons.

“Adding friction slows things down.”

True. But the right friction at the right time can prevent silent failure. Good design shapes behaviour; it doesn’t just block it.


Conclusion

Security professionals aren’t failing because they’re sloppy. They’re failing because they’re running on habit – and the systems around them don’t support human defaults.

“You don’t rise to the level of your security training. You fall to the level of your habits.”

If we want fewer errors, we need fewer opportunities for those habits to slip.

Design accordingly.


Thanks to Callum Wilson for inspiring this follow-up post. As always, comments are welcome.
