Human and Organizational Performance: What It Actually Means in Practice

Human and Organizational Performance (HOP) changes how organizations think about error, blame, and systems. Here's what it actually means — and why it matters for EHS leaders who want better outcomes.

If you've spent any time in EHS circles over the last decade, you've heard the term Human and Organizational Performance — HOP. You've probably seen the training materials. You may have sat through the overview presentation. And there's a decent chance you came away thinking: okay, but what does this actually mean for what I do on Monday morning?

That's what I want to address. Because HOP is not a program. It's not a buzzword. It's a fundamentally different way of understanding why things go wrong — and when applied correctly, it changes everything from how you investigate incidents to how you design work.

Where HOP Came From

You don't need a deep academic history here, but a little context helps. Human and Organizational Performance draws on decades of research and practice from people like James Reason (who gave us the Swiss Cheese Model), Erik Hollnagel (who pushed us to study why things go right, not just wrong), and Sidney Dekker (who challenged the entire premise of how we assign blame after accidents).

The core insight that runs through all of their work: humans make errors. Always have. Always will. The question isn't how to find the human who made the mistake. It's how to design systems that don't let a single human error become a catastrophic outcome.

That shift sounds simple. It is not. It runs directly against how most organizations are wired to respond when something goes wrong.

The Core Principle: Humans Are Fallible. Systems Should Account for That.

HOP starts with five principles that are worth knowing: error is normal, blame fixes nothing, context drives behavior, learning is vital, and how leaders respond to failure matters. All five deserve attention, but I'll cut to the one that matters most in practice:

Error is normal. It is not a character flaw.

Humans make mistakes because we are human. We get distracted. We misread situations. We make assumptions based on past experience that turn out to be wrong in the context at hand. We work with incomplete information. We get tired. None of this makes us bad workers or careless workers. It makes us human workers.

The implication for organizations is significant: if human error is inevitable, your job is not to find error-free humans. Your job is to build systems where errors don't automatically result in harm. Redundancies. Clear procedures. Verification steps. Conditions where it's easy to do the right thing and hard to do the wrong thing.

Simple in concept. Hard to execute. But that's the target.

Why Blame-Based Responses Make Systems Weaker

Here's the part most organizations get wrong. When something goes wrong — an incident, a near miss, a quality failure — the instinctive response is to find the person who made the mistake and address that person. Retrain them. Discipline them. Replace them.

This feels like accountability. It is not accountability. It is blame. And blame makes your system weaker.

Why? Because when your response to error is blame, a few things happen:

People stop reporting. If the consequence of surfacing a mistake is discipline, people will hide mistakes. And hidden mistakes compound. The near miss that goes unreported becomes the incident that injures someone.

You fix the wrong thing. If your corrective action is "counsel the employee," you've addressed one person and left the system unchanged. The next person who steps into the same conditions faces the same risk.

You lose learning. Every incident is information. It's the system telling you something about itself. When you respond to incidents by blaming people instead of studying systems, you throw away that information. You get dumber instead of smarter.

Blame feels like action. HOP insists that real action means understanding the system that produced the error.

What This Means for Incident Investigation

This is where HOP has the most immediate practical application for EHS leaders.

Traditional investigations ask: Who made the mistake? What rule did they violate? What do we do about them?

HOP-informed investigations ask: What were the conditions that made this error likely? What did the system look like from the worker's perspective in that moment? What would we expect a normal, capable person to do given those conditions?

That last question is particularly powerful. It's called the Substitution Test — a concept from Dekker's work. Would a different worker, with similar experience, facing the same conditions, have made a similar choice? If the answer is yes — and it usually is — then you don't have a people problem. You have a conditions problem.

Practical changes to how you investigate:

  • Interview to understand, not to assign fault. "Walk me through what you were seeing and thinking" gets you information. "Why did you do that?" puts someone on the defensive.
  • Look upstream. What was the work environment like? What were the time pressures? What did the procedure say, and how did common practice differ from it?
  • Ask what made the error easy. Not just what the error was, but what in the system set the conditions for it.
  • Write corrective actions that address systems. "Retrain employee" is rarely a real corrective action. "Redesign the procedure to make the correct step more obvious" is.

What EHS Leaders Should Actually Do Differently

If you want to apply HOP thinking without turning it into a theory exercise, here's what it looks like in practice:

Change how you talk about near misses. Create real psychological safety around reporting. People should believe — because it's true — that surfacing a problem will lead to a system fix, not a disciplinary conversation.

Get out and observe work as it actually happens. Not work as it's described in the procedure. HOP practitioners call this "work-as-done" versus "work-as-imagined." The gap between those two things is where most incidents live.

Ask better questions after incidents. Move away from "who" and toward "why did the system allow this." That one shift will improve the quality of your corrective actions significantly.

Design for error. When you're writing procedures or designing workspaces, ask: what's the most likely mistake someone could make here? Then design to make that mistake less likely or less consequential.

The Limits of HOP — It's Not an Excuse

I want to be direct about something, because I've watched HOP get misapplied.

HOP is not a framework for avoiding individual accountability. There are situations — willful violations, reckless behavior, clear misconduct — where individual accountability is absolutely appropriate. HOP doesn't say otherwise. What it says is that most errors are not willful, and that defaulting to blame for normal human error is counterproductive.

Know the difference. Hold people accountable for the things that warrant it. But don't conflate accountability with blame, and don't reach for discipline as a first response to every error.

Better Systems Thinking, Better Outcomes

HOP, at its core, is an invitation to be more curious and less reactive. It asks you to slow down when something goes wrong and actually understand why — not at the surface level, not at the person level, but at the system level.

Organizations that do this well don't just have better incident investigations. They have better safety performance overall. Because they're actually learning from what happens, actually fixing the right things, and actually building systems where it's easier to work safely than to work unsafely.

That's the goal. HOP is one of the clearest paths to getting there.

It's not a buzzword. It's not a program you roll out and complete. It's a way of thinking about your work — one that produces better questions, better investigations, and better outcomes over time.

Start with the next thing that goes wrong. Ask why the system allowed it. Go from there.