Understanding “Human Error”
Humans make mistakes. Any system that depends on perfect human performance is doomed to failure. In fact, the risk of an accident is more a function of the complexity of the system than of the people involved. Humans are not the weak link in a process; we are a source of resilience, with the ability to respond to unpredictable inputs and variability in the system.
The contents of this post are based on the work of Sidney Dekker in his book “The Field Guide to Understanding Human Error.” Professor Dekker is a pilot and human factors engineer, and most of his work comes from analyzing industrial accidents and plane crashes. We could simply blame pilots for their crashes, but most of the time the causes of such accidents are multifactorial (non-standardized language, bad weather, an overly crowded runway, equipment issues, etc.). Understanding all of these causes reveals that pretty much any pilot could have made the same mistake. This wasn’t “human error” but a poorly designed system. The system needs to change; it needs to set the pilot up for success.
Local Rationality
When investigating these crashes, systems engineers review the contents of the flight recorder (the black box), talk to those who were there, and draw on relevant theory to understand why the pilot made the choices they did.
No one comes to work wanting to do a bad job.
Sidney Dekker
The local rationality principle asks us to understand why an individual’s actions made sense at the time. “The point is not to see where people went wrong, but why what they did made sense [to them].” We need to understand the entire situation exactly as they did at the time, not with the benefit of hindsight.
In the “M&M Conferences of Olde,” the audience acted as Monday morning quarterbacks, interpreting the findings of the case while already knowing the outcome. The provider at the time did not have the benefit of this knowledge; of course the audience would reach different decisions. Our goal is to understand the case the same way the provider did at the time. Don’t be a Monday morning quarterback; instead, put yourself in their frame of mind on Sunday afternoon.
Just Culture
We balance the need to hold people accountable with the acknowledgment that most adverse events are due to system problems, not individual failings. We want to emphasize learning from mistakes over blaming individuals. We need zero tolerance for truly blameworthy events (like recklessness or sabotage) while not unfairly blaming individuals for system problems.
The Just Culture algorithm was developed by James Reason (Managing the Risks of Organizational Accidents, 1997) and modified to apply to medicine. We ask a series of questions to determine the cause of an adverse event and offer an appropriate response.
- Deliberate Harm Test: Did the individual intend to cause harm? If so, this is sabotage; the person should be removed from patient care and dealt with appropriately.
- Incapacity Test: Was the individual impaired (eg, by a medical condition or substance use)? If so, remove the person from patient care and provide appropriate support and corrective actions as warranted.
- Foresight Test: Did the individual deviate from established policies, protocols, or the standard of care? If so, were the policies difficult to follow, or did the person knowingly take reckless risks? If the former, explore systems issues. If the latter, the reckless behavior may need to be addressed with corrective action.
- Substitution Test: Could others with the same level of training have made the same choices? If so, this is a no-blame error.
An adverse event that passes all of these tests most likely reflects system errors rather than individual failure. Don’t blame the individual.
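To make the ordering of these tests concrete, here is a minimal Python sketch of the decision flow described above. The field names, function name, and simplified response strings are illustrative assumptions, not part of Reason’s published algorithm; the point is only that the questions are asked in a fixed order and the first “yes” determines the response.

```python
from dataclasses import dataclass

@dataclass
class EventReview:
    """Answers gathered while reviewing a single adverse event (illustrative fields)."""
    intended_harm: bool           # Deliberate harm test
    impaired: bool                # Incapacity test
    deviated_from_protocol: bool  # Foresight test
    protocol_workable: bool       # Was the policy reasonable and easy to follow?
    reckless: bool                # Did they knowingly take an unjustified risk?
    peers_would_do_same: bool     # Substitution test

def just_culture_response(review: EventReview) -> str:
    """Walk the tests in order and return a suggested response category."""
    if review.intended_harm:
        return "Sabotage: remove from patient care and address appropriately."
    if review.impaired:
        return "Incapacity: remove from patient care; provide support and corrective action."
    if review.deviated_from_protocol:
        if not review.protocol_workable:
            return "System issue: the policy was hard to follow; fix the protocol."
        if review.reckless:
            return "Reckless conduct: address with corrective action."
    if review.peers_would_do_same:
        return "No-blame error: peers would have made the same choice; look for system causes."
    return "Likely system problem: do not blame the individual; analyze the system."
```

In practice the judgment stays with the review team; the code simply encodes the order in which the questions are asked.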
Analyzing Adverse Events
The single greatest impediment to error prevention in the medical industry is that we punish people for making mistakes.
Dr. Lucian Leape
We need a new approach if we want to encourage bringing errors into the light so that we can analyze and learn from them. Dekker describes six steps.
Step One: Assemble a Diverse Team
The team should include as many stakeholder perspectives as are pertinent. In medicine, we would include physicians, nurses, technicians, patients and others. This team needs to have expertise in patient care (subject matter expertise) and in quality review (procedural expertise).
The only people not included are those who were directly involved in the adverse event. Their perspective will be incorporated through interviews, but they do not participate in the analysis. They often lack the objectivity needed and may suffer secondary injury from reliving the incident.
Step Two: Build a Thin Timeline
In airplane crashes, investigators recover the flight recorder (black box) to create a timeline of events during the flight and conversations between parties. In medicine, we look at the chart to understand what happened and when.
This is a starting point, but it excludes the context needed to understand local rationality. We know what happened, but we don’t know why it happened.
Step Three: Collect Human Factors Data
Interview the people directly involved in the adverse event to understand what happened from their point of view. This is best done as early as possible, as memory tends to degrade with time. Understand what was happening in the room, why they made the choices they did, and what their understanding of the situation was and why. George Duomos presented a series of questions on the EMCrit Podcast to guide the collection of this human factors data.

Step Four: Build a Thick Timeline
With the human factors data in hand, overlay it on the thin timeline to build a thick timeline. This presents the events as they occurred within the context under which the providers were working. You may need to go back and interview providers repeatedly until you understand what happened as they understood it at the time. The goal is to achieve local rationality.
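As a rough illustration of what “overlaying” might look like, the sketch below merges time-stamped chart events (the thin timeline) with interview notes to produce a single chronological, thick timeline. The timestamps, field names, and sample entries are invented for the example; a real review would draw them from the chart and the human factors interviews.

```python
from datetime import datetime

# Thin timeline: what the chart says happened and when (sample data, invented).
chart_events = [
    (datetime(2024, 1, 1, 14, 5), "Triage: chest pain, vitals recorded"),
    (datetime(2024, 1, 1, 14, 40), "ECG obtained"),
    (datetime(2024, 1, 1, 15, 10), "Patient moved to hallway bed"),
]

# Human factors data: context from interviews, keyed to the same moments (invented).
interview_notes = [
    (datetime(2024, 1, 1, 14, 40), "Single physician covering two critical patients"),
    (datetime(2024, 1, 1, 15, 10), "Department over capacity; no monitored beds free"),
]

def build_thick_timeline(events, notes):
    """Merge chart events and interview context into one chronological narrative."""
    merged = [(t, "CHART", text) for t, text in events]
    merged += [(t, "CONTEXT", text) for t, text in notes]
    return sorted(merged, key=lambda entry: entry[0])

for timestamp, source, text in build_thick_timeline(chart_events, interview_notes):
    print(f"{timestamp:%H:%M} [{source}] {text}")
```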
Step Five: Construct Causes
We don’t find causes; we construct causes from the evidence we collect. The causes of errors are complex and often not readily apparent. We need to work to understand and propose possible causes. One method of organizing the causes is an Ishikawa diagram (or fishbone diagram).
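One lightweight way to think about the fishbone diagram is as a mapping from cause categories to the contributing factors the team has constructed from the evidence. The category names and example causes below are illustrative conventions, not a fixed standard.

```python
# A fishbone (Ishikawa) diagram is essentially a set of cause categories,
# each holding the contributing factors constructed from the evidence.
# Category names vary between teams; these are illustrative.
fishbone = {
    "People":      ["Single provider covering two critical patients"],
    "Process":     ["No standard handoff checklist between shifts"],
    "Equipment":   ["Monitor alarms silenced at default settings"],
    "Environment": ["Department over capacity; no monitored beds available"],
    "Management":  ["Staffing model assumes average, not peak, volume"],
}

def print_fishbone(effect: str, categories: dict) -> None:
    """Print the diagram as indented text: the effect, then each bone and its causes."""
    print(f"Effect: {effect}")
    for category, causes in categories.items():
        print(f"  {category}")
        for cause in causes:
            print(f"    - {cause}")

print_fishbone("Delayed recognition of deterioration", fishbone)
```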

Step Six: Make Recommendations
At this point, we should have a good understanding of what happened. Now we need to propose potential solutions that would prevent this adverse event from occurring in the future.

Not all solutions are created equal. The ones that are easier to enact are often the least effective, and the converse is unfortunately true as well: the most effective are the hardest to implement. Our goals, in order of effectiveness, are to ask:
- How can we change the system to eliminate the hazard?
- How can we change the system to make it hard to do the wrong thing?
- How can we change the system to make it easy to do the right thing?
- How can we change individuals to make them do the right thing?
This pyramid is adapted from OSHA’s Hierarchy of Controls. Start with the first question; if there is no feasible way to achieve it, move to the next.
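To make that ordering explicit, here is a small sketch that walks the four levels from most to least effective and returns the first one for which the team can propose a feasible change. The level names and the feasibility check are placeholders for the team’s judgment, not a formal tool.

```python
from typing import Callable, Optional

# Intervention levels, most effective first (adapted wording, illustrative).
LEVELS = [
    "Eliminate the hazard from the system",
    "Make it hard to do the wrong thing",
    "Make it easy to do the right thing",
    "Change individual behavior (training, reminders)",
]

def choose_intervention(is_feasible: Callable[[str], bool]) -> Optional[str]:
    """Return the most effective level for which a feasible change exists."""
    for level in LEVELS:
        if is_feasible(level):
            return level
    return None  # No feasible change found at any level; revisit the analysis.

# Example: suppose only the lower-effectiveness options are feasible (illustrative).
feasible = {"Make it easy to do the right thing", "Change individual behavior (training, reminders)"}
print(choose_intervention(lambda level: level in feasible))
```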
QI Slides
Use the case slides from the MM&I Instructions page to complete steps 1, 2, 3 and 4. Use the QI slides to walk through steps 5 and 6.

