Our healthcare systems are already built with successive layers of defenses to prevent hazards from reaching our patients, but occasionally, despite our best efforts, a hazard sneaks through all the defenses and causes harm. The holes in our defenses through which the hazard sneaks are errors. 

  • Errors do not occur randomly; there are usually reasons behind them. 
  • A hazard must be present for an error to lead to harm. An error committed in the absence of a hazard sits quietly, waiting to cause problems later. 
  • Even small errors can trigger serious accidents.
  • Everyone makes errors regardless of experience or training. Experts are just better at predicting and preventing their errors. This is worth reiterating: everyone makes errors, even those who know better. 

James Reason’s Swiss Cheese model of errors. In order for a hazard to reach a patient and cause harm (or “losses” in this depiction), it needs to get through several layers of defenses. The holes in these defenses are errors. They can be pre-existing just waiting to cause problems (latent conditions) or could occur at the time of the incident (active failures). 

Our healthcare system is built with many protective features against errors. Several people and processes examine each decision. So for a hazard to reach a patient, it has to get past all of these defenses. James Reason likened this to lining up the holes in random slices of Swiss cheese. These holes can be pre-existing or newly formed. Latent conditions are pre-existing defects in the design of a process: errors lying in wait, ready to surface under the right conditions. Active failures are errors that happen in the moment, whose effects are seen shortly afterward. Line up several of these holes perfectly and a potential hazard can reach the patient, resulting in patient harm. 

When examined closely, it usually takes more than one error to lead to harm. The defensive system broke down in more than one place. We will revisit this concept of multiple causes leading to harm shortly when we perform root cause analyses. 
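
To see why it usually takes several aligned holes, here is a minimal sketch with invented, purely illustrative failure probabilities for four defensive layers; it is an assumption-laden toy model, not real data, but it shows that a hazard reaches the patient only when every layer happens to fail at once.

```python
import random

# Hypothetical probabilities that each defensive layer has a "hole" for a given
# hazard. These numbers are invented for illustration only.
layer_failure_probs = [0.05, 0.10, 0.05, 0.02]

def hazard_reaches_patient() -> bool:
    """The hazard causes harm only if it slips through every layer of defense."""
    return all(random.random() < p for p in layer_failure_probs)

trials = 1_000_000
harms = sum(hazard_reaches_patient() for _ in range(trials))
print(f"Hazards reaching the patient: {harms} out of {trials:,}")
# The expected rate is the product of the layer probabilities:
# 0.05 * 0.10 * 0.05 * 0.02 = 0.000005, i.e., roughly 1 in 200,000 hazards
```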

Harm or Adverse Events

The above diagram, from the Institute for Healthcare Improvement (IHI) website, shows the set of all healthcare encounters. One subset includes all errors (an incorrect action was performed). Another subset includes all adverse events (the patient was harmed). Note that not all adverse events are caused by errors and not all errors cause adverse events. Hence, using “adverse event” and “error” interchangeably isn’t entirely accurate. 

An adverse event is an injury or harm caused by medical management rather than the disease a patient has. Harm is “unintended physical injury resulting from or contributed to by medical care that requires additional monitoring, treatment, or hospitalization, or that results in death.” 

  • A preventable adverse event is an adverse event caused by error. 
  • A negligent adverse event is a subset of preventable adverse events where a provider may have failed to meet the standard of care expected of an average physician qualified to take care of the patient. This is a legal definition. About half of preventable AEs are considered negligent. 
  • A non-preventable adverse event is one caused by medical care but that could not have been anticipated or prevented. An example might be a newly discovered allergic reaction, or leukopenia caused by a chemotherapeutic agent needed to treat a patient’s cancer. The action caused harm, but we couldn’t have known that it would.

An error is the failure of a planned action to be completed as intended (an error of execution) or the use of the wrong plan to achieve an aim (error of planning). 

  • An error that leads to patient harm is a preventable adverse event. And as described above, a subset of these are considered negligent. 
  • An error that doesn’t lead to patient harm is considered a near miss. 

So if we are looking at medication or drug related incidents, we would have medication errors and adverse drug events. Where these intersect (errors leading to harm), we have preventable adverse drug events. Those that were not caused by an error are non-preventable adverse drug events. The near misses here are potential adverse drug events. 

  • Adverse event = patient has an allergic reaction to a medication 
    • preventable adverse event = this was a known allergy that wasn’t caught by the many checks and balances in the system and the patient receives the medicine
    • negligent adverse drug event = the patient was given a medicine out of a bottle on the counter without the normal double checks performed by the pharmacy and nursing staff
    • non-preventable adverse event = this was an unknown allergy and the medication was needed to treat the patient’s condition 
  • Error = patient was given a medication to which they are allergic
    • preventable adverse event = this was a known allergy that wasn’t caught by the many checks and balances in the system and the patient receives the medicine
    • near miss = the patient received the medicine (or almost received the medicine) but didn’t develop an allergic reaction to the medication. 
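
To make this taxonomy concrete, here is a minimal illustrative sketch; the function and its inputs are assumptions for teaching purposes, not any real reporting system. It assigns a category from two yes/no facts: was an error committed, and was the patient harmed?

```python
def classify_drug_incident(error_occurred: bool, patient_harmed: bool) -> str:
    """Classify a medication incident using the definitions above."""
    if patient_harmed and error_occurred:
        return "preventable adverse drug event"        # an error led to harm
    if patient_harmed and not error_occurred:
        return "non-preventable adverse drug event"    # harm without error (e.g., unknown allergy)
    if error_occurred and not patient_harmed:
        return "potential adverse drug event (near miss)"
    return "no error, no harm"

# Examples mirroring the bullets above
print(classify_drug_incident(error_occurred=True,  patient_harmed=True))   # known allergy missed
print(classify_drug_incident(error_occurred=False, patient_harmed=True))   # unknown allergy
print(classify_drug_incident(error_occurred=True,  patient_harmed=False))  # near miss
```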

Many things once thought inevitable are actually preventable. People assumed central line-associated bloodstream infections were a given side effect of central lines; these were considered non-preventable adverse events. Then a Michigan ICU started using a checklist and dropped its infection rates by 66%, showing that a good percentage of those infections were preventable. The checklist caught errors that had been slipping through. 

Other types of error and harm include 

  • errors of omission: failing to do something you were supposed to do
  • psychological harm: the patient didn’t require additional medical care but was left feeling horrible by the event
  • financial harm: the event ended up costing the patient more money

Types of Error

Before we delve further into medical errors, we should clarify some terminology. 

  • Human error is what an individual (or a team) does or doesn’t do that can lead to harm. 
  • Human factors are the reasons why humans make errors. This could be the relationships between people, the knowledge available to them, or the conditions under which they work (e.g., fatigue). Human factors, if not properly managed, can lead to human error. And human error, under the right latent conditions and active failures in the presence of a hazard, can lead to harm. 
  • An unsafe act is an error or violation committed in the presence of a potential hazard that can lead to patient harm. 

Errors happen because our actions don’t go as we intended (an error of execution), or because we performed the action as intended but it was the wrong plan for the situation (an error of planning). 

There are many ways to further organize these errors. One is to start by asking whether the action that led to the error was intended. Though a person may have done the wrong thing, if the person did what they meant to do, the action is said to be intentional. (A small classification sketch follows the lists below.)

  • If the person knew it would cause harm, we call this sabotage. 
  • If the person knew it was against the rules but didn’t know it would cause harm, we call this a routine violation: a deliberate deviation from known rules, i.e., reckless behavior. 
  • If the person didn’t know it was against the rules and wasn’t intending to cause harm, they made a mistake. 
    • A rule-based mistake is an inappropriate action chosen due to faulty reasoning. 
    • A knowledge-based mistake is an inappropriate action chosen due to a lack of a proper knowledge base.

Alternatively, a person may have meant to do one thing but unintentionally did something else instead. 

  • A slip is an observable failure of execution (I meant to walk normally but slipped and fell instead). 
  • A lapse is a non-observable failure of execution, such as a memory lapse where you forget a step in a process. 
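
The branching questions above can be collected into a small sketch. The function name, parameters, and return strings are illustrative assumptions; treat it as one way to encode the classification, not an official algorithm.

```python
def classify_unsafe_act(intended_action: bool,
                        knew_it_would_harm: bool = False,
                        knew_it_broke_rules: bool = False,
                        observable: bool = True) -> str:
    """Walk the intentional/unintentional decision tree described above."""
    if intended_action:
        if knew_it_would_harm:
            return "sabotage"
        if knew_it_broke_rules:
            return "violation (deliberate deviation from known rules)"
        return "mistake (rule-based or knowledge-based)"
    # The action itself was not what the person meant to do
    return "slip (observable failure of execution)" if observable else "lapse (e.g., a forgotten step)"

# Skipped a required double check knowing it was against policy, not meaning to harm: a violation
print(classify_unsafe_act(intended_action=True, knew_it_broke_rules=True))
# Meant to follow every step but forgot one (not outwardly observable): a lapse
print(classify_unsafe_act(intended_action=False, observable=False))
```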

Types of Error: Case Study

On March 19, 2015, a driver and two passengers in a 200 Series Landcruiser wagon were driving down a muddy road in Australia at approximately 100 km/h when the front right wheel hit a standing pool of water. This caused the steering wheel to pull sharply to the right. The driver overcorrected by turning the steering wheel hard to the left, causing the vehicle to travel sideways. Control of the vehicle was lost as it ran off the left side of the road, colliding with several shrubs and small trees. 

None of the occupants of the vehicle sustained any injuries. The front passenger-side tire, however, was punctured by one of the shrubs, and there was $2,500 in damage to the vehicle. They were able to drive it back to the road, replace the tire, and reach their destination, where they reported the incident. 

Since this was an accident at work, an analysis of the incident occurred. The basic cause was noted to be that the driver failed to maintain control of the vehicle. There were several contributing factors. 

  • The driver operated the vehicle at an inappropriate speed for the road conditions. Driving at this speed was intentional, but the driver may not have known to drive slower on wet gravel (a knowledge-based mistake). In fact, the training drivers received on the use of the 4WD vehicles was sporadic, there was no verification that people had completed it, and the training itself covered little about how to drive safely on wet gravel. 
  • It is possible that the driver knew it was dangerous to drive so fast but did it anyway. It is not likely that they meant to crash (which would be an act of sabotage), but rather that they were reckless, committing a violation. We often do the same thing ourselves, speeding when we’re in a hurry. 
  • The driver may have known to steer into the skid, but instead in the panic of the moment steered the other direction, committing a slip. 
  • The passengers in the vehicle felt unsafe driving at such a speed, but they did not speak up. Technically, this would be a violation, but it’s important to understand their motivations. Being quiet may not have been reckless behavior. It may have been the culture not to call out errors to one’s superiors. 

As seen in the example above, there were human errors that were violations, knowledge-based mistakes and even slips. However, the system is usually more at fault than the people. The driver received substandard training and the culture prevented people from speaking up. 

Remember that violations happen when you intentionally don’t follow the protocol or procedure. This doesn’t mean that you were trying to cause harm. Perhaps you didn’t recognize the risk of not following the rules. This is human nature. We are rewarded for taking shortcuts. We drive and text. We speed. We all do it. And that’s the point. It is human nature. Systems need to be built taking human nature into account. 

Thinking Fast and Slow

Two psychologists, Daniel Kahneman and Amos Tversky (the former won the Nobel Prize in economics for their work, which was also popularized in the book Thinking, Fast and Slow), described two types of thinking we use when we solve problems. 

  • System I Thinking is intuitive, automatic, rapid, and effortless. The experience of experts allows them to quickly recognize patterns. The System I thinker uses these patterns and heuristics to simplify thought processes. Diagnosing the characteristic appearance of Bell’s palsy or shingles after you’ve seen a hundred patients with these ailments utilizes System I. This is quick pattern recognition. 
  • System II Thinking is slow, methodical, meticulous and conscientious. The System II thinker has to carefully decide each next step by judiciously processing the information available to them. Diagnosing the reason out of dozens of possibilities for vague weakness in an immunosuppressed transplant patient needs System II. There is no clear pattern to follow in evaluating these complex patients. You need to think it through. 

Each of these modes of thinking falls prey to particular errors. 

  1. Automatic System I thinking is vulnerable to skill-based errors. We commit errors of execution (we didn’t do what we meant to do correctly: slips) or errors of memory storage (we forgot an important step: lapses) because we’re not paying enough attention. We’re on autopilot. We made the right decision; we just didn’t execute it properly. 
  2. Controlled System II thinking is vulnerable to errors of planning. These can be rule-based or knowledge-based mistakes. We may choose the wrong rule or have insufficient knowledge to make a correct choice. 

The point here is that one solution cannot address both types of thinking. An error made in System I thinking will not benefit from a lecture on the topic. These practitioners already know the information. They instead need systems to kick them out of autopilot and double-check their work. An error made in System II thinking may benefit from training. Those same checks on their work are less useful because they are already meticulously going through each step. 

Different Types of Medical Errors

There are several common types of error we see in medicine: 

  • Medication errors
  • Surgical errors
  • Diagnostic errors
  • Communication errors
  • Nosocomial infections

Medication errors

By better understanding these errors and implementing standard practices, we can greatly reduce them. Many medication errors have been attributable to problems with handwriting, with poor legibility contributing to misreading. 

Can you tell what medication is being ordered in the first one? I have no idea. In the second one, is it 60 units or 6 U of regular insulin? In that case, 60 units of insulin were administered, causing hypoglycemia. The electronic medical record is helping with the poor handwriting issue. 

Look-alike or sound-alike drugs pose another risk for medication errors. When dispensing a prescription for hydroxyzine (an antihistamine), a pharmacist could accidentally give the patient hydralazine (a potent anti-hypertensive medication), especially if the two are stocked right next to each other. Brevital and Brevibloc are two other medications that have commonly been mistaken for each other. 
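
As a rough illustration of how software might flag confusable drug names, here is a minimal sketch using Python’s standard-library difflib. The tiny formulary and the 0.7 threshold are invented for this example; real pharmacy systems rely on curated look-alike/sound-alike lists and phonetic matching rather than a simple similarity score.

```python
from difflib import SequenceMatcher
from itertools import combinations

# A tiny illustrative formulary; real systems use curated look-alike/sound-alike lists
formulary = ["hydroxyzine", "hydralazine", "Brevital", "Brevibloc", "metformin"]

def similarity(a: str, b: str) -> float:
    """Crude name-similarity score between 0 and 1."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag any pair of drug names that look suspiciously alike
for a, b in combinations(formulary, 2):
    score = similarity(a, b)
    if score >= 0.7:  # arbitrary threshold for this illustration
        print(f"Possible look-alike pair: {a} / {b} (similarity {score:.2f})")
```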

Medication errors can take many forms: 

  • unauthorized or wrong drug used
  • improper dose or quantity
  • wrong patient
  • wrong time or schedule
  • omission error
  • drug prepared incorrectly
  • prescribing error
  • extra dose
  • wrong rate
  • delay

Through education, standardization, adherence to policies, and the development of technologies, we may be able to reduce some of these errors. Bar code medication administration checks that a medication is given to the correct patient. Manufacturers can make vials of similar-sounding drugs, or of the same drug in different doses, in different shapes, sizes, and colors. 
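
Conceptually, the bar code check compares what is scanned at the bedside against the active order before anything is given. The sketch below is a minimal illustration of that idea; the field names and the single hard-coded order are assumptions, not any real system’s data model.

```python
# Hypothetical active order pulled from the medication administration record
active_order = {
    "patient_id": "MRN-001234",
    "drug": "hydroxyzine",
    "dose_mg": 25,
}

def bcma_check(scanned_patient_id: str, scanned_drug: str, scanned_dose_mg: int) -> list[str]:
    """Return a list of problems; an empty list means the scan matches the order."""
    problems = []
    if scanned_patient_id != active_order["patient_id"]:
        problems.append("wrong patient")
    if scanned_drug != active_order["drug"]:
        problems.append("wrong or unauthorized drug")
    if scanned_dose_mg != active_order["dose_mg"]:
        problems.append("improper dose")
    return problems

# A near miss caught at the bedside: hydralazine scanned instead of hydroxyzine
print(bcma_check("MRN-001234", "hydralazine", 25))  # ['wrong or unauthorized drug']
```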

Surgical errors

With the large numbers of sponges, retractors, and other materials used in surgery, occasionally materials are mistakenly left in the patient. The slide below is from a patient who activated the metal detector at an airport. She had had abdominal surgery months earlier. A subsequent x-ray disclosed the cause: a metal retractor retained from surgery. 

Surgical errors have come under particular scrutiny, prompting the development of methods to reduce risk. Today, surgeons perform a “time out” before initiating the procedure to make sure that they have the right patient, that they are operating on the right body part, and that the team has what it needs. After surgery, there are meticulous counts of all equipment. 

Watch as surgical resident Dr. Benton forces the surgery attending to follow the pre-surgical time-out checklist. He ends up saving his friend from harm. Of course, there’s a little bit of dramatic license involved here. 

Follow Safety Protocol (2:58)

Diagnostic Errors

Diagnostic errors are related to missing a clue in a patient’s history or failing to connect lab and historical data correctly. These are often difficult to measure and fix. They can be due to deficiencies in one’s knowledge base or to cognitive biases. We become more vulnerable to these sorts of errors the more tired or overworked we are. Remember that even experts are prone to make these sorts of errors; they just know to double-check their thinking to catch them. 

Communication Errors

One of the most critical causes of errors in health care is incomplete or inadequate communication between care providers. Verbal communication can lose accuracy unless the parties are well trained, combine the verbal message with something written, or use read-back confirmation in which the recipient repeats back what they heard.

One particularly risky time for communication is during transitions of care. Handovers or handoffs happen during shift changes, when a patient moves from one unit to another, when a patient is discharged from the hospital or when the patient is visited by a consultant. 

One of the most common examples of such communication issues is medication reconciliation. When a patient is moving from one unit to another or being admitted to the hospital, we need an accurate accounting of which medications the patient is taking and when. The list the hospital has from past visits may no longer be accurate. Additionally, the medication list may be altered during the stay in the hospital. Adverse drug events can result when this isn’t done properly, including allergic reactions, omission of needed medications, or administration of medications the patient no longer takes. This is also a good time to make sure the list of medications makes sense and that there aren’t any drug-drug interactions.
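
Conceptually, medication reconciliation compares two lists: what the patient actually takes at home and what is ordered in the hospital. The sketch below, with invented medication lists, uses simple set differences to surface possible omissions and new orders to verify; real reconciliation also checks doses, routes, and timing.

```python
# Invented example lists for illustration
home_meds = {"lisinopril", "metformin", "atorvastatin"}
hospital_orders = {"metformin", "atorvastatin", "heparin", "hydralazine"}

# Medications the patient takes at home but that are not ordered: possible omission errors
possible_omissions = home_meds - hospital_orders
# Ordered medications the patient doesn't take at home: intentional additions or errors to verify
new_or_unintended = hospital_orders - home_meds

print("Review for omission:", sorted(possible_omissions))   # ['lisinopril']
print("Verify new orders:", sorted(new_or_unintended))      # ['heparin', 'hydralazine']
```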

Nosocomial Infections

Two common infections from hospital-related procedures that we routinely follow are central line-associated bloodstream infections (CLABSI) and catheter-associated urinary tract infections (CAUTI). Proper aseptic technique, and using these lines and catheters only when necessary, can help decrease these infections. 

Proper and frequent hand washing can also decrease the spread of infections from patient to provider to patient. We have a clean-in and clean-out policy where hand washing is required when going in and when leaving a patient’s room.