Executive summary
When investigating accidents, most people conclude that they occur because of human error. However, research shows that most accidents result from failures in systems rather than in people (Reason, 1990). In this CQ Dossier we describe the foundations of human error and how a systems approach can help you understand and reduce workplace accidents.
What is human error?
Human error is the result of a sequence of events, so it is sometimes difficult to define. However, one definition by Reason (1990) captures the phenomenon:
“Error will be taken as a generic term to encompass all those occasions in which a planned sequence of mental or physical activities fails to achieve its intended outcomes and when these failures cannot be attributed to the intervention of some chance agency.”
What are the approaches to human error?
There are two main approaches to understanding human error – the person approach and the systems approach.
Person approach to human error
The person approach focuses on the unsafe acts and errors of employees, attributing them to breakdowns in cognitive processes, such as inattention and forgetfulness, and to behaviors, such as negligence and recklessness. The approach aims to reduce aberrant variability in human behavior.
Systems approach to human error
The systems approach acknowledges human fallibility and recognizes that errors occur even in High Reliability Organizations (HROs). From this perspective, human error is a symptom or consequence rather than a cause, so the approach focuses on the organizational processes that give rise to errors. Consequently, the systems approach relies on interventions that change working conditions rather than people.
Adopt a systems approach to understanding accidents
One of the best-known models for understanding accidents is the “Swiss Cheese” model (Reason, 1990). The model suggests that organizations have many layers of defenses and conditions that prevent accidents but, like slices of Swiss cheese, each layer has holes, and these holes continually shift location. An accident occurs when the holes in every layer momentarily line up to create an opportunity for one. Such alignments are rare, however, so the main message of the Swiss Cheese model is that the probability of all the holes lining up across all the defenses at any one time is small.
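To see why layered defenses are so effective, consider a stylized back-of-the-envelope calculation (the numbers here are hypothetical, and the simplifying assumption is that the layers fail independently). If a system has four defensive layers and each has a 5% chance of a hole being open at any given moment, the chance that all four holes line up at once is 0.05 × 0.05 × 0.05 × 0.05 = 0.00000625, or roughly 1 in 160,000. Each additional independent layer shrinks that probability further, which is why no single layer needs to be perfect for the system as a whole to be safe.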
The theory behind the Swiss Cheese model is at the heart of the systems approach to preventing accidents through defenses, barriers, and safeguards. For example, technology systems have many layers of defense, including devices (e.g., alarms, automatic shutdowns) and human beings (e.g., control room operators). Although these layers are effective, each has weaknesses. Holes in the defenses can arise for two reasons:
- active failures and
- latent conditions.
Active failures: Unsafe acts committed by people
Active failures are unsafe acts committed by people who are in direct contact with the system; they usually take the form of slips, mistakes, and procedural violations (Reason, 1990). One example of an active failure is the catastrophic 1986 Chernobyl nuclear accident (World Health Organization, 2011). At Chernobyl, the operators violated plant procedures and switched off several safety systems, which triggered the disaster. It would be easy simply to blame the operators, but their actions were part of a systemic breakdown within the nuclear plant (Stang, 1996).
Latent conditions: Problems within a system
Latent conditions are problems within a system that arise from decisions made by people such as designers and top-level management. They have two kinds of adverse effects: they create error-provoking conditions within the workplace (e.g., time pressure, understaffing) and they create long-term weaknesses in the defenses (e.g., untrustworthy alarms). Latent conditions can lie dormant in a system for a long time before combining with active failures to create an accident opportunity. Because they persist over time, they can be identified and remedied before an adverse event occurs, which enables proactive rather than reactive management.
How to reduce accidents through a systems approach
A systems approach uses countermeasures built on the assumption that while the human condition cannot be changed, the conditions under which human beings work can be. This is best achieved by implementing system defenses. When an adverse event occurs, the key question is not who made the mistake but how and why the defenses failed (Reason, 1990). In the field of human factors, researchers have examined the feasibility and effectiveness of tools for managing unsafe incidents. Error management consists of:
- limiting the incidence of dangerous errors and
- creating systems that tolerate the occurrence of errors and contain their damaging effects.
The second option adheres to the systems approach by striving for an effective safety management policy focused on several targets: the individual, the team, the task, the workplace, and the organization (Reason, 1990).
Use findings from HROs to strengthen system level safety
In recent years, researchers have focused on High Reliability Organizations (HROs) and High Reliability Teams (HRTs) as exemplars of excellence in health and safety (Weick et al., 1999). HROs operate systems in hazardous conditions yet suffer few adverse events, and they offer a template for the components of a resilient system because they demonstrate ‘safety health’. Weick and colleagues have examined HROs such as nuclear power plants, US Navy nuclear aircraft carriers, and air traffic control centers. These organizations manage complex technologies in ways that circumvent major failures. HROs are internally dynamic and intensely interactive; they perform exacting tasks under tight time frames, yet they have low incident rates and an almost complete absence of catastrophic failures over decades (Weick, 1987; Weick et al., 1999). One of their key features is the ability to shift control to experts in emergency situations and then reconfigure back to conventional routines once the emergency is over.
For example, military organizations define their goals clearly, because spontaneous bursts of activity succeed only when all members of the organization share those goals. HROs expect variability in human performance, but they also work hard to maintain a consistent mindset of vigilance (Weick et al., 1999). They expect employees to make errors and train their workforce to recognize errors and recover from them. In general, HROs are preoccupied with the possibility of failure, so they continuously search for system reforms.
In conclusion, adopting a systems approach to reducing accidents is the most feasible route for organizations. Rather than blaming the individual, the systems approach recognizes that employees work in an environment that needs defenses against human error. The best organizations proactively strengthen their defenses and act quickly when an emergency occurs. Factors such as safety climate, leadership, and individual differences play an important role in preventing errors from the very beginning.
Key takeaways
- The systems approach acknowledges human fallibility
- Situations under which human beings work can be changed
- Active failures are unsafe acts committed by people who are in direct contact with the system
- Latent conditions are problems within a system that arise from decisions made by designers and top-level management
- A systems approach strives for an effective management policy for safety
- High Reliability Organizations (HROs) are exemplars for excellence in health and safety
References and further reading
Reason, J. (1990). Human error. Cambridge: Cambridge University Press.
Stang, E. (1996). Chernobyl: System accident or human error? Radiation Protection Dosimetry, 68(3-4), 197–201.
Weick, K.E. (1987). Organizational culture as a source of high reliability. California Management Review, 29, 112–127.
Weick, K.E., Sutcliffe, K.M., & Obstfeld, D. (1999). Organizing for high reliability: Processes of collective mindfulness. Research in Organizational Behavior, 21, 23–81.
World Health Organization (2011, April 23). Chernobyl at 25th anniversary: Frequently asked questions.