Learning from incidents: from normal accidents to high reliability

David L. Cooke a* and Thomas R. Rohleder a

a Haskayne School of Business, University of Calgary, 2500 University Drive NW, Calgary, Alberta, Canada T2N 1N4.
* Correspondence to: David L. Cooke. E-mail: dlcooke@ucalgary.ca

Received January 2006; Accepted May 2006


Abstract

Many disasters have occurred because organizations have ignored the warning signs of precursor incidents or have failed to learn from the lessons of the past. Normal accident theory suggests that disasters are the unwanted but inevitable output of complex socio-technical systems, while high-reliability theory sees disasters as preventable by certain characteristics or response systems of the organization. We develop an organizational response system called incident learning in which normal precursor incidents are used in a learning process to combat complacency and avoid disasters. We build a model of a safety and incident learning system and explore its dynamics. We use the model to motivate managers to implement incident learning systems as a way of moving safety performance from normal accidents to high reliability. The simulation model behavior provides useful insights for managers concerned with the design and operation of incident learning systems. Copyright © 2006 John Wiley & Sons, Ltd.

System Dynamics Review Vol. 22, No. 3 (Fall 2006): 213–239. Published online in Wiley InterScience (www.interscience.wiley.com) DOI: 10.1002/sdr.338
Introduction
On January 28, 1986 seven crew members died when the space shuttle Challenger exploded just over a minute after take-off. The Report of the Presidential Commission on the Space Shuttle Challenger Accident (1986) concluded that neither NASA nor Thiokol, the seal designer, "responded adequately to internal warnings about the faulty seal design. . . . A well structured and managed system emphasizing safety would have flagged the rising doubts about the Solid Rocket Booster joint seal."
On May 9, 1992 an explosion in the Westray mine at Plymouth, Nova Scotia, killed 26 miners. There were many incidents leading up to the disaster that could have claimed lives but instead ended up as production losses or "near-misses." Because of the many warning signs, Richard (1996) called Westray a "predictable path to disaster."
In May 1996, ValuJet Flight 592 exploded and crashed into a Florida swamp, killing all 110 people on board. Langewiesche (1998) reports that by early 1996 the U.S. Federal Aviation Administration was concerned "about the disproportionate number of infractions committed by ValuJet and the string of small bang-ups it had had."
David L. Cooke is Adjunct Assistant Professor of Operations Management in the Haskayne School of Business and in the Department of Community Health Sciences at the University of Calgary. He has a PhD and MBA in Operations Management from the University of Calgary and a BSc in Chemical Engineering from the University of Birmingham, England. Prior to his doctorate, Dr. Cooke enjoyed a long career in the chemical industry in the areas of process engineering, business and product development, technical services management, and safety management. He was formerly the Director of Safety and Emergency Planning with NOVA Chemicals Corporation. His research interests include safety management, system dynamics, and OM/OR applications in health care.