One of the weird things about safety is that we spend so much effort on safety analysis during design, even though almost all accidents happen after design is complete. One explanation is that building safety into the design is inherently more effective than fixing problems later. A more cynical thought is that we regard building things as “real” engineering, but looking after them afterwards as a lesser job. Either way, it’s a genuine problem that for most systems, disproportionate effort goes into making them safe at the point of commissioning, given where the risks actually arise over the life of the system. The major exceptions are big structural projects – skyscrapers, dams, tunnels and bridges. These are most dangerous whilst they are still being built. Here the problem can sometimes run in the reverse direction: we put a lot of attention into making sure the finished design is safe, but sometimes forget about the intermediate steps. A bridge, tunnel or building that is structurally sound when complete can still be quite dangerous to build.
Sean Ellis visits DisasterCast this episode to provide a detailed discussion of TWA 800 and the associated conspiracy theories about US armed forces being responsible for the accident. We also discuss a couple of real accidents involving missiles and airliners, Iran Air 655 and Korean Air Lines 007.
DisasterCast has covered some pretty weird topics. We’ve dealt with pilot defenestration, spontaneous human combustion, and exploding death stars. I don’t think we’ve ever described an accident quite as strange as the 15-foot wall of molasses that destroyed part of Boston in 1919.
This episode was recorded in the Safety Science Innovation Lab, and comes filled with thoughts about how we tell stories about safety. Do we even have theories of safety, or just meta-narratives – patterns of storytelling? What’s the difference between a story and empirical evidence? Of course, it wouldn’t be DisasterCast if I didn’t include a story as well. Two airliners, heading straight for each other in the skies above Yugoslavia. How did it happen, and, more importantly, how can we stop it happening again?
The safety course mentioned at the start of the show is:
Graduate Certificate in Safety Leadership
When I claim that the chance of my front-lawn rocket exploding is “ten to the minus six”, just what does that mean? Does it mean the same thing to me as it does to you? Does it mean anything at all? How can I misuse scope, timeframes, exposure and units to make something obviously dangerous meet numeric safety targets? With quantitative risk assessment I can mislead others, but am I in danger of misleading myself as well?
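To see how the same number can tell very different stories, here is a minimal sketch with entirely made-up figures (the rates, fleet size and utilisation are illustrative assumptions, not from any real assessment). The point is that a rate quoted “per hour” can look reassuring while the same rate, accumulated over a realistic exposure, predicts an accident:

```python
# Illustrative only: how changing the units and exposure scope
# reframes the same underlying hazard rate.

per_hour = 1e-6            # claimed probability of failure per operating hour

hours_per_mission = 10     # assumed mission length
per_mission = per_hour * hours_per_mission   # roughly 1e-5 per mission

fleet_size = 100           # assumed number of systems in service
service_years = 20         # assumed service life
hours_per_year = 500       # assumed utilisation per system per year

# Expected number of failures across the whole fleet's lifetime:
# 1e-6 * 100 * 20 * 500 operating hours ≈ 1 accident expected.
fleet_lifetime_expectation = per_hour * fleet_size * service_years * hours_per_year

print(per_mission)
print(fleet_lifetime_expectation)
```

The same “ten to the minus six” system, quoted per hour, sounds safe; quoted over a fleet lifetime, it predicts an accident. Neither framing is wrong, which is exactly why the choice of scope and units matters.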
Most content in this episode is based on my own publications. You can access copies at ResearchGate.
This episode features the BP Texas City Refinery explosion of 2005. Unlike most accidents featured on the show, it is a story of management fully aware of danger as a situation tumbled towards disaster. Knowing you have a problem may be an important part of fixing it – but only part.
What is independence? Why does it matter for safety? Why can’t we have perfect independence, and why wouldn’t we want it even if we could have it? Are there times independence is an actively bad thing? And what happens when independence is vital, but just isn’t in place … ?
This episode is about a clash of principles I call the “Question of Final Authority”.
The question is: “In a given situation, should automation be designed to prevent system states which the designers judge to be dangerous, or should the interface provide the facility for the operator to execute any control at any time?”
The dilemma regarding whether to provide hard interlocks or allow overrides can be found in many industries:
For road transport: should speed limits be automatically enforced, or should drivers have the ultimate control?
For military engines: should thermal limits be allowed to be temporarily exceeded through the use of “battle shorts” in emergency or combat situations?
For aircraft: should “alpha” or “flight envelope” protection be strictly enforced, or permitted to be exceeded at the judgement of the pilot?
For railways: should signal interlockings be overridden (or signals permitted to be disobeyed) in order to move trains out of dangerous situations?
For smart infusion pumps, which provide limits on medication doses: should these limits be soft, where they can be overridden by doctors, or hard, where they can never be overridden?
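The infusion pump case can be sketched in a few lines of code. This is a hypothetical illustration (the function name, limits and units are invented, not any real pump’s software), but it shows the structural difference between the two design choices: a hard limit that no operator action can exceed, and a soft limit that blocks unless explicitly confirmed.

```python
# Hypothetical sketch of hard vs soft dose limits on a smart infusion pump.
HARD_MAX = 50.0   # assumed absolute ceiling (mL/h) - never deliverable
SOFT_MAX = 20.0   # assumed warning threshold - deliverable with confirmation

def set_dose_rate(requested, override_confirmed=False):
    """Return the rate the pump will deliver, or raise if the request is refused."""
    if requested > HARD_MAX:
        # Hard interlock: the designers' judgement is final.
        raise ValueError("dose exceeds hard limit; cannot be overridden")
    if requested > SOFT_MAX and not override_confirmed:
        # Soft limit: the clinician's judgement is final, but only deliberately.
        raise ValueError("dose exceeds soft limit; confirmation required")
    return requested

set_dose_rate(10.0)                           # within both limits: accepted
set_dose_rate(30.0, override_confirmed=True)  # soft limit deliberately overridden
```

The design question in every industry above is really about where the boundary between `SOFT_MAX` and `HARD_MAX` sits, and whether a `HARD_MAX` should exist at all.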
This episode discusses measurement of safety and the Imperial Sugar disaster.
Measurement is the foundation of both research and business improvement. If we can’t compare two companies, or our own company at two points in time, how can we know whether our safety management is working? How can we know if our safety management is even likely to work? For major accident hazards, there simply aren’t enough data points to measure the effect of individual safety improvements. We can work backward in time to create more data, but that then makes the same data less relevant to current practice. Once we’ve twisted your brain enough with the various methods of safety measurement, we’ll relax by talking about a series of deadly explosions at a sugar factory.
In the 1970s and 1980s there was a series of accidents which triggered a really intensive examination of organisational safety. Neither the idea nor the reality of management failure was new in safety research; what was special about these accidents is that they all occurred in industries that had strong safety regulation in place. Previously you could just observe that an accident happened because of a lack of safety management. Suddenly that wasn’t enough. There was plenty of safety management going on; it just wasn’t working. More sophisticated explanations were needed.
This episode mentions the new Graduate Certificate in Safety Leadership at Griffith University. If you’re in Australia, check it out. If you do apply, be sure to mention that you heard about it on the podcast (I don’t get recruitment fees or anything like that, it’s just good to know).