DisasterCast has covered some pretty weird topics. We’ve dealt with pilot defenestration, spontaneous human combustion, and exploding death stars. I don’t think we’ve ever described an accident quite as strange as the 15-foot wall of molasses that destroyed part of Boston in 1919.
This episode was recorded in the Safety Science Innovation Lab, and comes filled with thoughts about how we tell stories about safety. Do we even have theories of safety, or just meta-narratives – patterns of storytelling? What’s the difference between a story and empirical evidence? Of course, it wouldn’t be DisasterCast if I didn’t include a story as well. Two airliners, heading straight for each other in the skies above Yugoslavia. How did it happen, and, more importantly, how can we stop it happening again?
The safety course mentioned at the start of the show is:
Graduate Certificate in Safety Leadership
When I claim that the chance of my front-lawn rocket exploding is “ten to the minus six”, just what does that mean? Does it mean the same thing to me as it does to you? Does it mean anything at all? How can I misuse scope, timeframes, exposure and units to make something obviously dangerous meet numeric safety targets? With quantitative risk assessment I can mislead others, but am I in danger of misleading myself as well?
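The scope-and-units game is easy to demonstrate with a quick calculation. The sketch below (illustrative numbers only; the failure rate, target, and system life are all assumptions, not figures from the episode) shows how the very same failure rate can appear to meet a “ten to the minus six” target when quoted per hour, while implying a far larger probability of failure over the system’s whole life:

```python
import math

# Assumed figures for illustration only.
failure_rate_per_hour = 1e-7   # quoted rate: "meets" a 1e-6 per-hour target
hours_per_year = 8760
life_years = 20

# Probability of at least one failure over the full service life,
# assuming a constant failure rate (exponential model):
exposure_hours = hours_per_year * life_years
p_failure_over_life = 1 - math.exp(-failure_rate_per_hour * exposure_hours)

print(f"Per-hour figure:      {failure_rate_per_hour:.0e}")
print(f"Whole-of-life figure: {p_failure_over_life:.3f}")  # roughly a 1-in-60 chance
```

Same system, same arithmetic; only the scope changed. Whether “ten to the minus six” sounds safe depends entirely on the exposure it is attached to.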
Most content in this episode is based on my own publications. You can access copies at ResearchGate.
This episode features the BP Texas City Refinery explosion of 2005. Unlike most accidents featured on the show, it is a story of management fully aware of danger as a situation tumbled towards disaster. Knowing you have a problem may be an important part of fixing it – but only part.
What is independence? Why does it matter for safety? Why can’t we have perfect independence, and why wouldn’t we want it even if we could have it? Are there times independence is an actively bad thing? And what happens when independence is vital, but just isn’t in place … ?
This episode is about a clash of principles I call the “Question of Final Authority”.
The question is: “In a given situation, should automation be designed to prevent system states which the designers judge to be dangerous, or should the interface provide a facility for the operator to execute any control at any time?”
The dilemma regarding whether to provide hard interlocks or allow overrides can be found in many industries:
For road transport: should speed limits be automatically enforced, or should drivers have the ultimate control?
For military engines: should thermal limits be allowed to be temporarily exceeded through the use of “battle shorts” in emergency or combat situations?
For aircraft: should “alpha” or “flight envelope” protection be strictly enforced, or permitted to be exceeded at the judgement of the pilot?
For railways: should signal interlockings be overridden (or signals permitted to be disobeyed) in order to move trains out of dangerous situations?
For smart infusion pumps, which provide limits on medication doses: should these limits be soft, where they can be overridden by doctors, or hard, where they can never be overridden?
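The infusion-pump case makes the two design positions concrete. Here is a minimal sketch of the distinction (the names `DoseLimits` and `check_dose` and all the numbers are hypothetical, purely for illustration): a soft limit has an override path, a hard limit does not.

```python
from dataclasses import dataclass

@dataclass
class DoseLimits:
    soft_max: float  # soft limit: a clinician may override with explicit confirmation
    hard_max: float  # hard limit: the pump will never deliver above this, full stop

def check_dose(dose: float, limits: DoseLimits, override_confirmed: bool = False) -> bool:
    """Return True if the pump should deliver the requested dose."""
    if dose > limits.hard_max:
        return False               # hard interlock: no override path exists
    if dose > limits.soft_max:
        return override_confirmed  # soft limit: deliverable only on explicit override
    return True                    # within normal limits

limits = DoseLimits(soft_max=10.0, hard_max=50.0)
print(check_dose(20.0, limits))                          # soft limit blocks by default
print(check_dose(20.0, limits, override_confirmed=True)) # but the clinician can override
print(check_dose(60.0, limits, override_confirmed=True)) # hard limit has final authority
```

Every industry in the list above is choosing, for each limit, which branch of this function it wants: whether the designer or the operator holds final authority.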
This episode discusses measurement of safety and the Imperial Sugar disaster.
Measurement is the foundation of both research and business improvement. If we can’t compare two companies, or our own company at two points in time, how can we know whether our safety management is working? How can we know if our safety management is even likely to work? For major accident hazards, there simply aren’t enough data points to measure the effect of individual safety improvements. We can work backward in time to create more data, but that then makes that same data less relevant.
Once we’ve twisted your brain enough with the various methods of safety measurement, we’ll relax by talking about a series of deadly explosions at a sugar factory.
In the 1970s and 1980s there was a series of accidents which triggered a really intensive examination of organisational safety. Both the idea and reality of management failure weren’t new in safety research; what was special about each of these accidents is that they all occurred in industries that had strong safety regulation in place. Previously you could just observe that the accident happened because of a lack of safety management. Suddenly that wasn’t enough. There was plenty of safety management going on, it just wasn’t working. More sophisticated explanations were needed.
This episode mentions the new Graduate Certificate in Safety Leadership at Griffith University. If you’re in Australia, check it out. If you do apply, be sure to mention that you heard about it on the podcast (I don’t get recruitment fees or anything like that, it’s just good to know).
This episode is about attempts to make things safer that actually make things worse. The episode focusses on the work of two specific authors, Edward Tenner (Why Things Bite Back: Technology and the Revenge of Unintended Consequences) and Lisanne Bainbridge (The Ironies of Automation). There are examples throughout the episode, but the main case studies are China Air 006 and the New Orleans Hurricane Protection System.
We’re up to 30 episodes of DisasterCast, and we still haven’t talked about the Titanic. Why start now?
This episode talks around the Titanic. We talk about icebergs, lifeboats, shipwrecks and radios, but not the sinking of the unsinkable.
The next episode will be about dangerous safety features – ways that people can be, or have been, hurt by systems specially designed to keep them safe. If you have any suggestions, post a comment to this episode, or use the feedback link above.