This episode covers an Iranian military transport downed by lightning, the Milford Haven Texaco Refinery explosion, and the dangers of blasphemy on a golf course. Lightning alone is seldom enough to cause a major disaster, but it creates a system disturbance that puts resilience to the test. This episode also asks why there are so many different names for safety practitioners, and yet again plugs the Graduate Certificate in Safety Leadership from Griffith University.
You can now support DisasterCast by subscribing on Patreon. A $1 (or your local currency equivalent) donation per episode is easy to set up, and will help the show continue. Subscribing will also give you access to bonus resources for each episode.
Unfortunately due to my inter-continental move, the DisasterCast Episode for this week is not ready.
I’ll try to make it a late episode rather than a totally skipped episode, but realistically it won’t be out until the
One of the weird things about safety is that we spend so much effort on safety analysis during design, despite the fact that almost all accidents happen after design is completed. One explanation is that addressing problems by building safety into the design is inherently more effective. A more cynical thought might be that we think of building things as “real” engineering, but looking after them afterwards as a lesser job. In any case it’s a genuine problem that for most systems, effort is disproportionately concentrated on making them safe at the point of commissioning, given that the risks arise throughout the life of the system. The major exceptions are big structural projects – skyscrapers, dams, tunnels and bridges. These are most dangerous whilst they are still being built. Here the problem can sometimes go in the reverse direction. We put a lot of attention into making sure the finished design is safe, but sometimes forget about the intermediate steps. A bridge, tunnel or building that is structurally sound when complete can still be quite dangerous to build.
Sean Ellis visits DisasterCast this episode to provide a detailed discussion of TWA 800 and the associated conspiracy theories about US armed forces being responsible for the accident. We also discuss a couple of real accidents involving missiles and airliners, Iran Air 655 and Korean Air Lines 007.
DisasterCast has covered some pretty weird topics. We’ve dealt with pilot defenestration, spontaneous human combustion, and exploding death stars. I don’t think we’ve ever described an accident quite as strange as the 15-foot wall of molasses that destroyed part of Boston in 1919.
This episode was recorded in the Safety Science Innovation Lab, and comes filled with thoughts about how we tell stories about safety. Do we even have theories of safety, or just meta-narratives – patterns of storytelling? What’s the difference between a story and empirical evidence? Of course, it wouldn’t be DisasterCast if I didn’t include a story as well. Two airliners, heading straight for each other in the skies above Yugoslavia. How did it happen, and, more importantly, how can we stop it happening again?
When I claim that the chance of my front-lawn rocket exploding is “ten to the minus six”, just what does that mean? Does it mean the same thing to me as it does to you? Does it mean anything at all? How can I misuse scope, timeframes, exposure and units to make something obviously dangerous meet numeric safety targets? With quantitative risk assessment I can mislead others, but am I in danger of misleading myself as well?
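To make the scope-and-units trick concrete, here is a minimal sketch. All the rates, mission lengths and fleet sizes are invented for illustration; the point is only that the same underlying hazard can “meet” or “miss” a ten-to-the-minus-six target depending on the denominator you quote it against.

```python
# Hypothetical illustration of how scope and units change apparent risk.
# All numbers are invented for the example, not from any real assessment.

TARGET = 1e-6                 # a typical numeric safety target

failures_per_hour = 1e-7      # claimed failure rate per operating hour

# Quoted per hour, the system comfortably beats the target...
per_hour = failures_per_hour

# ...and per 5-hour mission it still squeaks in under the line...
per_mission = failures_per_hour * 5

# ...but summed over a fleet of 100 vehicles, each operating
# 1000 hours per year for a 30-year life, the same rate gives an
# appreciable number of expected accidents.
fleet_lifetime_accidents = failures_per_hour * 100 * 1000 * 30

print(per_hour < TARGET)                # meets the target per hour
print(per_mission < TARGET)             # still "meets" it per mission
print(fleet_lifetime_accidents)         # expected accidents, whole fleet life
```

Nothing in the arithmetic is wrong; the misleading part is choosing which of the three numbers to report.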
Most content in this episode is based on my own publications. You can access copies at ResearchGate.
This episode features the BP Texas City Refinery explosion of 2005. Unlike most accidents featured on the show, it is a story of management fully aware of danger as a situation tumbled towards disaster. Knowing you have a problem may be an important part of fixing it – but only part.
What is independence? Why does it matter for safety? Why can’t we have perfect independence, and why wouldn’t we want it even if we could have it? Are there times independence is an actively bad thing? And what happens when independence is vital, but just isn’t in place … ?
This episode is about a clash of principles I call the “Question of Final Authority”.
The question is: “In a given situation, should automation be designed to prevent system states which the designers judge to be dangerous, or should the interface provide the facility for the operator to execute any control at any time?”
The dilemma regarding whether to provide hard interlocks or allow overrides can be found in many industries:
For road transport: should speed limits be automatically enforced, or should drivers have the ultimate control?
For military engines: should thermal limits be allowed to be temporarily exceeded through the use of “battle shorts” in emergency or combat situations?
For aircraft: should “alpha” or “flight envelope” protection be strictly enforced, or permitted to be exceeded at the judgement of the pilot?
For railways: should signal interlockings be overridden (or signals permitted to be disobeyed) in order to move trains out of dangerous situations?
For smart infusion pumps, which provide limits on medication doses: should these limits be soft, where they can be overridden by doctors, or hard, where they can never be overridden?
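The infusion pump case makes the soft-versus-hard distinction easy to sketch in code. This is a minimal hypothetical model, not any real pump’s logic; the limit values and function names are invented for illustration.

```python
# Hypothetical sketch of soft vs hard dose limits on a "smart" infusion pump.
# Limit values and names are invented for illustration only.

SOFT_LIMIT_ML_PER_HR = 100   # clinician may override with explicit confirmation
HARD_LIMIT_ML_PER_HR = 250   # pump refuses outright; no override exists

def check_dose(rate_ml_per_hr, clinician_override=False):
    """Return (allowed, reason) for a requested infusion rate."""
    if rate_ml_per_hr > HARD_LIMIT_ML_PER_HR:
        # Hard limit: the designers retain final authority.
        return False, "above hard limit: refused, cannot be overridden"
    if rate_ml_per_hr > SOFT_LIMIT_ML_PER_HR:
        # Soft limit: the operator retains final authority,
        # but must explicitly take it.
        if clinician_override:
            return True, "above soft limit: accepted with override"
        return False, "above soft limit: override required"
    return True, "within limits"
```

The Question of Final Authority lives in the two branches: everything between the soft and hard limits is territory the operator can reclaim, and everything above the hard limit is territory the designers have kept for themselves.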