Episode 45 of DisasterCast tries something a little different. This episode has three different stories of exactly the same accident – the deaths of 13 firefighters in the Rocky Mountains.
This episode is about single points of failure, common cause failures, and the Hinsdale Central Office Fire.
In this episode I discuss some of the people who’ve helped shape my own thinking on safety research. The second part of the episode is all about mixing things together to create explosions. There is a hydrogen/oxygen explosion, a water/molten iron detonation, and an AZDL / Persulphate blast.
Subscribe via www.patreon.com/disastercast to receive early access to episodes, along with the associated accident reports.
This episode covers an Iranian military transport downed by lightning, the Milford Haven Texaco Refinery explosion, and the dangers of blasphemy on a golf course. Lightning alone is seldom enough to cause a major disaster, but it creates a system disturbance that puts resilience to the test. This episode also asks why there are so many different names for safety practitioners, and yet again plugs the Graduate Certificate in Safety Leadership from Griffith University.
You can now support DisasterCast by subscribing on Patreon. A $1 (or your local currency equivalent) donation per episode is easy to set up, and will help the show continue. Subscribing will also give you access to bonus resources for each episode.
Unfortunately due to my inter-continental move, the DisasterCast Episode for this week is not ready.
I’ll try to make it a late episode rather than a totally skipped episode, but realistically it won’t be out until the
One of the weird things about safety is that we spend so much effort on safety analysis during design, even though almost all accidents happen after design is complete. One explanation is that building safety into the design is inherently more effective than adding it later. A more cynical thought is that we treat building things as “real” engineering, and looking after them afterwards as a lesser job. Either way, it’s a genuine problem that for most systems the effort put into making them safe is concentrated at the point of commissioning, while the risk is spread across the whole life of the system. The major exceptions are big structural projects – skyscrapers, dams, tunnels and bridges. These are most dangerous whilst they are still being built, and here the problem can sometimes run in the reverse direction: we put a lot of attention into making sure the finished design is safe, but sometimes forget about the intermediate steps. A bridge, tunnel or building that is structurally sound when complete can still be quite dangerous to build.
Sean Ellis visits DisasterCast this episode to provide a detailed discussion of TWA 800 and the associated conspiracy theories about US armed forces being responsible for the accident. We also discuss a couple of real accidents involving missiles and airliners, Iran Air 655 and Korean Air Lines 007.
DisasterCast has covered some pretty weird topics. We’ve dealt with pilot defenestration, spontaneous human combustion, and exploding death stars. I don’t think we’ve ever described an accident quite as strange as the 15-foot wall of molasses that destroyed part of Boston in 1919.
This episode was recorded in the Safety Science Innovation Lab, and comes filled with thoughts about how we tell stories about safety. Do we even have theories of safety, or just meta-narratives – patterns of storytelling? What’s the difference between a story and empirical evidence? Of course, it wouldn’t be DisasterCast if I didn’t include a story as well. Two airliners, heading straight for each other in the skies above Yugoslavia. How did it happen, and, more importantly, how can we stop it happening again?
The safety course mentioned at the start of the show is:
Graduate Certificate in Safety Leadership
When I claim that the chance of my front-lawn rocket exploding is “ten to the minus six”, just what does that mean? Does it mean the same thing to me as it does to you? Does it mean anything at all? How can I misuse scope, timeframes, exposure and units to make something obviously dangerous meet numeric safety targets? With quantitative risk assessment I can mislead others, but am I in danger of misleading myself as well?
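To see how much the choice of exposure and timeframe matters, here is a minimal sketch of the arithmetic. All the numbers (failure rate, fleet size, operating hours) are invented for illustration, not taken from any episode or standard:

```python
# Hypothetical illustration: the same "ten to the minus six" claim means
# very different things depending on the unit of exposure it is attached to.
# All figures below are made up for the sake of the example.

P_PER_HOUR = 1e-6  # claimed probability of an explosion per operating hour

def prob_at_least_one_failure(p_per_unit, units):
    """Probability of at least one failure across `units` independent exposures."""
    return 1 - (1 - p_per_unit) ** units

# A single two-hour flight looks comfortably "safe":
per_flight = prob_at_least_one_failure(P_PER_HOUR, 2)

# A fleet of 500 rockets, each operating 1000 hours over its lifetime,
# tells a very different story with the same per-hour number:
fleet_lifetime = prob_at_least_one_failure(P_PER_HOUR, 500 * 1000)

print(f"per flight:     {per_flight:.2e}")    # about 2 in a million
print(f"fleet lifetime: {fleet_lifetime:.1%}")  # roughly a one-in-three chance
```

The per-hour figure hasn’t changed between the two calculations; only the scope and timeframe have. That is exactly the room for manoeuvre the episode is talking about.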
Most content in this episode is based on my own publications. You can access copies at ResearchGate.