Episode 20 – An Unexpected Risk Assessment

There is a fine line between confidence and stupidity. In the 1970s the London Ambulance Service tried to implement a computer aided despatch system, and failed because they couldn’t get the system’s users to support the change. In the late 1980s they tried again, but the system couldn’t cope with the expected load.

Clearly, implementing a system of this sort involved significant managerial and technical challenges. What better way to handle it, then, than to appoint a skeleton management team and saddle them with an impossible delivery timetable?

The London Ambulance Service Computer Aided Despatch System and Management Aided Disaster is described in this episode by George Despotou. George also talks about the safety challenges of connected health.

The Episode 20 transcript is here.

References

  1. ZDNet news item about the 999 system outage
  2. London Ambulance Service press release
  3. Anthony Finkelstein's LASCAD page, with an academic paper, the full report and case study notes
  4. University of Kent LASCAD case study notes [pdf]
  5. The Caldicott Report mentioned in George's connected health piece
  6. The Register news article mentioned in George's piece
  7. BBC News article on hacking heart pumps
  8. George's Dependable Systems blog


Episode 19 – Star Trek Transporters and Through Life Safety

Have you ever noticed that very few people get hurt during the design of a system? From precarious assemble-at-home microlight aircraft to the world’s most awesome super-weapons, the hazards that can actually occur at design time are those of a typical office environment – power sockets, trips, falls and repetitive strain injury. Our safety effort during this time is all predictive. We don’t usually call it prediction, but that’s what modelling, analysis, and engineering judgement ultimately are. We’re trying to anticipate, imagine and control a future world.

And even though it’s easy to be cynical about the competence and diligence of the people in charge of dangerous systems, I really don’t think that there are evil masterminds out there authorising systems in the genuine belief that they are NOT safe. At the time a plant is commissioned or a product is released, there is a mountain of argument and evidence supporting the belief of the designers, the testers, the customers and the regulators that the system is safe. Why, then, do accidents happen?

That’s what this episode is about. We’ll look at some of the possible reasons and how to manage them, then discuss an accident: the disaster that befell Alaska Airlines Flight 261. Just in case you’ve got a flight to catch afterwards, we’ll reset our personal risk meters by discussing an alternative way to travel: the transporters and teleportation devices from Star Trek and similar sci-fi franchises.

The Episode 19 transcript is available here.

References

  1. Memory Alpha (Star Trek wiki) article on transporters
  2. NTSB report on the Alaska Airlines Flight 261 crash
