September 19th, 2024

Drift towards danger and the normalization of deviance (2017)

High-hazard activities often face safety issues due to incomplete procedures, leading to accepted unsafe practices. This normalization of deviance can result in catastrophic failures, highlighting the need for comprehensive safety management.

High-hazard activities depend on established rules and procedures to ensure safety, but these guidelines are often incomplete, leading to deviations by frontline workers. This gap between "work-as-imagined" and "work-as-done" is recognized in human factors research, particularly by French ergonomists who studied the differences between prescribed and actual work. Over time, these deviations can lead to a normalization of deviance, where unsafe practices become accepted due to their repeated use without immediate negative consequences. Jens Rasmussen's concept of "drift to danger" describes how organizational behavior can shift towards riskier practices under pressures for cost-effectiveness and efficiency. This gradual process often goes unnoticed until an accident occurs, as safety boundaries are not clearly defined and can change over time. The phenomenon is not driven by malicious intent but is a natural outcome of adaptive behaviors within complex systems. Historical accidents, such as the Challenger and Columbia space shuttle disasters, exemplify how normalization of deviance can lead to catastrophic failures. These incidents highlight the importance of understanding the systemic factors that contribute to safety lapses, emphasizing that safety management must consider both proactive and reactive measures across all levels of an organization.

- High-hazard activities rely on incomplete rules and procedures, leading to deviations in practice.

- Normalization of deviance occurs when unsafe practices become accepted over time.

- "Drift to danger" describes the gradual shift towards riskier behaviors due to organizational pressures.

- Safety boundaries are often fuzzy and can change, complicating risk management.

- Historical accidents illustrate the systemic nature of safety failures and the need for comprehensive safety management.

AI: What people are saying
The comments reflect a deep concern about the normalization of deviance in various contexts, emphasizing its implications for safety and decision-making.
  • Personal experiences highlight how individuals can unconsciously adopt unsafe practices over time.
  • References to historical examples and literature illustrate the widespread nature of this phenomenon across different fields.
  • Concerns are raised about the lack of practical guidance on combating normalization of deviance.
  • Connections are made between normalization of deviance and high-stakes situations, such as foreign policy and organizational culture.
  • Discussion includes the idea that normalization of deviance can lead to catastrophic outcomes, drawing parallels to various domains beyond physical safety.
22 comments
By @delichon - 7 months
I was using an angle grinder to strip paint off of a large table top and necessarily had to remove the guard. This is one of the most dangerous tools I have and I'm very aware of it, so I carefully gripped it toward the end of the handle well away from the disk. For the first few minutes I was very conscious about the position of my grip. Then a half hour later I glanced down and saw my top finger about a centimeter away from the spinning disk. I had gradually choked my grip up in order to get better leverage. Another few seconds and it could have become a more memorable incident. I normalized my deviance just by failing to notice it for a few minutes.

I see the same forces working on my software practices. For instance I start with wide and thorough testing coverage and over time reduce it to the places I usually see problems and ignore the rest. Sometimes production can be nearly maimed before I notice and adjust my grip.
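
A minimal sketch of one countermeasure, assuming a Python test suite that already reports a total coverage percentage (the file name, tolerance, and workflow below are hypothetical, not from the article): a coverage ratchet that records the best coverage seen so far and fails the build when the number quietly slips.

```python
#!/usr/bin/env python3
"""Coverage ratchet sketch: fail CI when test coverage drifts below the best
level recorded so far, so erosion has to be acknowledged explicitly instead
of being quietly normalized."""
import json
import sys
from pathlib import Path

FLOOR_FILE = Path("coverage_floor.json")  # hypothetical file, committed to the repo


def check(current_pct: float, tolerance: float = 0.1) -> int:
    """Return an exit code: 0 if coverage held or improved, 1 if it drifted down."""
    if FLOOR_FILE.exists():
        floor = json.loads(FLOOR_FILE.read_text())["percent"]
    else:
        floor = current_pct  # first run establishes the baseline

    if current_pct + tolerance < floor:
        print(f"Coverage drifted: {current_pct:.1f}% is below the recorded floor of {floor:.1f}%")
        return 1

    # Ratchet upward: improvements become the new normal.
    new_floor = max(floor, current_pct)
    FLOOR_FILE.write_text(json.dumps({"percent": new_floor}))
    print(f"Coverage OK: {current_pct:.1f}% (floor now {new_floor:.1f}%)")
    return 0


if __name__ == "__main__":
    # Usage in CI, after the test run produces a total percentage:
    #   python coverage_ratchet.py 87.4
    sys.exit(check(float(sys.argv[1])))
```

Making the drift fail loudly turns "I'll tighten it back up later" into a decision that has to be made explicitly rather than one that happens by not noticing.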

By @jonah - 7 months
Related: Overton Window

"The Overton window is the range of policies politically acceptable to the mainstream population at a given time. It is also known as the window of discourse.

The term is named after the American policy analyst Joseph Overton, who proposed that an idea's political viability depends mainly on whether it falls within this range, rather than on politicians' individual preferences. According to Overton, the window frames the range of policies that a politician can recommend without appearing too extreme to gain or keep public office given the climate of public opinion at that time."

While originally about politics, I feel it can be applied to many other aspects of humanity and maybe is just a specialized form of the normalization of deviance.

https://en.m.wikipedia.org/wiki/Overton_window

By @woopsn - 7 months
The referenced "researcher/guru" Sidney Dekker wrote a whole book titled Drift Into Failure. "Accidents come from relationships, not broken parts."

"Safety may not at all be the result of decisions that were or were not made, but rather an underlying stochastic variation that hinges on a host of other factors, many not easily within the control of those who engage in fine-tuning processes. Empirical success, in other words, is no proof of safety. Past success does not guarantee future safety. Murphy's law is wrong: everything that can go wrong usually goes right, and then we draw the wrong conclusion."

"Why, in hindsight, do all all these other parts (in the regulations, the manufacturer, the airline, the maintenance facility, the technician, the pilots) appear suddenly "broken" now? How is it that a maintenance program which, in concert with other programs like it never revealed any fatigue failures or fatigue damage after 95 million flight hours, suddenly became "deficient"? Why did none of these deficiencies strike anybody as deficiencies at the time?"

The central idea is not to (stop at) discovering what mistakes were made, but to understand why they didn't seem like mistakes to the individuals making them, and what suppressed the influence of anyone who might have warned otherwise.

By @082349872349872 - 7 months
I don't know if the American Alpine Journal still reads like this, but I once went through a pile of 1960s or 70s back issues, and at the time it seemed a fairly regular article genre was:

"First we were at an altitude where we probably weren't thinking all that sharply to begin with, and then we got tired, cold, and hungry, and that's when we made the stupid mistake that killed ${COLLEAGUE}."

By @einpoklum - 7 months
Brief excerpt re the second term:

A detailed analysis of the organizational culture at NASA, undertaken by sociologist Diane Vaughan after the [Challenger shuttle destruction] accident, showed that people within NASA became so much accustomed to an unplanned behaviour that they didn’t consider it as deviant, despite the fact that they far exceeded their own basic safety rules. This is the primary case study for Vaughan’s development of the concept of normalization of deviance.

By @roenxi - 7 months
I worry a lot about the similar forces that act on foreign policy and diplomacy. Unfortunately, people don't get more cautious as the stakes get higher; organisations at all stakes and scales tend to fail in the same way.
By @Verdex - 7 months
For a low dimensional space, I think their diagrams make sense. Like, when working with large industrial machines, factors that affect safety are probably how close you are to the machine and how fast everything is going and with what urgency.

Even here they have a section on how the safety performance boundary is fuzzy and dynamic.

I wonder though what things look like with super high dimensions. When there are a hundred different things that go into whether or not you're being safe. That boundary's fuzzy and dynamic nature might extend clear across the entire space. And the fact that failures happen due to rare occurrences suggests that we're not starting at a point of safety but actually starting in a danger zone whose failures we've just been lucky enough not to encounter yet.

100% unit test coverage comes to mind (even for simple getters). Where some might see a slide towards danger as the coverage goes down, another sees more time to verify the properties that really matter. And I don't see why we can't get into the scenario where both are right and wrong in incomparable ways.

By @yamrzou - 7 months
By @svaha1728 - 7 months
Boeing is far from an anomaly. They’ve just reached the stage where it’s noticeable.
By @Spivak - 7 months
I do wonder if the graph at the end is skewed with specifically the phrase "normalization of deviance" because it's searching all of Google books in aggregate and that phrase found a second home to describe lgbt acceptance among conservative political writers. It's not an incorrect usage per se if you assume their premise but it probably doesn't line up with discussions around workplace safety.
By @Eric_WVGG - 7 months
This reminds me of this odd sticker I once found that read "Safety Third".

I thought it was pretty hilarious, and eventually learned that it was part of this odd movement about fifteen years ago regarding the same thing as this article. More here: https://mikerowe.com/2022/03/the-origin-of-safety-third/

I put the sticker on my laptop, and once got confronted by a confused and possibly angry worksite manager who saw it in a cafe and demanded an explanation. I'll never forget how he took the slogan as some kind of personal affront.

By @dang - 7 months
Discussed (just a bit) here:

Practical Drift Towards Failure - https://news.ycombinator.com/item?id=21406452 - Oct 2019 (1 comment)

By @akavel - 7 months
Ok, but apart from just noticing it, how can I/we combat the normalization of deviance?

I don't see practical guidance on how to do it in the article? Do I just sit down and throw my arms in the air, and complain "oh, how things are going in a bad way"?

By @lanstin - 7 months
The article has the line:

> (in particular if they are encouraged by a “cheaper, faster, better” organizational goal)

This struck me; I have never remotely worked for a place that seriously believed "you get what you pay for." I wonder what that would be like.

By @travisjungroth - 7 months
I seriously read the title in the imperative and thought it was going to be some contrarian inspirational essay.
By @derbOac - 7 months
The discussion in this essay applies to so many organizational domains if you stretch definitions just a bit.
By @mzmzmzm - 7 months
This is a compelling framework. While the author mostly applies it to examples of physically hazardous accidents, it could just as easily describe the lead-up to economic crashes or other less tangible disasters.
By @torginus - 7 months
This diagram looks weird to me; it looks like being lazy counteracts the effects of being cheap, so that being both is less dangerous than just trying to save money or effort alone.
By @evanjrowley - 7 months
This entire website is a gem. At least in my profession, I wish more peers would focus on these things.
By @_wire_ - 7 months
The Chernobyl power facility disaster was caused by running a test to determine how long power could be maintained with turbine run-down to cool the reactor during an accident under blackout conditions.

The reactor control systems are powered by the reactor itself, but this isn't considered a liability, because once started, such a device is not intended to be stopped; shutdowns are large costly affairs intended to occur rarely for refueling. The reaction is regarded as a force of nature like a running river. But the reactor can be operated in high vs. low power modes. Notably, as a system, the device is most hazardous when transitioning between power modes, especially towards low power mode.

It was expected that in certain emergencies, reactor power would be lowered to the point where steam generator turbine inertia works like a battery of reserve power used to cool the reactor, but knowing precisely how well this works required verification. To conduct the test the operators intentionally drove the reactor towards the edge of its low-power operational limits, overriding safety protocols and subsystems to create the preconditions of the experiment. Disaster ensued when the operators, fearing they had lowered power too much and were at the brink of an expensive non-routine shutdown, goosed it, creating a feedback loop into overpower. The operators made a last-ditch attempt to control the crisis using the emergency core shutdown system, a mechanism of last resort, but a poorly handled design edge case caused the shutdown mechanism to create an enormous power surge, which made the core cooling system explode: a 3-gigawatt-thermal core spiked to 30 gigawatts thermal and the lid blew off, so to speak.

The disaster was directly caused by testing of facilities to handle a theoretical emergency, and would have been avoided if the testing was not performed.

But beyond this, the test protocol required driving the machine into a hazardous state, leading to the operators' accidental discovery of a tripwire for a catastrophic failure mode that, although it had been a matter of conjecture in contingency planning, was regarded as so unlikely by planners that the needed retrofitting of the emergency shutdown system was deferred. "Off" is the least-desired operational state of the reactor, so making an expensive effort to address a conjectural hazard of the system's most unlikely mode of operation was not a high priority.

There's a vague parallel between the Chernobyl disaster and the Pan Am/KLM airport disaster at Tenerife, where a constellation of exceptional conditions led to a collision of two fully loaded 747s. The ostensible cause was an off-by-one error by an arriving flight crew member in counting taxiways, bringing his plane into the path of the other during the other's take-off, and the departing crew assuming that a routine but ambiguous figure of speech on the part of control meant clearance to take off, when actually it just meant control's acknowledgment of the departing captain's statement of readiness to proceed.

And Titanic will not be forgotten.

In these disasters, everybody was fully engaged and driving into mayhem with everything running according to plan, but under an unlikely confluence of conditions.

Philosophically, a proper plan depends on equality between the conditions of the plan and the execution of events, but paradoxically there's only one place for true equality in the entire universe: in concept. So all plans are at best provisional. This observation could lead to more wonder about the contours of probability gradients in systems designs.

By @throwaway984393 - 7 months
I find this kind of thing fascinating. In the BDSM rope bondage world there is a lot of ceremony and almost theatrics about safety. But there's actually no real safety, because the participants keep doing things everyone knows is unsafe. The Takate Kote tie is probably responsible for 80% of nerve impingement damage in rope bondage, yet it's wildly popular because people find it pleasing and they keep coming up with new variations on it. Every time you bring up its danger, people like to shout you down like you're over-reacting and they're sick of hearing from you, and then they go give some poor newbie wrist drop.
By @empath75 - 7 months
One place that I see this happening is the Ukraine/Russia conflict, where just because there hasn't been a nuclear exchange yet, people assume that there won't be, and keep pushing the line on acceptable escalation (on both sides -- Russia in starting the war, and the west in defending Ukraine). Now we've got western tanks on the ground in Russia and Ukrainian drones bombing Moscow and who knows what is going to be the triggering event. 75 years of MAD doctrine thrown out the window and now we're deep in uncharted territory.