For anyone who was around on Tuesday, January 28, 1986, it’s difficult to watch a shuttle launch without remembering the Challenger disaster, when the space shuttle disintegrated 73 seconds after launch, killing all seven crew members. While the most commonly referenced explanation for what went wrong focuses on the technological failures associated with the O-rings, an examination of the decision process that led to the launch through a modern-day “behavioral ethics” lens illuminates a much more complicated, and troubling, picture. . . .
On the night before the Challenger was set to launch, a group of NASA engineers and managers met with the shuttle contracting firm Morton Thiokol to discuss the safety of launching the shuttle given the low temperatures that were forecast for the day of the launch. The engineers at Morton Thiokol noted problems with O-rings in 7 of the past 24 shuttle launches and identified a connection between low temperatures and O-ring problems. Based on this data, they recommended to their superiors and to NASA personnel that the shuttle should not be launched. According to Roger Boisjoly, a former Morton Thiokol engineer who participated in the meeting, the engineers’ recommendation was not received favorably by NASA personnel. Morton Thiokol managers, noting NASA’s negative reaction to the recommendation not to launch, asked to meet privately with the engineers. According to Boisjoly, in that private caucus his superiors were focused on pleasing their client, NASA. This focus prompted an underlying default of “launch unless you can prove it is unsafe,” rather than the typical principle of “safety first.” (See Boisjoly’s detailed account of what happened here, from the Online Ethics Center for Engineering.)
The engineers were told that Morton Thiokol needed to make a “management decision.” The four senior managers present at the meeting, against the objections of the engineers, voted to recommend a launch. NASA quickly accepted the recommendation, leading to one of the biggest human and technical failures in recent history. . . .
To those of us who study behavioral ethics, the statement, “We need to make a management decision” is predictably devastating. The way we construe a decision has profound results, with different construals leading to substantially different outcomes. . . . We found that when individuals saw a decision through an ethical frame, more than 94% behaved ethically; when individuals saw the same decision through a business frame, only about 44% did so. Framing the decision as a “management decision” helped ensure that the ethics of the decision—saving lives—were faded from the picture. Just imagine the difference if management had said “We need to make an ethical decision.”
I am a little puzzled by the certainty that the "ethical" decision was not to launch. After all, going into space is risky. Would completely ethical people never take any risks, and thus never leave the ground? We tend to think that putting profits above human life is generally unethical, but what about small risks that would be very expensive to avoid? In this light I find the flaws in the engineers' own work to be even more important:
Despite their best intentions, the engineers were at fault too. Their analysis centered on the relationship between O-ring failures and temperature. Both NASA and Morton Thiokol engineers examined only the seven launches that had O-ring problems. No one asked to see the launch data for the 17 previous launches in which no O-ring failure had occurred. Examining all of the data shows a clear connection between temperature and O-ring failure, with a resulting prediction that the Challenger had greater than a 99% chance of failure.
I think that means a 99% chance of an O-ring failure, but remember that had happened seven times before without causing the shuttle to blow up. Still, the statement that "if we launch tomorrow we will have an O-ring failure" might have led to a different decision.
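The statistical mistake described above, conditioning only on the failures, is easy to demonstrate. The sketch below uses invented temperatures chosen to mirror the pattern the article describes (7 problem launches out of 24), not the actual flight records: if you look only at the problem launches, their temperatures span a wide range and no pattern jumps out, but once the 17 trouble-free launches are included, the cold-weather signal is unmistakable.

```python
# Hypothetical data: 24 launches as (temperature_F, had_o_ring_problem).
# These temperatures are invented for illustration; they are NOT the
# real Challenger-era flight records.
launches = [
    # the 7 launches with O-ring problems
    (53, True), (57, True), (58, True), (63, True),
    (70, True), (70, True), (75, True),
    # the 17 launches without O-ring problems
    (66, False), (67, False), (67, False), (67, False), (68, False),
    (69, False), (70, False), (70, False), (72, False), (73, False),
    (75, False), (76, False), (76, False), (78, False), (79, False),
    (80, False), (81, False),
]

def failure_rates(data, cold_cutoff=65):
    """Problem rate below vs. at-or-above the cutoff temperature."""
    cold = [problem for temp, problem in data if temp < cold_cutoff]
    warm = [problem for temp, problem in data if temp >= cold_cutoff]
    return sum(cold) / len(cold), sum(warm) / len(warm)

# Conditioning only on the 7 failures: temperatures range from 53 to 75 F,
# so cold weather does not look special.
problem_temps = [temp for temp, problem in launches if problem]
print(min(problem_temps), max(problem_temps))  # 53 75

# Using all 24 launches: every cold launch had a problem,
# versus a small fraction of the warm ones.
cold_rate, warm_rate = failure_rates(launches)
print(cold_rate, warm_rate)  # 1.0 0.15
```

The point is not the particular numbers (which are made up) but the selection effect: throwing away the non-failure launches removes exactly the comparison group that reveals the temperature dependence.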
It certainly is interesting that asking people to make an "ethical decision" or a "management decision" can lead them to give opposite answers. This must be how people who would never hurt another person with their own hands can take part in corporate or government decisions that lead to thousands of deaths. And yet I must come back to the point that knowing what level of risk is "ethical" is, in many situations, a hard question.