Podcast: Challenger Disaster 40 Years Later — The Deadly Cost of Reversing Safety Burden
Three Key Takeaways
- Demand proof of safety: Require positive proof that something is safe before proceeding, rather than forcing engineers to prove it's unsafe.
- Simplify safety communication: Complex data failed to convince decision-makers, but a simple demonstration (O-ring in ice water) made the danger crystal clear.
- Protect technical authority: Engineers need more than just formal authority to stop unsafe operations — they need genuine psychological safety to exercise that power without career consequences.
Welcome to Process Safety with Trish and Traci, the podcast that for the past seven years has shared insights from past incidents to help avoid future events. Please subscribe to this free, award-winning podcast on your favorite platform so you can keep learning with Trish and me. I'm Traci Purdum, editor-in-chief of Chemical Processing, and joining me as always is Trish Kerin, director of Lead Like Kerin. Happy 2026, Trish. What are you looking forward to this year?
Trish: Wow, Traci, welcome to 2026. I can't believe we've been doing this for seven years.
Traci: I know.
Trish: That's amazing. I'm really looking forward to getting out with Lead Like Kerin and doing more exciting activities with people all over the world. I had a fantastic 2025, and I'm excited about heading into 2026.
Traci: You did have a fantastic 2025. Looking back at seven years of episodes and all the amazing topics we've covered, I'm excited about our first podcast of the new year. We're going to talk about a really interesting topic, one we both remember pretty well, and I'll let you set the scene.
Deadly Decision
Trish: Thanks. Imagine a scene where the tension in the room was absolutely palpable. Engineers gathered around the conference table, debating whether to release the product to the customer. It was late at night, and the product was due to go into service early the next morning. The engineers had worked hard all day trying to assemble testing data to determine if their product was safe to use. They analyzed it using different methods, but the conclusion was still unclear. The debate went round and round. Eventually, the customer was looped in via conference call, and they were very angry at the potential delay. After all, they'd purchased this product multiple times and had never been told there was a potential serious issue. It was very clear the customer just wanted the product.
They didn't believe there was an issue with it. They thought the engineers were just being overly cautious like always. After all, it had redundancy built into the design. If the primary system failed, the secondary system would work. It always had. Management was concerned about upsetting the customer. They just wanted the engineers to release the product and keep the customer happy. The engineers were getting quite distressed. They knew there could be a problem, but they couldn't prove it with certainty. Management wanted to overrule the engineers, but that wasn't allowed. The final technical decision lay with the technical experts. So, in an effort to undermine their technical authority, management challenged the engineers to prove it was unsafe, which they couldn't do. Finally, management told the engineering manager to remember that he was a manager and he had to make a decision.
The clear implication was to release that product. The company released the product to their customer, and the next morning it killed seven people.
Challenger Space Shuttle Disaster
Traci: That is pretty powerful. Obviously we're talking about the Challenger space shuttle disaster that happened on Jan. 28, 1986. After 40 years, there are still valuable lessons to be learned, most notably, as you put it, the focus on proving something is safe to use rather than forcing people to prove that it's unsafe. You've set the scene on the night of the 27th. Can you walk us through what happened on the 28th?
Trish: Yeah. The morning of Jan. 28, it was unseasonably cold in Florida. We also need to remember the launch of the Challenger had been delayed several times already. It was running very late in the schedule for a variety of reasons. NASA really wanted to launch that day. They were sick of the delays. They had to get this space shuttle up. There was a lot of political and societal pressure going on, which we might touch on later. So there was a real desire to launch that morning. But as I said, it was unseasonably cold that morning. In fact, it was below freezing in Florida, which is quite unusual.
It happens occasionally. A layer of ice built up on the rocket boosters, the shuttle itself, everything around it. As that ice built up, the temperature dropped on what were called the O-rings in the solid rocket boosters. The solid rocket boosters were manufactured in multiple sections and joined together on site for launch so they could be transported from where they were fabricated. As they put these sections together, they used two O-rings — a primary O-ring and a secondary O-ring — to seal the joints and prevent hot gases leaking from those rocket boosters. Unfortunately, those O-rings became quite brittle at very low temperatures, and that was exactly the scenario they were talking about that morning. So when they did launch, they had very brittle O-rings.
As the launch occurs, there's a lot of flexing on the different components of the shuttle assembly, the solid rocket boosters, all sorts of things. There are different stressors that happen. With the O-rings so brittle, that movement meant they didn't create the seal. So they had hot combustion gas blow by the primary O-ring, and then the secondary O-ring failed as well. So they had this release of gas. Seventy-three seconds after launch, we saw — if anybody grew up watching space shuttle launches or has seen the images on the internet — the famous image of the exhaust plume of the main assembly as it flies into the sky, then the moment of the explosion, and then the solid rocket boosters shooting off in different directions, their exhaust trails splitting two different ways.
That occurred because of the O-ring failure, which the engineers had been concerned about the very night before, but they couldn't prove the O-rings were unsafe below a certain temperature. When that blow-by of gas occurred, it resulted in an explosion that destroyed the entire shuttle and created that infamous image. I can still see it in my mind. As a kid I was fascinated by space travel, and I grew up watching shuttle launches on TV from Australia.
Safety Burden of Proof
Traci: It truly is one of the first catastrophic disasters I remember. I was in physics class, and we were watching it live. I can't think of O-rings without thinking about the Challenger disaster and that image. It's just super powerful, and we're going to unpack it today and try to learn from it so we can prevent further disasters at our facilities. How did the burden of proof get reversed, where engineers had to prove it was unsafe rather than managers proving it was safe?
Trish: I think that's a fundamental psychological challenge as humans because it's easier to say, "Well, tell me why I can't do it" rather than "Tell me why I can do it." We get lulled into a false sense of security when nothing has gone wrong before. There had been many shuttle launches before this particular launch, and none of them had resulted in an explosion on launch like this did. They had launched in very cold temperatures before, and it had survived. So because there's this learned behavior of everything being OK, nothing's gone wrong yet — well, if nothing's gone wrong before, why would it go wrong now? We get lulled into this false sense of security. It's almost like the anecdotal fallacy where we base all of our knowledge and thoughts only on our experience, rather than looking more broadly.
I think that was perhaps one of the issues that occurred here. Well, nothing's gone wrong. The secondary O-ring is there. The secondary O-ring has held every single time. Yes, we've had blow-by on the primary O-ring. That had occurred multiple times because, after a launch, they would recover the solid rocket boosters and inspect them. The boosters would be jettisoned back to Earth, fall into the ocean, and then be recovered and inspected, and they saw clear evidence of blow-by on the primary O-rings on multiple launches, but the secondary O-rings always held. We get lulled into this false sense of security because it hasn't gone wrong yet. I think it's as simple as, "Well, tell me why it would go wrong now if it never has before?" I think it's part of the human condition, actually.
Simplify Safety Messages
Traci: Absolutely. Thinking back to that time, there were all the televised hearings on this — just fascinating how they walked through everything. They did demonstrate the O-rings' brittleness in ice water. They showed that so all the world could see what would happen. That simple demonstration can teach so much about the importance of making safety evidence visible and understandable to decision makers. How can we ensure we get this right in our facilities?
Trish: Yeah. That was an incredibly telling scene when Dr. Richard Feynman, I believe it was, who was on the investigation panel, dipped a clamped piece of the O-ring material into his glass of ice water and showed that it had lost its resilience and wouldn't spring back. I think that really shows the power of stories, which is why I asked if we could start today with a story. It shows the power of simplifying something so people can easily comprehend it. We can talk about the complexities. It wouldn't have been as compelling if Dr. Feynman had stood there and talked about the material properties of that rubber and how they were affected by low temperature, then gone through a complicated calculation and demonstrated it by doing the mathematics. I'm sure he could have done it — it was Dr. Feynman. He could have done it, but he didn't. He chose to demonstrate in a very simple and almost visceral way to show us what happened that day.
That's one of the reasons why I love stories so much. I encourage people to think about, OK, what is your key message? What are you trying to convince someone of? Is there another way to do it rather than giving them the detailed scientific and theoretical breakdown, or delivering it in a 52-slide PowerPoint deck that starts with a title page, then a contents page where you tell them what you'll tell them, then many slides telling them all these details, then a summary conclusion slide telling them what you told them, and then references? By the contents page, everybody's asleep.
Think about how we can better communicate to make sure this information gets to the critical decision makers. If the engineers that night had said, "Hang on, look at what happens when they get brittle and cold" — snap — we probably would've had a very different situation. We probably never would've known about that meeting because the incident wouldn't have happened. Instead, they were trying to justify it on the basis of all this data that, when you looked at it, was inconclusive. There were launches where they had blow-by on the primary O-ring where the temperatures were high. There were launches where they had blow-by where the temperatures were really low. So there was no clear conclusion to draw from that data. But these engineers were sitting there going, "Oh, this is not right. This is not right, but I can't tell you why it's not right." Think about how we can better communicate, better demonstrate in ways that people can actually understand. I think that's a real key here.
Proving Safety
Traci: You bring up great points, and I want to know: how can we establish systems that require positive proof of safety before proceeding, rather than allowing operations to continue until someone can prove danger?
Trish: Yeah, it's very hard to prove something is unsafe, and it's also very hard to prove something is safe. It's probably easier to demand proof that it's unsafe, which is why we fall into that trap, I think. But it's about being willing to look for the data, to look for the information, to look for the weak signals. The weak signals in this instance would've been that they had blow-by. So they knew they had an O-ring issue, but they couldn't put their finger on it consistently. Looking at, OK, why was the blow-by occurring? Nothing went wrong on the previous launches, but what caused that to happen? So it comes back to my platypus philosophy: how can we delve in and see whether this is really an issue we need to worry about? What are these weak signals telling us?
That was one of the reasons why I created the concept of the platypus philosophy — to give people a framework to look at their weak signals, pull them apart and see whether there is information in there, connected in different ways, that can help you see what the issue is and what else is going on at the same time. Had they looked in detail at the primary O-ring failures at higher temperatures, they may have found a significant twisting moment on that solid rocket booster that accounted for the O-ring failing in those instances because of the additional movement and stress the assembly was under. So it's about pulling together all of this seemingly disparate information to see if it actually connects. I think we need to be delving into those weak signals and saying, "Is there something in this? Oh no, there's not. It's OK. We can do the calculation to prove this is safe. We can do the design. We can focus people's attention in this area." I think that's what we need to be doing more of — being curious and willing to look at the information.
Lessons Learned
Traci: And being able to tell the story, as you say, and dip the O-ring in the water and prove the point quickly and very effectively. What are some of the other important lessons from the Challenger that we can apply to our own facilities?
Trish: I think another key lesson is around the technical authority role. What I mean by that is the engineers were the people who had to give the go/no-go decision for the flight, which is how it should be — the technical authority. They had authority to overrule management in a decision from a technical perspective. In this instance, they didn't feel they could. In fact, the engineering manager — I think the actual quote was that he was told to take his engineering hat off, put his management hat on and make a decision. The technical authority was put under enormous pressure to make the decision everybody wanted rather than the decision everybody needed. It's important that we not only have these technical authorities established and structured, but that the people in those roles have the genuine authority and the psychological safety to exercise it.
We see this time and time again. We saw it with the Columbia space shuttle. The engineering team assessing the damage wanted to see images of Shuttle Columbia in orbit after the foam strike. They had the authority to mandate the images, but they didn't use that authority. They tried to get the images through back channels, and when the mission manager found out, the imagery requests were canceled. They didn't then exercise their technical authority to get that information.
We've also seen it more recently at Boeing, with employees who are paid by Boeing and given bonuses by Boeing for production rates, yet who also serve as the FAA's representatives, responsible for essentially stopping production when something's not right. They're put in an impossible situation. Everything around their role is encouraging them to release production, but the fundamental core of their role is to stop bad production. So they're torn. They're put in a position where, yes, they technically have the authority to stop something, but do they actually have the psychological safety and the ability to stop something? They're two very different things. That's another learning for us — to make sure that not only do we have documented technical authorities, but that they're actually real. They're not just on paper, and people have the ability to intervene when they need to.
Traci: Is there anything you want to add?
Trish: I think this was an interesting incident. As I said, I clearly remember the day it happened. You said you were watching it in physics class, and that might sound strange to some younger people. Why was Traci sitting there in high school in physics class watching this event? Well, there was a reason. Over many years the space shuttle program had been losing public support. It was very expensive, and it had never delivered what it was meant to deliver. It wasn't the cheap, reliable, reusable ride to space everybody had anticipated. In an effort to make it more interesting and win back public support, they decided to put a schoolteacher on that shuttle. Christa McAuliffe was a schoolteacher chosen from thousands of applicants in a highly competitive nationwide program. She was going to broadcast live lessons from space while in orbit. That was the reason why everybody in high schools and primary schools all over the U.S. was watching this particular launch. Everybody was watching to see the teacher launch into space. She was not a professional astronaut. She'd been trained for the mission, but she was a schoolteacher.
I think, to a certain extent, there's probably an enormous generation of adults now who were children at the time and are scarred by this, because her own class was watching her launch into space that day and literally watched the explosion occur. I think it's interesting how we go about trying to get public or stakeholder support at times. Perhaps we need to rethink some of those things to make sure we make these decisions appropriately. The shuttle was never a fully operational vehicle. It always remained in its development stage because they were still constantly tweaking it. So think about when we start up our chemical plants or our refineries or our processing plants, even after a shutdown: do we have unnecessary people in the vicinity?
Something like Texas City Refinery — do we have people sitting in a work trailer at the base of a raffinate splitter tower as we're about to start it up who have nothing to do with that operation and just don't need to be there? How are we managing the riskier times of our operations, which are usually startup and shutdown? How are we managing those with people around? Are we making sure we implement appropriate controls so we don't have unnecessary people in those areas? I think that's an interesting take on it. Sometimes we get sidetracked by other things, other requirements, other demands on us, but we still need to make good decisions. Sometimes that can be hard, but we still need to make those decisions because, sadly, people's lives depend on them.
Traci: And you're always here to help us make those better decisions, to point out the power of simplifying the message to make your point better, and to encourage psychological safety, making sure from the top down that it is there and is used. Unfortunate events happen all over the world, and we will be here to discuss and learn from them. Subscribe to this free podcast so you can stay on top of best practices. You can also visit us at chemicalprocessing.com for more tools and resources aimed at helping you run efficient and safe facilities. On behalf of Trish, I'm Traci, and this is Process Safety with Trish and Traci. Thanks again, Trish. Stay safe.
About the Author
Traci Purdum
Editor-in-Chief
Traci Purdum, an award-winning business journalist with extensive experience covering manufacturing and management issues, is a graduate of the Kent State University School of Journalism and Mass Communication, Kent, Ohio, and an alumna of the Wharton Seminar for Business Journalists, Wharton School of Business, University of Pennsylvania, Philadelphia.
Trish Kerin, Stay Safe columnist
Director, Lead Like Kerin
Trish Kerin is an award-winning international expert and keynote speaker in process safety. She is the director of Lead Like Kerin Pty Ltd and uses her unique storytelling skills to advance process safety practices at chemical facilities. Trish leverages her years of engineering and varied leadership experience to help organizations improve their process safety outcomes.
She has represented industry to many government bodies and has sat on the board of the Australian National Offshore Petroleum Safety and Environmental Management Authority. She is a Chartered Engineer, a registered Professional Process Safety Engineer, and a Fellow of both IChemE and Engineers Australia. Trish also holds a diploma in OHS and a master's degree in leadership, and she is a graduate of the Australian Institute of Company Directors. Her recent book "The Platypus Philosophy" helps operators identify weak signals.
Her expertise has been recognized with the John A Brodie Medal (2015), the Trevor Kletz Merit Award (2018) and the Women in Safety Network’s Inaugural Leader of the Year award (2022), and she was named a Superstar of STEM for 2023-2024 by Science and Technology Australia.