Buncefield Explosion: 20 Years Later, Critical Lessons on Tank Storage Safety

The 2005 disaster resulted from the failure of redundant safety systems at a tank farm, highlighting the critical importance of monitoring weak signals.
Nov. 11, 2025
20 min read

Key Highlights

  • Double failure: Both the tank's level gauge and backup high-level switch malfunctioned, allowing gasoline to overflow undetected and create a massive vapor cloud.
  • Design flaw discovered: The safety switch's horizontal test lever gradually slipped to vertical under gravity, disabling the system. The padlock meant to hold the lever in place was never fitted because its purpose was misunderstood.
  • Culture matters: The incident reinforced that operators must recognize weak signals before they escalate into catastrophic failures.

The Buncefield explosion occurred when a gasoline storage tank overfilled after both its level gauge and independent high-level switch failed. Gasoline vapor formed a massive cloud that ignited, causing significant damage to surrounding business parks. Fortunately, the Sunday morning timing prevented fatalities, though 43 injuries occurred. The incident revealed critical gaps in safety control verification, testing procedures, and maintenance regimes. Twenty years later, the disaster emphasizes the importance of recognizing weak signals, maintaining bund integrity, and ensuring operators actively monitor tank filling operations rather than relying solely on automated systems.

Transcript

Welcome to Process Safety with Trish and Traci, the podcast that aims to share insights from past incidents to help avoid future events. Please subscribe to this free, award-winning podcast on your favorite platform so you can continue learning with Trish and me in this series.

I'm Traci Purdum, editor-in-chief of Chemical Processing, and joining me, as always, is Trish Kerin, director of Lead Like Kerin. Hey, Trish, how are you?

Trish: Hey, Traci. I'm doing really well. I've been traveling again. I was in New Zealand earlier this week talking about weak signals and the platypus philosophy and helping people understand how to apply that practically. And I'm off to Trinidad next week to do a similar workshop at a conference over there. So quite excited about it.

Traci: Wonderful. And in December, we're going to have a couple of bonus episodes of Process Safety with Trish and Traci, where you're actually reading a couple of the chapters out of the book. So that's something to look forward to for our listeners.

Trish: Yes, you can learn all about the amazing world of the platypus and why I am so excited to work with the platypus in its philosophy.

Traci: Well, I have the platypus you gave me, that little stuffed platypus when you came to Cleveland and we got to meet up. So he's my constant companion. I think of you every day, and I think of weak signals every day.

Trish: Wonderful.

Buncefield Explosion, Dec. 11, 2005

Traci: Well, in today's episode, we're going to be discussing the Buncefield explosion that occurred on Dec. 11, 2005. The event is coming up on its 20th anniversary, and it involved a major explosion and fire at an oil storage depot in England, resulting in 43 injuries and significant damage to the surrounding area. This incident changed the view on several critical issues, which we will dig into during this entire episode. So let's go ahead, and I'm going to ask you to walk us through the incident and how it changed the way that the industry views storage facilities and critical control verification.

Trish: Yes. So this particular facility, on the surface, it was quite a simple process. It was literally storage tanks. So product was pumped into these tanks from other facilities around the country, and then it either left via pipeline out of that facility — so pumping system — or it could be loaded into tank trucks as well. So a fairly straightforward, basic kind of tank storage facility. And typically, we tend to view these as a bit of a lower risk because they don't have the complex processing activities associated with them. But what had occurred on this particular day was one of the tanks was being filled overnight via a pipeline from another facility, a separate company. And during the course of the night, the level gauge failed on that tank, and so the operators could not see the level change on that tank. They did not realize that tank was being filled.

Now, the tank also had an additional high-level cutoff switch, which would shut the valve into the tank when it hit a certain level, but that failed as well. And so what then occurred was the other facility, unaware that the tank was full, continued to pump into it. Product, which was gasoline, started to come out of the free vent at the top of the tank. It was an atmospheric storage tank, so it had a free vent. So gasoline came above the floating blanket in that tank and rose out through that vent.

It flowed down the side of the roof and then the side of the tank. And the tank also had what we call a wind girder around it. So this is an external piece of steel welded around the circumference of the tank to actually provide some more stability for that particular tank. And what that wind girder actually did that night was, as the gasoline hit it, it started to flick off the side of the tank rather than run down the tank. And this was important as a differentiation because that flicking off the side of the tank actually started to aerosolize and vaporize the gasoline. Now, liquid gasoline doesn't burn; it's the vapor that actually burns. And because it started to vaporize, because of that action of splashing off the side of the tank, a massive vapor cloud was formed around that tank, and it moved outwards from the facility.

And then, as gasoline vapor wants to do, it found an ignition source, and there was actually a massive detonation that occurred, and it caused substantial damage to not only the tank farm but also to surrounding properties offsite, which were business parks — so office blocks and the like. Now, when that all happened, fortunately it was about 6 a.m. on a Sunday morning, so there was nobody in those business parks, which is why there were no fatalities in this particular incident. But this incident had the potential to have killed a lot of people because those buildings were so significantly damaged that people would have actually potentially died inside them.

So it's a very significant incident. We were lucky that there were only the injuries and not the fatalities associated with it. It also caused fires that burned for a number of days and, unfortunately, led to wide-scale environmental contamination from the foam that was used to fight the fires, the gasoline itself and other hydrocarbon products, because eventually 20 tanks caught fire and burned in that particular facility. Significant water flow off the facility itself into the surrounding rivers has also created some longer-term environmental concerns there.

Faulty Gauge Causes Catastrophe

Traci: And this all started with a faulty gauge. Is that what I understand from this?

Trish: Yeah, yeah. So basically there were two parts here. So when we do operate high-hazard facilities, we like to have redundant systems to provide that extra level of security. And in theory, this facility did have that. It had a level gauge with alarms whose levels you could set from the control room, and you could see on the control room computer that the level was rising in that tank or dropping in that tank. We would know product was going in. This is actually a little float that sits on top of the floating blanket, and it rises and falls as the level in the tank changes. And, you know, we use these sorts of level gauges all over the world. They're very, very common. But this particular level gauge — and indeed other gauges on other tanks at this facility — had a history of getting stuck.

And when they get stuck, they stop reading the actual level because they are either stuck up in the air or stuck and sunk below the product. So it actually goes through the blanket and sits on top of the product level. And so if the gauge is stuck and the product level keeps moving in the tank, we don't know the level's changing in that tank. So there was that particular part of it. Now, the independent backup was an independent high-level switch. This was a separate device entirely that again measured the level of the product in the tank, and it was connected to a switch on top of the tank so that if the float rose to a certain level, it sent a signal down to the inlet valve to shut it, and it measured off a separate level indicator. So these two level devices were independent of each other, which is the important thing here. The problem we had is we had a failure of the level gauge and a failure of our independent high-level switch as well.

And so because that failed, there was no stopping this incident unless someone realized that that tank was actually being filled. But there was no indication anywhere that that tank was actually being filled.
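To make the redundancy Trish describes a little more concrete, here is a minimal Python sketch, not the actual Buncefield control logic, of how a polling routine might treat the two independent level sources. Every name and setpoint below is invented for illustration; the point is simply that the independent switch must act regardless of what the primary gauge reports, and that a gauge that stops moving during a fill is itself a weak signal worth flagging.

```python
# Minimal sketch (hypothetical, not the actual Buncefield control logic) of
# polling two independent level measurements during a tank fill.

HIGH_HIGH_TRIP_M = 12.5    # independent switch trip point (assumed value)
STUCK_TOLERANCE_M = 0.01   # gauge movement below this over an hour looks "stuck"


def poll_fill(gauge_now_m, gauge_hour_ago_m, switch_level_m,
              fill_rate_m3_per_h, close_inlet_valve):
    """Return the list of actions and warnings raised on this polling cycle."""
    actions = []

    # Independent high-high trip: closes the inlet valve no matter what the
    # primary gauge is reporting.
    if switch_level_m >= HIGH_HIGH_TRIP_M:
        close_inlet_valve()
        actions.append("TRIP: inlet valve closed by independent high-level switch")

    # Weak-signal check: product is flowing but the gauge reading has not moved.
    if fill_rate_m3_per_h > 0 and abs(gauge_now_m - gauge_hour_ago_m) < STUCK_TOLERANCE_M:
        actions.append("WARNING: level gauge may be stuck; verify level another way")

    return actions


# Example: the gauge is frozen at 9.8 m while the independent switch sees 12.6 m.
print(poll_fill(9.8, 9.8, 12.6, 550.0, lambda: print("closing inlet valve")))
```

At Buncefield both layers were lost at once, which is exactly why a stuck-gauge warning in a sketch like this matters: it is the cheap, early signal that the primary layer is no longer telling the truth.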

Critical Control Testing

Traci: Now, would testing have helped this? And if so, what does effective testing look like in these types of scenarios? And obviously, this was a very critical control. So we have to understand what controls are very critical and then test them, correct?

Trish: Yeah, absolutely. So testing and maintenance regimes would have helped this incident. The fact that we've got a level gauge that continually or repeatedly gets stuck says that there's something seriously wrong with that gauge. Either we need to change the maintenance regime so it stops getting stuck, or maybe it's not the right gauge and we need to replace it with a different one. So that's one thing. Then there's the independent high-level switch. This is what we would call a safety-critical device or safety-critical equipment, and we need to make sure that every safety-critical piece of equipment we have in our facilities has what we call a performance standard. We need to know what it will do. So, for example, the performance standard on something like an independent high-level switch should actually define at what level it will trigger the closing of the valve and then how long the valve will take to close. Not only do we need to know that the switch will trigger, we need to know how long it will take for the valve to close as well, because that makes a difference on what level it's set at.
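As a rough illustration of why the valve closure time feeds into the setpoint, consider a back-of-envelope calculation like the Python snippet below. All figures are invented, not Buncefield data: the level keeps rising while the inlet valve strokes closed, so the trip point has to sit low enough to absorb that rise.

```python
import math

# Back-of-envelope check of a high-level trip setpoint (illustrative numbers
# only, not Buncefield data). The level keeps rising while the inlet valve
# strokes closed, so the trip must leave room to absorb that rise.

fill_rate_m3_per_h = 550.0   # assumed pipeline transfer rate into the tank
tank_diameter_m = 25.0       # assumed tank diameter
valve_closure_s = 120.0      # assumed time for the inlet valve to fully close
max_safe_level_m = 13.0      # level that must never be exceeded (assumed)

tank_area_m2 = math.pi * (tank_diameter_m / 2.0) ** 2
rise_rate_m_per_h = fill_rate_m3_per_h / tank_area_m2
rise_during_closure_m = rise_rate_m_per_h * (valve_closure_s / 3600.0)

trip_setpoint_m = max_safe_level_m - rise_during_closure_m
print(f"Level rises {rise_during_closure_m:.3f} m while the valve closes")
print(f"So the trip setpoint must be no higher than {trip_setpoint_m:.3f} m")
```

A real performance standard would also add margin for measurement error and the proof-test interval, but the dependency Trish describes, trip level versus closure time, is already visible in this toy arithmetic.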

So we understand the performance requirement. We then need to test every element of that device to that performance requirement. One of the issues with this particular design of the independent high-level switch was that there was actually a handle on it that you moved to a position to test the device. So there was a way to test that the device actually worked. But there was, I would suggest, a challenge with the design of that particular switch. If you can imagine, it was a little hand lever, and if the hand lever was in the horizontal position, that meant the switch was online and should work. When you wanted to test the switch, you would take that lever and move it downwards to the vertical position. And when it was in that position, it would not trigger because it's a test position; it's not an in-service position.

Now, I'll just repeat what I said there. We had a horizontal lever that we moved downwards to a vertical position. So that means that if we don't physically lock that horizontal handle in place, gravity might actually eventually bring it downwards. And there was actually a spot for a padlock to be put on that valve — or that switch, sorry — to hold the handle in place. But nobody realized that the reason for the padlock was to hold the lever in place. Everybody thought it was just to stop people tampering with it, and they figured no one was going to tamper with it on their own secure site, so nobody put the padlock on after the test.

So because nobody realized that the padlock needed to be there to hold that handle in place, that handle eventually just slipped down and took that piece of equipment out of service. And so, unbeknownst to the operators, that independent high-level switch was never going to work in that position. So there's obviously a design issue with that particular switch in wanting to fight against gravity, perhaps. But there was also certainly a lack of understanding of the importance of the padlock in holding that lever in place if you're going to insist on continuing to use that particular device.

So we learned a lot around making sure we focus on our safety-critical controls to ensure that they will work when we need them to work. And that's a really key thing that came out of this: it reinforced that we need to have a performance standard, we need to test these controls, and we need to know they're in service and know they're going to work.

Creating A Safety Culture

Traci: Now, how do you create the culture to raise the trigger, to raise your spidey sense that these degraded controls immediately need some attention, and to understand that the MacGyver maneuvers don't really work with this type of stuff? How do you create the culture and make sure that these types of things don't happen?

Trish: Yes. So that's where you need to have that really positive culture of people being willing to report not only incidents but near misses, and even then that next level down of weak signals. So the idea is, a near miss would be if our level gauge got stuck. Then I would say, well, perhaps we should be reporting that our level gauge got stuck so that we can start to at least see if this is a trend that's happening and then take appropriate action, so that we don't get caught out by that potentially happening in a critical situation.

What I mean by a weak signal is I would say that the level gauge actually being stuck would be a near miss. But if you notice that the level gauge kind of jumps a little bit at times, it's not quite stuck, but it's not smooth anymore, I'd say that's probably a weak signal. And so again, it's about creating this culture where people are seeing the weak signals and the near misses and being able to report them and being rewarded for reporting them. Now, when I say rewarded, I'm not suggesting, you know, they get given $50 or something for everything they report. What I am suggesting is we actually go, "Thanks for picking that up. That was really good. Here's what we've now done about it." That's often enough reward for people to actually know that they're doing a good job. And if someone does something and you go, "Gee, thanks for that. That was really good," they're more likely to do it again when they see it next time than if you say, "What are you wasting my time for? Nothing actually happened." Then they're not likely to tell you again, and you will miss these weak signals coming along. And when we miss these weak signals, eventually something will happen with them.

What Should You Ask About Critical Controls?

Traci: Absolutely. Being dismissive is just as dangerous. If you're a plant manager, what would you do tomorrow morning? You're listening to us talk about this. What would you do tomorrow morning, and what would you start asking about your critical controls in the facility?

Trish: The first thing is I'd go and look at my risk scenarios that I have and just pick one to start with. If you've got them on bow ties, that's fantastic. I love bow ties. They're a beautiful, simple way to show the threat line from what the threat is all the way through your top event, through your mitigation controls to your consequences. And so it's a visual way to see the process and see where it can go wrong. Pick one of your scenarios, one of your top events. Go and sit down with your operators and say, "Hey, these are our critical controls according to our bow tie here. How would we know if they were starting to fail? What would it look like?" Start to ask the question. Have the conversation with the people that actually operate the facility because they're the ones — and the maintainers — they're the ones that actually know if these things continue to fail or are unreliable or annoying or something. They're the ones that are seeing that every day. They have the information you need to know. So you need to sit down and genuinely ask some questions about the quality, the performance standard. Can you tell me — how's this control meant to work? What's its performance standard? OK, what would happen if it didn't perform that way? Oh, well, then we go through to our consequence.

Have the discussions about your risks, your key risks, your critical controls. How would you know they're working? What would it look like when they're starting to fail? Because they're the key things. Those weak signals when they're starting to fail, that's when you can intervene before the incident's ever happened and fix it so the incident doesn't ever happen.
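One hypothetical way to prepare for that conversation is to jot the bow-tie scenario and its critical controls down as simple structured data, so the "how would we know it's starting to fail?" question can be asked control by control. The sketch below is only an organizing aid with invented entries, not a substitute for proper bow-tie analysis.

```python
# Hypothetical sketch: a bow-tie scenario captured as plain data so each
# critical control carries its performance standard and the weak signals
# that would suggest it is starting to fail.

overfill_bowtie = {
    "top_event": "Loss of containment from tank overfill",
    "threats": ["Transfer continues past planned quantity", "Level gauge sticks"],
    "consequences": ["Vapor cloud formation and ignition", "Environmental release"],
    "critical_controls": [
        {
            "name": "Level gauge with high-level alarm",
            "performance_standard": "Tracks level within tolerance; alarms at the set level",
            "weak_signals": ["Reading jumps or lags", "Repeated sticking"],
        },
        {
            "name": "Independent high-level switch closing the inlet valve",
            "performance_standard": "Trips at the set level; valve fully closes within the stated time",
            "weak_signals": ["Test lever not locked in the in-service position"],
        },
    ],
}

# Turn the data into prompts to take to the operators and maintainers.
for control in overfill_bowtie["critical_controls"]:
    print(f"{control['name']}: watch for {', '.join(control['weak_signals'])}")
```

Even a flat list like this makes it harder for a control, or the weak signals around it, to go unasked about during the walkdown.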

Traci: Would it be beneficial — I'm just thinking about this lock, this whole scenario with the lock — would it be beneficial to have somebody in the facility that has no idea what's going on in a different part of the facility come and walk through a day in the life of that operator and then be the one to ask those questions like, "Well, why is this lock here?" because they're not privy to it. Do you know where I'm going with that?

Trish: Yeah. So we'd often see that in the form of an audit process, perhaps a corporate audit process, where technical experts from one of your facilities go visit another facility and just go for a walkthrough and start to ask some questions and dig in. What you've really hit on there is the importance of independent — or independent from the site — auditing of the facility, which is actually a really important aspect of maintaining effective governance in our organizations.

So we often talk about three lines of defense in auditing and governance. And the first line is at the site itself. So these are your site representatives, your site safety advisors doing their particular activity, and they are there to advise and work with the site. Your next level, Level 2, is sort of your corporate oversight level. These are your corporate experts that come in and do auditing and make sure that we set the right standards. And then you have Level 3, which is your external governance-type audits that you do on top of that again, and it's a really effective tool to focus on what needs to be looked at and make sure you do get those fresh-eye views of things around your facility because we do become complacent inadvertently. If we see something all the time, we stop seeing it eventually, and we just don't notice it anymore. So the idea of bringing in some fresh eyes is always really helpful.

Lessons Learned 20 Years Later

Traci: Coming up on the 20-year anniversary, what has the industry learned, and where have we made progress?

Trish: So I think the industry's learned a lot in terms of really focusing — as I said, it was a reminder of our performance standards and our testing and making sure that we do keep an eye on that. I think it's also probably a good reminder around the conduct of operations and how we monitor filling of tanks, because we still need to be monitoring that what is happening is what we expect to happen when we're filling a tank.

So, you know, many, many, many years ago, when I first started my career, I worked in some shipping operations at a refinery, and I was trained that I actually had to do tank-fill calculations for my tanks every hour. So I actually had to look at where I thought my product was moving as well as where it appeared the product was actually moving, and whether any other tanks were moving that shouldn't be. So you physically sat down and did some quick calculations every hour to see your fill rates, and you predicted your fill point, your fill time, so that as you were getting closer to it, you were monitoring it more closely. So things like making sure we've got appropriate conduct of operations and how we actually focus on doing that. I think Buncefield was a good reminder for that. You know, we've got these amazing control systems, but we actually still need to pay attention to them. We can't just let them go by themselves because sometimes they can break. So we need to make sure we are paying attention and doing those checks that we need to check. So I think they're — for me — some of the key things that we really focused on.

Another key learning that did come out of Buncefield, though, was focusing on the integrity of the bunds that our tanks are in, because the bunds at Buncefield had a lot of cracks or penetrations through them that weren't adequately sealed, or indeed the sealing in them wasn't fireproof.

And so as the fire continued, that's why they had such significant environmental impact after this, because the bunds that were there to contain the firewater and the fuel itself did not contain; they just leaked, and they flowed into the environment. So whilst the environment is a secondary consideration to the human impact — you know, if we think about the response, we focus on people, environment, asset, reputation. We do people first, and then we move to environment. The environment here had a massive impact because our bunds weren't intact. And so I think Buncefield also was a big lesson around making sure we check the integrity of our bunds regularly.
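Circling back to the hourly tank-fill checks Trish describes, here is a small illustrative calculation in the same spirit, with every figure invented: compare the expected level rise against what the gauge actually reports, and predict the time remaining to the planned fill level so monitoring tightens as the tank approaches full.

```python
# Illustrative hourly tank-fill check (invented figures, not a real procedure).
# Compare the expected level rise with the observed rise, and estimate the
# remaining time to the planned fill level.

fill_rate_m3_per_h = 550.0   # assumed transfer rate from the pipeline
tank_area_m2 = 490.0         # assumed tank cross-sectional area
level_last_hour_m = 8.90     # gauge reading an hour ago
level_now_m = 9.10           # gauge reading now
target_fill_level_m = 12.0   # planned stop level

expected_rise_m_per_h = fill_rate_m3_per_h / tank_area_m2
observed_rise_m_per_h = level_now_m - level_last_hour_m

# A large mismatch is a weak signal: a stuck gauge, a diverted transfer, or
# another tank moving that shouldn't be.
if abs(observed_rise_m_per_h - expected_rise_m_per_h) > 0.25 * expected_rise_m_per_h:
    print("Observed rise does not match expected rise; verify the gauge and "
          "check whether any other tank is moving unexpectedly.")

hours_to_target = (target_fill_level_m - level_now_m) / expected_rise_m_per_h
print(f"Expected rise {expected_rise_m_per_h:.2f} m/h, observed {observed_rise_m_per_h:.2f} m/h")
print(f"Roughly {hours_to_target:.1f} hours to the planned fill level at the expected rate")
```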

Traci: Trish, is there anything you want to add?

Trish: Again, this is one of those incidents that didn't need to happen. There was a lot of discussion at the very start around the actual mechanism of how this became a detonation, not just a fire, not just a deflagration. But at the end of the day, you know, we know that gasoline is flammable. We know it can explode. That's why we use it. And sadly, a few years after this, there was also the tank overfill at CAPECO in the Caribbean. So, you know, we've seen these sorts of incidents before. Let's not see them again. Let's take these lessons, let's learn, let's monitor, let's look, let's learn to find our weak signals, understand what they could be and notice them when they happen so we can take action and we don't have the incident occur eventually, because that's what we really need to get to with this. We've seen incidents similar to Buncefield again since Buncefield happened. Let's try and work toward not having any more of them.

Traci: I think you saying that when we see things all the time, we stop seeing them is a very important message, and you always help us with the fresh eyes. You give us those fresh eyes that we need to make ourselves safer.

Unfortunate events happen all over the world, and we will be here to discuss and learn from them. Subscribe to this free podcast so you can stay on top of best practices. You can also visit us at ChemicalProcessing.com for more tools and resources aimed at helping you run efficient and safe facilities. On behalf of Trish, I'm Traci, and this is Process Safety with Trish and Traci. Thanks again, Trish.

Trish: Stay safe.

 

About the Author

Traci Purdum

Editor-in-Chief

Traci Purdum, an award-winning business journalist with extensive experience covering manufacturing and management issues, is a graduate of the Kent State University School of Journalism and Mass Communication, Kent, Ohio, and an alumna of the Wharton Seminar for Business Journalists, Wharton School of Business, University of Pennsylvania, Philadelphia.

Trish Kerin, Stay Safe columnist

Director, Lead Like Kerin

Trish Kerin is an award-winning international expert and keynote speaker in process safety. She is the director of Lead Like Kerin Pty Ltd, and uses her unique story-telling skills to advance process safety practices at chemical facilities. Trish leverages her years of engineering and varied leadership experience to help organizations improve their process safety outcomes. 

She has represented industry to many government bodies and has sat on the board of the Australian National Offshore Petroleum Safety and Environmental Management Authority. She is a Chartered Engineer, registered Professional Process Safety Engineer, Fellow of IChemE and Engineers Australia. Trish also holds a diploma in OHS, a master of leadership and is a graduate of the Australian Institute of Company Directors. Her recent book "The Platypus Philosophy" helps operators identify weak signals. 

Her expertise has been recognized with the John A Brodie Medal (2015), the Trevor Kletz Merit Award (2018) and the Women in Safety Network's Inaugural Leader of the Year award (2022), and she has been named a Superstar of STEM for 2023-2024 by Science and Technology Australia.
