2011/03/20

Fail Safe

Acts of God And Risk

I've been pondering the issue of nuclear power in light of the Fukushima crisis, and I have to say the chain of events that got things to this point is so complicated that it's impossible to pin the blame on any one thing. The Fukushima plant was designed with earthquake and tsunami threats in mind. The bar for earthquakes was set at around magnitude 8.0, the size of the 1923 quake that levelled Tokyo. The tsunami threat was set at waves 6m high. In reality, what brought Fukushima undone was a 9.0 earthquake that unleashed roughly 1,500 times more energy than the 1995 Kobe quake, and a tsunami that was 12m high. In both instances, what the plant actually faced was much larger than anything in the recent record the spec was drawn from.

And there's crux number one. How far back would one have to go to find the maximum threat, and how much redundancy would one have to add on top of it? Even if they had added 20% to the largest threat on record, the plant would have been built for a magnitude-8.4 quake and 7.2m waves. The unforeseen is, by definition, unforeseen.
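To see what that kind of margin actually means, here's a back-of-the-envelope sketch of my own (nothing from the plant's design documents) using the standard Gutenberg-Richter energy relation, under which each whole step in magnitude is roughly 32 times the radiated energy:

```python
# Radiated seismic energy scales roughly as 10^(1.5 * M)
# (the Gutenberg-Richter energy relation), so a margin added to the
# magnitude number is nothing like the same margin in energy.

def energy_ratio(m_big: float, m_small: float) -> float:
    """Ratio of radiated seismic energy between two magnitudes."""
    return 10 ** (1.5 * (m_big - m_small))

design = 8.0   # the plant's design-basis magnitude
padded = 8.4   # design basis with the notional margin added
actual = 9.0   # the 2011 quake

print(f"8.4 vs 8.0: {energy_ratio(padded, design):.1f}x the energy")
print(f"9.0 vs 8.0: {energy_ratio(actual, design):.1f}x the energy")
```

The padded design basis buys about 4 times the energy; the quake that actually arrived carried about 32 times. On a logarithmic scale, a comfortable-looking margin is thinner than it reads.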

On top of the two overpowering threats that did manifest, the two backup diesel generators failed, and so did a third, battery-powered system. All the fail-safes and redundancies were overwhelmed. The fact that the plant didn't go into immediate meltdown is some kind of testament to the engineering. Can all of this really be ascribed to the fault of the engineers? Isn't this more like what insurers call an 'Act of God'?

I'm not trying to redeem the nuclear power industry or Japanese engineering here, because at the end of the day there is an ongoing crisis that is threatening all of Fukushima prefecture and possibly North-East Japan. What I do want to bring up is the nature of risk. Every time we drive a car we take a risk; every time we take the train we take a risk. Society is actually built on managed risks, and underlying them all are assumptions about what is manageable and safe. For the better part of 40 years, the nuclear plant in Fukushima ran without a major hiccup, until a series of overwhelming events. A rational person has to put that into the equation of benefits to balance against the risks.

So now I'm thinking aloud here, wondering if the case for nuclear is legitimately dead. Here's an intellectual exercise: Let's say the same plant was built in Australia and run with equal effectiveness. If it were away from the coast and rivers, chances are it's not going to be subjected to magnitude 9.0 earthquakes and tsunamis. What could possibly happen?

About the only thing I can think of off-hand is bushfires. But what if it had an exclusion zone around it? Again the question arises: what would be a legitimately safe exclusion zone against all bushfires? We could only take the worst bushfire on record and add, say, 20% on top of that worst case. But what if the disaster that strikes is twice the size of the previous worst?

The same problem is in fact what struck Brisbane and the Wivenhoe Dam in the recent flood. Wivenhoe was designed with the worst case on record in mind. The flood that eventually came carried twice as much water as the previous recorded worst case.
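The Wivenhoe experience generalises: when the underlying process is heavy-tailed, "worst on record plus a margin" gets beaten surprisingly often. Here's a toy Monte Carlo sketch of the point - the Pareto distribution, its shape parameter, and the 50-year windows are pure assumptions for illustration, not fitted to any real flood or quake data:

```python
import random

def pareto_draw(rng: random.Random, alpha: float = 1.5) -> float:
    """One draw from a heavy-tailed Pareto(shape=alpha, scale=1)."""
    return (1.0 - rng.random()) ** (-1.0 / alpha)

def fraction_beaten(trials: int = 10_000, history: int = 50,
                    future: int = 50, margin: float = 1.2,
                    seed: int = 1) -> float:
    """How often 'worst on record + margin' gets exceeded in the next era."""
    rng = random.Random(seed)
    beaten = 0
    for _ in range(trials):
        record = max(pareto_draw(rng) for _ in range(history))
        design = margin * record   # worst on record, plus 20%
        if max(pareto_draw(rng) for _ in range(future)) > design:
            beaten += 1
    return beaten / trials

print(f"Design basis exceeded within 50 years in about "
      f"{100 * fraction_beaten():.0f}% of simulated histories")
```

With these made-up parameters the design basis is beaten in a large share of the simulated half-centuries. The exact number doesn't matter; the shape of the problem does: the historical maximum is itself just one draw from the tail, so padding it by a fixed percentage is no guarantee of anything.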

I guess actuaries are busy doing their sums on this kind of thing, and doubtless there are mathematical formulae that address these things, and even then I'm guessing they will be found faulty by the next disaster. Which leads me to think there's a structural problem with all projection of worst-case scenarios. Stock traders and commodity traders are always quick to point out that past performance is no promise of future performance. This is, at its core, the same problem as inductive reasoning.
The problem of induction is the philosophical question of whether inductive reasoning leads to knowledge. That is, what is the justification for either:

  • generalizing about the properties of a class of objects based on some number of observations of particular instances of that class (for example, the inference that "all swans we have seen are white, and therefore all swans are white," before the discovery of black swans) or

  • presupposing that a sequence of events in the future will occur as it always has in the past (for example, that the laws of physics will hold as they have always been observed to hold). Hume called this the Principle of Uniformity of Nature.


The problem calls into question all empirical claims made in everyday life or through the scientific method. Although the problem arguably dates back to the Pyrrhonism of ancient philosophy, David Hume introduced it in the mid-18th century, with the most notable response provided by Karl Popper two centuries later. A more recent, probability-based extension is the "no-free-lunch theorem for supervised learning" of Wolpert and Macready.

Philosopher John Vickers concluded that we should use induction not because it yields certainties, as deduction does, but because it is a method that can correct itself - and is thus more likely to bring us closer to the truth than other methods.[1]

Now this ultimately goes to Karl Popper and falsifiability:
According to Popper, the problem of induction as usually conceived is asking the wrong question: it is asking how to justify theories given they cannot be justified by induction. Popper argued that justification is not needed at all, and seeking justification "begs for an authoritarian answer". Instead, Popper said, what should be done is to look to find and correct errors.[20] Popper regarded theories that have survived criticism as better corroborated in proportion to the amount and stringency of the criticism, but, in sharp contrast to the inductivist theories of knowledge, emphatically as less[21] likely to be true. Popper held that seeking for theories with a high probability of being true was a false goal that is in conflict with the search for knowledge. Science should seek for theories that are most probably false on the one hand (which is the same as saying that they are highly falsifiable and so there are lots of ways that they could turn out to be wrong), but still all actual attempts to falsify them have failed so far (that they are highly corroborated).

That's all well and good for knowledge, but for design and engineering this has practical, real-world consequences. And I don't mean this lightly. The main problem we're seeing is that the critique comes in the guise of out-of-control, super-scale disasters. We only get to see the failure point of a design when a sufficiently large disaster strikes. On that scale, the trials of the Fukushima plant are inordinate challenges. They are beyond what could reasonably have been expected; the unreasonableness, if you like, of the challenge is in a sense falsifying the design.

Was it reasonable to design for magnitude-8.0 earthquakes when there are magnitude-9.0 earthquakes? Was it reasonable to design for 6m tsunamis when 5m was the highest that had been seen? I'd like to see one argument mounted that isn't post-hoc. It's the post-hoc arguments that are clouding the issue of whether we can realistically look at any piece of design and find it acceptable.

Anyway, I've been racking my brain in the aftermath of the quake off Sendai, and that's all I've got for you. Some of you are better philosophers. You tell me if we can actually learn anything useful about the risks for which we can and can't design. It seems to me we're all at a loss, and the triumphal "told you so" heard from our Green friends about nuclear power is actually nothing much more than opportunistic catcalling. And I'm saying this as somebody who isn't even pro-nuclear. I just want some truth.

5 comments:

jcurrill said...

"whatever can happen will happen" occasionally is termed "Murphy's Law"

For me this is as simple as Rock, Paper, Scissors - Japan sits on an earthquake fault line, the earthquake creates a tsunami, the tsunami knocks out the power station - cause and effect then come into evil play.

The cause of this tsunami then affected the safety of the plant, people, planet, etc, etc.

We will learn as a species a harsh but obvious lesson: we will try our hardest to predict the outcome of bad stuff - because, as Murphy says, "whatever can happen, will happen" - and we just have to be sure we can identify, reconcile and live with the consequences.

Building a nuclear power plant so close to the sea, in a region with known earthquake activity, was - in hindsight (what a beautiful thing) - a dumb thing to do.

UNLESS someone had turned the wheels of fortune, identified the risks, reconciled the potential outcome and decided the risk was worth running. Someone, I'm sure, did run that study - the decision was taken that the risk was small - but that didn't make the risk go away.

Peace out!

artneuro said...

Well that's a post-hoc argument. Without disagreeing with it as such, I think you need to reconsider the fallacy in the structure of the argument.

The same argument could be laid at, say, the Columbia disaster: "building a machine with the most parts in the history of mankind, and flying it on dozens of missions in and out of orbit through our atmosphere with 7 astronauts, was bound to result in catastrophic failure. In hindsight, what a dumb thing to do."
You'd never launch another space vessel ever again.
Did the Titanic disaster invalidate all sea travel?
Does 9/11 invalidate all air travel? Or does it invalidate skyscrapers?

As it stands, nobody has died from Fukushima as yet. 1,800 people died when a hydroelectric dam collapsed in the same quake. If we're going by those results, hydroelectric dams are much worse than nuclear power plants. And yet they're not dumb to build, even if they have risks - known and unknown.

So back to the nuclear power plant question. It seems to me that once you commit to anything, you carry structural risks, in which case you're open to all Murphy's Law/Black Swan incidents regardless of the tech. In other words, the issue *isn't* that there's risk, but how much risk there is and how much of a problem it presents. The qualitative assessment of that risk is actually at the heart of everybody's fear, but people keep saying that eliminating the risk eliminates the problem - and as you can see from the examples above, by that standard you wouldn't be able to do anything.

Similarly, I think the people pronouncing nuclear power dead on the basis of Fukushima are not actually addressing the questions raised by Fukushima.

jcurrill said...

I agree with your argument, the risk involved cannot be eliminated - ever!

However, there are some risk mitigation strategies that can be employed to do exactly that - mitigate (but never eliminate) risk!

For instance, in Tokyo skyscrapers are built with earthquake technology built in: a system of rollers that allows the building to flex and move and, hopefully, not fall down.

This was sensible: they took known variables (fault lines, history, height of building, to name but a few) and then applied good old common sense. Let's build them on a system that will hopefully protect the asset and human life, bearing in mind what we know to be true - there will in all probability be another earthquake.

The engineers knew the risks, planned for the worst and hoped their mitigation strategy would work - and it did. Earthquakes in this region are not new, and Japan has a small land mass, so space is at a premium.

Is Japan an ideal place to build super-skyscraper cities? No!

Is there a commercial appetite to try? Yes!

Therefore, humans deemed the risk of collateral damage, asset damage and potential discontinuity of commercial activity a viable risk to take, and went ahead and built skyscrapers, as all known precautions had been taken.

The challenge with Fukushima is that it doesn't take a genius to figure out that a nuclear power station so close to shore, on a fault line, in a region of known earthquake activity, could suffer catastrophic failure if hit by an earthquake and/or subsequent tsunami - which it did.

I don't have any real argument here at all. I think nuclear power is a valid form of energy generation; it's just unhappy coincidence, or foolhardy planning, that Fukushima happened to be in the way of a natural disaster.

Humans are quite clever: if a car crashes, we look at how it was used and try to engineer safety features to prevent that misuse or driver error happening again.

At no point after one car crash would or could anyone say cars should be outlawed, and the same holds true for air travel, space travel and even walking.

There just seem to be some fundamental flaws in where the Fukushima plant was sited. OR the powers that be knew about the fundamental flaw, did a risk mitigation strategy, and decided in their ultimate wisdom that the humanitarian and environmental risk to Japan was good to go!

It would appear that next time, maybe, countries should ask the rest of us - because the environmental issues, as we all know post-Chernobyl, are bad and far-reaching, given a following wind!

The real question is WHY was Fukushima built there and if it had to be built there, WHAT scenarios were run in the event of a natural disaster?

A tsunami that happened to hit a nuclear power station does not, as you say, invalidate the future building of power stations. It simply means that we have to give more thought to the siting of such stations.

artneuro said...

Actually, the recent Eastern Japan quake (like the recent one in Christchurch) occurred on a previously unknown fault line. So the people who sited the Fukushima plant couldn't have known of that risk until it happened. Like the waves that were twice as high as the worst-case scenario imagined, it simply wasn't in the design parameters.

The other thing is that after 2-3 weeks of this saga since the earthquake and tsunami hit, the Fukushima plant hasn't gone anywhere near the scale of Chernobyl.

This link was sent in by a reader 'Jelvis':

http://xkcd.com/radiation/

On it you will see Chernobyl as the yellow block of disaster in the bottom left, while so far Fukushima sits in the green range of events at the top right, at about three-quarters of the big green block where the Fukushima 50 are toiling. It's not pleasant, but at the same time, having weathered much worse disasters than it was planned for, you'd have to say the design has done much better than Chernobyl's.

Now, as to the question of WHY it was built there: it was built there, back in 1971, to keep away from civilian populations as much as possible. Since then, urban sprawl has meant habitation has encroached upon the site. That's not a nuclear design issue; that's an urban planning issue.
Again, as you clearly understand my point, risk managers don't get the luxury of assessing risks with full knowledge or hindsight. We can all chalk this up to experience, but it casts great doubt on how we assess what we think we have as 'knowledge'. The Fukushima plant is a perfect example of how we can't plan for every contingency.

Fail Safe Part II | The Art Neuro Weblog said...

[...] I’ve been arguing a lot about the inferences that can be drawn from the Fukushima plant going into meltdown and a category 6 disaster in the wake of 12m high waves as well as a magnitude 9.0 earthquake. What I find most annoying is the group of anti-nuclear types citing that given the failure of Fukushima, it conclusively proves nuclear power should never be used. I’ve pointed out that there is an epistemological problem with risk management in my previous post here. [...]
