The idea behind building safe systems is not to imagine everything that could cause subsystem X to fail, but to figure out how to cope when subsystem X fails.
The most obvious design failure at Fukushima was not getting "how big a tsunami should we protect against?" wrong, but never asking "how will we cope when the seawall fails?".
If the critical backup generators had been in a bunker designed to protect them from a seawall breach, the disaster would not have happened.
Although it's been a very long time since I did engineering that could actually kill people, this idea of focusing on what can go wrong and how to mitigate it has stuck with me and has proved useful in many contexts.
Having said that, the next questions are usually: what is the likelihood of it going wrong and what is the cost of mitigation? I don't envy the people making those decisions on something like a nuclear reactor, with or without hindsight.
At some point you do have to set a threshold for the acceptable probability of systemic failure, since nothing is perfect.
The idea behind orthogonal backups, however, is that because they are independent, very high reliability can be achieved from less reliable components. For example, if you have a main with 90% reliability and an independent backup with 90% reliability, the combined reliability is 99%: the system only fails when both fail, and 10% × 10% is 1%. That can be far easier and cheaper to achieve than making a single component 99% reliable.
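The independence arithmetic above can be sketched in a few lines. This is just an illustration of the formula, not anything from an actual reliability tool, and the function name is mine:

```python
def combined_reliability(r_main: float, r_backup: float) -> float:
    """Reliability of a main component plus one *independent* backup.

    The system fails only if the main AND the backup both fail, so the
    combined failure probability is the product of the individual
    failure probabilities: (1 - r_main) * (1 - r_backup).
    """
    return 1 - (1 - r_main) * (1 - r_backup)

# Two 90%-reliable components together give 99% combined reliability.
print(round(combined_reliability(0.90, 0.90), 4))  # 0.99
```

Note that the whole argument rests on the word "independent": if a single event (a seawall breach flooding both units) can take out the main and the backup at once, the failure probabilities no longer multiply and the 99% figure evaporates. That is precisely the correlated failure mode Fukushima exposed.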
The backup generators could have had a cheap extra seawall built around them, or could simply have been raised on a 10-foot platform, or fitted with snorkels like a jeep designed to ford streams.
Building a heftier main seawall would have been an order of magnitude or two more expensive.