One of the more ridiculous scenes in Star Wars: A New Hope is when one measly torpedo fired into a thermal exhaust port sets off a chain reaction that destroys the entire Death Star. Even if the Empire doesn't consider a small one-man fighter to be any threat, and even though the shot is ostensibly "one in a million," buying into the whole fiasco requires some boyish naïveté.
But outside of Hollywood, it's surprising how many systems behave similarly. Designs built to maintain function despite large perturbations of one type are often highly vulnerable to perturbations arriving from a different angle. It seems that optimizing for robustness to expected deviations generally comes at the expense of increased fragility to unexpected deviations. Some examples (a toy simulation after the list makes this concrete):
- Forest buffer zones designed to protect against particular types of fires can be bypassed by unexpected types of fires that, for example, approach from a different direction. (see diagram here)
- A Boeing 777 has complex flight-control electronics that compensate for variation in cargo distribution or atmospheric conditions, but it is vulnerable to an electrical outage or a computer error in a way that a simpler plane is not. (see here)
- Genetic regulatory networks are built to sense and maintain function across a variety of environments, but a mutation that rewires the internal connections of such a network is almost always lethal. (see here)
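To see the trade-off in miniature, here's a toy simulation. This is my own sketch, not anything from the examples above: the plant dynamics, the 0.5 feedback gain, and both disturbance models are invented for illustration. An "optimized" controller adds a feedforward term that pre-cancels the previous disturbance, a big win when disturbances drift slowly (the expected regime) and a liability when they flip sign every step (the unexpected regime). Control theorists call a version of this the waterbed effect.

```python
import random

def simulate(disturbances, anticipate):
    """Track deviation x from a setpoint of 0 under a disturbance sequence.

    Both systems apply mild proportional feedback; the "optimized" one
    additionally predicts that each disturbance will repeat the previous
    one and pre-cancels that prediction (feedforward).
    """
    x, prev_d, cost = 0.0, 0.0, 0.0
    for d in disturbances:
        u = 0.5 * x + (prev_d if anticipate else 0.0)
        x = x + d - u          # disturbance pushes, control corrects
        cost += x * x
        prev_d = d
    return cost / len(disturbances)  # mean squared deviation

random.seed(0)
n = 10_000

# Expected regime: slowly drifting disturbances (what the design anticipates).
smooth, d = [], 0.0
for _ in range(n):
    d = 0.95 * d + 0.1 * random.gauss(0, 1)
    smooth.append(d)

# Unexpected regime: same magnitude, but flipping sign every step.
flipping = [0.5 * (-1) ** t for t in range(n)]

for name, seq in [("expected (smooth)", smooth), ("unexpected (flipping)", flipping)]:
    print(f"{name:>22}: simple={simulate(seq, False):.3f}"
          f"  optimized={simulate(seq, True):.3f}")
```

Running this, the optimized system does far better on the smooth disturbances it was tuned for, since the deviations it anticipates mostly cancel, and roughly four times worse on the flipping ones: pre-canceling yesterday's disturbance now adds to today's opposite-signed one. The extra machinery that buys robustness in the expected regime is exactly what creates the fragility in the unexpected one.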
On the other hand, this trade-off is a fairly recent idea; it's not particularly well-defined, and it will be worth seeing what consensus develops around it before we draw too many implications. Still, as far as this committee is concerned, robustness vs fragility is indeed canonical.
(Photo comes from flickr user Scott Beale)