Attempting to Manage Large Risks
I finished reading the main text of "Antifragile" by Taleb. Deeply flawed, deeply brilliant, hugely important, a brain changer. The take-home message:
- Things fail more often, and more severely, than you can predict or prevent. Rather than trying to predict the risk away, or even make yourself risk-proof, learn to benefit from randomness.
This caused me to withdraw an abstract from a conference (http://launchloop.com/PowerLoop) because the system can be fragile: when a 2x-sized system is 4x as productive as a single 1x-sized system (and the scaling continues up to planetary scale), the economic driver is to build one gigantic system. That makes a huge amount of money until it fails, and then it brings down the global economy.
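The economic driver can be sketched in a few lines. This is a toy model; the linear-cost and square-law-output functions are my illustrative assumptions, standing in for whatever the real scaling law is:

```python
# Toy model of superlinear scaling economics (illustrative assumptions:
# cost grows linearly with size, output grows with the square of size,
# so a 2x-sized system is 4x as productive as a 1x-sized system).

def cost(size):
    return size          # cost proportional to size (assumption)

def output(size):
    return size ** 2     # superlinear productivity (assumption)

for size in [1, 2, 4, 8]:
    print(f"size {size}: output per unit cost = {output(size) / cost(size):.1f}")
# Output per unit cost keeps rising with size, so absent any countervailing
# rule, the incentive is always to build one gigantic system.
```

Under these assumptions, efficiency never stops improving with size, which is exactly why the fragility is baked into the economics.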
After pondering this for a few days, I just came up with a robust rule with general applicability. As stated it is a legal rule; I still need to figure out how to make it a self-motivating business practice:
- No large system can be deployed in practice until you build and break three systems of the same design.
This is accepted practice for small engineered systems: for one very small integrated circuit design, I built and tested 1.6 million copies, most to destruction. Now they are deployed by the billions, and I have a somewhat better idea of what breaks them.
Imagine you want to build a boiling-water nuclear reactor on the coast of Japan. Before you can operate it and sell power, you must destroy three reactors of the same design, as if by unplanned accident. You get to destroy the first one yourself in the most fiendish way you can imagine, and make sure the worst results are confined. Next, your most hated competitor gets to destroy one. Lastly, a consortium of all the opposition groups gets to destroy one. And you (and your re-insurers) are legally and financially responsible for the aftermath. Do a good job!
You get to use the lessons learned for improvement and better design, perhaps even patent them. But the "destructors" of phases two and three get to patent what THEY learned, and before you deploy you must license those lessons from them. As the original designer, your incentive is to learn one whole hell of a lot from the first test, and to make sure the worst-case results don't impose additional costs.
I lack imagination - the only way I can see to enforce this is legally. Perhaps there are ways to arrange the legal incentives so companies are eager to do this. Perhaps these steps must be taken before something like Price-Anderson nuclear indemnity limits apply. But this should apply to more than nuclear - any large and vital constructed object, like a jumbo jet or a bridge, must undergo this kind of type testing before it is released for public use.
Back to my original problem: nobody is incentivized to build one big system, which under this rule requires four units (three to break) at four times the cost of a single big unit, when they can instead build seven half-sized units (three to break) for 3.5 times that cost. The economics now favors deploying many moderate-sized units, until the market grows so large that it makes sense to build many units at the next size up.
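A quick check of the arithmetic above. The assumptions (taken from the scaling claim earlier) are that a 2x-sized unit costs twice as much as a 1x unit and produces four times the output of a single 1x unit:

```python
# Verify the 4x-vs-3.5x comparison under the three-to-break rule.
# Assumptions: big unit = 2x size, so it costs 2x a small unit and
# produces 4x a small unit's output.
small_cost, big_cost = 1, 2
small_output, big_output = 1, 4

# Deploying one big unit requires building four big units (three to break).
big_total_cost = 4 * big_cost                                  # = 8

# Matching one big unit's output with small units: 4 deployed + 3 broken = 7.
small_units_needed = big_output // small_output + 3            # = 7
small_total_cost = small_units_needed * small_cost             # = 7

print(big_total_cost / big_cost)    # 4.0 -> "four times the cost" of one big unit
print(small_total_cost / big_cost)  # 3.5 -> "3.5 times the cost" of one big unit
```

So the rule makes the many-small-units route cheaper per unit of deployed output, which is the whole point.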
Still a pretty weak idea, but I'm sure I can improve it with help.