
At present, if one person chooses to, they can kill a few dozen to a few hundred people. As we discover new technologies, that number is, most likely, only going to go up – to the point where any given individual has the power to kill millions. (And this isn’t a very distant future, either; it’s entirely possible to put together a basement biology lab of sufficient quality to create smallpox for just a few thousand dollars.)

If we want to avoid human extinction, I can think of two general approaches. One starts by assuming that humans are generally untrustworthy, and involves trying to keep any such technology out of people’s hands, no matter what other benefits that knowledge may offer. This approach has a number of flaws, the most obvious being the difficulty of keeping such secrets contained; another is the classic “who watches the watchers?” problem.

The other doesn’t start with that assumption; instead, it tries to figure out what it takes to keep people from /wanting/ to kill large numbers of other people… a sort of “Friendly Human Problem”. For example, we might start with a set of societies in which every individual already has the power to kill any other at any moment, observe which social norms allow such people to generally get along with each other, and then encourage those norms as the foundation for a society whose members are gaining increasingly lethal knowledge.

Most likely, there will be (or already are) some people who try the first approach and some who try the second, which seems very likely to cause friction wherever the two camps meet.

In the medium-to-long term, if we do establish viable off-Earth colonies, an important factor to consider is that once you’re in Earth orbit, you’re halfway to anywhere in the solar system, including the asteroids that can be nudged into Earth orbit for mining… or nudged to crash into Earth itself. Any individual with the ability to move around the solar system, for example to found a new self-sufficient colony somewhere (which, I’ve previously established to my own satisfaction, is the only way for humanity to survive a variety of extinction-level events), will have the power to kill billions. If sapience is to survive, we will /have/ to deal with people wielding lethal power undreamt of by today’s worst tyrannical regimes, which would seem to make the first approach /entirely/ unviable.


Once people have such lethal power, I’ve been able to think of two stable end-points. The obvious one is that everyone ends up dead – a rather suboptimal result. The other… is if everyone who has such power is very careful to never be the /first/ one to use force against anyone else, thus avoiding escalation. In game theory terms, this means all the remaining strategies have to be ‘nice’; in political terms, this is summed up as the libertarian “Non-Aggression Principle”.
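
To pin down what “nice” means here, a minimal sketch (my own toy illustration; the function name and move encoding are made up for this post) of Axelrod’s criterion: a strategy is “nice” if it is never the first to defect.

```python
# Toy check for the "nice" property: never the first to defect.
# Moves are "C" (cooperate) and "D" (defect); both lists are the finished
# transcript of one iterated game, in order.
def is_nice(my_moves, their_moves):
    for i, move in enumerate(my_moves):
        if move == "D" and "D" not in their_moves[:i]:
            return False  # defected before the other side ever did
    return True

print(is_nice(list("CCCDC"), list("CCDCC")))  # True: only ever retaliated
print(is_nice(list("CDCCC"), list("CCCCC")))  # False: initiated the defection
```

In those terms, the Non-Aggression Principle is just the requirement that every strategy still in play pass this test.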

I need to think a bit more about some of the other lessons of game theory’s Tit-for-Tat, such as the qualities of being retaliatory, forgiving, and non-envious, and about whether variations on the basic strategy, such as “Tit for Two Tats” or “Tit for Tat with Forgiveness”, would be better models. For example, the level of forgiveness that serves best might depend on how many people are still willing to initiate force compared to how many try not to but occasionally make mistakes.
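
One way to start exploring that would be a simulation along the following lines. This is only a rough sketch: the payoff matrix, noise rate, forgiveness rate, and mix of opponents are all arbitrary choices of mine, picked for illustration rather than taken from anywhere.

```python
# A toy, noisy iterated prisoner's dilemma comparing Tit-for-Tat variants
# against an unconditional aggressor and a well-meaning but error-prone player.
import random

C, D = "C", "D"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else C

def tit_for_two_tats(my_hist, their_hist):
    # Only retaliates after two defections in a row.
    return D if their_hist[-2:] == [D, D] else C

def generous_tft(my_hist, their_hist, forgive=0.2):
    # Like Tit-for-Tat, but lets a defection pass 20% of the time.
    if their_hist and their_hist[-1] == D and random.random() > forgive:
        return D
    return C

def always_defect(my_hist, their_hist):
    return D

def sloppy_cooperator(my_hist, their_hist, slip=0.05):
    # Tries never to initiate force, but occasionally makes a mistake.
    return D if random.random() < slip else C

def play(strat_a, strat_b, rounds=200, noise=0.02):
    """Return strat_a's total score over one noisy iterated game."""
    hist_a, hist_b, score_a = [], [], 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        # Noise: an intended cooperation sometimes comes out as a defection.
        if random.random() < noise:
            a = D
        if random.random() < noise:
            b = D
        score_a += PAYOFF[(a, b)][0]
        hist_a.append(a)
        hist_b.append(b)
    return score_a

if __name__ == "__main__":
    random.seed(0)
    field = [tit_for_tat, tit_for_two_tats, generous_tft,
             always_defect, sloppy_cooperator]
    totals = {s.__name__: 0 for s in field}
    for a in field:
        for b in field:
            totals[a.__name__] += play(a, b)
    for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{name:18s} {total}")
```

Varying the proportion of always_defect players to sloppy_cooperator players, and the forgive parameter, is the sort of experiment that would show how much forgiveness actually pays off.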

I’m also rather suspicious that my thinking on this particular issue leads to a political conclusion reasonably close to (though not precisely matching) my existing beliefs; I don’t have enough practice with true rationality to tell whether that means I’ve reached a correct result from independent directions, or that my reasoning is biased toward that conclusion regardless of the input. I’d appreciate any suggestions on techniques for differentiating between the two.
