Archive for the ‘Uncategorized’ Category

Original

Politics is the mind-killer; but rationality is the science of /winning/, even when dealing with political issues.

I’ve been trying to apply LessWrong and Bayesian methods to the premises and favored issues of a particular political group. (Their most basic premise is roughly equivalent to declaring that Iterated Prisoner’s Dilemma programs should be ‘nice’.) But, given how quickly my previous thread exploring this issue was downvoted into disappearing, and given many of the comments I’ve received on similar threads, I may have a rather large blind spot preventing me from being able to /properly/ apply LW methods in this area.

So I’ll try a different approach – instead of giving it a go myself again, I’ll simply ask, what do /you/ think a good LW post about liberty, freedom, and fundamental human rights would look like?

Original

… once you’ve grabbed yourself a Kleenex, and you’re somewhere nobody can see you listening to it.

Video: http://www.youtube.com/watch?v=Na-xvlYMGck
MP3: http://www.tomsmithonline.com/freestuff/oddio/BoyFrog.mp3

A webcomic about it: http://www.somethingpositive.net/sp08032005.shtml

Author’s notes: http://www.tomsmithonline.com/lyrics/boy_frog.htm

… Holy Hannah, it’s been over 20 years. And this is still one of the most bittersweet things I’ve ever heard. I just listened to it a bunch of times in a row to pick the best video link – and even though I’d just heard it, each time caught me the same way as the first.

Original

At present, if one person chooses to, they can kill a few dozen to a few hundred people. As we discover new technologies, that number is, most likely, only going to go up – to the point where any given individual has the power to kill millions. (And this isn’t a very distant future, either; it’s entirely possible to put together a basement biology lab of sufficient quality to create smallpox for just a few thousand dollars.)

If we want to avoid human extinction, I can think of two general approaches. One starts by assuming that humans are generally untrustworthy, and involves trying to keep any such technology out of people’s hands, no matter what other benefits such knowledge may offer. This method has a number of flaws, the most obvious being the difficulty of keeping such secrets contained; another is the classic “who watches the watchers?” problem.

The other approach doesn’t start with that assumption; instead, it tries to figure out what it takes to keep people from /wanting/ to kill large numbers of other people… a sort of “Friendly Human Problem”. For example, we might start with a set of societies in which every individual has the power to kill any other at any moment, see which particular social norms allow people to at least generally get along with each other, and then encourage those norms as the foundation for how those people behave as they gain increasingly lethal knowledge.

Most likely, there will be (or already are) some people who try the first approach, and some who try the second – which seems very likely to cause friction when they rub against each other.

In the medium-to-long term, if we do establish viable off-Earth colonies, an important factor to consider is that once you’re in Earth orbit, you’re halfway to anywhere in the solar system – including to asteroids which can be nudged into Earth orbit to be mined… or nudged to crash into Earth itself. Any individual with the power to move around the solar system, such as to found a new self-sufficient colony somewhere (which, I’ve previously established to my own satisfaction, is the only way for humanity to survive a variety of extinction-level events), will have the power to kill billions. If sapience is to survive, we will /have/ to deal with people wielding lethal power undreamt of by today’s worst tyrannical regimes – which would seem to make the first approach /entirely/ unviable.


Once people have such lethal power, I’ve been able to think of two stable end-points. The obvious one is that everyone ends up dead – a rather suboptimal result. The other… is if everyone who has such power is very careful to never be the /first/ one to use force against anyone else, thus avoiding escalation. In game theory terms, this means all the remaining strategies have to be ‘nice’; in political terms, this is summed up as the libertarian “Non-Aggression Principle”.
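
To make that game-theory framing concrete, here’s a minimal sketch in Python of what ‘nice’ means in the Iterated Prisoner’s Dilemma – a strategy that is never the first to defect. The strategy names, the (standard) payoff values, and the round count are all just my own illustrative choices, not taken from any particular tournament:

```python
COOPERATE, DEFECT = "C", "D"

# Standard Prisoner's Dilemma payoffs: (my move, their move) -> my score.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(their_history):
    """Nice: cooperates first, then simply mirrors the opponent's last move."""
    return COOPERATE if not their_history else their_history[-1]

def always_defect(their_history):
    """Not nice: defects from the very first round."""
    return DEFECT

def play(strategy_a, strategy_b, rounds=200):
    """Run an iterated match and return each side's total score."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the *other's* history
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Two nice strategies never escalate: mutual cooperation every round.
print(play(tit_for_tat, tit_for_tat))    # (600, 600)
# A nice strategy still retaliates; it just doesn't strike first.
print(play(tit_for_tat, always_defect))  # (199, 204)
```

The point of the second match is that being ‘nice’ doesn’t mean being a pushover: Tit-for-Tat still retaliates, it just never escalates first – which is exactly the property the Non-Aggression Principle asks for.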

I think I need to think a bit more about some of the other lessons of game theory’s Tit-for-Tat, such as the qualities of being retaliatory, forgiving, and non-envious, and whether variations on the basic Tit-for-Tat, such as “Tit for Two Tats” or “Tit for Tat with Forgiveness”, would be better models – I’ve sketched a toy version of that question below. For example, the level of forgiveness that serves best might depend on how many people are still willing to initiate force, compared to how many try not to but occasionally make mistakes.
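
Here’s that sketch – a toy Python simulation, pitting each variant against a copy of itself while each intended move occasionally gets flipped by mistake. The noise rate, the forgiveness probability, and the payoffs are all numbers I made up for illustration, so treat the output as a demonstration of the dynamic rather than as any kind of result:

```python
import random

# Standard Prisoner's Dilemma payoffs: (my move, their move) -> my score.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(their_history):
    """Mirror the opponent's last move; cooperate on the first round."""
    return "C" if not their_history else their_history[-1]

def tit_for_two_tats(their_history):
    """More tolerant: only retaliate after two defections in a row."""
    return "D" if their_history[-2:] == ["D", "D"] else "C"

def forgiving_tit_for_tat(their_history, forgiveness=0.1):
    """Like Tit-for-Tat, but sometimes cooperates anyway after a defection."""
    if their_history and their_history[-1] == "D":
        return "C" if random.random() < forgiveness else "D"
    return "C"

def noisy_match(strat_a, strat_b, rounds=1000, noise=0.05):
    """Iterated match where each intended move is flipped with probability `noise`."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(history_b)  # each side sees only the other's moves
        move_b = strat_b(history_a)
        if random.random() < noise:
            move_a = "D" if move_a == "C" else "C"
        if random.random() < noise:
            move_b = "D" if move_b == "C" else "C"
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Pit each variant against a copy of itself, mistakes and all.
for strategy in (tit_for_tat, tit_for_two_tats, forgiving_tit_for_tat):
    print(strategy.__name__, noisy_match(strategy, strategy))
```

Under noise, the plain Tit-for-Tat pair tends to get stuck echoing each other’s accidental defections, while the more forgiving variants recover – though, as noted above, too much forgiveness is exactly what anyone still willing to initiate force would exploit.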

I’m also rather suspicious that my thinking on this particular issue leads me to a political conclusion that’s reasonably close to (though not precisely matching) my existing beliefs; I don’t yet have enough practice with real rationality to tell whether that means I’ve arrived at a correct result from two different directions, or whether my thoughts are biased toward that conclusion whatever the input. I’d appreciate any suggestions on techniques for differentiating between the two.

Original

I’ve been trying to take the important bits of some of the ideas I’ve come to understand, and to express them in a way that other people would actually want to view, hear, and learn from. “Rationality Matters” is my first serious attempt – it has its good points, and points I could have done better, and I’m learning from both for the future.

Oh, yes, and L. Freakin’ Neil Smith, noted libertarian SF author, plugged it in his newsletter.

Original

I’m becoming more committed to producing a third Rationality Matters comic, and to making it as good as I can at what I want it to do: persuading furries and libertarians to adopt LessWrong/Sequences ideas and become more rational.

I’m trying to use the idea from HP:MoR’s chapter 25 about how to go about doing that… and, as part of that, I’m trying to figure out which particular LessWrong ideas to promote – things that are more specific than rah-rahing rationality as ‘how to win’. Page-space is limited, and I doubt I’ll make a fourth “Rationality Matters” in the same format, so… which LessWrong thoughts would you most like to see mentioned? Which ones do you feel are most valuable, or important, or at the very least are easy to compress into a comprehensible soundbite?

Original

After we came out of the church, we stood talking for some time together of Bishop Berkeley’s ingenious sophistry to prove the non-existence of matter, and that every thing in the universe is merely ideal. I observed, that though we are satisfied his doctrine is not true, it is impossible to refute it. I never shall forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it, ‘I refute it thus.’ — Boswell’s Life of Samuel Johnson

Sometimes, when discussing philosophy (or anything based on philosophy), the person you’re talking with will defend their point by taking refuge under the shield of the undisprovable – that there’s no way to prove the universe is real, or that you’re real, or that there’s any point in doing anything at all.

I’ve started using a shorthand argument against such positions, which I call the ‘Stick Test’. I simply start (virtually) thwapping them repeatedly on the head with a stick, until such time as they can offer a reason for me to stop, with the minor caveat that the reasoning they give can’t be self-annulling. For example, if their argument is that it is impossible to judge another culture’s activities as being ‘evil’, I offer up the idea that it’s part of my culture to repeatedly thwap people I disagree with on the head with a stick, and thus they have no justification for telling me to stop.

I’ve both had and inspired a few chuckles with this method… but I’m now throwing it in the fire – is it a *good* technique for pointing out that sort of flaw, or is it a poor tool which should be replaced by some *better* one? Assuming that it’s not totally useless, what can be done to apply it most effectively?

Original

Having read through the Sequences, Methods of Rationality, related blogs and books and so on, and having changed my mind a few times on at least a few ideas that I’d been fairly sure about… I feel that I finally have enough of a grasp of the basics of LessWrong-style rationality to start trying to introduce it to other people. And while the Sequences form a good set of basics, getting someone interested enough in rationality to start reading them is a step of its own… and, as best as I can tell, one that needs to be custom-tailored to a particular audience.

For my first attempt, I’ve focused on two online subcultures with which I’m at least somewhat familiar: furries and a certain subset of libertarians. For example, a large number of furry fans are fairly easy to please – give them a short comic to read involving a cute anthropomorphic animal, throw in a bit of sex appeal and maybe a message that’s compatible with tolerance of all people, and a lot of them will happily read it. As for the libertarians – trying to avoid “politics is the mind-killer” derailment, I’ll just say that the community I’m aiming for tends to have its own quirks about what attracts its attention.

The result I came up with was the creation of Rationality Matters, a couple of comics pages that introduce some LW-type thoughts in an audience-compatible fashion without beating the readers’ heads with them. I’ve already received some positive feedback from members of both target groups, indicating that I’ve accomplished my goal with at least a few individuals… so now I’m posting the link here, for whatever feedback I can get that could improve the existing pages (mainly for the text, since re-doing the art at this stage is impractical), and to make any future pages (should I decide to create them) better than I would have made them without such help.

(And yes, I try to follow Crocker’s Rules.)

Original

I just dug up a couple of sidestories from the webcomic 21st Century Fox that I’ve never forgotten – even though I’d forgotten exactly which space-based furry webcomic had run them, or when. Anyone who’s actually reading this journal – just read ’em. And if you want to comment here, or tell anyone else to read ’em, go right ahead.

First story setup: http://www.hirezfox.com/21cf/d/20040719.html

http://www.hirezfox.com/21cf/d/20040723.html
http://www.hirezfox.com/21cf/d/20040726.html
http://www.hirezfox.com/21cf/d/20040728.html
http://www.hirezfox.com/21cf/d/20040730.html
http://www.hirezfox.com/21cf/d/20040802.html
http://www.hirezfox.com/21cf/d/20040804.html
http://www.hirezfox.com/21cf/d/20040806.html
http://www.hirezfox.com/21cf/d/20040809.html
http://www.hirezfox.com/21cf/d/20040811.html
http://www.hirezfox.com/21cf/d/20040813.html
http://www.hirezfox.com/21cf/d/20040816.html
http://www.hirezfox.com/21cf/d/20040818.html

Second story setup: http://www.hirezfox.com/21cf/d/20070129.html

http://www.hirezfox.com/21cf/d/20070205.html
http://www.hirezfox.com/21cf/d/20070212.html
http://www.hirezfox.com/21cf/d/20070219.html
http://www.hirezfox.com/21cf/d/20070226.html
http://www.hirezfox.com/21cf/d/20070305.html
http://www.hirezfox.com/21cf/d/20070312.html
http://www.hirezfox.com/21cf/d/20070319.html

Original

In case anyone’s wondering – I’m not trying to take away the pageviews of any artist by posting the pics of DataPacRat here. I’ve simply seen too many pages controlled by other people, such as artists’ galleries, taken offline over the years. So if I want to make sure these pictures stay online, the only way I can do that is by maintaining my own online copies. (That’s why I run my own website of archived material, too.)

Original

http://friendlyatheist.com/2011/05/20/draw-muhammad-day-2-a-compilation/