Archive for January, 2012

Original

I currently ‘have’ a minor SF setting, for which there exist a few short stories, comics, pictures, and the like, created by a variety of artists and authors. I’d like future contributions to be as rationalist as possible – for as many characters within the setting as possible to be Rationalist!Heroes, or even Rationalist!Villains. Do you have any advice that might help me nudge the various amateur artists and authors involved in that direction?

(In case you’re curious, the setting’s current reftext is here. The basic underlying question I’m basing it around is, “As technology advances, it becomes possible for smaller and smaller groups to kill more and more people. How can society survive, let alone develop, when individuals have the power to kill billions?” So far, most of the creations have been developing the overall background and characters, rather than focusing on that question directly.)

Original

Having just re-read “Money: The Unit of Caring”, I noticed that the general methods proposed therein rest on some assumptions which don’t seem to apply to me, and I’m trying to figure out how its conclusions change as a result.

Avoiding certain personal details: I’m on a fixed income, with a deposit arriving in my bank account every month. I don’t expect this to change in the foreseeable future; and, at least in the general sense of ‘job’, it’s unlikely I’ll be able to acquire one. In sum – I don’t have any easy way to convert my time into additional money.

However, I still want to get the occasional warm fuzzy from doing the most good I can with what I have – even if that involves volunteering hours of my life for things that would be inefficient uses of someone else’s time. For example: donating blood; taking an overnight shift keeping an eye on things at the local ‘out of the cold’ program; and using givewell.org as a guide for what money I am able to funnel into direct donations.

 

So – does anyone have any advice? (Or questions that would help better advice be given?)

Original

What quick-and-easy rules of thumb do you tend to use to gauge how rational someone else is? How accurate do you think those rules are, and can you think of any way they might be improved?

 

For some examples of what I mean, one of the benchmarks I use is the basic skeptics’ list: astrology, chiropractic, little green men abducting cattle and performing anal probes, Nessie. Another is the denialist checklist: holocaust denial, moon landing denial, global warming denial. Another is supernaturalism in general: creationism, intercessory prayer, magick, psychics, curses, ghosts, and such. If I find out that anyone I know believes in any of that, then my estimation of how well they can consider things rationally goes down. Theism… well, I’ve gotten used to pretty much everyone around me being theistic, so that’s kind of the baseline I assume; when I learn someone is an atheist, my estimation of their rationality tends to go /up/.

Do you have any items which make you think someone is even further along the path of rationality than simply not holding irrational beliefs like those?

Original

I regularly seek inspiration by taking long solo walks; and during my most recent such walk, while considering what practical consequences (if any) there would be if the universe I know were a simulation, something flipped in my head, and I thought to myself, “Screw the simulators. If I’m the first copy of me, I should make it as hard as possible for any simulation of me to keep up with me – and if I’m a simulation, I’m going to try to do even better than my original did.”

Ignoring the impracticality of trying to out-do myself, is there anything that someone living in an ‘original’ universe can do that would make it harder for a future simulator to reproduce them? And, mirror-wise, is there anything someone in a simulated universe could do to differentiate themselves from their original? And, if the answer to either question is ‘yes’… would it be a good or bad idea to try?

 

(And is there any way to gather any actual data that might support the answers to such questions, instead of merely making guesses of a similar nature to classic college/stoner “Our whole universe could be, like, an /atom/, man” musings?)

Original

A short science-fictional scene I just wrote, after reading about some real scientific research. I’d love to turn this, or something like it, into an actual scene in Dee’s life story, but I can’t think of a good enough story to insert it into; so I present it on its own for your amusement, even if it does mean I’m likely to lose more karma than I gained from my last post…

 

 

Not your grandfather’s science fiction.
A scene from Dee’s life

We join our heroine, Dee, and her plucky-yet-sarcastic sidekick holed up in a hotel room.

“Well, this is another fine mess you’ve gotten us into. Got any great ideas for getting us out of it?”

“No – but I know how to have one. Since I lost my visor and vest, including my nootropics and transcranial stimulator… I’m going to need a syringe, sixty millilitres of icewater, a barf bag, and a video camera.”

“I don’t know what you’re planning, but I’m not sure I want to have any part in it.”

“Start MacGyvering as much as we can now from the mini-bar; I’ll explain as we go. Without a camera, and with our time pressure, I’m going to need your help to get this to work, and you need to understand some of this or else you’ll be really confused later. Physically, all I’m going to do is squirt water into my left ear.”

“… and this will help us, how exactly?”

“By shocking my vestibular system, which causes all sorts of interesting effects. One of the unfortunate ones is that when done right, it induces immediate vomiting.”

“Ew.”

“Yes, well, that’s just a side-effect. The main point is… well, really complicated. In layman’s terms, there’s a part of the brain that’s responsible for triggering the creation of profound, revolutionary ideas, and another part that makes you create rationalizations to explain away just about anything, and usually, these two parts of the brain kind of balance each other out. This vestibular trick happens to hyper-stimulate the revolutionary part for about ten minutes, allowing me to realize things I normally wouldn’t, and to see them as being obvious that I don’t know why I didn’t think of them before.”

“Well… okay, even if that’s so, why haven’t I seen you do it before?”

“For one, I don’t want to risk some sort of long-term adaptation which might reduce its effect. But there are more complications to it than that.”

“Of course there are.”

“The thing is, after it’s been hyper-stimulated, the revolutionary part gets tuckered out, and then the rationalizing part effectively kicks into overdrive – and I pretty much forget everything I thought of during those ten minutes, and even crazier-sounding, I won’t be able to accept the idea that I said any of what I said. I literally won’t believe that those ideas came from my mouth.”

“‘Crazier-sounding’ sounds right.”

“Which is why I’m going to need you to remember whatever it is I come up with – and then tell me what the best ideas were, but not tell me that I came up with them. At least until my brain’s gotten back into balance again. I’m now precommitting myself to do whatever it is you tell me to do – even if I don’t understand it, even if I think it’s a bad or stupid or useless idea. Do you think you can handle that level of responsibility?”

“I… think so. And this really works? How the cuss did you ever come up with this, anyway?”

“I once noticed that when I was in a certain state of mind, my head kept twitching to the left every time I thought of something, showing there was a link between idea-generation and the vestibular system. Later I read up about some experiments with people with anosognosia, people who aren’t aware of being paralyzed or blind… are you done with that straw yet?”

“As much as I’ll ever be, I guess.”

“Alright. Hand me the bucket, and squirt the water in my ear – my left ear. It only works in the left ear. Except for left-handed people.”

“I’m beginning to wonder if it’s just the idea that’s crazy.”

“We’ll soon find out. Remember, being the only right person in the room doesn’t feel like being the cool guy wearing black; it feels like being the only one wearing a clown suit. I did that once, just to try. Now, here we <hralph!>”

 

Original

“Do not walk to the truth, but dance. On each and every step of that dance your foot comes down in exactly the right spot. Each piece of evidence shifts your beliefs by exactly the right amount, neither more nor less. What is exactly the right amount? To calculate this you must study probability theory. Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.” — from “Twelve Virtues of Rationality”, by Eliezer Yudkowsky

One of the more useful mental tools I’ve found is the language Lojban ( http://www.lojban.org/tiki/Learning ), which makes explicit many of the implicit assumptions in languages. (There’s also a sub-language based on Lojban, called Cniglic ( http://www.datapacrat.com/cniglic/ ), which can be added to most existing languages to offer some additional functionality.)

One of the things Lojban (and Cniglic) has is a set of ‘evidentials’: words which can be used to tag other words and sentences to explain how the speaker knows them: “ja’o”, meaning “I conclude”; “za’a”, meaning “I observe”; “pe’i”, meaning “It’s my opinion”; and more. However, there hasn’t been any easy and explicit way to use this system to express Bayesian reasoning…

… until today.

Lojban not only allows for, but encourages, “experimental” words of certain sorts; and using that system, I have now created the word “bei’e” (pronounced BAY-heh), which allows a speaker to tag a word or sentence with how confident they are, in the Bayesian sense, of its truth. Taking an idea from the foundational text by E.T. Jaynes, “bei’e” is measured in decibels of evidence: ten times the base-ten logarithm of the odds. This sounds complicated, but in many cases it’s actually much easier to use than plain odds or probability; adding 10 decibels multiplies the odds by a factor of 10.

The current reftext for “bei’e” is at http://www.lojban.org/tiki/bei%27e ; the system basically amounts to adding Lojbanic digits to the front of the word:

word           dB   probability  odds           note
ni’uci’ibei’e  -∞   0%           1:∞            complete disbelief, paradox
ni’upabei’e    -1   44.3%        4:5            10 times less likely than sobei’e
ni’ubei’e      <0   <50%         <1:1           less than even odds, less likely than not
nobei’e        0    50%          1:1            neither belief nor disbelief, agnosticism
ma’ubei’e      >0   >50%         >1:1           greater than even odds, more likely than not
pabei’e        1    55.7%        5:4            preponderance of the evidence
rebei’e        2    61.3%        3:2
cibei’e        3    66.6%        2:1            clear and convincing evidence
vobei’e        4    71.5%        5:2
mubei’e        5    76.0%        3:1            beyond a reasonable doubt
xabei’e        6    80.0%        4:1
zebei’e        7    83.3%        5:1
bibei’e        8    86.3%        6:1
sobei’e        9    88.8%        8:1
panobei’e      10   90.9%        10:1
pacibei’e      13   95.2%        20:1           10 times as likely as cibei’e
xarebei’e      62   99.99994%    1,500,000:1    5 standard deviations
ci’ibei’e      ∞    100%         ∞:1            complete belief, tautology
xobei’e        ?    ?%           ?:?            question, asking listener their level of belief
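For anyone who wants to check the arithmetic, here’s a quick sanity check of the table’s percentages (a sketch of my own, in Python, not part of the reftext): decibels of evidence convert to probability by turning the decibels back into odds.

```python
def belief_db_to_probability(db):
    """Convert decibels of evidence (ten times the base-ten log of the odds)
    into a probability."""
    odds = 10 ** (db / 10)   # e.g. 10 dB -> 10:1 odds in favour
    return odds / (1 + odds)

# Reproduce a few rows of the table: 1 dB -> 55.7%, 3 dB -> 66.6%,
# 10 dB -> 90.9%, 13 dB -> 95.2%.
for db in (1, 3, 10, 13):
    print(f"{db:>2} dB -> {belief_db_to_probability(db):.1%}")
```

The handy property shows up right in the formula: each extra 10 decibels is exactly one more factor of 10 on the odds.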

By having this explicit mental tool, even if I don’t use it aloud, I’m finding it much easier to remember to gauge how confident I am in any given proposition. If anyone else finds use in this idea, so much the better; and if anyone can come up with an even better mental tool after seeing this one, that would be better still.

.uo .ua .uisai .oinairo’e


Original

Libertarian Law: Competence and the Common Defense
by DataPacRat
datapacrat@datapacrat.com

 

Attribute to L. Neil Smith’s The Libertarian Enterprise

“If tempted by something that feels “altruistic,” examine your motives and root out that self-deception. Then, if you still want to do it, wallow in it!”
—Heinlein

A literalist interpretation of the Zero-Aggression Principle can imply that pushing someone out of the way of a falling piano involves the initiation of force against them, and is thus immoral—a result so far outside common sense and empathy that it can be used by anti-libertarians to disparage the ZAP. The trouble with trying to refute this idea is that it’s actually true—pushing someone against their will is an initiation of force against them.

Thus, technically, it’s within someone’s right of self-defense to retaliate against such a rescuer for trying to save their life; which means that there’s a certain risk involved in trying to save a libertarian’s life. However, as a person with some amount of compassion, I’d like to have the option of trying to save people from such unknowing doom. And, were a piano falling towards me without my knowledge, I’d want somebody to try to save me—which is much less likely to happen if they have to worry about my using defensive force against them.

Fortunately for everyone, there’s at least one way in which the spirit of the ZAP can be preserved, if not necessarily a particular literalist reading thereof, which allows people to use reasonable amounts of force in the saving of other people, thus allowing more such rescues to take place, which benefits us all. (If for no other reason than that every living person is a potential partner with whom to engage in voluntary positive-sum trades.) And all it requires is taking a somewhat nuanced view of the issue of competency.

In the usual view, competence is treated as purely binary: a person is either competent to handle all their own affairs, or they’re incompetent, due to youth or mental disorder. However, if we were to treat it as more of a per-issue matter, in which a person may have the mental capacity and understanding to handle some parts of their life but not necessarily all of them, then that gives us just enough wiggle-room to deal with the issue at hand.

If a piano were falling towards me, and I was unaware of it, then I would lack the information required to make a true decision about whether or not I wanted to let it hit me; and so, if someone nearby had the facts I lacked, they would be able to make a choice about whether to act on my behalf, in what they guessed to be my best interests, until such time as I had sufficient information to decide for myself. Looked at from the other way, if I saw someone unwittingly face their doom, then I would have the option—not necessarily the obligation—to act on their behalf, using the minimal amount of force needed to save their life until they let me know whether or not they wanted to suicide.

Unfortunately, taking this nuanced view isn’t without its risks. A would-be tyrant could try to seize hold of the “in their own best interests” idea as justification for doing all sorts of unpleasant things to others; it’s happened all too often in the past. The only counter to this that I’ve come up with so far is that such interventions can be moral if, and only if, the intervenor is trying to help the other person become competent, as quickly as possible. An intervention which shows no sign of allowing the target to take control of their own life, or of helping them to understand the world around them, seems unlikely to be moral. (While this seems consistent with the principles of raising children, it does seem to leave people with permanent, incurable mental disabilities somewhat in the lurch; the only rationale I can suggest is that medical technology continues to advance, and what is incurable today may be curable tomorrow.)

Another potential problem is the “Chinese obligation” resulting from such an intervention: if I take it upon myself to use force against someone’s will to save their life, then that force can be moral only if I proceed to save their life, and to help them reach the point where they can decide whether or not to save their own life. Which may require rather more effort than was initially believed to be the case.

Those risks seem reasonably manageable; and taking those risks seems to solve a minor philosophical paradox, and allows for the saving of lives that would otherwise be lost. I’m sure someone will tell me I’m wrong; but until someone manages to demonstrate the logical flaws, I’m going to try to act in line with these ideas. So, if someone seems to be initiating force against me, I’m going to try to use the minimal amount of force I can until I can determine whether or not they’re acting in my best interests; by making this announcement, I’m hoping to make it more likely that more people will be willing to try to save my life, thus increasing my odds of survival. Also, if I see that someone is unaware of an approaching doom, and if I decide to intervene, I will try to do so in a way that will allow them to take control of their own lives as soon as possible; by announcing this as my intention, I hope to reduce the force used against me during the course of such interventions, again increasing my odds of survival.

And by describing the logic involved, I am encouraging others to take similar stances—which, again, increases the odds of my living a longer, happier life.

And isn’t that what libertarianism is supposed to do for us all?

“Do not confuse “duty” with what other people expect of you; they are utterly different. Duty is a debt you owe to yourself to fulfill obligations you have assumed voluntarily. Paying that debt can entail anything from years of patient work to instant willingness to die. Difficult it may be, but the reward is self-respect.”
—Heinlein

Thank you for your time,


Original

If We’re So Smart, Why Haven’t We Won?
by DataPacRat
datapacrat@datapacrat.com

 

Attribute to L. Neil Smith’s The Libertarian Enterprise

“Liberty is not a cruise ship full of pampered passengers. Liberty is a man-of-war, and we are all crew.”
—Kenneth W. Royce

If life without a tyrannical government is so obviously so much better than life with one, why aren’t we already living in a libertopian paradise? Why does anyone at all support a government with the authority to do nasty things to them?

These aren’t rhetorical questions; if we really do want to push our societies closer to the libertarian principles we profess, then we need to have an accurate understanding of what stands in our way, so that we can figure out which methods will and won’t advance our cause.

At least one part of the answer may come from the part of game theory surrounding the “Prisoner’s Dilemma”. This is a classic thought experiment; in one version, you and someone else are both arrested and facing a jail term. There is only minor evidence against each of you, so each of you is facing 1 year in jail. You have two choices: you can ‘cooperate’ with your fellow arrestee and stay silent, or you can ‘defect’, which will reduce your sentence by 1 year but add 2 years to his. He faces the same choice, and neither of you can communicate with the other until you’ve both made your decisions.

The paradox comes from the fact that both of you would prefer that both of you cooperate, resulting in a 1-year sentence each, rather than both of you defecting, resulting in a 2-year sentence each; but no matter what the other person does, you get a better result by defecting than by cooperating.

Similar sorts of problems, where you have the choice of improving your own situation at the expense of someone else, crop up in a wide variety of contexts. For one example, I might face a decision about whether or not to steal from you, and you might face a similar decision about me. If lots of people decide to steal from other people, then society becomes a rather unpleasant place to be—especially compared to what it would be if most people decided not to steal.
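For the numerically inclined, the sentence arithmetic can be written out explicitly (my own illustrative sketch, using the jail terms from the version of the Dilemma just described):

```python
# Jail terms from the scenario above: a 1-year base sentence; defecting
# subtracts 1 year from your own sentence and adds 2 years to the other's.
def sentence(my_choice, their_choice):
    years = 1
    if my_choice == "defect":
        years -= 1
    if their_choice == "defect":
        years += 2
    return years

# Whatever the other prisoner does, you personally do better by defecting...
for theirs in ("cooperate", "defect"):
    assert sentence("defect", theirs) < sentence("cooperate", theirs)

# ...yet mutual cooperation (1 year each) beats mutual defection (2 years each).
assert sentence("cooperate", "cooperate") == 1
assert sentence("defect", "defect") == 2
```

That pair of assertions is the whole paradox: defection dominates individually, while mutual cooperation dominates collectively.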

True Prisoner’s Dilemma problems are actually quite rare in real life, because people have found several ways to get around the basic form of the problem, resulting in different payoffs. And, intriguingly, the main forms of these solutions correspond closely to the groups that El Neil referred to in his recent “Political Geometry” article as paternalistic, maternalistic, individualistic, and fascistic; what I referred to in “Revisiting Meade” as Reds, Whites, Blues, and Greens; and what popular media might call Gryffindors, Hufflepuffs, Ravenclaws, and Slytherins.

One solution is that if there’s someone who’ll punish anyone who defects, then the costs of defecting will tend to rise and outweigh the benefits, thus discouraging anyone from defecting. In fact, if you tilt your head and squint, then from a certain perspective, having such a punishment system in place is to the benefit of everyone involved, since they don’t expect to face the costs of punishment themselves, and having the system in place means that everyone is more likely to cooperate than defect. When the Dilemma in question is about whether to steal or not, the punisher takes the form of what we know as government—and this is the solution favored by the Greens. Even if a government goes beyond this role and causes all sorts of mischief, from the Green perspective, having a bad government is better than having no government at all.

Another solution is that if you can predict that the other person in the Dilemma will tend to act the same way you act, then you are safe in cooperating, since you know that the other person will make the same choice. In political Dilemmas, this takes the form of having a code of honor, and is favored by Reds.

The third solution is that if you actually care a good deal about what happens to the other person, then you won’t want them to be harmed by your own defection. Such compassion is an attribute of Whites.

Finally, if news about whether you’re the sort of person to cooperate or defect can be spread widely, allowing other people to predict the choices you’ll make, then your desire to get a good result in future Dilemmas can override your desire to get the best score in this particular Dilemma. And, as you might guess, truth-loving Blues often enjoy calculating the details of such reputation-based systems.
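As a toy illustration of why reputation and predictability change the game (my own sketch, not anything from El Neil’s article): if the same two players face the Dilemma over and over, and each can react to the other’s past moves, then a pair following the simple ‘tit-for-tat’ rule ends up cooperating throughout, and serves half the jail time of a pair who always defect.

```python
def play(strategy_a, strategy_b, rounds=20):
    """Total jail years each player racks up over repeated Dilemmas,
    using the 1-year base / -1 / +2 sentence arithmetic from above."""
    history_a, history_b = [], []
    years_a = years_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the other's past moves
        move_b = strategy_b(history_a)
        years_a += 1 - (move_a == "defect") + 2 * (move_b == "defect")
        years_b += 1 - (move_b == "defect") + 2 * (move_a == "defect")
        history_a.append(move_a)
        history_b.append(move_b)
    return years_a, years_b

def always_defect(opponent_history):
    return "defect"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror whatever the opponent did last round.
    return opponent_history[-1] if opponent_history else "cooperate"

print(play(tit_for_tat, tit_for_tat))      # 20 years each: cooperation throughout
print(play(always_defect, always_defect))  # 40 years each: mutual defection
```

Which solution a given political color prefers is, in effect, a choice of which mechanism makes the other player’s next move predictable.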

And all of the above makes for a pleasantly diverting analysis; but even if it’s a good model, what good does it do in bringing us closer to libertopia?

Part of the answer comes from the notion that power-hungry Greens gain part of their support by convincing people with other values to help them. For example, they might use the explanation “to protect the children” to gain White support for a law, or say something about “an affront to our national honour” to gain Red support for a law… and once the Greens have the support of the Whites and Reds, they don’t even need to try to get the support of the Blues, though they’ll certainly grab hold of any Blue supporters they can get.

Therefore, if we want to reduce the support of Green-style government, one tactic might be to try to convince the Reds, Whites, and Blues to withdraw their support—by showing them that the Green way isn’t the only way there is to prevent the troubles of mass ‘defections’. You could try pointing out to motherly Whites that it’s possible to strangle someone with too-tight apron-strings, and that it’s really in the long-term best interests of the people they care about to let them grow up and make their own mistakes. For a Red, you could try figuring out which codes of honor are most compatible with libertarian-style property rights and least compatible with Green government, such as Robin Hood style redistribution, or that certain military orders should be disobeyed; and promote those forms of honor as being superior to whatever code the Red currently follows. For Blues, it might be worth pointing out the advantages of everyone being held to the same standards, and getting them interested in whistleblowing programs.

A number of other similar actions can be taken, depending on what it is a person values most highly. Unfortunately, no single such persuasion attempt is going to completely transform society into a paradise. Fortunately, every supporter taken away from the Greens; every person convinced that options other than authoritarian government can allow them to live their lives in peace; every individual who becomes, if not a full supporter of our political goals, then at least a fellow traveler; will make it that much easier to do whatever is necessary to remove unpleasant tyrants from power.

Not to mention, it’s something that any of us can do, if nothing else to help pass the time until a better plan comes along.

Thank you for your time,