Archive for February, 2012

Original

This is one of those sleep-deprived middle-of-the-night ideas which I’m reasonably likely to regret posting in the morning once I really wake up – but which, at least at the moment, thinking on my more-corrupted-than-standard hardware, seems like a cool idea.

Most role-playing games have a system for determining whether certain actions succeed. Most of the time, these can be described as setting a target number and rolling one or more dice, with various modifiers – e.g., you might have to roll a 13 or higher on a twenty-sided die to correctly answer the sphinx’s riddle, and having your handy Book of Ancient Puzzles to refer to may give you a +3 bonus to your roll.
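
In code, that kind of linear check is about as simple as it gets – here’s a minimal Python sketch (the function name and the exact conventions are mine, purely for illustration):

    import random

    def d20_check(target, modifier=0):
        """Classic linear mechanic: roll 1d20, add any modifiers, compare to a target."""
        return random.randint(1, 20) + modifier >= target

    # Answering the sphinx's riddle: need 13 or higher, with +3 from the book.
    print(d20_check(13, modifier=3))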

How insane and awful an idea would it be to have an RPG system whose core mechanic wasn’t based on linear probabilities like that… but, instead, on decibels of Bayesian probability? For example, instead of a bonus adding a straight +3 to a d20 – that is, increasing your chance of success by 15 percentage points no matter how easy the task or how skilled you are – the bonus adds +3 decibels: changing your chance from 50% to 66% if you started out with a middling shot, but only increasing it from 90% to 95% if you’re already very skilled.
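
And here’s a sketch of the decibel version, assuming “decibels” means ten times the base-10 logarithm of the odds ratio (the function names are mine, not part of any real system):

    import math
    import random

    def prob_to_db(p):
        """Probability -> decibels of odds: 10 * log10(p / (1 - p))."""
        return 10 * math.log10(p / (1 - p))

    def db_to_prob(db):
        """Decibels of odds -> probability."""
        odds = 10 ** (db / 10)
        return odds / (1 + odds)

    def db_check(base_prob, bonus_db):
        """Resolve an action whose base chance of success is shifted by a decibel bonus."""
        return random.random() < db_to_prob(prob_to_db(base_prob) + bonus_db)

    # +3 dB roughly doubles the odds, so the same bonus matters less as skill grows:
    print(round(db_to_prob(prob_to_db(0.50) + 3), 3))  # ~0.666
    print(round(db_to_prob(prob_to_db(0.90) + 3), 3))  # ~0.947

The same +3 bonus is worth a lot on a coin-flip task and almost nothing on one you’ve nearly mastered – which is exactly the behaviour the linear d20 version can’t give you.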


(And now, back to sleep, and to see how much karma I’ve lost come the morning…)

Original

This mentions some of the limitations of eyewitness testimony; does anybody here have references giving hard numbers on how reliable eyewitness accounts are, under any given set of circumstances?

I’d like to be more conscious about my Bayesian-style updates of my beliefs based on what people tell me. So far, I’ve started using a rule of thumb that somebody telling me something is so is worth approximately 1 decibel of belief (about 1/3 of a bit): evidence, but about the weakest evidence possible – nulled by any opposing account, and outweighed by any more substantive evidence.
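
To make that rule of thumb concrete, here’s how a 1-decibel update works out numerically (a sketch under the usual odds-ratio definition of decibels; the helper name is mine):

    import math

    DB_PER_ACCOUNT = 1.0  # rule of thumb: one person's say-so is worth ~1 decibel

    # 1 decibel in bits: log2(10) / 10 = ~0.33, i.e. roughly a third of a bit
    print(math.log2(10) / 10)

    def update(prior, db):
        """Shift a prior probability by `db` decibels of evidence."""
        prior_db = 10 * math.log10(prior / (1 - prior))
        odds = 10 ** ((prior_db + db) / 10)
        return odds / (1 + odds)

    # From a 50/50 prior, one account nudges you to ~0.557;
    # one opposing account (-1 dB) cancels it exactly, back to 0.50.
    p = update(0.50, DB_PER_ACCOUNT)
    print(round(p, 3))
    print(round(update(p, -DB_PER_ACCOUNT), 3))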

If possible, I’d like to know exactly how reliable such testimony tends to be under one particular set of circumstances – time elapsed since the event being reported, level of emotional involvement, etc. – to use as a baseline, and at least roughly how strongly each of those factors changes it. (I’ll actually be very surprised if this particular set of data exists in ready form – but I’ll be satisfied if I can get even order-of-magnitude approximations, so that I know whether the rules of thumb I end up using are at least within plausible spitting distance.)

Original

One of the standard methods of science-fiction world-building is to take a current trend, extrapolate it into the future, and see what comes out. One trend I’ve observed is that over the last century or so, people have kept coming up with clever new ways to find answers to important questions – that is, they have kept developing new methods of rationality.


So, given what we currently know about the overall shape of such methods – from Gödel’s Incompleteness Theorems to Kolmogorov complexity to the various ways of getting around the Prisoner’s Dilemma – then, at least in a general science-fictional world-building sense, what might we be able to guess or say about what rationalists will be like in, oh, 50-100 years?