This is a sort of preface, but I feel it’s a relevant one: I’ve had the occasional discussion that started at an end-point and had to work, post by post, back to these foundational assumptions. It’s likely to save time to just write it all down at once, and then, hopefully, discuss any particular areas of disagreement.

I currently use two general axioms to support all my other thinking. One: math and logic are going to remain consistent; 1+1 isn’t going to stop equaling 2 any time soon. Two: the evidence of my senses has at least some partially accurate connection to an objective universe existing outside my own mind. I haven’t been able to figure out any way to think useful thoughts if either of these isn’t assumed to be true; and assuming just these two things lets me work out, step by step, just about everything most people take for granted as ‘axiomatic’.

Which leads me to: The truth is important. Knowing true things lets you travel to the moon, cure diseases, and communicate across the world in moments.

There are different ways to figure out what’s true and what’s not. Some ways are better than others. It’s possible to figure out which ways do better, by trying them out and seeing how well their statements fare against actual observations.
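
To make that trying-out concrete, here’s a minimal sketch, entirely my own toy example with made-up numbers: have each competing truth-finding method assign probabilities to a set of yes/no claims, then reward whichever method’s probabilities best track what actually happened.

```python
import math

# Toy numbers: each "method" assigns a probability to several yes/no
# claims, and we check those probabilities against what actually
# happened.  A higher average log-score means a better method.

predictions = {
    "method_a": [0.9, 0.8, 0.3, 0.2],   # confident, mostly right
    "method_b": [0.5, 0.5, 0.5, 0.5],   # maximally noncommittal
}

# Hypothetical outcomes: True if the claim turned out to hold.
outcomes = [True, True, False, False]

def log_score(probs, outcomes):
    """Average log-likelihood the method assigned to what was observed."""
    total = 0.0
    for p, happened in zip(probs, outcomes):
        total += math.log(p if happened else 1.0 - p)
    return total / len(outcomes)

for name, probs in predictions.items():
    print(f"{name}: {log_score(probs, outcomes):.3f}")

# method_a scores about -0.227, method_b about -0.693: the method whose
# confidence tracked reality wins.
```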

After looking at a /lot/ of such ways, the most accurate method I’ve been able to find for identifying true things is something called ‘Solomonoff Induction’, closely related to the ideas of ‘Kolmogorov Complexity’ and ‘Minimum Message Length’. A very rough description of it: a mathematical formalization of Occam’s Razor. The closest that we as humans can come to this method of reasoning is Bayesian induction; or, more usually, a qualitative approximation of Bayesianism, such as the various social safeguards of the scientific method.
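
As a toy illustration of that Bayesian approximation, again my own invented example rather than anything rigorous: give simpler hypotheses more prior weight, in the spirit of Occam’s Razor, then update those weights against the observed evidence.

```python
from fractions import Fraction

# Toy hypotheses about a coin.  The "description length" in bits is a
# made-up stand-in for Kolmogorov complexity (the real quantity is
# uncomputable); shorter descriptions get more prior weight.
hypotheses = {
    # name: (P(heads), description length in bits)
    "fair coin":       (Fraction(1, 2), 2),
    "always heads":    (Fraction(1, 1), 3),
    "heads 3/4 often": (Fraction(3, 4), 5),
}

# Occam-style prior: weight 2^-bits, normalized to sum to 1.
prior = {h: Fraction(1, 2 ** bits) for h, (_, bits) in hypotheses.items()}
total = sum(prior.values())
prior = {h: p / total for h, p in prior.items()}

# Hypothetical evidence: four flips, all heads.
flips = [True, True, True, True]

# Bayes' rule: posterior is proportional to prior times likelihood.
posterior = {}
for h, (p_heads, _) in hypotheses.items():
    likelihood = Fraction(1)
    for heads in flips:
        likelihood *= p_heads if heads else 1 - p_heads
    posterior[h] = prior[h] * likelihood
norm = sum(posterior.values())
posterior = {h: p / norm for h, p in posterior.items()}

for h in hypotheses:
    print(f"{h}: prior {float(prior[h]):.3f} -> posterior {float(posterior[h]):.3f}")
```

With four heads in a row, the evidence overwhelms the simplicity penalty and ‘always heads’ overtakes the fair-coin hypothesis; that shift from prior to posterior is the whole trick.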

What this all leads up to: applying the best available reasoning to the best available evidence, the strongest conclusion that seems reachable is that certain things claimed to be true are so likely to be untrue that it’s safe to just call them ‘false’. A couple of short lists of these not-true things can be found in the recently-memed Venn diagram at http://crispian-jago.blogspot.co.uk/2013/03/the-venn-diagram-of-irrational-nonsense.html , or at http://whatstheharm.net/ .

A particular sub-detail of this is that there seems to be no significant support for the hypothesis that selfhood has a non-material component. That is: there’s no such thing as a ‘soul’. Mind is what brain does; damaging certain areas of the brain leads to generally predictable deficits in cognition. Selfhood seems to be inherent in the various patterns of connections within a brain.

Thus, it seems plausible that if a given pattern of connections can be reproduced, in a substrate that updates in the same way the original pattern did, then there are reasonable grounds to believe that the reproduction will have the same sense of selfhood as the original.

The practical consequences, such as which versions of any given ‘self’ are the ones obligated to pay a debt, are currently a philosophical or science-fictional entertainment. (Well, I find them entertaining, at least.)


If any of the above roughly described reasoning doesn’t seem to add up, please let me know, in as much detail as possible, so I can try to figure out where the error is.
