Author Archive

Original

What plans could a prospective cryonicist try out, beyond simply signing up, that could increase the odds of eventually having a pleasant re-animation experience?

To show what I mean, here are the main ideas I’ve managed to come up with so far. None of these particular ideas are a standard part of a cryonics preservation package. Some are easier to implement than others, some are more likely to have an effect, and some have a potentially greater effect.

* Arranging for as much information about oneself (photo albums, emails, grade school report cards, etc) as possible to be placed on archival media and stored along with one’s body. Reasoning: If the cryo-preservation procedure causes brain damage, and technology advances sufficiently before re-animation, then this information potentially allows for that damage to be at least partially reconstructed.

* Requesting that additional data about the cryo-preservation procedure used on oneself be archived. Eg, requesting that, to whatever degree it doesn’t interfere with the procedure, the procedure be videoed.

* Making arrangements for an animal body to be cryo-preserved with the same procedure one’s own body was preserved with. A lab chimp would be ideal, but difficult to arrange for a number of reasons; a more common animal of around human mass, such as a dog or a goat, would be more feasible. Even a few lab-rats might help. Reasoning: It gives future re-animators an additional opportunity to experiment with re-animation techniques before attempting to re-animate a person.

* Noting down one’s preferences and requests for future re-animators. Eg, from “I’d appreciate having a cat nearby to pet and calm down as I wake up” to “If you have to rebuild my body from scratch anyway, and it’s within cultural norms, I would appreciate being gender _____” to “If you create a digital/electronic/computer/data copy of my mind, I would like a copy of that to be placed in offline, air-gapped storage, so that if every active copy of my mind is destroyed, there will always be that original backup available to re-instantiate myself.” Or just more general ideas, such as, “My goal is to live forever, and I would prefer whatever means most likely lead to that happening to be tried.”

I’m not nearly as creative as I wish I could be; so I’m hoping that the local group-mind here might be able to offer further ideas, or improvements or refinements to the above ones.

So: What extras can you think of?

The Thousand Year Romance Of Clover The Clever, by Benman

Clover the Clever has found a love so pure and true that no pony can stop it. But even the greatest love cannot conquer death itself.

So Clover will keep looking until she finds something that can.

Original

Last week’s “Off the Hook” mentioned the difficulties of having lost a PGP private key; I sent a message about how signed vCards can be used for key revocation, which was read on-air. 🙂

The first draft of the Internet Draft for “Signed vCards” is now in the pipeline at https://datatracker.ietf.org/doc/draft-boese-vcarddav-signedvcard/ and, depending on how things go, just might manage to eventually get turned into an official RFC and an actual part of the infrastructure of the internet.

It may not seem like much, but it’s designed to improve how people can assert their identities, which could improve all sorts of aspects of certificate authorities, encrypted email, and suchlike. It’s the most important thing I think I’ve been able to work on in the last little while, and I hope the editing process goes well.
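For anyone curious what this looks like in practice, here’s a rough sketch. The card below uses only standard vCard 4.0 properties (VERSION, FN, EMAIL, KEY with a MEDIATYPE parameter) and entirely made-up details; the signing mechanism and any new properties are whatever the draft ends up specifying, so treat this as an illustration of the idea rather than the draft’s actual wire format.

```
BEGIN:VCARD
VERSION:4.0
FN:Alice Example
EMAIL:alice@example.org
KEY;MEDIATYPE=application/pgp-keys:https://example.org/alice-pubkey.asc
END:VCARD
```

The idea is that a card like this gets signed (a detached OpenPGP signature made by the key it points to is one obvious possibility), so anyone holding the signed card can check both that it hasn’t been tampered with and that the keyholder vouched for the identity assertions in it; a newer signed card can then supersede or revoke an older one, which is where the key-revocation trick mentioned on “Off the Hook” comes in.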

Original

Today’s lesson: when the humidex is 35+, don’t walk to go donate blood. I usually bleed well; today I had to stop at 400 ml instead of 480.

AA

I have studied scientific knowledge of the nature of mind; but I am not a Probationer.

I have studied the powers of mind; but I am not a Neophyte.

I have studied the foundation of mind; but I am not a Zelator.

I have studied my mind; but I am not a Practicus.

I have studied morality and innate preferences; but I am not a Philosophus.

I have obtained control over my ultimate aims; but I am not a Dominus Liminis.

I have discovered those parts of my self which are usually silent; but I am not an Adeptus Minor.

I have achieved self-reliance; but I am not an Adeptus Major.

I have set forth my knowledge and proposals for the welfare and progress of the universe; but I am not an Adeptus Exemptus.

I comprehend the nature of the universe; but I am not a Magister Templi.

I have created a new Word; but I am not a Magus.

I am not an Ipsissimus.

I am, now and hopefully forevermore… a Student.

 

(If that doesn’t make any sense to you, then don’t worry about it. In fact, I’ll be surprised if anyone at all who ever reads this understands it. It’s not like this blog is read by very many people, and of those who do read it, this post’s subject matter is sufficiently orthogonal to the blog’s usual topics that it’s effectively a piece of nonsense beat poetry.)

Temperature: 35 Celsius

Humidity: 58%

Humidex: 47 = “Dangerous, possible heat stroke”

Finally gave in to safety reasoning and bought an air conditioner.
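For the curious, the humidex figure above can be roughly reconstructed from the temperature and humidity alone. Environment Canada’s definition uses the dew point; the sketch below approximates the vapour pressure from relative humidity via the Magnus formula instead, so it only lands within a degree or so of the reported 47.

```python
import math

# Rough reconstruction of the humidex figure above.  The official value
# is computed from the dew point; here the vapour pressure is instead
# approximated from relative humidity using the Magnus formula, so the
# result only agrees with the reported 47 to within about a degree.

temperature_c = 35.0      # air temperature, degrees Celsius
relative_humidity = 0.58  # 58%

# Saturation vapour pressure in hPa (Magnus approximation).
saturation_hpa = 6.112 * math.exp(
    17.67 * temperature_c / (temperature_c + 243.5))
vapour_hpa = relative_humidity * saturation_hpa

# Environment Canada's humidex formula: T + 5/9 * (e - 10), e in hPa.
humidex = temperature_c + (5.0 / 9.0) * (vapour_hpa - 10.0)
print(round(humidex, 1))  # ~47.6
```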

After chatting with the nice people at Uri-review, I’ve accepted their suggestion that nym, or something like it, would be better constructed along the lines of a filetype, or even an end-to-end cryptosystem application, than as a URI. Thus, I’ve started chatting with the nice people at vcarddav, pkix, and apps-discuss about just that: specifically, about creating a “signed vCard” extension to the existing vCard standard, and working out an as-easy-to-use-as-possible set of ways to use such cards.

The name ‘nym’ seems to have been lost entirely in the transition. (Unless it turns out a name is needed for the whole system, but that’s speculative.)

Original

In order to try to evoke whatever constructive criticism might be
found here, I'll try one last time to offer an explanation of why I
think nym has useful potential. Should no such suggestions be
forthcoming, well, I'll have at least given my best shot here.

The problem:

The Certificate Authority (CA) model of authentication on the web is
broken, in ways both several and serious. There are far too many CAs
who are supposed to be trustworthy but aren't; fraudulent certificates
are known to have been issued; man-in-the-middle attacks have been
done. One of the main reasons for such problems seems to stem from a
fundamental assumption of the security model used: a CA is either
trusted, or it isn't.

The web-of-trust model offered by PGP/GnuPG improves on those
assumptions slightly, by offering an additional level of moderate
trust - if enough moderately trusted authorities all support a
certificate or key as being connected to a digital identity, then that
key is assumed to be accurate. Statistical analysis of a large
population of keys allows for somewhat more complicated key
verification, but tends to be impractical for the individual user.

A possible solution, or at least potential improvement:

There's a whole host of mathematics to support the idea that when
faced with an incomplete set of evidence about any fact (such as
whether a key is tied to an individual), the best possible solution is
to use Bayesian analysis. This involves measuring confidences, and
updating them as new evidence is learned, in a particular fashion. (
http://yudkowsky.net/rational/bayes is one introduction to this math.)

The purpose of nym is to leverage as many of the available and
existing technologies as possible, in order to allow a user to apply
Bayesian reasoning to identity verification, as easily as possible;
without being tied to any particular piece of software. The output of
one set of Bayesian reasoning, asserted by a particular authority, can
be used as the input for anyone else's Bayesian analysis. Thus,
instead of the mere two levels of 'trusted' or 'untrusted' used by
CAs, or the three levels used by PGP, users can use an infinite number
of shades of gray to describe exactly how likely it really is that a
given key represents a given individual.
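As a concrete (and entirely made-up) illustration of the sort of
arithmetic a client could do with those shades of gray, here is a
minimal Python sketch. The endorsement numbers and the "authority"
tuples are hypothetical placeholders, not anything specified by nym;
the point is only to show plain Bayesian updating over several
authorities' assertions about one key.

```python
# Illustrative sketch only: one way a client might combine several
# authorities' assertions about a key using Bayes' theorem.  The
# numbers and the endorsement tuples below are hypothetical, and none
# of this structure is part of any nym specification.

def update(prior, p_endorse_if_genuine, p_endorse_if_forged):
    """Return P(key is genuine | this endorsement was observed)."""
    p_endorse = (p_endorse_if_genuine * prior
                 + p_endorse_if_forged * (1.0 - prior))
    return p_endorse_if_genuine * prior / p_endorse

# Prior belief that the key really belongs to the claimed individual.
confidence = 0.50

# Each endorsement is evidence: roughly how often that authority
# endorses genuine keys vs. how often it endorses forged ones.
endorsements = [
    (0.99, 0.05),  # a carefully-run keysigning party
    (0.90, 0.30),  # a casual acquaintance's signature
    (0.95, 0.10),  # a commercial CA-style attestation
]

for p_genuine, p_forged in endorsements:
    confidence = update(confidence, p_genuine, p_forged)

print("P(key is genuine) ~= %.4f" % confidence)  # ~0.9982
```

With those example numbers, three moderately trustworthy endorsements
push an initially agnostic 50% prior to roughly 99.8% - exactly the
kind of graded conclusion that a two-level trusted/untrusted model
cannot express.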

I've run through a few drafts, adding and deleting details; but I
think the above covers all the core points without getting bogged
down.

I'm unaware of any existing solution to the above problem that meets
the described requirements. It would be reasonably simple to cobble
together a piece of software to, for example, replace GnuPG's
web-of-trust model with a Bayesian function; but that would only solve
a small piece of the problem, for one particular group of
keys/identities. However, putting together a URI, designed to point
to the abstract identity behind a particular email address or social
media profile, seems to be at least as much within the spirit of URIs
in general as tag: is; and offers the potential for
interacting with any form of encryption software, existing or
yet-to-be-written (in much the way that ftp: and telnet: did when
http: came along).

If you feel that the core problem isn't important enough for a URI to
be used as a solution, that's one possible discussion. If you feel
that using a URI is an inappropriate way to solve it, that's another
possible discussion. And if you feel that some URI may be a good idea,
but my initial ideas for nym: are bad, that's yet another possible
discussion. But if you do reply, I would greatly appreciate if you
would, at the very least, let me know at which point you feel nym:
fails, instead of simply offering a generic 'it's a bad idea' without
any specifics. The former sort of response offers something to build
upon, even if it's to build an entirely different solution; the latter
is hard to distinguish from a personal opinion which may or may not be
relevant to the issue at hand.

I look forward to my ideas being torn apart in as much detail as possible.

Thank you for your time,
--
DataPacRat
"May accuracy triumph over victory."

Original

A passing thought: “… it’s beneath my dignity as a human being to be scared of anything that isn’t smarter than I am” (– HJPEV) likely applies equally well to superintelligences. Similarly, “It really made you appreciate what millions of years of hominids trying to outwit each other – an evolutionary arms race without limit – had led to in the way of increased mental capacity.” (– ditto) suggests that one of the stronger spurs for superintelligences becoming as super-intelligent as possible could very well be the competition as they try to outwit each other.

Thus, instead of ancestor simulations being implemented simply out of historical curiosity, a larger portion of such simulations may arise as one super-intelligence tries to figure out another by working out how its competitor arose in the first place. This casts a somewhat different light on how such simulations would be built and treated than the usual suggestion of university researchers or over-powered child-gods playing Civilization-3^^^3.

 

* Assume for a moment that you’re in the original, real (to whatever degree that word has meaning) universe, and you’re considering the vast numbers of copies of yourself that are going to be instantiated over future eons. Is there anything that the original you can do, think, or be which could improve your future copies’ lives? Eg, is there some pre-commitment you could make, privately or publicly?

* Assume for a moment that you’re in one of the simulated universes. Is there anything you can do that would make your subjective experience any different from what your original experienced?

* Assume for a moment that you’re a super-intelligence, or at least a proto-super-intelligence, considering running something that includes an ancestor simulation. Is there anything which the original people, or the simulated versions, could do or have done, which would change your mind about how to treat the simulated people?

* Assume for a moment that you’re in one of the simulated universes… and due to battle damage to a super-intelligence, you are accidentally given root access and control over your whole universe. Taking into account Reedspacer’s Lower Bound, and assuming an upper bound of not being able to noticeably affect the super-battle, what would you do with your universe?