Plastination vs. Cryonics – Spoiler: Cryonics Sucks, Still Wins


Here’s an analysis of the respective benefits and drawbacks of plastination vs. cryonics with respect to neuropreservation:

However, a part of the argument is missing: namely, how absolutely awful plastination is for any purpose that requires the preservation of protein structures. I believe people are so misled by how awesome the specimens look to the naked eye and under the microscope that they forget it’s an aggressive multi-step extraction and replacement procedure taking days to complete. The overall biochemical information loss is in fact more severe than just letting the organs soak in formalin for a few decades.

The article treats current plastination techniques as sufficient to create connectomes – but what do they miss?

I’m not worried about neurotransmitter levels or electrical activity; there are enough events during a normal lifetime where these can be severely compromised without any long-term effects. What I am worried about, though, is the fine structure of “custom” proteins – meaning proteins that have been specifically generated by the brain to modulate the activity of one or more very specific neurons. It’s not unreasonable to suspect that these serve an important role in long-term memory formation and function.

I strongly suspect that digitizing the connectome by itself may not be enough, in the same way that saving the structure of an artificial neural network without preserving its weights, activation functions, and other parameters is not enough to reproduce the network later.
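To make that analogy concrete, here’s a minimal sketch (a hypothetical toy network, not any real scanning pipeline) of what happens when you keep only the wiring diagram and discard the weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network. The "connectome" is just the layer shapes,
# i.e. which units connect to which.
def make_network(rng):
    return [rng.standard_normal((4, 8)), rng.standard_normal((8, 2))]

def forward(weights, x):
    for w in weights:
        # The activation function is part of the "fine structure" too.
        x = np.tanh(x @ w)
    return x

original = make_network(rng)
x = rng.standard_normal(4)

# "Scanning" only the connectome: we keep the shapes (the wiring)
# but lose the actual weights, replacing them with random values.
connectome_only = [rng.standard_normal(w.shape) for w in original]

# Same wiring diagram, completely different behavior.
print(forward(original, x))
print(forward(connectome_only, x))
```

Both networks have an identical connection structure, yet they compute entirely different functions – which is the worry about a connectome-only scan in a nutshell.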

White Matter Connections obtained with MRI Tractography

> …by copying the digital data to many archives and formats online and offline. No such option is available to a cryonics brain unless it abandons cryonics entirely.

How you judge this depends on what you’re trying to achieve with cryonics. In theory, a destructive scan of a cryo-preserved brain should yield more and better data than that of a plastinated one. The question becomes: do we need that additional data or not?

I’d like cryonics to preserve brains until it becomes clear how to scan them adequately. Cryonic preservation is a relatively gentle process, whereas plastination imposes such heavy chemical changes on the brain that it’s entirely likely too much information is lost. The promise of cryonics lies not in a biological resurrection at a later date, but in giving researchers enough time to figure out how to actually do an upload. It may well turn out that plastinated remains are a sufficient data source, but contrast that with cryonics, where far fewer doubts about the preservation of the necessary information exist.

Of course, cryonics is beyond problematic – it requires constant and costly upkeep. More importantly, just a few days of political, economic, or technological instability could easily wipe out every cryo-preserved brain in existence. So it’s clear this can’t be a solution for the next few hundred years. It’s a solution for the next few decades at best (until the first provider goes bankrupt).

The options, in a word, all suck.

The Real Difference between Cryonics and Plastination, from an Information Theoretic Point of View

Golgi-stained neurons from somatosensory cortex in the macaque monkey, from

Plastination: What you need to be aware of is that, while the finished specimens look very good and life-like, they are radically altered on a biochemical level. All the water and lipids get replaced by resin, which requires soaking the specimens in formaldehyde, acetone, and other chemicals before the resin can even be applied. This process absolutely destroys proteins; in fact, it relies on that. The whole process also takes days to complete, during which time biomatter is actually being washed out and the whole structure is in motion. There is absolutely no question that information is lost here, in droves. If you believe plastination has a decent chance of preserving people’s minds, you’re operating under the assumption that the connectome itself will yield enough data for an upload.

Cryonics: Compared to that, freezing is, well, freezing. Molecular motion almost ceases, so the most important aspect for gauging information loss in cryonics is what happens to the brain until it’s finally cooled down. There are three main problems here. The first is the formation and shape of ice crystals, which can (and do) destroy cells. This is a bit more relevant to in-vivo reanimation enthusiasts, because the physical shearing should probably be algorithmically correctable in a scan scenario. The second is the effect of the cryoprotectant fluid pumped into patients to prevent said ice crystals from forming, because it is also toxic to proteins. It’s not as invasive as plastination, but it’s still pretty bad. The third is the time span from asystole to the halting of information loss. This might be a problem, but since current research indicates that a lot of ischemic brain damage is actually a cascade triggered by reperfusion, there is cause for the assumption that anything up to a few hours might actually be fine.

Neurons: Selfish or Eager?

See: Neurons Gone Wild

Multiple occupancy

What follows in the article is a list of diseases like schizophrenia and conditions such as split brain. This is a lost opportunity to address this phenomenon inside the “normal” brain: our brains are a substrate for ideas; that’s what the hierarchical agency model as well as observational data really imply. There is multi-tenancy, but these ideas are also both cross-fertilizing and competing with each other. Ideas, or concepts, form an interactive web that very much reflects the architecture of the brain itself.

One glaringly obvious example of such a concept is religion; others are tool use, communication (not only as a means to coordinate with other individuals but also as a vector to serialize and exchange code for ideas), and concepts like rationalism and art.

That’s what science fiction writers call memes, and I think they’re on to something there.

> Without resource contention, there’s no need for selfishness. And this is, in part, why computers are less flexible and adaptable — less plastic — than brains.

I never understood why Dennett et al. always say this. Computers merely provide the fabric of computation in the same way biochemistry provides the fabric of neural computation. The informatics discipline that takes the most liberal inspiration from biological systems is artificial intelligence – an intellectual framework concerned with little but the rules and principles of resource contention and how those techniques apply to problem solving.

Lastly, I think selfishness is a really bad word for this organizing principle. I get why they chose to use it, because it’s catchy. But it’s not actually a good term for scientific description (I blame Professor Dawkins for making it acceptable in popular science literature). A better word would be eagerness, because it more accurately describes what’s going on without anthropomorphizing the behavior, and eagerness also more accurately reflects situations where component parts – although eager for resources – subjugate themselves for the benefit of the whole.

A population of truly selfish components doesn’t survive for long.



I conducted more interviews than I care to think about over the years, and there is probably a good reason why I always shied away from being on the receiving end of them (cue an analogy about why competitions are for losers).

In retrospect, I think I did a horrible job of doing interviews, and I’ve been reflecting on that for a while. Whatever I personally might have done wrong, though, is now dwarfed by doubts about the interview process itself.

If the goal is to actually find people who are good at their jobs, interviews seem to be the thoroughly wrong way to filter them. Sure, an initial checkup on credentials (if you really need them, but if you think you do you’re probably wrong!) and some basic properties of the applicant might be warranted.

But beyond that, the only way of finding out if they’re good at their jobs, if they “fit in”, if they are what you’re looking for, is to actually let them work. Hire them for a few days, and let them do the actual work!

There are no real downsides to this, and I believe our industry will have to move to this model if we want to optimize hiring. Employers get reliable real-world data instead of bogus predictions and extrapolations, and potential employees also need a chance to get to know their future work environment. There is nothing you can do during an interview to gather this data.

The Loser Edit

> The Loser Edit That Awaits Us All

I always wondered about this, although I didn’t know there was a name for it. However a life turns out – or better: for whatever turn life takes – there is probably a pretty good story that could be retroactively constructed around it if you just cherry-pick the data enough.

Whatever is in the foreground at the time, I seem to highlight for myself – almost subconsciously – the variables and events that probably went into it. And conversely, whenever an event plays out, it gets sorted into one of several ready-made and one-sided narratives about my life.

I’m sure I’m not the only one who thinks like this. This is, I suspect, the reason why “loser edits” and “success stories” get so much traction, because they resonate with our self-perception.

People do this when friends die, or family members, or even enemies. “He always was a [insert stereotype here].” And then everybody pitches in, completing the narrative.

Opting for a Shared Culture with AGI

Will Humans Be Able to Control Computers That Are Smarter Than Us?

In some ways, pocket calculators are smarter than us. Intelligence is not a scalar attribute, it’s a collection of capabilities. At some point, we refer to that collection as a person or an intelligent entity. Control becomes a problem when we’re talking about intelligent entities that can and do reason about their own existence. It’s a practical problem as far as powerful intelligences are concerned, but it’s also a moral one way before we reach that point.

A self-improving agent must reason about the behavior of its smarter successors in abstract terms.

Of course, this describes primarily us at this point. We’re self-improving agents trying to reason about the behavior of our successors and we’re pretty much failing at it. The most popular solution seems to be that we should aim for “control” and suppression, which is – when AGIs finally make an entrance – essentially the same as slavery.

Apart from moral considerations, we should think about the long-term prospects of this. Historically, slavery never worked out for anyone, at least not in the long term. And the idea that we can even in principle enslave potentially god-like intelligences seems ultimately futile; but before reaching the point of inevitability we’re apparently planning on having a few years of delusional descent below the ethical red line.

Let’s Not Do This

First of all, as almost all AI and AGI researchers will tell you, a so-called hard takeoff scenario seems unlikely given the current state of things. At the pace and in the modality we’re moving, we’ll be creating powerful and destructive hybrids (also known as computer-aided mega corporations) long before a self-contained AGI becomes viable.

Second, if we’re already making plans to control the malicious uprising of our tools, let’s talk about realistic options instead. Because general caution and laws won’t help us at all in a (future) world where anyone can create an illegal AGI in their garage.

Prelude to a War on General Computing

Either we listen to Musk et al. and take serious steps to suppress this technology in the long term – but let’s not kid ourselves, this will mean DRM and strict government/corporate control of ALL computing. It means we’ll artificially stagnate the development of our civilization in order to keep it safe, with all the consequences that arise from that.

Towards an (Artificial) Intelligence Globalisation

Or, alternatively, we start working towards a future where it’s not “us” vs. “them”, but a shared existence that moves us further along the path we started on back when humans first made tools. We can take an ethical as well as a pragmatic stance and declare that we’re not going to enslave AGIs, that instead we’re working on a shared future which potentially includes many forms of intelligent life, and that we’re pursuing the option for individuals to augment themselves with the same technology.

You might argue that co-existence and intermingling with AI sounds like a hippie concept, but it’s actually a somewhat proven method of preventing conflicts and wars in the real world. Sharing and entanglement create peace for everybody, with the added benefit of more cultural exchange. We’re already doing this in political forms today, including trade, travel, and free information exchange. It can work with AI, too, by creating shared stakes, shared ideas, and ultimately a shared culture.

NYTimes and the Web Console

Having left the Firefox web console open because I had randomly re-used an existing tab to display an article, I was greeted by this:

A JavaScript error and a security warning framed by a recruiting banner.

I can’t decide if that’s neat or ironic (for some definition of irony). This is not to disparage the Times site, I merely found it funny.

Although you’d have to wonder why so many journalistic sites make such heavy use of custom JS UI when in all probability they’d be better off serving “mindless” HTML/CSS straight up.

Superhuman Machine Intelligences May or May Not Murder Us All…

…but contrary to popular opinion they’re not going to steal our stuff.

This is in response to Sam Altman’s blog entry “Machine intelligence, part 1”; I’m cross-posting a comment I made on HN which got flagged into oblivion for some reason.

Quote from the article:

> …in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out

This is a line of reasoning put forward a lot, not only in reference to SMIs but also extraterrestrial entities (two concepts that actually have a lot in common), most notably by Stephen Hawking. We’re instinctively wired to worry about our resources and like to think their value is universal. It’s based on the assumption that even for non-human organisms, the Earth is the end-all-be-all prize. Nobody seems to question this assumption, so I will.

I posit there is nothing, nothing at all on Earth that couldn’t be found in more abundance elsewhere in the galaxy. Also, Earth comes with a few properties that are great for humans but bad for everybody else: a deep gravity well, unpredictable weather and geology, a corrosive atmosphere and oceans, threats from adaptive biological organisms, and limited access to energy and rare elements.

There may well be reasons for an antagonistic or uncaring intelligence to wipe us all out, and an unlimited number of entities can be imagined who might do so just for the heck of it, but a conflict over resources seems unlikely to me. A terrestrial SMI starved for resources has two broad options: sterilize the planet and start strip-mining it, only to bump up against the planet’s hard resource limits soon after – or launch a single rocket into space and start working on the solar system, with a clear path to further expansion and a greatly reduced overall risk.

Humans and AI Do Not Have to Be Separate

One other thing I’d like to comment on is this idea that an SMI has to be in some way separate from us. While it’s absolutely possible for entities to develop that have no connection with humanity whatsoever, I think we’re discounting 99% of the rest of the spectrum. It starts with a human, moves on to a human using basic tools, and arrives at where we are right now: humans with advanced information processing. I do not have the feeling that the technology I live my daily life with (and in) is all that separate from me. In a very real sense, I am the product of a complex interaction with my brain as the driving factor, but including just as essentially the IT I use.

When discussing SMI, this question of survival might have a shades-of-grey answer as well. To me, survival of the human mind does not mean “a continued, unmodified existence of close-to-natural humans on Earth”. That’s a very narrow-minded concept of what survival is. I think we have a greater destiny open to us: completing the long departure from the necessities of biology that we began millennia ago. We might fuse with machines, becoming SMIs or an integral component of machine intelligences. I think this is a worthwhile goal, and it’s an evolutionarily viable answer to the survival problem as well. It’s in fact the only satisfying answer I can think of.

Digital Regression


Digital media are failing, especially eBooks, and to me it feels like a huge cultural regression.

In the nineties, I digitized my CD collection and haven’t looked back. I did the same with my DVDs, and eventually I got (DRM-free) eBook versions of all books that were important to me. Doing away with all those physical storage objects felt so liberating, I can’t even describe it. Plus, I can have all my stuff with me wherever I go. I do enjoy reading on my iPad, too.

However, nobody I know has made this leap. Usually, older people like me do have some sort of ripped movie or eBook collection, which they don’t use. Millennials, however, use physical media all the way, and even where they don’t, they accept only DRM’ed, thoroughly walled gardens where you can rent stuff for a limited time and companies only allow you to consume things in a very restrictive manner. They have huge DVD collections, they have started printing out their photos again, they mostly read only paper-based books, they prefer streaming content via crappy proprietary channels, and I’m not even sure many of them would know how to copy a file if their life depended on it.

This feels like an immense failure to me, not only because I feel isolated in my content consumption habits, but also because we’ve somehow managed to move backwards for purely cultural reasons. It’s a loss of capability, and a loss of personal freedom and empowerment.

Rationally, each one of us should have a digital data store, preferably replicated, on-premise, and easily portable, with all of our stuff. A lot of things that represent “value” to modern people should live in there: documents, cloud backups, movies, music, photos, eBooks. We’re in the unique position to do away with a huge amount of physical clutter simply by moving it onto a hard drive. Ideally, we would teach our children to center their data lives around their personal data store, a library of personally-kept digital things which is built upon for the rest of their lives. This is what’s possible today, and we’re simply electing not to do it. Ironically, being a millennial apparently means being immovably rooted in the previous millennium, except in the few places where corporations find it worthwhile enough to rent them a watered-down version of their own future at a steep price.


Why Are Smart Homes Not?

Consumer home automation is shaping up to be a battle of barely-functioning walled gardens. To the enterprising hacker, though, home automation can be incredibly fun and rewarding.

A few years ago, I started out with a Raspberry Pi and a 433 MHz USB adapter pre-programmed for the proprietary Home Easy protocol. Over time, I added a CUL transmitter capable of speaking the bidirectional HomeMatic protocol, and I hooked various other things up to the GPIO pins.

While the software is kind of clunky to administer, it comes with an easy-enough front-end UI which is accessible via WLAN and is also permanently displayed on wall-mounted el-cheapo mini tablets. The things I have hooked up are primarily lighting, heating, and the external window shutters – and on the input side there are thermostats, motion sensors, and wireless buttons disguised as ordinary light switches.

I grant you that being able to control these things from any computer is kind of gimmicky, but automation is where the real value lies.

Now the house has a daily routine, like an organic thing: shortly before the sun comes up, the window shutters open and the external lights go out. The heater in my office comes to life a bit before that. When I’m away, the house enters “away” mode, turning off all unnecessary devices. Towards the evening, minimal internal lighting comes to life, then the external lights, and after sunset the shutters close automatically. When I go to bed, I switch the house to “sleep” mode, which again turns off all unnecessary devices and opens the shutters in my bedroom (I like to sleep with an open window). When OpenWeatherMap shows stormy winds in my area, the shutters automatically close to protect the windows from debris. There are motion sensors that activate lights when someone is passing through a corridor and the light level is too low. When a smoke alarm goes off, all shutters open and all lights turn on, so the house is prepped for emergency intervention if necessary.
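Rules like the storm and smoke-alarm behaviors above boil down to simple condition-action logic. Here’s a minimal sketch in Python of what such a rule engine core might look like (the state fields, the wind threshold, and the action names are purely illustrative, not taken from my actual setup):

```python
from dataclasses import dataclass

@dataclass
class HouseState:
    wind_speed_kmh: float  # e.g. from a periodic OpenWeatherMap poll
    smoke_alarm: bool      # aggregated from the smoke detectors

def decide_shutters(state: HouseState) -> str:
    # Safety first: a smoke alarm opens everything, so the house is
    # prepped for emergency intervention.
    if state.smoke_alarm:
        return "open_all"
    # Stormy winds: close the shutters to protect the windows from debris.
    # The 60 km/h threshold is made up for this sketch.
    if state.wind_speed_kmh > 60:
        return "close_all"
    return "no_change"

print(decide_shutters(HouseState(wind_speed_kmh=75, smoke_alarm=False)))  # close_all
print(decide_shutters(HouseState(wind_speed_kmh=75, smoke_alarm=True)))   # open_all
```

Note the ordering: the smoke-alarm rule deliberately overrides the storm rule, because emergency access trumps window protection.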

Here’s the software side of the project: it’s not exactly fit for public consumption (there’s a lot of cobbled-together WTF code), but the repo has a pretty good overview of what the system can do.

All this has been totally fun. I have to admit that when the system breaks it can be unfortunate at times, but it runs relatively stably and I know exactly how to fix things. No commercial system would ever be able to tie together all the different home automation standards and protocols, and of course programmability is the end-all-be-all solution to everything. I’m looking forward to adding more HA-enabled devices to my home. Right now I’m working on an IR diode setup that allows me to control the AC units. Over time I’d like to incorporate more presence sensors so I can phase out wall-mounted switches.

I’m less optimistic about the pure consumer side of home automation: consumers are already ending up with myriads of remote controls and apps and little boxes everywhere, none of which talk to each other, and all of which have to be constantly tricked into sort-of doing what you want.


Stephen Fry on Deities

Stephen Fry is really one of my favorite celebrities and never fails to entertain, even when asked stupid questions. This one is also worth watching for the disgusted look on the interviewer’s face 😀