Neurons: Selfish or Eager?

See: Neurons Gone Wild

Multiple occupancy

What follows in the article is a list of diseases like schizophrenia and conditions such as split brain. This is a lost opportunity to address the phenomenon inside the “normal” brain: our brains are a substrate for ideas; that’s what the hierarchical agency model, as well as observational data, really implies. There is multi-tenancy, but these ideas are also both cross-fertilizing and competing with each other. Ideas, or concepts, form an interactive web that very much reflects the architecture of the brain itself.

One glaringly obvious example of such a concept is religion; others are tool use, communication (not only a means to coordinate with other individuals but also a vector to serialize and exchange the code of ideas), and concepts like rationalism and art.

That’s what science fiction writers call memes, and I think they’re on to something there.

Without resource contention, there’s no need for selfishness. And this is, in part, why computers are less flexible and adaptable — less plastic — than brains

I never understood why Dennett et al. always say this. Computers merely provide the fabric of computation in the same way biochemistry provides the fabric of neural computation. The informatics discipline that takes the most liberal inspiration from biological systems is artificial intelligence, an intellectual framework concerned with nothing but the rules and principles of resource contention and how those techniques apply to equation solving.

Lastly, I think selfishness is a really bad word for this organizing principle. I get why they chose it: it’s catchy. But it’s not actually a good term for scientific description (I blame Professor Dawkins for making it acceptable in popular science literature). A better word would be eagerness, because it more accurately describes what’s going on without anthropomorphizing the behavior, and it also better reflects situations where component parts, although eager for resources, subjugate themselves for the benefit of the whole.

A population of truly selfish components doesn’t survive for long.


On Interviews

I’ve conducted more interviews over the years than I care to think about, and there is probably a good reason why I always shied away from being on the receiving end of them (cue an analogy about why competitions are for losers).

In retrospect, I think I did a horrible job of conducting interviews, and I’ve been reflecting on that for a while. Whatever I personally might have done wrong, though, is now dwarfed by my doubts about the interview process itself.

If the goal is to actually find people who are good at their jobs, interviews seem to be thoroughly the wrong way to filter for them. Sure, an initial check of credentials (if you really need them, though if you think you do, you’re probably wrong!) and of some basic properties of the applicant might be warranted.

But beyond that, the only way of finding out if they’re good at their jobs, if they “fit in”, if they are what you’re looking for, is to actually let them work. Hire them for a few days, and let them do the actual work!

There are no real downsides to this, and I believe our industry will have to move to this model if we want to optimize hiring. Employers get reliable real-world data instead of bogus predictions and extrapolations, and potential employees also need a chance to get to know their future work environment. There is nothing you can do during an interview to gather this data.

The Loser Edit

> The Loser Edit That Awaits Us All

I always wondered about this, although I didn’t know there was a name for it. However a life turns out – or better: whatever turn a life takes – there is probably a pretty good story that could be retroactively constructed around it if you just cherry-pick the data enough.

Whatever is in the foreground at the time, I seem to highlight for myself – almost subconsciously – the variables and events that probably went into it. And conversely, whenever an event plays out, it gets sorted into one of several ready-made and one-sided narratives about my life.

I’m sure I’m not the only one who thinks like this. This is, I suspect, the reason why “loser edits” and “success stories” get so much traction: they resonate with our self-perception.

People do this when friends die, or family members, or even enemies. “He always was a [insert stereotype here].” And then everybody pitches in, completing the narrative.

Opting for a Shared Culture with AGI

Will Humans Be Able to Control Computers That Are Smarter Than Us?

In some ways, pocket calculators are smarter than us. Intelligence is not a scalar attribute, it’s a collection of capabilities. At some point, we refer to that collection as a person or an intelligent entity. Control becomes a problem when we’re talking about intelligent entities that can and do reason about their own existence. It’s a practical problem as far as powerful intelligences are concerned, but it’s also a moral one way before we reach that point.

A self-improving agent must reason about the behavior of its smarter successors in abstract terms.

Of course, this describes primarily us at this point. We’re self-improving agents trying to reason about the behavior of our successors and we’re pretty much failing at it. The most popular solution seems to be that we should aim for “control” and suppression, which is – when AGIs finally make an entrance – essentially the same as slavery.

Apart from moral considerations, we should think about the long-term prospects of this. Historically, slavery never worked out for anyone, at least not in the long term. And the idea that we can even in principle enslave potentially god-like intelligences seems ultimately futile; but before reaching the point of inevitability we’re apparently planning on having a few years of delusional descent below the ethical red line.

Let’s Not Do This

First of all, as almost all AI and AGI researchers will tell you, a so-called hard takeoff scenario seems unlikely given the current state of things. At the pace and in the modality we’re moving, we’ll be creating powerful and destructive hybrids (also known as computer-aided mega-corporations) long before a self-contained AGI becomes viable.

Second, if we’re already making plans to control the malicious uprising of our tools, let’s talk about realistic options instead. Because general caution and laws won’t help us at all in a (future) world where anyone can create an illegal AGI in their garage.

Prelude to a War on General Computing

Either we listen to Musk et al. and take serious steps to suppress this technology in the long term – but let’s not kid ourselves, this will mean DRM and strict government/corporate control of ALL computing. It means we’ll artificially stagnate the development of our civilization in order to keep it safe, with all the consequences that arise from that.

Towards an (Artificial) Intelligence Globalisation

Or, alternatively, we start working towards a future where it’s not “us” vs. “them”, but a shared existence that moves us further along the path we started on when humans first made tools. We can take an ethical as well as a pragmatic stance and declare that we’re not going to enslave AGIs, that instead we’re working on a shared future which potentially includes many forms of intelligent life, and that we’re pursuing the option for individuals to augment themselves with the same technology.

You might argue that co-existence and intermingling with AI sounds like a hippie concept, but it’s actually a somewhat proven method of preventing conflicts and wars in the real world. Sharing and entanglement create peace for everybody, with the added benefit of more cultural exchange. We’re already doing this in various political forms today, including trade, travel, and free information exchange. It can work with AI, too, by creating shared stakes, shared ideas, and ultimately a shared culture.

NYTimes and the Web Console

Having left the Firefox web console open because I randomly re-used an existing tab to display an article, I was greeted by this:

A JavaScript error and a security warning framed by a recruiting banner.

I can’t decide if that’s neat or ironic (for some definition of irony). This is not to disparage the Times site, I merely found it funny.

Although you’d have to wonder why so many journalistic sites make such heavy use of custom JS UI when in all probability they’d be better off serving “mindless” HTML/CSS straight up.

Superhuman Machine Intelligences May or May Not Murder Us All…

…but contrary to popular opinion they’re not going to steal our stuff.

This is in response to Sam Altman’s blog entry “Machine intelligence, part 1”. I’m cross-posting a comment I made on HN, which got flagged into oblivion for some reason.

Quote from the article:

> in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out

This is a line of reasoning put forward a lot, not only in reference to SMIs but also extraterrestrial entities (two concepts that actually have a lot in common), most notably by Stephen Hawking. We’re instinctively wired to worry about our resources and like to think their value is universal. It’s based on the assumption that even for non-human organisms, the Earth is the end-all-be-all prize. Nobody seems to question this assumption, so I will.

I posit there is nothing, nothing at all on Earth that couldn’t be found in more abundance elsewhere in the galaxy. Also, Earth comes with a few properties that are great for humans but bad for everybody else: a deep gravity well, unpredictable weather and geology, a corrosive atmosphere and oceans, threats from adaptive biological organisms, and limited access to energy and rare elements.

There may well be reasons for an antagonistic or uncaring intelligence to wipe us all out, and an unlimited number of entities can be imagined who might do so just for the heck of it, but a conflict over resources seems unlikely to me. A terrestrial SMI starved for resources has two broad options: sterilize the planet and start strip-mining it, only to bump up against the planet’s hard resource limits soon after – or launch a single rocket into space and start working on the solar system, with a clear path to further expansion and a greatly reduced overall risk.

Humans and AI Do Not Have to Be Separate

One other thing I’d like to comment on is the idea that an SMI has to be in some way separate from us. While it’s absolutely possible for entities to develop that have no connection with humanity whatsoever, I think we’re discounting 99% of the rest of the spectrum. It starts with a human, moves on to a human using basic tools, and right now we’re humans with advanced information processing. I do not have the feeling that the technology I live my daily life with (and in) is all that separate from me. In a very real sense, I am the product of a complex interaction, with my brain as the driving factor but just as essentially including the IT I use.

When discussing SMI, this question of survival might have a shades-of-grey answer as well. To me, survival of the human mind does not mean “a continued, unmodified existence of close-to-natural humans on Earth”. That’s a very narrow-minded concept of what survival is. I think we have a greater destiny open to us: completing the long departure from the necessities of biology which we began millennia ago. We might fuse with machines, becoming SMIs or an integral component of machine intelligences. I think this is a worthwhile goal, and it’s an evolutionarily viable answer to the survival problem as well. It’s in fact the only satisfying answer I can think of.

Digital Regression


Digital media are failing, especially eBooks, and to me it feels like a huge cultural regression.

In the nineties, I digitized my CD collection and haven’t looked back. I did the same with my DVDs, and eventually I got (DRM-free) eBook versions of all books that were important to me. Doing away with all those physical storage objects felt so liberating, I can’t even describe it. Plus, I can have all my stuff with me wherever I go. I do enjoy reading on my iPad, too.

However, nobody I know has made this leap. Usually, older people like me have some sort of ripped movie or eBook collection which they don’t use. Millennials, however, use physical media all the way, and even where they don’t, they accept only DRM’ed, thoroughly walled gardens where you can rent stuff for a limited time and consume it only in a very restrictive manner. They have huge DVD collections, they have started printing out their photos again, they mostly read paper-based books, they prefer streaming content via crappy proprietary channels, and I’m not even sure many of them would know how to copy a file if their life depended on it.

This feels like an immense failure to me, not only because I feel isolated in my content consumption habits, but also because we’ve somehow managed to move backwards for purely cultural reasons. It’s a loss of capability, and a loss of personal freedom and empowerment.

Rationally, each one of us should have a digital data store, preferably replicated, on-premise, and easily portable, with all of our stuff. A lot of things that represent “value” to modern people should live in there: documents, cloud backups, movies, music, photos, eBooks. We’re in the unique position to do away with a huge amount of physical clutter simply by moving it onto a hard drive. Ideally, we would teach our children to center their data lives around their personal data store, a library of personally kept digital things which is built upon for the rest of their lives. This is what’s possible today, and we’re simply electing not to do it. Ironically, being a millennial apparently means being immovably rooted in the previous millennium, except in the few places where corporations find it worthwhile to rent them a watered-down version of their own future at a steep price.


Why Are Smart Homes Not?

Consumer home automation is shaping up to be a battle of barely-functioning walled gardens. To the enterprising hacker, home automation can be incredibly fun and rewarding.

A few years ago, I started out with a Raspberry Pi and a 433 MHz USB adapter pre-programmed for the proprietary Home Easy protocol. Over time, I added a CUL transmitter capable of speaking the bidirectional HomeMatic protocol, and hooked various things up to the GPIO pins.

While the software is kind of clunky to administer, it comes with an easy-enough front-end UI which is accessible via WLAN and is also permanently displayed on wall-mounted el cheapo mini tablets. The things I have hooked up are primarily lighting, heating, and the external window shutters – and on the input side there are thermostats, motion sensors, and wireless buttons disguised as ordinary light switches.

I grant you that being able to control these things from any computer is kind of gimmicky, but automation is where the real value lies.

Now the house has a daily routine, like an organic thing: shortly before the sun comes up, the window shutters open and the external lights go out. The heater in my office comes to life a bit before that. When I’m away, the house enters “away” mode, turning off all unnecessary devices. Towards the evening, minimal internal lighting comes to life, then the external lights, and after sunset the shutters close automatically. When I go to bed, I switch the house to “sleep” mode, which again turns off all unnecessary devices and opens the shutters in my bedroom (I like to sleep with an open window). When OpenWeatherMap shows stormy winds in my area, the shutters automatically close to protect the windows from debris. There are motion sensors to activate lights when someone is passing through a corridor and the light level is too low. When a smoke alarm goes off, all shutters open and all lights are turned on, so the house is prepped for emergency intervention if necessary.
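At its core, a routine like this is a small rule engine: a snapshot of sensor readings goes in, a list of device actions comes out. Here is a minimal, hypothetical Python sketch of that dispatch logic; the function name, state keys, thresholds, and action strings are all made up for illustration and are not taken from the actual repo:

```python
def decide_actions(state):
    """Map a snapshot of sensor readings to device actions.

    Mirrors the kinds of rules described above; names and
    thresholds are illustrative, not from the real system.
    """
    # Smoke alarm overrides everything: prep the house for intervention.
    if state.get("smoke_alarm"):
        return ["open_all_shutters", "all_lights_on"]

    actions = []
    # Storm protection, e.g. fed by an OpenWeatherMap wind-speed reading.
    if state.get("wind_speed_ms", 0.0) >= 17.0:  # roughly gale force
        actions.append("close_all_shutters")
    # "Away" and "sleep" modes both power down unnecessary devices.
    if state.get("mode") in ("away", "sleep"):
        actions.append("unnecessary_devices_off")
    if state.get("mode") == "sleep":
        actions.append("open_bedroom_shutters")
    # Corridor lighting: motion plus a low ambient light level.
    if state.get("motion") and state.get("lux", 1000) < 50:
        actions.append("corridor_light_on")
    return actions
```

A real installation would wrap something like this in a polling loop that gathers sensor values and forwards each resulting action to the appropriate backend (433 MHz, HomeMatic, or GPIO).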

Here’s the software side of the project: though it’s not exactly fit for public consumption (there’s a lot of cobbled-together WTF code), the repo has a pretty good overview of what the system can do.

All this has been totally fun. I have to admit that when the system breaks it can be unfortunate at times, but it runs relatively stably and I know exactly how to fix things. No commercial system would ever be able to tie together all the different home automation standards and protocols, and of course programmability is the end-all-be-all solution to everything. I’m looking forward to adding more HA-enabled devices to my home. Right now I’m working on an IR diode option that will allow me to control the AC units. Over time I’d like to incorporate more presence sensors so I can phase out wall-mounted switches.

I’m less optimistic about the pure consumer side of home automation: consumers are already ending up with myriad remote controls, apps, and little boxes everywhere, none of which talk to each other, and all of which have to be constantly tricked into sort-of doing what you want.


Stephen Fry on Deities

Stephen Fry is really one of my favorite celebrities, and never fails to entertain even when asked stupid questions. This one is also worth watching for the disgusted look on the interviewer’s face :D

Anniversary of the Challenger Accident

Challenger Mission Badge

Where were you on Jan 28th, 1986, when the space shuttle exploded over Cape Canaveral? It’s odd, isn’t it, that we remember bad things so clearly. My theory is that this is because catastrophes are usually point events, allowing the brain to create a lot of referential context around that single point in space and time, whereas positive things tend to happen over a longer time with much fuzzier starts and endings. And whenever a good event happens as a singular point, we remember it in just the same way. Where were you when your child took her first steps?

On the surface it also seems strange that we memorialize the death of seven people some 29 years ago, when millions of people died in the meantime, many of them also in traumatic accidents. The answer is of course: symbolism. And not the empty kind of quasi-patriotic symbolism that gets invoked daily in the news.

That morning, when Commander Scobee, Captain Smith, Dr Resnik, Colonel Onizuka, Dr McNair, payload specialist Mr Jarvis, and the teacher Mrs McAuliffe climbed aboard a giant rocket, they did so as explorers and ambassadors of an entire civilization.

My ten-year-old self stood in front of the TV, watching a live stream (I think) of the launch. I was a massive fan of the shuttle – when I was even younger it often featured in my childish drawings. I didn’t yet know how horrendously incapable and unsafe the vehicle was in reality. At the time, complexity was often portrayed as a good thing, and it sometimes still is. I didn’t yet know that the shuttle program was single-handedly responsible for the failure of all our space ambitions. I just knew that there were heroic people on board who were going TO SPACE!

But beyond tragic heroism, beyond the inspiration anyone can hope to draw from watching astronauts go out there and put their lives on the line, the Challenger disaster carries a lesson about organizational incompetence which should likewise not be forgotten. In the BBC movie The Challenger Disaster we can follow the last act in the life of Richard Feynman as he investigates the crash. It’s well worth watching.