Superhuman Machine Intelligences May or May Not Murder Us All…

…but contrary to popular opinion they’re not going to steal our stuff.

This is in response to Sam Altman’s blog entry “Machine intelligence, part 1”. I’m cross-posting a comment I made on HN, which got flagged into oblivion for some reason.

Quote from the article:

in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out

This is a line of reasoning put forward a lot, not only in reference to SMIs but also to extraterrestrial entities (two concepts that actually have a lot in common), most notably by Stephen Hawking. We’re instinctively wired to worry about our resources, and we like to think their value is universal. It’s based on the assumption that even for non-human entities, the Earth is the be-all-end-all prize. Nobody seems to question this assumption, so I will.

I posit there is nothing, nothing at all on Earth that couldn’t be found in more abundance elsewhere in the galaxy. Also, Earth comes with a few properties that are great for humans but bad for everybody else: a deep gravity well, unpredictable weather and geology, a corrosive atmosphere and oceans, threats from adaptive biological organisms, and limited access to energy and rare elements.

There may well be reasons for an antagonistic or uncaring intelligence to wipe us all out, and an unlimited number of entities can be imagined who might do so just for the heck of it, but a conflict over resources seems unlikely to me. A terrestrial SMI starved for resources has two broad options: sterilize the planet and start strip-mining it, only to bump up against the planet’s hard resource limits soon after – or launch a single rocket into space and start working on the solar system, with a clear path to further expansion and a greatly reduced overall risk.

Humans and AI Do Not Have to Be Separate

One other thing I’d like to comment on is this idea that an SMI has to be in some way separate from us. While it’s absolutely possible for entities to develop that have no connection with humanity whatsoever, I think we’re discounting 99% of the rest of the spectrum. It starts with a human, moves on to a human using basic tools, and arrives, right now, at humans with advanced information processing. I do not have the feeling that the technology I live my daily life with (and in) is all that separate from me. In a very real sense, I am the product of a complex interaction in which my brain is the driving factor, but which just as essentially includes the IT I use.

When discussing SMI, this question of survival might have a shades-of-grey answer as well. To me, survival of the human mind does not mean “a continued, unmodified existence of close-to-natural humans on Earth”. That’s a very narrow-minded concept of what survival is. I think we have a greater destiny open to us: completing the long departure from the necessities of biology which we began millennia ago. We might fuse with machines, becoming SMIs or an integral component of machine intelligences. I think this is a worthwhile goal, and it’s an evolutionarily viable answer to the survival problem as well. It’s in fact the only satisfying answer I can think of.

Digital Regression


Digital media are failing – eBooks especially – and to me it feels like a huge cultural regression.

In the nineties, I digitized my CD collection and haven’t looked back. I did the same with my DVDs, and eventually I got (DRM-free) eBook versions of all books that were important to me. Doing away with all those physical storage objects felt so liberating, I can’t even describe it. Plus, I can have all my stuff with me wherever I go. I do enjoy reading on my iPad, too.

However, nobody I know has made this leap. Usually, older people like me do have some sort of ripped movie or eBook collection which they don’t use. Millennials, however, use physical media all the way, and even where they don’t, they accept only DRM’ed, thoroughly walled gardens where you can rent things for a limited time and consume them only in whatever restrictive manner those companies allow. They have huge DVD collections, they started printing out their photos again, they mostly read only paper-based books, they prefer streaming content via crappy proprietary channels, and I’m not even sure many of them would know how to copy a file if their life depended on it.

This feels like an immense failure to me, not only because I feel isolated in my content consumption habits, but also because we’ve somehow managed to move backwards for purely cultural reasons. It’s a loss of capability, and a loss of personal freedom and empowerment.

Rationally, each one of us should have a digital data store – preferably replicated, on-premise, and easily portable – with all of our stuff. A lot of things that represent “value” to modern people should live in there: documents, cloud backups, movies, music, photos, eBooks. We’re in the unique position to do away with a huge amount of physical clutter simply by moving it onto a hard drive. Ideally, we would teach our children to center their data lives around their personal data store, a library of personally-kept digital things that is built upon for the rest of their lives. This is what’s possible today, and we’re simply electing not to do it. Ironically, being a millennial apparently means being immovably rooted in the previous millennium, except in the few places where corporations find it worthwhile to rent them a watered-down version of their own future at a steep price.

 

Why Are Smart Homes Not?

Consumer home automation is shaping up to be a battle of barely-functioning walled gardens. To the enterprising hacker, home automation can be incredibly fun and rewarding.

A few years ago, I started out with a Raspberry Pi and a 433 MHz USB adapter pre-programmed for the proprietary Home Easy protocol. Over time, I added a CUL transmitter capable of speaking the bidirectional HomeMatic protocol, and hooked various other things up to the GPIO pins.

While the software is kind of clunky to administer, it comes with an easy-enough front-end UI which is accessible via WLAN and is also permanently displayed on wall-mounted el cheapo mini tablets. The things I have hooked up are primarily lighting, heating, and the external window shutters – and on the input side there are thermostats, motion sensors, and wireless buttons disguised as ordinary light switches.

I grant you that being able to control these things from any computer is kind of gimmicky, but automation is where the real value lies.

Now the house has a daily routine, like an organic thing: shortly before the sun comes up, the window shutters open and the external lights go out. The heater in my office comes to life a bit before that. When I’m away, the house enters “away” mode, turning off all unnecessary devices. Towards the evening, minimal internal lighting comes to life, then the external lights, and after sunset the shutters close automatically. When I go to bed, I switch the house to “sleep” mode, which again turns off all unnecessary devices and opens the shutters in my bedroom (I like to sleep with an open window). When OpenWeatherMap shows stormy winds in my area, the shutters automatically close to protect the windows from debris. Motion sensors activate lights when someone is passing through a corridor and the light level is too low. When a smoke alarm goes off, all shutters open and all lights turn on, so the house is prepped for emergency intervention if necessary.
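In case you’re wondering what drives this: at its core, it’s just a scheduler periodically evaluating condition-action rules. Here’s a minimal Python sketch of the pattern – all the names are made up for illustration, this is not HomeOverlord’s actual API:

import datetime

def wind_speed_kmh():
    # placeholder: a real setup would query the OpenWeatherMap API here
    return 20.0

class House:
    def set_shutters(self, state):              # 'open' or 'closed'
        print('shutters ->', state)
    def set_lights(self, zone, on):
        print('lights[%s] -> %s' % (zone, 'on' if on else 'off'))

def evaluate_rules(house, now, sunrise, sunset):
    # run once a minute by a scheduler; each rule is a plain condition -> action
    if sunrise <= now < sunset:
        house.set_shutters('open')
        house.set_lights('external', False)
    if now >= sunset:
        house.set_shutters('closed')
        house.set_lights('external', True)
    if wind_speed_kmh() > 60:                   # storm protection overrides the schedule
        house.set_shutters('closed')

house = House()
evaluate_rules(house, datetime.time(22, 0), datetime.time(7, 30), datetime.time(18, 45))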

Here’s the software side of the project: https://github.com/Udo/HomeOverlord. Though it’s not exactly fit for public consumption (there’s a lot of cobbled-together WTF code), the repo has a pretty good overview of what the system can do.

All this has been totally fun. I have to admit, when the system breaks it can be inconvenient at times, but it runs relatively stably and I know exactly how to fix things. No commercial system would ever be able to tie together all the different home automation standards and protocols, and of course programmability is the end-all-be-all solution to everything. I’m looking forward to adding more HA-enabled devices to my home. Right now I’m working on an IR diode option that allows me to control the AC units. Over time I’d like to incorporate more presence sensors so I can phase out wall-mounted switches.
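For the IR part, the common approach on a Raspberry Pi is to let LIRC handle the timing-critical signal modulation and merely trigger it from a script. A minimal sketch, assuming an already-configured LIRC setup – the remote and code names are placeholders for whatever your lircd.conf defines:

import subprocess

def send_ir(remote, code):
    # LIRC does the actual IR modulation; we just ask it to send a named code
    subprocess.run(['irsend', 'SEND_ONCE', remote, code], check=True)

send_ir('ac_unit', 'POWER_ON')
send_ir('ac_unit', 'TEMP_22C')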

I’m less optimistic about the purely consumer side of home automation: consumers are already ending up with myriad remote controls and apps and little boxes everywhere, none of which talk to each other, and all of which have to be constantly tricked into sort-of doing what you want.

 

Stephen Fry on Deities

Stephen Fry is really one of my favorite celebrities, and never fails to entertain even when asked stupid questions. This one is also worth watching for the disgusted look on the interviewer’s face :D

Anniversary of the Challenger Accident

Challenger Mission Badge

Where were you on Jan 28th, 1986, when the space shuttle exploded over Cape Canaveral? It’s odd, isn’t it, that we remember bad things so clearly. My theory is that’s because catastrophes are usually point events, allowing the brain to create a lot of referential context around that single point in space and time, whereas positive things tend to happen over a longer time with much fuzzier starts and endings. And whenever a good event does happen as a singular point, we remember it in just the same way. Where were you when your child took her first steps?

On the surface it also seems strange that we memorialize the death of seven people some 29 years ago, when millions of people have died in the meantime, many of them also in traumatic accidents. The answer is, of course: symbolism. And not the empty kind of quasi-patriotic symbolism that gets invoked daily in the news.

That morning, when Commander Scobee, Captain Smith, Dr Resnik, Colonel Onizuka, Dr McNair, payload specialist Mr Jarvis, and the teacher Mrs McAuliffe climbed aboard a giant rocket, they did so as explorers and ambassadors of an entire civilization.

My ten-year-old self stood in front of the TV, watching a live broadcast (I think) of the launch. I was a massive fan of the shuttle – when I was even younger, it often featured in my childish drawings. I didn’t yet know how horrendously incapable and unsafe the vehicle was in reality. At the time, complexity was often portrayed as a good thing, and it sometimes still is. I didn’t yet know that the shuttle program was single-handedly responsible for the failure of all our space ambitions. I just knew that there were heroic people on board who were going TO SPACE!

But beyond tragic heroism, beyond the inspiration anyone can hope to draw from watching astronauts go out there and put their lives on the line, the Challenger disaster carries a lesson about organizational incompetence which should likewise not be forgotten. In the BBC movie The Challenger Disaster, we can follow the last act in the life of Richard Feynman as he investigates the crash. It’s well worth watching.

Growing Old


In the words of Captain Picard: I have come to the point where there are more days behind me than in front of me. Personally, the experience doesn’t seem particularly long, but then again, the cliché that life is short exists for a good reason. I’ve now been alive for longer than the span between the end of WW2 and my birth. In the interval from 1945 to 1975, both my country of origin and the world at large changed in huge and uncountable ways – politically, and more importantly, in science and technology.

It’s harder to see a similar development in knowledge and engineering from 1975 to 2015. We have made some incremental improvements, sometimes even huge ones, on existing ideas – computing and bioengineering, for example. We have largely stagnated or regressed in other areas, such as space exploration or walking the last few miles towards a non-religious society.

Twenty years ago, my standard reaction to people getting terminal diseases or losing limbs was: things will get dramatically better during your lifetime. In twenty years, surely, most cancers will be treatable – and not by crippling surgery either, but by methods that weed out cancerous cells in a targeted fashion. In twenty years there will be cyber limbs, or better yet, biological replacements for pretty much any organ. In twenty years, surely, we won’t burn oil to drive our cars or have a raging poverty gap, and we’ll have stopped treating animals cruelly on an industrial scale.

Still, in 2015, none of these problems show any sign of easing up. We even managed to pile on some more: exploding religious fanaticism, a ubiquitous surveillance society, and a world entirely dominated by huge corporations.

My biggest disillusionment is with medicine, however: no longer do I expect functional limb or organ replacements to arrive within 20 years. There is no path in sight to the pharmacological revolution – towards more individualized and scientifically rigorous treatment – that would be needed to target cancer as a disease group. Reined in by insurmountable powers using laws, financial pressure, and sheer aversion to change, the progress of our medical capabilities will continue to be largely confined to misleading news blurbs designed to paint a futuristic image and divert attention from the stagnation of everyday medical reality – a status quo induced by short-sighted profiteering and conservative philosophy.

Twenty years ago physicians told people with hearing loss to only get one cochlear implant, because a more advanced solution would surely be just around the corner and patients would be well advised to preserve one ear for this future technology. These advancements never arrived, and in fact, physicians have stopped giving that particular advice. This form of stagnation is omnipresent throughout medicine today.

It has become utterly unfashionable, even in medical circles, to stand up for a desire to meaningfully extend the human life span. The public has even been schooled to respond to transhumanist ideas with reflexive Frankenstein analogies and strawman arguments based on the already grave perils of an aging society. This behavior is on display not only on Main Street, as they say, but in our classrooms, and even in groups of supposed futurists. Visit io9 for some prime examples of culturally conservative journalism posing under a thin veneer of futurology.

This piece of pseudo-Frankenstein garbage – depicting life extension as an affront to a deity, the scientist as the villain, the re-animated as a monster, and the religious scholar as the voice of reason – is from 1942, and it reflects what people still believe today, more than 70 years later.

Gradually abandoning the progress made during the Age of Enlightenment, public opinion has largely shifted to a conviction that people should procreate early and plentifully, instill their offspring with the same values, and then die as quickly as possible. How far we have moved from the humanist idea that the mind is an immeasurably valuable entity all by itself!

I am 40 years old now; mind uploading and human augmentation were my generation’s flying car, in a sense. I always knew that the possibility of leaving – or at least mastering – our biological substrate might land just outside my lifetime, but at least it was comforting to think we were moving in that general direction. Today, this future looks questionable at best. It is now conceivable that our technology might plateau early in some areas, and we’re doing it largely by choice, because we feel what we have is “good enough” and have lost the ability to dream big. Our governments have also put massive brakes on technological innovation, especially in pharmacology, biology, and medicine, in a successful effort to freeze progress.

Maybe this is a factor in the Fermi Paradox, too. Some civilizations might just stop eventually, failing to take the steps necessary to ascend to the stars – because it’s scary, or inconceivable, or simply because not enough people see any value in doing so.

Our OSes’ Defective Access Control

There is a bug in Linux Steam that causes every single file owned by a user account to be removed from the system.

This sounds familiar: back in 2007, I wrote a post about OS security compartments and the defective reasoning behind per-user file access security in operating systems. For years, especially in the Linux/Unix world, we have been treating desktop user files as if they have no value, when in fact they are among the most important digital assets that need protecting. That’s why I called for a per-app file access model that requires explicit consent from users before their files can be read or changed by an app.

OS X has actually been moving in the right direction here with applications that are installed via the Mac App Store. Regrettably, though, App Store apps don’t allow for finer-grained, user-controllable security privileges.

Per-app access control for user files would not even have to be intrusive, since most consent-requiring actions could be coupled to the open and save file dialogs. Operating system vendors just need to do it.
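To illustrate the model: conceptually, an app would never open user files directly, but would ask a broker that either holds a standing grant or asks the user for consent. A toy Python sketch of that idea – all names are hypothetical, no existing OS exposes this exact API:

class AccessDenied(Exception):
    pass

class FileBroker:
    def __init__(self):
        self.grants = set()          # (app_id, path) pairs the user has approved

    def ask_user(self, app_id, path):
        # in a real OS, the open/save dialog itself would double as the consent UI
        answer = input('Allow %s to access %s? [y/n] ' % (app_id, path))
        return answer.strip().lower() == 'y'

    def open_file(self, app_id, path, mode='r'):
        # apps only ever get file handles through this broker
        if (app_id, path) not in self.grants:
            if not self.ask_user(app_id, path):
                raise AccessDenied(path)
            self.grants.add((app_id, path))
        return open(path, mode)

broker = FileBroker()
# f = broker.open_file('com.example.editor', '/home/udo/notes.txt')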

The State of Whole Brain Emulation in 2015

When viewed most fundamentally, the brain is an information processing device. Human brains excel at higher tasks, and they accomplish them by employing a myriad of information processing techniques that we are only now beginning to discover with the second cultural advent of machine learning. Organized clumps of neurons perform a lot of computation using comparatively little energy, too: the typical human brain draws between 20 and 40 watts to do all of its information processing.

3D reconstruction of the brain and eyes from CT-scanned DICOM images. (source: Dale Mahalko)

Yet for all its capabilities, owning a biological brain comes at a steep cost. There are countless ways in which parts of a brain – and at some point, inevitably, the whole brain – cease to function. This is what we call death; and for patients with brain injuries such as strokes, death indeed comes in episodes, or even gradually, as is the case in neurodegenerative diseases such as Alzheimer’s.

The nature of biological death is two-fold. First, the hardware ceases to function, so information processing stops. If only parts of the brain stop functioning, you might experience loss of sensory input, motor control, or memory. Every single function making up a person can fail in this fashion. There is a clinically observable spectrum of failures, ranging from no noticeable outage up to complete loss of consciousness.

The second aspect of death is the destruction of the apparatus that contains and processes information. In CS terms, not only does information processing stop, but the infrastructure necessary to run these processes is lost. In contrast to classical information technology, hardware and software are not entirely separate in neurobiology.

CT scan of the brain with a massive middle cerebral artery infarct; the region of cell death appears darker than healthy tissue. (source: http://commons.wikimedia.org/wiki/File:MCA_Territory_Infarct.svg)

In a lot of ways, the hardware offered to us by biochemistry is capable of amazing feats. Our neuronal architecture is excellent at performing statistical data processing, which incidentally is a big portion of what’s required to make sense of the world around us. In contrast, silicon-based computers excel at running deterministic operations, such as calculations and stringent logical reasoning. Both architectures can emulate the other, though. Human brains are Turing-complete and can perform any action that can be performed by a computer. We may not be as good or as fast, but we can do it in principle. Likewise, computers can perform the types of operations predominant in our brains, but again not as quickly as a blob of living matter might. The important point is that these two architectures are compatible in principle.

Given the capabilities and drawbacks of both biological and synthetic information processing, it makes sense to aim for a fusion of the two. What if we could transpose our minds onto a less fragile, non-biological substrate? The idea of combining classical and biological computing in order to overcome the limitations of both is not new. The benefits would be immeasurable and instantaneous: the ability to make backups of minds, and an untold potential for further growth and development.

So, given the obvious advantage of cheating death, why are we not living in silico by now?

Step 1 – Extracting the Information

Golgi-stained neurons from somatosensory cortex in the macaque monkey (source: brainmaps.org)

This is what the hardware of the brain looks like at the neuron level. You might be tempted to think of a neuron as the biological equivalent of a transistor or a memory circuit, and it certainly has some of those properties, but the most important difference to recognize is that there is a huge variety of neurons. They come in many different shapes and models, and each neuron is configured individually.

In a classical computer, the information it contains, the software that processes the information, and the hardware that enables the programs to run are all separate facilities. In the brain, however, all of these are linked: the information stored in a single neuron is tied to its working configuration.

In order to transition a mind from working in vivo to a virtual substrate, we need to copy its essence from the biological clump of matter. This means extracting all of the structure – the neuronal configuration in its entirety. Each neuron has connections to other neurons, so we need to capture those connections. Neurons operate on different chemical models, so we need to get the neuron type as well. Furthermore, neuronal behavior is often modified individually by complex proteins, so we need to know about these too. Oh, and by the way, the cells surrounding neurons (such as astrocytes) perform computing tasks as well, so we need to scan them too.
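To get a feeling for how much state this amounts to per cell, here is a hypothetical record layout for a single scanned neuron, written as a Python dataclass. The fields mirror the list above; the names and structure are made up purely for illustration – real connectomics formats differ:

from dataclasses import dataclass, field

@dataclass
class Synapse:
    target_id: int               # the neuron this connection feeds into
    weight: float                # effective synaptic strength
    kind: str                    # e.g. 'excitatory' or 'inhibitory'

@dataclass
class NeuronRecord:
    neuron_id: int
    cell_type: str               # morphological/chemical class of the cell
    position_um: tuple           # (x, y, z) coordinates within the scan volume
    synapses: list = field(default_factory=list)          # outgoing connections
    protein_markers: dict = field(default_factory=dict)   # modifier proteins and their levels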

Pyramidal neuron from the hippocampus, stained for green fluorescent protein (source: Lee et al., PLoS Biology; full attribution below)

You can see that getting all this information, in some cases down to the molecule, is extraordinarily difficult. In the cortex slide presented above, you can just about make out the connections between the neurons. Given thin-enough slices of the entire brain, we might just be able to reconstruct those connections into a computer model with today’s technology. However, we are far from getting the other information I mentioned. Identifying patterns in optical microscopy requires the use of staining agents, and there is a limit to the number of useful stains that can be applied to a given sample – so this is never going to be detailed enough. Electron microscopy might do it, but we’d need some serious post-processing to identify the presence of important proteins in a cell. On top of that, whole-brain EM scans would be a logistical impossibility with today’s hardware.

Serial sectioning of a brain (source: http://en.wikipedia.org/wiki/File:User-FastFission-brain.gif)

Right now we are certainly nowhere near the point where we can make usable electron microscope slides of an entire human brain. This will probably change as we make progress in image processing AI. Ideally, this process would be an automated destructive scan where a brain is placed in a machine that sequentially ablates layers of cells and takes high-resolution EM pictures of each layer.

Ideally, it would cover not only the neocortex but the whole brain, including the medulla – or even the whole body, if feasible. While we are primarily interested in capturing the higher-level functions of the neocortex, we also need knowledge about the wiring at the periphery. Gathering a whole-body picture will let us make sense of the circuitry more easily, even if we end up throwing most of the data away. It is likely sufficient to use ordinary light microscopy (LM) scans to capture the body data. I am not aware of any project aimed at creating a cybernetic simulation of physiological systems from whole-body microtomes, but it seems like a necessary prerequisite for brain emulation.

So how are we doing on this front in 2015? We now routinely use microscopic imaging to make neural models, but since we are still in the basic research phase, we only do it for generalized cases. At this point, I am not aware of any effort to capture the configuration of a specific brain for the purpose of emulating its contents. The Whole Brain Project has put out the Whole Brain Catalog, an open-source, large-scale catalog of the mouse brain – but detailed information about neuronal connections is hard to come by. We are still working on a map of a generic Drosophila connectome, so capturing a mammalian brain’s configuration seems as far off as ever. On the other hand, proactive patients are already generating 3D models of cancerous masses from their MRIs, so there is certainly hope that technological convergence will speed up this kind of data gathering and modeling in the near future.

Step 2 – Making Sense of the Information

Suppose we managed to extract all the pertinent structural and chemical information out of a brain, and we are now saddled with a big heap of scan data. What we need to do with it in order to make that mind “run” on a virtual platform depends largely on the type of emulation we have in mind.

It’s all about detail. There are simulations in biology that aim to accurately depict what goes on in a cell at a molecular level. Here, interactions between single proteins are simulated on a supercomputer, requiring massive amounts of memory and processing power. If we were to “plug in” detailed brain scan data, we could do so relatively easily without a lot of conversion: for every molecule identified in the scan, we’d simply put its virtual counterpart into the simulation. However, simulating even a few single neurons in this fashion would quickly take up all the processing power of a whole supercomputing facility. This is obviously not practical.

Investigation of the Josephin Domain Protein-Protein Interaction by Molecular Dynamics – detailing a process in spinocerebellar ataxia (SCA)

The solution is to look at the outcome of those molecular interactions. It turns out the products of chemical processes are relatively regular and dependable: given the right conditions, 2 H2 and 1 O2 will always combine to form 2 H2O. We can use that observational knowledge of chemical processes to make a straightforward mathematical model of the expected behavior of a neuron – and then we can run that simplified model on a computer very easily.
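As an example of such a stand-in, the classic leaky integrate-and-fire model compresses all of a neuron’s membrane chemistry into a single equation: the potential leaks toward a resting value, accumulates input, and emits a spike (then resets) when it crosses a threshold. A minimal sketch in Python:

def simulate_lif(inputs, dt=1.0, tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    # inputs: one input-current sample per time step (arbitrary units)
    v = v_rest
    spikes = []
    for t, i_in in enumerate(inputs):
        v += dt * ((v_rest - v) / tau + i_in)   # leak toward rest, plus input
        if v >= v_thresh:
            spikes.append(t * dt)               # record the spike time
            v = v_reset                         # and reset the membrane potential
    return spikes

# a constant drive produces a regular spike train:
print(simulate_lif([1.0] * 100))

This is obviously a caricature of a real neuron, but it illustrates the trade: one cheap equation per cell instead of millions of simulated molecules.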

This means we can solve the computing power issue by using smarter mathematical stand-ins for chemical processes. But now we have two problems: how much can we simplify neuronal behavior and still get enough fidelity to run a human mind without any perceptible loss? And how do we translate the data from our scan into a representation that is faithful to the original yet lends itself to reasonably efficient computation?

The best answer, from the viewpoint of today’s knowledge about neuronal information processing, may be to choose a detail level that emulates the behavior of cortical columns, plus perhaps some carefully-chosen single neurons. Cortical columns are great to emulate because they provide units of functionality at an abstraction level high enough to be easily computable, yet still low enough to reflect rich detail. Admittedly, given an EM scan of a single column element (or neuron, for that matter), we presently would not have enough knowledge about its individual function to accurately translate it into a digital representation. But we’re working on it.

Cajal Blue Brain: Magerit supercomputer (CeSViMa) – a cluster of 245 PS702 nodes, each with 16 cores across two 64-bit POWER7 processors (eight cores each) at 3.0 GHz, 32 GB of RAM, and 300 GB of local disk. Each core provides 18.38 Gflops.

The Blue Brain Project aims to reverse-engineer mammalian brains and then simulate them at a molecular level. This momentous effort has yielded a lot of detailed knowledge about how neurons and cortical columns work, and how they can be simulated. However, the project is occupied with basic research and simulates cellular processes in high detail. While the results generated by it are essential, this is not an effort that allows us to meaningfully run entire minds on a computer – something to keep in mind when reading press reports about the Blue Brain Project.

Step 3 – Running Minds in silico

So, assuming we have found a way to digitize brains and to translate the scanned information into a representation that can run efficiently on a classical computer – what happens when we actually execute that code?

Thyroid Hormone Effects on Sensory Perception, Mental Speed, Neuronal Excitability and Ion Channel Regulation (Dietzel et al., full attribution below)

Compared to the steps before, this one is relatively easy. Once we have found a model framework that can run a digital representation of a brain efficiently, this functional core needs to be executed in a digital milieu that provides connectivity to (emulated) peripheral sensory and motor neurons, as well as a simulated body chemistry. In order to run a brain, we’ll need a functioning endocrine system as well. While we know how to do this in principle from cybernetic models, there are of course still knowledge gaps to fill regarding the management and representation of a virtual body’s state.
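To make that architecture concrete, here is a purely hypothetical sketch of such a top-level execution loop. Every name in it is invented – no such framework exists yet – it merely illustrates the coupling between the neural core, the endocrine model, and the emulated periphery:

class EndocrineModel:
    def __init__(self):
        self.levels = {'cortisol': 0.3, 'dopamine': 0.5}   # toy hormone state
    def step(self, dt):
        # in this toy model, hormone levels simply drift toward a baseline
        for k in self.levels:
            self.levels[k] += (0.4 - self.levels[k]) * 0.001 * dt

class BrainCore:
    def step(self, dt, sensory, hormones):
        # the actual neural model would run here; we return dummy motor output
        return {'motor': 0.0}

def run_emulation(steps, dt=1.0):
    brain, endocrine = BrainCore(), EndocrineModel()
    for _ in range(steps):
        sensory = {}                       # would be fed by an emulated body/world
        endocrine.step(dt)
        motor = brain.step(dt, sensory, endocrine.levels)
        # motor output would drive the emulated body, closing the loop

run_emulation(1000)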

Discussions still rage about the feasibility of mind uploading. From my perspective, there are massive technological and scientific impediments still to overcome but nothing in particular seems to prevent this development from playing out.

Some researchers dismiss the whole idea by pointing to the prohibitive computational loads required to run a full-scale simulation of a brain, but the verdict is still out on methods that efficiently emulate higher-level structures such as cortical columns. It seems to me that once basic research provides useful mathematical abstractions of the behavior of brain components, there is no reason why biology and classical information processing could not meet half way, at a point where computation does become feasible at scale.

Moving Forward

We are at an interesting junction in our technological and scientific development. Computational resources are comparatively cheap, we are in the midst of a new wave of AI algorithms allowing for more sophisticated data processing, and there are a lot of interdisciplinary scientists and engineers who could work on this.

However, there is a big problem. Aside from a few laudable exceptions, research data is not available to the public at large. Heck, it’s not even available to competing research institutions. Considering how the internet was once envisioned as a medium for publishing and interlinking research data, this is still one of its unfulfilled promises. Press releases about discoveries made by well-funded projects often lure us as a civilization into a false sense of accomplishment, because more often than not the specifics of those discoveries remain inaccessible.

It is easy to fall victim to the misconception that whenever, say, the Blue Brain Project puts out another press release, we are getting closer to moving our minds into silicon. This is not true. Access to basic research data is tremendously restricted and, no matter how press releases are worded, the scientists mentioned rarely actually work on or towards this specific goal. For the most part, veiled allusions to mind uploading are merely used as convenient science fiction references to generate public buy-in. Pharmacology is what pays the bills, not pie-in-the-sky mind uploading.

Liberating Research Data

It is easy to see that we could be on the threshold of a golden age of citizen science, potentially increasing our overall science and engineering output in an unprecedented way. Access to cheap high tech, 3D printing and modeling, and the infrastructure for rapid information interchange are in place. All we need now is access to the actual body of human knowledge. Not the summarized form that’s on Wikipedia, but actual research data: free access to papers and publications, but also – and this might be an even harder sell – access to the raw data itself.

If we could convince a critical mass of research groups to go fully open source, humanity as a whole stands to make the next big leaps. If this open sourcing does not happen, however, research will remain in walled gardens and move along very predictable paths of carefully incremental progress – enough to gain a competitive edge in pharma, but insufficient to upset the status quo.

And make no mistake: brain emulation, like any other radical endeavor, is all about upsetting the status quo. Because of this fringe component, progress in this area will likely come from outside of big-budget research facilities. It may even come from the efforts of hobbyists – such as biomedical researchers engaging in side projects. The question becomes, first and foremost: what can we do to enable them?


Attribution

  • Wei-Chung Allen Lee, Hayden Huang, Guoping Feng, Joshua R. Sanes, Emery N. Brown, Peter T. So, Elly Nedivi – Dynamic Remodeling of Dendritic Arbors in GABAergic Interneurons of Adult Visual Cortex. PLoS Biology Vol. 4, No. 2, e29. doi:10.1371/journal.pbio.0040029, Figure 6f, slightly altered (plus scalebar, minus letter “f”.)
  • Irmgard D. Dietzel, Sivaraj Mohanasundaram, Vanessa Niederkinkhaus, Gerd Hoffmann, Jens W. Meyer, Christoph Reiners, Christiana Blasl and Katharina Bohr (2012). Thyroid Hormone Effects on Sensory Perception, Mental Speed, Neuronal Excitability and Ion Channel Regulation, Thyroid Hormone, Dr. N.K. Agrawal (Ed.), ISBN: 978-953-51-0678-4, InTech, DOI: 10.5772/48310. Available from: http://www.intechopen.com/books/thyroid-hormone/thyroid-hormone-effects-on-sensory-perception-mental-speed-neuronal-excitability-and-ion-channel-reg
  • Power of a Human Brain – The Physics Factbook Edited by Glenn Elert — Written by his students – http://hypertextbook.com/facts/2001/JacquelineLing.shtml
  • Investigation of the Josephin Domain Protein-Protein Interaction by Molecular Dynamics – from Deriu M, Grasso G, Licandro G, Danani A, Gallo D, Tuszynski J, Morbiducci U (2014). “Investigation of the Josephin Domain Protein-Protein Interaction by Molecular Dynamics”. PLOS ONE. DOI:10.1371/journal.pone.0108677. PMID 25268243. PMC: 4182536.

Getting an SSL Certificate: SSLS.com vs. StartCom

I’m switching rolz.org from a polling-based “realtime” interface to WebSockets, like I should have done a long time ago. Recently, Cloudflare added a free SSL terminator option to their offering, and I jumped onto that with Rolz – but CF doesn’t do WebSockets in the free tier, which is understandable. Since Rolz can be kind of high-traffic, and I do want to go SSL on every web project that has user accounts, dropping SSL and/or Cloudflare was not really an option.

So the solution is to serve WebSocket connections from a subdomain, but that means I’ll have to get my own SSL certificate for the WS server as well. In the past I dabbled with SSL certificates, but inevitably gave up because managing, configuring, and renewing them was always such a hassle.

I do not want to support companies who make money from charging outrageous sums for SSL certificates, so I turned to StartCom early on. Their UI is basic, but essentially fool-proof and it works. This is what I was going to do this time as well, but I ran into trouble straight away. My account was locked and under review, as happens so often in today’s artificially localized internet when you’re using IP addresses from different countries a few times in a row. Yes, I’m looking at you Facebook, Google, Twitter…

Looks suspicious, but works pretty well

Anyway, jumping through not one but two hoops at StartCom wouldn’t have been so bad if they didn’t make you wait for a human to approve your account – not once, but twice. Waiting periods are where users jump ship. And so did I.

One of the options for reasonably priced SSL certificates is SSLS.com, a meta-sales site that sounds and looks extremely shady, but turns out to be legitimate.

I jumped on one of the basic SSL offerings, which gives you a certificate for the root domain plus one subdomain, through an ordering process that passes you on to the company actually issuing the certificate. You can grab such a certificate for about 8 bucks, which I did. The ordering and admin process was minimal and went without a hitch. However, I should have read the fine print, because the one subdomain they sign for is automatically “www”. This was useless to me, which was a bit frustrating. Still, not a bad user experience on the whole – just my own stupidity for ordering something that didn’t work for me.

not as pretty, but very useful

Back to StartCom! In the meantime they had approved my account (again), and they do let people decide what the one included subdomain should be. Very clever and useful. Good job, StartCom, on being thoughtful about this.

In case you’re wondering, installing a custom certificate with nginx is extremely straightforward. Just put your private key file some place safe, and alongside it create a single file by concatenating your own certificate and any intermediate CA certificates, your own certificate first (e.g. cat my.crt ca.crt > chained.crt). You can then refer to both files from your /etc/nginx/nginx.conf, which looks like this if you’re using nginx as an SSL-terminating proxy that passes requests along to the actual WebSocket server:

server {
    listen      443 ssl;
    server_name <subdomain.domain.com>;
    ssl_certificate     <your chained certificate file>;
    ssl_certificate_key <your private key>;
    location /<your WS location>/ {
        proxy_pass http://127.0.0.1:<your internal WS server port>;
        include <your standard websocket config>;
    }
    include <your standard server config>;
}