Rule 😖 of the Internet

Maybe you’re thriving on drama, or need it to sell books, or drumming up conflict where none need exist because it’s something that deeply fulfills you. Being a griefer is completely fine if that’s your thing and you engage in your hobby with other willing subjects in a safe environment, but consider that not everybody you meet online gets the same kick out of it as you do.

I’m writing this because there are people who engage in griefing feedback cycles, maybe without consciously planning to do so, and who are then horrified by the results. Sometimes, a person is purely on the receiving end of aggression, but in most cases nobody involved is entirely blameless. And talking about blame: we’re going to live with the history of our online interactions for a long time, so if admitting to and/or forgiving mistakes is not your thing, you might want to reconsider your stance.

Online discussions have created an ecosystem of two co-dependent mindsets: being easily offended, and deriving pleasure from causing offense. Again, if you don’t belong to either or both of these groups, don’t mirror their behavior. They are bad role models.

Everybody who doesn’t want to have a bad time, and everybody who doesn’t want other people to have a bad time on the internet, should probably apply some common decency rules to all interactions:

  • Be charitable when parsing what another person said.
  • There is a real human being at the other end of that comment.
  • More often than not, your adversary is not the embodiment of all that is wrong with humanity.
  • Before you assume malice, try to interpret a statement in other contexts. Maybe it was a stupid joke, maybe it was ignorance or carelessness. Maybe they just expressed themselves awkwardly. These reasons might not make it okay, but be mindful not to escalate the issue beyond what it actually is.
  • Instead of writing people off and/or trying to pulverize them publicly, try to educate them about your position – and if possible, try to win them over.
  • Even if a statement or opinion fits a pattern that pushes your buttons, do not expand it beyond what was actually said.
  • In a discussion, concentrate purely on the issue at hand. Do not concern yourself with questionable projections, such as what type of human being your adversary might be.
  • Sometimes, there is no common ground, for whatever reason. That’s okay. It is okay to walk away from a discussion. The internet is large and you get to pick whom you interact with.
  • More often than not, there is some common ground. Find it and work upwards from there.
  • Someone who agrees with you on many but not all things does not deserve the same level of scorn as a person with whom you disagree about almost everything.
  • Every piece of content you put on the internet shapes the mood and mode of future interactions. Be mindful that you may be having an impact beyond the currently apparent spatial and temporal perimeter.

I have personally violated all of these rules at some point. It’s probably inevitable to make these mistakes. But this is the codex I strive to live by, and overall I think the internet would be a better place if more people tried this (or some version of it).

People Who Will Use Windows Don’t Have Viable Alternatives

There’s been a lot of furore about the abysmal privacy policies and practices of the new Windows 10 OS, which includes ToS legalese that unashamedly tells users beforehand that their data will be used against them.

Windows 10 is provided to users free of cost, because it’s essentially just a trojan data-gathering framework which happens to have an operating system strapped onto it.

However, I would like to remind you that those who use Windows 10 don’t really have any option other than sticking with it. Sure, during the first few months there will be a plethora of tools (some of which will be malware) that promise to reconfigure your operating system back into something resembling a trustworthy environment, but Microsoft will simply stay the course and in the end nobody will care anymore. In fact, it’s already difficult to make people care.

But the most pertinent point is that Windows 10 users will, statistically, always be Windows 10 users. There is nothing anyone can do for them; it’s just going to be the way it works. And Apple will have to follow suit in time, too. People are using Windows because

  1. They need it. There’s no switching these guys to anything else. They have apps that only work on Windows, which will in most instances be AAA games. These people have absolutely no reason to ever downgrade their computer into something they can’t really use – and that’s why they will by necessity stick with any and every plan Microsoft has for them.
  2. They don’t know what an operating system even is. You can’t make those users switch to anything else, either, because they’re simply so unsophisticated they don’t know how to use anything else and they don’t care about any of this privacy nonsense which goes over their heads anyway. This category may well be the largest group of Windows users, by the way.
  3. They like it. Although my Windows days are pretty far in the past, I have to say that it’s not really a bad operating system. It’s super stable and crazily backwards compatible. It’s ABI-compatible with itself, meaning there’s a plethora of interesting software and malware you can download that just works. Unlike OS X, which runs like an injured dog nowadays, Windows is still pretty fast. Unlike Linux, there is no chance you will be stranded in text-only mode after your graphics driver updates. Running Windows has unquestionable advantages no other OS can really compete with.

So all of this makes replacing Windows unfeasible for a large group of the population. While it’s good that people in the short term are getting riled up against what Microsoft shoves down their throats, we also need to recognize that this company has basically just dictated the way computing will work in the future. In ten years, this will be the status quo.

If There Was an Internet Driver’s License…

…this would be mandatory material. There would be tests on this:

Warning: this may sound familiar if you read Snow Crash or anything similar.

Hello Internet.

Thoughts compete for space in your brain: cat photos, news stories, belief structures, funny GIFs, educational videos, not-so-educational videos – and your thinking inventory is limited. A thought without a brain to think it dies.

Now we can treat thoughts as though they’re alive. Specifically alive like germs. That might sound weird but stick with me.

Take jokes. Jokes are thought germs that live in your brain — and when you tell the joke to another brain, you help it reproduce.

Just like when you have the flu and sneeze to help it reproduce. The flu germ gets into its host by snot through the mouth and the thought germ by words through the ear, but it’s reproduction either way.

Logging on to your social media, then, is exposing yourself to everyone’s mental sneezes. Each post is a glob of snot with a thought germ trying to get into your brain — if not for permanent residence, then at least long enough to get you to press the share button and sneeze it at everyone you know.

In this analogy then, a funny cat photo with a perfect caption is a super-flu.

Now just as germs exploit weak points in your immune system, so do thought germs exploit weak points in your brain. A.K.A. emotions.

Once inside, thought germs that press emotional buttons get their hosts to spread them more — measurably more. Well, except sadness; sad thought germs don’t get very far. Awe is pretty good, which is why websites that construct thought germs like biological weapons arm them with titles like “7 whatevers that will blow your mind” or “The Shocking Secret behind… this thing”.

But anger is the ultimate edge for a thought germ. Anger bypasses your mental immune system and compels you to share it.

Being aware of your brain’s weak spots is necessary for good mental hygiene — like knowing how to wash your hands. Because even without intentional construction, any thought germ on the Internet can, on its own, grow more infectious as it spreads. To talk about why, let’s forget anger for a moment and go back to that cat photo.

Every photo ever taken is a thought germ, and most die a quick death, like the bazillion cat photos (or baby photos) posted on the Internet that are never shared. But a mildly funny cat photo can grow into so much more, because just as transatlantic flights were the best thing to happen to germ germs, so the Internet is the best thing to happen to thought germs.

For once on board, that cat photo is a thought germ that can leap into other brains. And those brains might share it and, here’s the key point, occasionally change it — a Photoshop here, a tweaked caption there.

Most changes are terrible, but some make the thought germ even funnier, getting brains to share it more. Which results in more changes and a shot at superstardom. And thus a lowly cat photo can achieve global brain domination. At least for a few hours.

The Internet, with its unparalleled ability to share and randomly change thought germs, can’t help but help make them stronger.
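The share-mutate-select loop described in the transcript above is essentially an evolutionary algorithm, and it can be sketched in a few lines. The numbers and the scalar “funniness” score below are invented for illustration only:

```python
import random

def mutate(funniness: float) -> float:
    """A share occasionally tweaks the germ; most edits hurt, a few help."""
    return max(0.0, funniness + random.uniform(-0.2, 0.1))

def spread(funniness: float, generations: int, seed: int = 42) -> float:
    """Simulate waves of sharing, keeping the variant that spreads best."""
    random.seed(seed)
    best = funniness
    for _ in range(generations):
        # Ten brains share (and possibly alter) the current best variant;
        # the funniest variant out-spreads the rest and seeds the next wave.
        best = max(best, *(mutate(best) for _ in range(10)))
    return best

# A mildly funny photo ends up considerably funnier after many shares.
print(spread(0.3, generations=50))
```

Even though most mutations make the germ worse, the differential sharing step filters them out, so the population-level trend is upward.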

With jokes, that’s awesome — but with angry germs not always so awesome. No.

Angry germs, the more they’re shared, undergo the same process, changing and distorting to become more aggravating. These distorted versions have a better chance of spreading than their more accurate but more boring rivals.

But like plagues, thought germs can burn through a population too quickly. Just watch your favorite meme-generating machine for a week and you’ll see the life-cycle fly by.

However, some thought germs have found a way around burnout. Now, I must warn you: depending on which thought germs live in your head and which you fight for, the next section might sound horrifying. So please keep in mind, we’re going to talk about what makes some thought germs, particularly angry ones, successful — not how good or bad they are.

OK? Deep breath: calm.

Thought germs can burn out because once everyone agrees, it’s hard to keep talking, and thus thinking, about them.

But if there’s an opposing thought germ, an argument, then the thinking never stops. Disagreement doesn’t have to be angry, but again, angry helps. The more visible an argument gets, the more bystanders it draws in, which makes it more visible — which is why every group, from the most innocuous internet forum to The National Conversation, can turn into a double rage storm across the sky in no time.

Wait, these thought germs aren’t competing; they’re co-operating. Working together, they reach more brains and hold their thoughts longer than they could alone. Thought germs on opposite sides of an argument can be symbiotic.

One tool symbiotic anger germs in particular can employ is you’re-with-us-or-against-us. Whatever thought germ just leaped to the front of your brain, push it back. This video isn’t about that. We’re just talking about the tool, and this one makes it hard for neutral brains to resist, and its divisiveness also grows its symbiotic partner.

This explains why, in some arguments, gaining more allies also gains more enemies. Because though the participants think they’re involved in a fiery battle to the death, from the anger germ’s perspective one side is a field of flowers and the other a flock of butterflies. Of course planting more flowers will get you more butterflies, and getting more butterflies will pollinate more flowers.

If there is some argument that splits the population and lasts forever, one that even the most neutral people find difficult to avoid, you just might be looking at a super successful pair of symbiotic anger germs that have reached ecological stability.

Now, one final depressing thought. Uhhhh… I mean, one more awe-inspiring point that will reveal the secrets of, ahhh — actually no, it’s just depressing.

When opposing groups get big, they don’t really argue with each other; they mostly argue with themselves about how angry the other group makes them. We can actually graph fights on the Internet to see this in action. Each group becomes its own quasi-isolated internet, sharing thoughts about the other.

You see where this is going, right?

Each group becomes a breeding ground for thought germs about the other — and as before, the most enraging, but not necessarily the most accurate, spread fastest. A group almost can’t help but construct a totem of the other so enraging that they talk about it all the time — which, now that you know how thought germs grow, is exactly what makes the totem always perfectly maddening.

Now, all this isn’t to say that there’s no point in arguing (that’s a different video), or that the Internet isn’t amazing, or that there aren’t things worth trying to change people’s minds about. And thought germs of all kinds come and go.

But it’s useful to be aware of how thoughts can use our emotions to spread, and how the more rapidly a thought spreads, the more chances it has to become even better at spreading through random changes made to it. Sometimes that’s great; sometimes it’s terrible.

But if you want to maintain a healthy brain it pays to be cautious of thoughts that have passed through a lot of other brains and that poke you where you are weakest.

It’s your brain — be hygienic with it.

So, of course, this video is a thought germ, one constructed very intentionally over time to spread a thought germ about thought germs — exposing their secrets, one could say. But I tried as hard as possible not to have this video attack your brain through emotions, so it could use a little help spreading. Please be a good germ vector and click the share buttons to sneeze this at your friends. Your coworkers. Your family. Infect them all.

You shared the video, right? Well if you’re still here, you really got infected hard. Only thing left is to click onscreen and sign up to the email list which will get you exposed to many more thought germs in stick figure video form.

Suicide Taboos

The tech and science community is plagued by a high incidence of suicides, but nobody wants to talk about it. Almost every time you hear about a community figure suddenly dying without a cause of death being stated, it’s a suicide. Everyone knows it, but you’re not even allowed to mention it. Friends and families of the deceased come after those who dare say the S word on forums and social media, mostly because they are concerned about soiling the image of their loved ones, but the effect of reflexively putting up this righteous wall of silence is that the problem never gets addressed.

Suicide and self-harm are rarely rational choices. They are made by desperate people. Some of them have a long history of psychiatric issues, while others are simply overwhelmed with a recent chain of events in their lives. There is no shame in any of these things! They happen.

But the good news is that there is also help available. Getting the message out, instead of covering up these tragic losses under a blanket of shame, winking, and nudging, would send a strong signal to people in similar situations, and it would send a signal to their loved ones as well.

This is not a time to be silent. This is not a time to hide behind social or religious conventions and taboos. This is a time to break these taboos and make the issues leading to these deaths a matter of public awareness.

Your silence and false sense of shame are costing lives. Stop isolating people with depression or those who need help with overwhelming personal problems. Acknowledge that there is help available, for yourself and for your friends, if you need it. Then train yourself and others to recognize this need if it arises.

It’s also, in my opinion, no longer valid to justify adherence to these taboos by professing concern about suicide contagion. While it’s a real effect, you can’t really stop it by covering incidents with a blanket of silence.

Leibniz’s Philosophy of Mind

That’s why philosophy is in a crisis right now, because it’s in fundamental dissonance with scientific observation. You may disagree that this is a problem, but at that point you’re implicitly arguing that the intents of philosophy no longer contain a desire to describe and reflect the actual world.

For most of human history, philosophy was expressly designed to perform discovery – albeit speculatively – on universal principles without the invocation of religion. It was understood that any concrete scientific understanding of an area would supersede whatever philosophical construct had covered the same subject before. Now we are at a point where this is apparently no longer the case, and philosophers who refuse to consider current scientific understanding are no longer in a position to claim a strong barrier between religion and philosophy.

But the struggle to deeply accept materialism, intellectually and culturally, is – I hope – only transient. It’s to be expected. For thousands of years, we have speculated about the mind without actually knowing, and competing explanations could run rampant with equally unsubstantiated claims. It was only during the last 100 years that we really developed the capacity to make a qualified judgement on this. Materialism won, but it’s going to take a while until we reach the acceptance stage, even among the subset of scientists who invested parts of their lives in old-school philosophy or, say, quantum consciousness quackery.

Deep Learning Hysteria vs. AGI Denial

Here’s one in favor of AGI denial, by Tim Dettmers.

And a few good counter-points by jcannell over at Reddit.

There is some reasonable scientific middle ground between being an unreflective Deep Learning fanatic at one end of the spectrum and an AGI denier at the other. In typical internet fashion, we’re mostly exposed to extreme and exaggerated viewpoints that are not above distorting a few things to get their message across – and the Dettmers article is no exception.

For Example: Neuronal Firing Rates

Just to grab one strategy used in these articles (again, on both ends of the opinion spectrum): comparing apples to oranges. In this case, it’s the neuronal firing rate. The biological brain uses firing rates to encode values, but that approach is rarely used in silico outside biochemical research, because we have a better way of encoding values in computers.
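The apples-to-oranges contrast is easy to make concrete. A biological neuron signals, say, “intensity 0.8” by spiking on roughly 80% of timesteps in some window, while a typical artificial neuron just stores 0.8 as a floating-point activation. A minimal sketch, where the window length is an arbitrary choice rather than a biological claim:

```python
import random

def rate_encode(value: float, window: int = 1000, seed: int = 0) -> list[int]:
    """Rate coding: a value in [0, 1] becomes a spike train whose average
    firing rate approximates the value; recovering it requires a window."""
    rng = random.Random(seed)
    return [1 if rng.random() < value else 0 for _ in range(window)]

def rate_decode(spikes: list[int]) -> float:
    """Recover the encoded value by averaging over the whole window."""
    return sum(spikes) / len(spikes)

# Direct encoding: the value simply *is* the activation, exact and instant.
direct = 0.8
# Rate coding only approximates it, and needs 1000 timesteps to do so.
approx = rate_decode(rate_encode(0.8))
print(direct, round(approx, 2))
```

Both sides agree these are different mechanisms; the disagreement is over whether the difference matters functionally.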

One side (in my opinion, reasonably) asserts that this is an implementation detail where it makes sense to model a functional equivalent instead of strictly emulating nature. This view has gained credence from the fact that ANNs do work in practice, and even more importantly, several different ANN algorithms seem to be fit for the job. A lot of people believe this bodes well for the “functional equivalence” paradigm, not only as it pertains to AI, but also as it relates to the likelihood of intelligent life elsewhere in the universe.

The other side asserts that implementation details such as the neuronal firing rate are absolutely crucial and cannot be deviated from without invalidating the whole endeavor. They believe (and I’m trying to represent this view as fairly as I can here) that these are essential architectural details which must be preserved in order to preserve overall function. And since it’s not feasible to go this route in large-scale AI, the conclusion must be that AGI is impossible. A lot of influential people believe this, including Daniel Dennett if I recall correctly.

The Dettmers article is very close to the latter opinion, but it goes one step further in riding the firing-rate example: it doesn’t even acknowledge the underlying assumption and jumps straight to attacking the feasibility of replicating the mechanism.

Sir, Please Step Away from that Singularity

As an aside, I still think we need to either uncouple the transhumanist term Singularity from unbounded exponential growth, or get rid of the word altogether. While there have been some early successes in raising awareness, mostly championed by Kurzweil, the term now carries so much factual and social baggage that it serves as a handy shorthand for detractors to attack transhumanist ideas. We should probably lay it to rest.

Side Projects Die – It’s OK and Normal

This is a response to “Why Do Side Projects Die?” by Raj Sheth, the insinuation of the article being that dying side projects indicate failure, and that this failure should be avoided by social means.

Everybody’s GitHub is littered with abandoned stuff, and something needs to be done, right? I disagree, profoundly.

There is a very real evolutionary component to side projects. They are usually born out of an idea, a will to experiment or just because they sound fun. And they are fun, not necessarily due to reaching a certain goal, but as a way to spend time. You could argue that time is wasted, but not every intellectual or emotional pursuit has to be oriented on externally viable goals. Time you spend on side projects could also be time you spend on playing a computer game, crafting something, or any other hobby-like activity. Even though the remnants of old projects can sometimes feel like tombstones or failure markers, for some reason we don’t feel the same way about, say, an old save game file.

But most importantly, I want to circle back to the evolutionary aspect: side projects are explorations. The subject of that exploration can vary hugely, but at the end there is a result. The result may be that the project is not viable or not fun. Or it might be a lesson about programming that will improve your performance further down the road. Occasionally, side projects survive and get to pass on their genes in the form of continued development, or partial re-use, or even a commercial spin-off.

All of this is fine, and it’s part of the process many programmers go through. I would like to invite people to embrace the impermanence and whimsical nature of side projects from the get-go. Learn to love what you’re doing, instead of thinking about what the world will be like when you’re done with a project.

Web Design: the First 100 Years

I’m going to draw the ire of both supporters and detractors of this thesis here¬†(just kidding, nobody cares), but this is all wrong, and it’s a deplorable capitulation to technological apathy.

Software Bloat

It’s hard to disagree with the bloat argument in principle; however, I feel a lot of the bloat we see on the web today is exploratory. Right now we are in a meta phase where paradigms are implemented on top of strata of abstractions with a staggering number of layers. But the intention is not to get away with something; it’s experimentation on programming principles. Over long enough periods, this process is strictly evolutionary.

Reaching Technological Limits

What the essay gets right is the fact that we are approaching the physical limits of certain technologies. This is not per se a bad thing, because it gives us static components to reckon with, and it also forces innovation in areas where those limitations are unacceptable. All of this doesn’t mean information technology is “done”. Neither are airplanes, for that matter: the premises for which current airliners are optimized may well change in the future.

Singularity != Transhumanism

The central point I really take issue with is the author’s stance on AI and transhumanism. First off, the singularity (as popularized) does not need to actually happen in order for us to improve upon our biological substrate. And it can be improved, in countless ways. One of these ways, and still the most attractive long-term in my opinion, is a transfer to full simulation. The example of the simulated worm nervous system is a straw man, though, because we can functionally emulate way more than a couple of hundred neurons – we just need to drop the requirement of accurately reflecting molecule interactions (and we do drop it, in practice). It’s also worth noting that neuro simulations (or emulations, as the case may be) can be massively parallelized.
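To give a flavor of what “functional emulation without molecular accuracy” means, here is a toy leaky integrate-and-fire neuron: the entire cell is reduced to one membrane-potential variable. The leak and threshold values are arbitrary placeholders, not biophysics. Because each neuron’s update depends only on its own previous state and input, a population steps forward in parallel trivially:

```python
def lif_step(v: float, input_current: float,
             leak: float = 0.9, threshold: float = 1.0) -> tuple[float, bool]:
    """One timestep of a leaky integrate-and-fire neuron: the potential
    decays, accumulates input, and spikes (resetting to 0) at threshold."""
    v = v * leak + input_current
    if v >= threshold:
        return 0.0, True  # spike and reset
    return v, False

def step_population(potentials, currents):
    """Each neuron's update is independent of the others, so this loop
    could be spread across cores or GPUs without changing the result."""
    return [lif_step(v, i) for v, i in zip(potentials, currents)]

# Three neurons at different potentials receive the same input current;
# only the one already near threshold fires.
print(step_population([0.0, 0.5, 0.95], [0.2, 0.2, 0.2]))
```

The point is not that this model is adequate, but that it captures input-output behavior while discarding molecular detail entirely.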

Software is Transforming the World

Second, and the most disagreeable point, is the trinity of connecting, eating, and ending the world. These categories are simplistic distortions of what’s actually going on. It’s not at all clear that connecting and eating the world aren’t the same thing, and both bleed into the ending category as well. But transhumanists are not necessarily world enders. It’s neither necessary nor desirable to transform the entirety of Earth toward this single thing, just as it’s not feasible to transition the entirety of humanity to a new substrate. In an evolving ecosystem, there is room for lots of new life forms.

Every time a bacterium or virus changes a gene, every time a plant or animal evolves, it carries with it the risk of total world domination. In practice, however, this doesn’t happen – because this kind of unchecked exponential upheaval is unrealistic. I would argue that exponential expectations in AI or biotech development are also unrealistic – and unnecessary.

Besides, you need to pick your stance: you can’t point to the worm simulation and say transhumanists are delusional, only to turn around a second later and say they’re dangerous because they could end the world.

Killer Whales

Are killer whales persons?

A more modern and future-proof definition of personhood might be based on the functional characteristics of a lifeform instead of its outward morphology. But that’s still one big simplistic property, and that’s a bit problematic. Being categorized as a person – in much the same way as being labeled intelligent – might more properly be represented by a multi-dimensional gradient rather than a single scalar value. It’s still better than the current consensus, though, which is based on a boolean variable.
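The modelling point can be made concrete: instead of a boolean `is_person`, personhood becomes a vector of graded traits. The trait names and scores below are invented placeholders for illustration, not a serious taxonomy:

```python
from dataclasses import dataclass

@dataclass
class PersonhoodProfile:
    """Personhood as a multi-dimensional gradient, each trait in [0, 1]."""
    social_complexity: float
    problem_solving: float
    emotional_range: float

    def scalar(self) -> float:
        """The lossy single-number summary that a one-dimensional
        definition of personhood forces on us."""
        traits = (self.social_complexity, self.problem_solving,
                  self.emotional_range)
        return sum(traits) / len(traits)

orca = PersonhoodProfile(social_complexity=0.9, problem_solving=0.8,
                         emotional_range=0.7)
# Collapsing the profile discards which traits scored high or low.
print(orca.scalar())
```

Comparing two profiles trait by trait preserves exactly the information that the scalar (let alone the boolean) throws away.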

Ascribing personhood to a very big mammal is a relatively easy sell. However, no matter what characteristics we choose to define it with (complex social behavior, a sense of ethics, altruism, advanced problem solving, capability for human-like emotions, and so on), we’re liable to end up adding a lot of other mammals to that list as well, plus a few species of birds and possibly some other surprising lifeforms.

It seems to me that the overdue shift in thinking is not about whether or not we should carve out a personhood exception for the killer whale. We should rather rethink the value we ascribe to non-human intelligent life in general.