Getting an SSL Certificate: SSLS.com vs. StartCom

I’m switching rolz.org from a polling-based “realtime” interface to Websockets, like I should have done a long time ago. Recently, Cloudflare added a free SSL terminator to their offering, and I jumped onto that with Rolz – but CF doesn’t do Websockets in the free tier, which is understandable. Since Rolz can be fairly high-traffic, and I want to go SSL on every web project that has user accounts, dropping SSL and/or Cloudflare was not really an option.

So the solution is to serve Websockets connections from a subdomain, but that means I’ll have to get my own SSL certificate for the WS server as well. In the past I dabbled with SSL certificates, but inevitably gave up because managing, configuring, and renewing them was always such a hassle.

I do not want to support companies that make money charging outrageous sums for SSL certificates, so I turned to StartCom early on. Their UI is basic, but essentially foolproof, and it works. That was the plan this time as well, but I ran into trouble straight away: my account was locked and under review, as happens so often in today’s artificially localized internet when you use IP addresses from different countries a few times in a row. Yes, I’m looking at you, Facebook, Google, Twitter…

Looks suspicious, but works pretty well

Anyway, jumping through not one but two hoops at StartCom wouldn’t have been so bad if they didn’t make you wait for a human to approve your account – not once, but twice. Waiting periods are where users jump ship. And so did I.

One of the options for reasonably priced SSL certificates is SSLS.com, a meta-sales site that sounds and looks extremely shady, but turns out to be legitimate.

I jumped on one of the basic SSL offerings, which gives you a certificate for the root domain plus one subdomain, through an ordering process that passes you on to the actual company issuing the certificate. You can grab such a certificate for about 8 bucks, which I did. The ordering and admin process was minimal and went off without a hitch. However, I should have read the fine print, because the one subdomain they sign for is automatically “www”. That was useless to me, which was a bit frustrating. Still, not a bad user experience on the whole – just my own stupidity for ordering something that didn’t work for me.

not as pretty, but very useful

Back to StartCom! In the meantime they had approved my account (again), and they do let people decide what the one included subdomain should be. Very clever and useful. Good job, StartCom, on being thoughtful about this.

In case you’re wondering, installing a custom certificate with Nginx is extremely straightforward. Put your private key (the one you generated the CSR with) some place safe, and alongside it create a single file by chaining your own certificate and your CA’s intermediate certificates together. You can then refer to both files from your /etc/nginx/nginx.conf, which looks like this if you’re using Nginx as an SSL-terminating proxy that passes requests along to the actual Websockets server:

server {
    listen 443 ssl;
    server_name <subdomain.domain.com>;
    ssl_certificate <your chained certificate file>;
    ssl_certificate_key <your private key>;

    location /<your WS location>/ {
        proxy_pass http://127.0.0.1:<your internal WS server port>;
        include <your standard websocket config>;
    }

    include <your standard server config>;
}
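
The chaining step itself is just concatenation, with your own certificate first and the CA’s intermediate certificate(s) after it. A minimal sketch, with hypothetical file names standing in for whatever your CA gave you:

    # order matters: your own certificate first, then the intermediate(s)
    cat subdomain.domain.com.crt intermediate.crt > chained.crt
    # keep the private key where only root can read it
    chmod 600 subdomain.domain.com.key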

Home Overlord

HomeOverlord

HomeOverlord is a simple web-based home automation interface for HomeEasy (HE853) and HomeMatic (CUL/Homegear)

This is (for the time being) the main screen of HomeOverlord, the panel where you can control devices directly. The UI switches automatically between a day and night color scheme. Beyond that, HomeOverlord provides a neat system of event triggers to make your little device minions do whatever you want, behind the scenes.

Beware!

At this stage, this is a project that runs for me, but it’s not really designed to be portable to other homes. In theory, it might work. It might not. The software is designed to work with the HomeEasy HE853 USB stick to address HomeEasy devices, and with the CUL via the Homegear XMLRPC interface to communicate with HomeMatic devices. Some features are still missing for this to be a full home automation solution; for example, right now you have to do HomeMatic pairing through the (albeit browser-based) command line interface. I run the software on a Raspberry Pi; in theory it should work on pretty much any architecture that supports those USB devices. For reference, I included my current home configuration verbatim. Also, there is no installer.

Bash Scripts for Making Screenshot Timelapse Movies on OS X

timelapse-scripts

Bash scripts for making screenshot timelapse movies on OS X.

Screen Capture

The script capture-screens.sh grabs the actual screen content. Open it in a text editor to change its settings. By default, it takes a JPG screenshot of the main screen every second and puts it into the folder ~/Downloads/screencaps/.

You can stop the capture process at any time by hitting CTRL-C, and resume by simply starting the script again. The capture filenames contain a timestamp, so the movie frames will end up in order. Because of this, you can also combine captured frames from different computers – for example, if you alternated between your laptop and your desktop.
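
For reference, the core of such a capture loop is tiny. Here’s a minimal sketch of the idea – not the verbatim contents of capture-screens.sh; the file naming and flags are assumptions based on the description above:

    #!/bin/bash
    # grab a JPG of the main screen every second, with sortable timestamped names
    OUTDIR="$HOME/Downloads/screencaps"
    mkdir -p "$OUTDIR"
    while true; do
      # -x: no shutter sound, -m: main monitor only, -t jpg: output format
      screencapture -x -m -t jpg "$OUTDIR/cap-$(date +%Y%m%d-%H%M%S).jpg"
      sleep 1
    done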

Caveat: please check that files are actually being produced in the target directory as the script is running. You don’t want to discover, after you’re done with everything, that nothing got recorded.

Preparing for Movie Generation

After recording the time-lapse, it’s time to generate a movie from it. For this, you’ll need the ffmpeg command-line tool. OS X doesn’t ship with it, but you can simply install it with Homebrew (brew install ffmpeg).

Movie generation has two steps: first sorting all the captured frames, then encoding the movie. Start the sorting operation by launching the script capture-preparemovie.sh.

This will put a symbolic link to every frame into the folder ~/Downloads/screencaps_temp/.
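
Under the hood, a preparation step like this boils down to symlinking the frames, in sorted (timestamp) order, under sequentially numbered names that ffmpeg’s image sequence input expects. A sketch of the idea – the actual script may differ:

    #!/bin/bash
    # link frames in sorted order under sequential names for ffmpeg
    SRC="$HOME/Downloads/screencaps"
    TMP="$HOME/Downloads/screencaps_temp"
    mkdir -p "$TMP"
    i=0
    for f in "$SRC"/*.jpg; do
      ln -s "$f" "$TMP/$(printf 'frame-%06d.jpg' "$i")"
      i=$((i+1))
    done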

Make the Movie

To launch the encode, start the script capture-makemovie.sh. You’ll see some progress output on screen as the movie is being made. If you see error messages, chances are you captured images with different sizes (for example, because they come from different computers) – in that case, put the differing frames aside and encode them into a second movie later.
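
The encode itself amounts to a single ffmpeg invocation over the prepared image sequence. Again a sketch with assumed parameters – frame rate, codec, and the sequential naming are my guesses, check capture-makemovie.sh for the real settings:

    # 30 fps from the sequentially numbered symlinks, H.264 output
    ffmpeg -framerate 30 -i "$HOME/Downloads/screencaps_temp/frame-%06d.jpg" \
      -c:v libx264 -pix_fmt yuv420p "$HOME/Downloads/screencaps.mp4"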

At the end, a new movie file called ~/Downloads/screencaps.mp4 should appear. After a quick check that it came out OK you can delete the source folders ~/Downloads/screencaps/ and ~/Downloads/screencaps_temp/.

Websocket Message Broker Boilerplate

Web Sockets // HTML5 > Node.js > PHP // basic setup
https://github.com/Udo/WSBrokerBoilerplate

WSBrokerBoilerplate

Web Sockets – HTML5 > Node.js > PHP – basic setup

What’s this?

This is a collection of boilerplate/example files to set up a node.js Websockets server that acts as a message passer between browser clients and a PHP backend. With it you can implement chat servers and other realtime applications. The broker is designed to be a minimal, dumb server component, allowing the PHP backend to implement whatever logic is necessary.

Model

The expected setup for this is a Javascript client application (client-page.php), which talks to the Websockets broker (broker.js), which in turn talks to the PHP server backend (server/index.php). Message objects sent from the client to the broker are expected to be in JSON format and are passed along to the server backend, where the type field is used to invoke a corresponding command handler from the server/commands/ directory. Any output added to the $result variable by the command handler is passed back down to the client. The backend server can also initiate the data flow by itself, using the internalCommandServer facility, which can be reached through the brokerRequest cURL function defined in server/lib.php. The commands supported by the internalCommandServer are, by default, send and kick; further commands can be added to broker.js easily.
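
From the browser’s point of view, talking to the broker is plain Websockets plus JSON. A minimal sketch – the URL and the message fields other than type are assumptions, since those are up to your own command handlers:

    // connect to the broker and send a typed JSON message;
    // the backend dispatches on the "type" field (e.g. server/commands/chat.php)
    var ws = new WebSocket('wss://example.com/ws/');
    ws.onopen = function() {
      ws.send(JSON.stringify({ type: 'chat', text: 'hello' }));
    };
    // whatever the command handler put into $result arrives back here
    ws.onmessage = function(ev) {
      console.log('from backend:', JSON.parse(ev.data));
    };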

Boilerplate

This is supposed to be a collection of boilerplate files and structures to get Websocket projects going – it’s not a functional software package by itself. Example usages are contained in the basic code files. There is also an example Nginx configuration included.

Config

The file config.json contains all the configuration options the components need to talk to each other; as such, it is read by the example client, the broker, and the PHP server component. It ships with the configuration I used to test the suite on my server, so you will need to fill in your own paths, domain names, and port numbers.
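
To give an idea of the kind of values involved – with illustrative keys only, not the actual field names from the repo – a config along these lines is what the three components need to find each other:

    {
      "_note": "illustrative keys – use the field names from the shipped config.json",
      "ws_url": "wss://example.com/ws/",
      "broker_port": 8090,
      "backend_url": "https://example.com/server/index.php",
      "internal_command_port": 8091
    }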

Ludum Dare #31: Snowma’am

Take on the role of the formidable Snowma’am and defend the Light of Winter!

Well, you’re a magical snow witch who can crush her enemies by animating snow monsters, you know the drill ;) It’s a strategy/tower defense-style game. Turns last 3 seconds and advance automatically. Select a snow creature by clicking on it, then move it or attack things by clicking on the destination. Movement is restricted to one field at a time.

Keyboard shortcuts (optional):
P – pause game
A/D – select previous/next unit
S – select Snowma’am

As always, I appreciate comments more than votes :) And if you encounter a bug (which is very likely), please describe it in the comments below.

Compatibility: I didn’t test on IE or Opera, so beware. Due to compositing slowness, I disabled the falling snow effect on Firefox (you’ll have to use Chrome to see it). Minimum window size is about 1200×900 pixels.

This game was made solo by me for LD31, from scratch. I used Logic Pro X for the score, Audacity for sound editing, Pixelmator for graphics editing, the Terminal and Coda for code editing, and Cinema4D for the 3D work. Libraries are jQuery and Howler.js – otherwise it’s a vanilla JS/CSS/HTML app.

Post-LD Changelog:
– updated web URL to use CDN (should load faster now)
– fixed an error that caused the heal spell not to work
– fixed a bug that caused the spell buttons not to update
– decreased the round timer by half


The Basilisk Is a Lie

A thought experiment known as Roko’s basilisk, escaped from the dungeons of LessWrong, has recently been making waves, mostly among fans of sensationalist headlines. The core proposition can be paraphrased like this:

In the future there will be an ethical AI that punishes everyone who knew they could have but in practice did not work towards its eventual birth. If the humans in question are deceased by then, a simulation of their minds will be punished instead. This is a moral action, because due to the AI’s capabilities, every day that passes on Earth without the AI is a day of unimaginable suffering and death which could have been prevented.

Now I am way less prone to ballsy absolutist assertions than practically anyone frequenting LessWrong, but this whole thing is wrong on many levels.

Ethics
The central argument for the ethical validity of this punishment scheme is beyond questionable, specifically regarding the motivation of the AI. By the time the AI achieves this capability, the assertion that executing the punishment is morally imperative is mistaken: at that point, nothing is actually achieved by carrying it out. The behavior of the “guilty” will not change retroactively. Since their future behavior is also irrelevant, the argument rests on the assumption that without the prospect of punishment there would have been no motivation for humans to develop the AI. Not only is this false in itself; punishment after the fact, without any hope of achieving an effect besides the imposition of suffering, cannot ever be an ethical act. Ethics aside, the contributions of individuals not directly connected to the eventual birth of the AI would be murky to judge as well. What’s the correct “punishment” for a computer scientist, as compared to a medical doctor?

Feasibility
While there is little uncertainty that general AI is feasible – and, if we continue on the path of scientific discovery, unavoidable – significant doubts exist about the nature of that AI. If this thought experiment shows nothing else, it does illustrate that our notions of what constitutes a “friendly” AI are wildly divergent. One can only hope, for the sake of whatever becomes of humanity as well as the AI’s sanity, that reading LessWrong will be one of its less formative experiences.

Where feasibility deserves to be harshly questioned is the simulation idea this Basilisk concept relies on to carry out its punitive actions. The most central assumption here is that a mind reconstructed from extremely lossy data fragments is still the absolute (!) equivalent of its original version.

That means at the core of this is a belief that if I were to die tomorrow, and my mind was being reconstituted from nothing but my old Amazon shopping lists, this would be the same as me.

It should be very obvious that this is not true, but to make matters worse, my “sameness” value is not a Boolean. It’s not even a scalar; it would have to be a vector spanning a lot of aspects, each measuring how much of the original mind was successfully transferred. It is disconcerting that this basic notion is not shared by the rationalist movement. Instead, it is apparently considered feasible to reconstruct any specific thing by deduction from first principles.

Inevitability
The sheer number of models and parameters that could lead to the development of general artificial intelligence is huge, and in its entirety inconceivable. While it is still appropriate to engage in informed speculation, one should be skeptical whenever certain models and parameters are cherry-picked and arranged just so, in order to illustrate a thought experiment that is then deemed to be an inevitable outcome. This reduces technical complexity and historical uncertainty to an absurdly simplified outcome which is then simply taken as fate.

Already, a large number of AGI scenarios have become intellectual mainstream, some of which claim exclusivity for themselves. Some go further and assert inevitability. Otherwise rational people come to these conclusions of inescapable future outcomes because they lose sight of the complexity of factors and conditions their reasoning is based on. No statistician would chain together a list of events with an assumed probability of 80% each and claim the end product is a matter of destiny: ten such links in a row already leave a combined probability of 0.8^10, barely 11%. Yet, for some reason, futurists do this.

It is reasonable to expect that a number of these scenarios might eventually play out, with some variation, and in some order. But they obviously can’t all be true at the same point in time and space – and that includes the Basilisk.

Of course that also means that, since nothing in principle prevents it, Basilisks may well already exist somewhere in the universe. But there is no reason to assume one has to exist on Earth. It would take a special cocktail of circumstances.

A Modern Pascal’s Wager
The core argument why this idea is perceived as dangerous is that people who understand it will be forced to act on it. This means acting out of fear of future punishment, just in case there is an invisible entity out there who cares enough about your actions. Even if you accept this premise, and even if you’re deluded into thinking this is the path to an ethical life, the huge problem is predicting what that entity wants you to do so you can avoid punishment.

This is the definition of a problem where you do not have enough information to make an informed decision. In the absence of any information about that deity, acting on its behalf is an execution of random fantasy.

The claim behind the Basilisk is again one of inescapable certainty; in fact, it desperately relies on that property. Because you supposedly know what the Basilisk wants – it wants to exist – this is seen as a solution to the unknown-deity problem. However, this only works if you believe in the properties of Roko’s basilisk dogmatically, disregarding all other AI futures. This is in fact the exact analogue of the original Pascal’s Wager, where the not-so-hidden assumption was that the Christian fundamentalist god was the only one you had to please.

Of course, within the context of an AI that can simulate people, this is all moot. There is nothing preventing said AI from simulating you in any set of circumstances, including perpetual punishment or everlasting bliss. In fact, there is no real cost to simulating you in a million different scenarios all at once. Acting out a random fantasy based on the off chance that in the future one of your myriad possible simulations will have a bad day is not rational.

Some of the reasoning on display here seems to mistake blunt over-simplifications for clarity of thought. To an outsider like myself, it looks like complex multivariate facts are constantly being coerced into Boolean values, which are then chained together to assert certainties where none are really warranted. There is a certain blindness at work, where everyone seems to forget the instabilities hidden within the reasoning stack they’re standing on. But what’s worse is that fundamentally unethical behavior (both on the part of the AI and of its believers) is being rebranded as legit.

I see now the way to hell is paved with people who think they are acting rationally.

The Minds of Octopodes

The fact that relatively high intelligence has arisen from many architectures multiple times on this planet bodes well for the frequency of intelligent life on exoplanets.

Even today, there are still some theoreticians who assert the formation of intelligent life is tightly coupled to our specific brand of brain and should hence be considered a huge accident. Yet we see the formation of minds in a lot of places, and they can be almost arbitrarily far removed from the human brain.

The octopus is a great example of a radically different neurological substrate. And given enough time and a little bit of luck, still other substrates might evolve minds of their own – the slime mold Fuligo septica (often mistaken for a fungus) already shows some capacity for problem-solving behavior.

The argument that human-like intelligence is yet another unlikely step discards the myriad of social animals with advanced problem-solving capabilities, some of which are even tool users, and some of which have actual languages and cultures. A lot of these have come onto the stage quite independently from us, having sprung from far-removed genetics – and yet they have enough in common with us that we should recognize that the frequency of species with minds might indeed be high wherever life takes hold.

The Publicity Coaster

What do you do if you’re a designer who desperately needs some attention? Well, you invent the Euthanasia Coaster, of course. So edgy and controversial it causes me to give it some attention right now!

I come from Germany, the country that not only raised the bar when it comes to the industrialized killing of its own population, but pretty much redefined the scope of evil achievable with cult-like dictatorships empowered by modern technology.

Euthanasia as used in the article is a form of newspeak introduced by the Nazis, a cynical redefinition that constitutes a corruption of the word’s original meaning.

As such, the word choice makes me just as uncomfortable as the concept itself. Originally, euthanasia (Greek for “good death”) is a means of ending a life for which the only remaining prospect is suffering. In this sense, the word is still used (appropriately) in conjunction with terminal illness, where it has a place as part of the right to self-determination exercised by patients who make the conscious and informed decision to end their lives in order to avoid that suffering.

I really wish people would stop taking hints from the Nazis when it comes to vocabulary, even if it’s on purpose. “Death Coaster” or “Ride of Doom” would be perfectly sufficient. But poor word choice isn’t the only thing that plagues this PR stunt – it also doesn’t really hold up to biophysical scrutiny.

The Actual Science

The “critical” portion only lasts a few seconds, up to a minute. Depending on the direction of the force, 10g for a minute would be close to the edge for untrained people but not expected to have long-lasting harmful effects in most cases.

There are several ways in which high g forces cause harm to humans. The article mentions blood flow, specifically applying the amount of acceleration necessary to stop oxygenated blood from reaching the brain. Completely stopping the flow of blood for 60 seconds will result in a loss of consciousness, but the designers of the coaster seem to be under the impression that achieving this even for a moment kills people. They’re wrong. If normal blood flow is restored after 60 seconds of anoxia, no adverse effects are to be expected at all, not even in the short term. Of course statistically there will be cases where the heart enters one of several possible failure modes under these conditions (again mostly in humans with pre-existing health problems) and while I expect it to be rare among the healthy population, those people would indeed need immediate medical attention – but they too can be expected to make a full recovery if they receive it.

High g forces can also damage blood vessels through simple overpressure, causing them to rupture. This happens in body parts located in the direction of the applied force. In this design, that’s the lower extremities, where such damage – if it occurs at all – will be minimal. But if you suspended people “upside down”, that would be another story. Overpressure in the blood vessels of the brain is a dangerous thing. Again, I’m not sure 10g for 60 seconds is enough, but intuitively I’d say that if there is any way of inducing fatalities with this coaster, that would be the way to go. People with existing defects and weaknesses of the blood vessels in their brains would be most at risk – people with aneurysms, for example.

Lastly, high g forces can cause tissue trauma due to compression or internal impact damage. 10g for 60 seconds would not be enough to cause that in healthy organs. But if the coaster’s design were changed to 20 or 30g, delivered over an extended period, injuries and fatalities due to organ trauma (including the brain) would occur.

So, on final consideration: this roller coaster will cause people to pass out for up to a minute. However, this effect is completely reversible, and in healthy people no lasting damage is expected. In fact, this kind of acceleration is a standard part of what fighter pilots experience when they train in centrifuges – although 10g is, I believe, at the extreme end of what could safely be considered for training purposes.