JavaScript – It’s not shit!

Or: How I Learned to Stop Worrying and Love the module

JavaScript, it would be fair to say, has taken a lot of flak over the years, and I would be lying if I said that yours truly hadn’t been a relatively prolific detractor pretty much since I first started using the language back in 2006. To me, JavaScript represented the “sloppy end” of web development.

JavaScript was where my application was no longer allowed to be beautiful. JavaScript was the “dumping ground” where the average code base ended up looking like the wild west after only a relatively short incubation time. JavaScript was where lines upon lines of bespoke, nuanced code got added to hundreds of obscure, structureless script files, held together by dozens of “black box” libraries that some other sucker had suffered to create.

JavaScript was, by and large, something I tried to write as little of as possible. Nobody was more surprised than me to observe (albeit peripherally) the ascension in popularity of Node.js, with its promise of being able to write JavaScript “everywhere”. Further confounded was I by the subsequent rise of technologies such as Elasticsearch and MongoDB, which promised JavaScript-like syntax right down to your data layer. To me, this sounded like the kind of thing that belongs in one of the deeper, darker circles of hell, perhaps a place to condemn those developers who don’t write unit tests.

Had the whole world gone mad?

However, I stand before you a changed man! I put it to you that JavaScript is not only “not shit”, but that the JavaScript ecosystem is now, more than ever, one of the most vibrant and exciting ecosystems operating in the development sphere today.

So what changed my mind? To explain precisely where I used to stand, here is a quick history of JavaScript, and of my relationship with it.

A brief history of JavaScript

JavaScript was born in 1995 when Netscape, creators of the then-dominant “Navigator” browser (launched just one year earlier), had the idea of creating a “glue” language that would help web pages become more dynamic. The internet was still in its infancy at this stage, but crazy concepts such as “shopping online” were starting to come to fruition (the first iteration of would go live that very same year).

Internet Explorer 2 (also launched in 1995) was the first of Microsoft’s browsers to support JavaScript, and within 5 years Microsoft had toppled Netscape to make Internet Explorer 6 the world’s most popular browser. Two years after that, open source upstarts Mozilla launched the Firefox browser, which also shipped with JavaScript, and made Internet Explorer into a laughing stock among the web-browsing community. That is, until Google dethroned them just a few years later in 2008 with their Chrome browser, once again shipping with JavaScript (by this point completely ubiquitous).

Rewind a bit: in 1997 a specification known as ECMAScript (ES) was created – this specification was actually based on JavaScript (retroactively, therefore, JavaScript is an “implementation of the ECMAScript specification”). The ECMAScript specification would continue to exist but would largely just sit in the background doing nothing, remaining unknown to a large proportion of the development community. It wasn’t until 2015, with the advent of the 6th edition of ES, that significant changes were made to the syntax and a serious attempt was made to modernize the JavaScript language. The changes proved to be wildly popular with the developer community at large, and these days transpilers such as Babel have removed any concerns about browser compatibility that would previously have dogged any upgrades to the language.
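To give a flavour of what that 6th edition brought, here is a minimal sketch of a few ES2015 additions (all the names used are purely illustrative):

```javascript
// A few of the ES2015 (ES6) additions that modernized the language.

// Block-scoped bindings replace the function-scoped `var`.
const greetingWord = 'Hello';

// Classes replace hand-rolled prototype wiring.
class Greeter {
  constructor(name) {
    this.name = name;
  }

  // Template literals replace string concatenation.
  greet() {
    return `${greetingWord}, ${this.name}!`;
  }
}

// Arrow functions keep callbacks terse.
const names = ['Ada', 'Brendan'];
const greetings = names.map(name => new Greeter(name).greet());

console.log(greetings); // [ 'Hello, Ada!', 'Hello, Brendan!' ]
```

Transpilers like Babel turn exactly this kind of code into ES5 that older browsers can run, which is why the new syntax caught on so quickly.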

And so a common theme emerges: that of a very fast-moving landscape. This is the nature of the internet, and JavaScript, good old dependable JavaScript, gets dragged along for the ride. In doing so, JavaScript itself has been infected with a touch of the same breakneck pace. There are always problems to solve with JavaScript – by the time the developer community has solved one, two others have emerged!

Enter the module

One of the biggest problems I faced in the early days of my JavaScript dabblage (over a decade ago now!) was this: I want to structure my files so that they are all neat and tidy, like my server-side code is, but this platform will not let me do it. There is so much cruft I have to write just to get simple operations nicely encapsulated. How do I share data between files? I know there’s loads of stuff I’m supposed to be doing to my JS code, such as minifying and concatenating and whatnot – how do I do all of that?

The way I eventually got around sharing data between files was to put a god object in the global scope. The way I got around the minification issue? Well… I just didn’t.
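For the curious, that “god object” workaround looked something like this – a sketch, with all names hypothetical:

```javascript
// The old-school workaround: one global "god object" acting as a shared
// namespace. Every script file bolts its data and functions onto it.
// (All names here are hypothetical, for illustration only.)

// file1.js – create the namespace unless another file got there first
var MyApp = MyApp || {};
MyApp.config = { apiUrl: '/api' };

// file2.js – extend the shared object
MyApp.buildEndpoint = function (resource) {
  return MyApp.config.apiUrl + '/' + resource;
};

// file3.js – anything, anywhere, can read (or clobber!) the lot
console.log(MyApp.buildEndpoint('users')); // prints "/api/users"
```

It works, but every file can read and overwrite every other file’s state, which is exactly the wild-west problem described earlier.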

And the reason I didn’t is that the internet didn’t really solve these problems until relatively recently. The concept of a small, reusable “module” didn’t become truly ubiquitous until the appearance of Angular in 2009 and the Dojo loader, and it didn’t leave those frameworks until 2011, when RequireJS brought Asynchronous Module Definition (AMD) to the masses. Around the same time, though, a competing module-definition specification (CommonJS, as used and popularized by Node.js) emerged and thus, like buses, we had three solutions to the problem where once we had none.

As if having three solutions wasn’t enough, some bright spark then came up with UMD (Universal Module Definition) as a way of unifying them – the concept being to create one module definition to rule them all. The reality, though, is as follows:

“Does it look like the user is using AMD? Okay cool, then you are an AMD module. Unless they aren’t, in which case see if it looks like they’re using CommonJS. They’re not using that either? Fuck it then, put it in global scope”

That, ladies and gentlemen, is the best we have at time of writing 🙂

Package managers and build tools

Bower – the “package manager for JavaScript” – was introduced only in 2012 and is already looking to be on the scrap heap. Since the cool new thing is to allow your module to be used both server- and client-side, NPM is fast making Bower obsolete. More of that classy JavaScript dethroning.

Perhaps the most hostile dethroning of all, though, appears to be happening among build tools. Grunt appeared in 2013 and changed the face of building JavaScript. Tasks such as concatenating and minifying JavaScript were now trivial, but so were things like generating JS docs and running tests – Grunt was enabling us all to be better JavaScript developers.

Despite Grunt releasing v1.0 earlier this year and still being in active development, it was largely abandoned in favour of Gulp in 2015 – and Gulp’s reign looks to be short-lived too, as the wind now blows in Webpack’s direction less than a year later.

Has all this made JavaScript better, then?

The reason I started to look more heavily into JavaScript was basically that it started to interest me. What was once a scrappy little scripting language I would go out of my way to avoid appeared to be going through something of a renaissance, and like most developers, I wanted to see what the fuss was about.

And suddenly the real reason I still hated JavaScript, in spite of the rest of the world, became obvious: all of these innovations, all of the problems being solved as quickly as they’re discovered, all of the tools and paradigms coming and then going… I had just kinda missed them all! Working largely on monolith applications had shielded me from the single-page application revolution, and my general avoidance of writing JavaScript had shielded me from build systems and package managers. The fact that all of these innovations came on so suddenly goes some way to explaining, if not excusing, how this was all able to pass me by.

The pace at which the JavaScript landscape is changing is to be utterly applauded – NPM, at time of writing, holds the world record for number of hosted packages, and is growing by an average of 400 new packages a day, four times the rate of its closest competitor.

This is where the ideas are thriving – and so it follows, this is the place to be.


The stupidest thing I ever saw a developer do

So this story comes up quite a lot – this is the stupidest thing I ever saw a developer do. I need to paraphrase a little because it’s been many years, but in a nutshell: we’d had a new guy start as a junior developer, and he was working through bug tickets, with instructions to run solutions past another developer prior to checking them in. He was dealing with an issue where the backtrace read something like this:

undefined method `capitalize' for nil:NilClass (NoMethodError)

Which is a relatively simple exception to explain and is probably, in my experience, the most common backtrace a developer will ever see. It usually means that someone has not coded defensively, or that something has been coupled to something else in a non-obvious way and someone didn’t realise. You can easily achieve the error yourself:

my_hash = {}
my_hash[:name].capitalize
# => undefined method `capitalize' for nil:NilClass (NoMethodError)

Anyway, after working on it for a few hours our developer came to me with his solution. Here is what he had done:

class NilClass
  def method_missing(method_sym, *arguments, &block)
    # swallow any message sent to nil and quietly return nil
    nil
  end
end

For non-rubyists, here is a translation of what that bit of code does:

Let’s change the meaning of nothing, so that whenever someone tries to do something to nothing, we just do nothing.

I am not entirely proud of my reaction at the time, which was basically to laugh at the guy, tell pretty much everyone in the office about what he’d done, and then continue to tell the story for many years to come. To his credit, this “fix” had actually, to the untrained eye, fixed the problem because the software was now behaving as expected.

Evidently, the developer in question had, upon seeing the backtrace, Googled it. He had found a Stack Overflow entry where someone else was having the same issue, and some helpful user had submitted the NilClass monkey patch as a joke answer to the question. Other helpful users had up-voted this answer, which led to our guy not identifying it as a joke.

I tried to track down a link to this Stack Overflow entry, but it was many years ago and I had no luck.

Like I said, I tell this story quite a lot – the most recent time was earlier today in fact, and it got me thinking. I worked through the steps that this particular developer had taken:

  • He googled the error
  • He found a “solution”
  • Without understanding it, he stuck it in the codebase

And it dawned upon me that maybe I had been a little harsh on the guy, because that little trio of bullet points right there is one I have followed a whole bunch of times myself, especially when I was first getting started as a dev.

I never had a mentor, and I regularly found myself completely out of my depth, being asked to do things I had no idea how to do. It was scary, it was stressful, and whilst being thrown in at the deep end does make you very good very quickly (provided you swim), it can lead to you becoming overly reliant on the internet for solutions. You become adept at “skimming over documentation”. It’s bad practice.

If this story feels kind of familiar to you, then know that you are not alone – almost every programmer I have ever worked with has at some point engaged in “Stack Overflow driven development”. Nowadays I always try to be aware of what I do not know. Taking the time to properly understand what a solution means is not wasted time.

And if the guy who I laughed at ever reads this, please consider this an apology.

Moving a WordPress site

If you ever find yourself in the unfortunate position of having to work with WordPress, you may have run into problems when deploying to live. Imagine you’ve got a WordPress site all set up locally and you’re ready to deploy to an expectant client. You might have used a custom theme, entered the content for the client and maybe diddled with the source files a bit, but it’s all working locally, so what could possibly go wrong? Surely this is just a case of changing the config file, FTP’ing up the scripts and restoring the database, right?

Erm… no.

Lots can go wrong.

  • You might find that once the site is live it still tries to direct you to localhost.
  • You might find you lose all your theme options if you’ve used a paid theme.
  • You might find that some of the content is gone, or hyperlinks within the content are still linking to localhost.


A lot of these issues are caused by hard-coded URLs stored in the database, and these don’t automatically update when you switch to live. A simple find-and-replace on the SQL file won’t do it either, because WordPress stores some plugin and theme options as serialized PHP data, which records the character length of each string – change a URL without going through the proper hooks and you corrupt those fields. Deploying a WordPress site to live *should* be a doddle, but things can go wrong. Below is the recipe we found works best when doing the first push to live.


Make sure the site is working as you want it locally. Once you’re sure it’s working and connected to a database and so on, back up both the site and the DB and keep them safe as the next steps will alter both.

Preparing the database

Save yourself many headaches and get hold of a copy of WP-CLI, the WordPress command line tool – installation instructions are on its site.

Once that’s installed, navigate your terminal into the root folder of the site you want to put live and enter the following commands (remembering to add a port to “localhost” if you’re using a different port, and replacing “” with your live domain name):

wp search-replace 'localhost' ''

You will then see all the tables that got updated (make sure there are some numbers in there, so you can be sure it actually found and replaced the domains!). Next, there are two settings in the wp_options table. You could update these from the menu in the WordPress dashboard, but you can’t get to the menu once the site is live, because the site uses those very settings to direct you to the homepage. If you try to update them in the dashboard of your local version, you’ll break your local version too, because that will then try to reach the live URL. It’s a catch-22! WP-CLI helps again; these two commands will update those settings:

wp option update home ''
wp option update siteurl ''

Then back it up. That’s the database you’ll restore to live.
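WP-CLI can handle the backup too – a one-line sketch (the filename is just an example, not from the original recipe):

```shell
# Export the prepared database to a SQL file, ready to restore on the live
# server. "live-ready.sql" is an example filename.
wp db export live-ready.sql
```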

Edit the config

Just one last script to go. In the site’s root folder you’ll find a file called “wp-config.php”. There are four database-related fields in there that you need to update to the live equivalents: “DB_NAME”, “DB_USER”, “DB_PASSWORD” and “DB_HOST”. If you don’t already know these, your hosting provider should be able to tell you (or it’ll tell you while you’re setting it all up).

Good to go!

With all that done, just FTP the files up to where you want them and import your local database to live. Update DNS and you should be cooking!

I hate wordpress. The irony that this is a WordPress blog is not lost on me 🙂

Migrating a Rails Postgres-based application to MySQL

We had an application which started out on nginx/Postgres but needed to migrate to Apache/MySQL (the short version of the reason: we had some PHP apps that were MySQL-compatible only, and we didn’t want to be running two servers). Anyway, here’s how we achieved it.

First get to the point where the application deploys to the new server. If you have your cap recipes set up like I do, that’ll mean you get the structure/indexes and so on all set up and you’ll basically have an “empty” application, save for your master data.

Once it’s there and running, we’re just going to need to do a data dump of the Postgres database, which you can do from your pg server with the following command:

pg_dump replace_with_database_name -f replace_with_database_name.sql --data-only --no-owner --no-acl --attribute-inserts --disable-dollar-quoting --no-tablespaces

Obviously, replace “replace_with_database_name” with your actual database name. What you’ll end up with is a file with *just* the commands required to do the inserts of your master data. There is some stuff you need to delete from there too, though, specifically:

  • The schema migration records (those will already be in the new application from when you deployed it)
  • Any seeded data (for the same reason)
  • Any references to Postgres system tables (there shouldn’t be any of these if you dumped with the flags above)

So basically, once you have that file, you can just import it into your MySQL database. I personally use phpMyAdmin: open that, paste the file’s contents into the SQL tab, hit Go, and there you go!
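If you’d rather avoid phpMyAdmin, the same import works from a terminal – a sketch, where “db_user” is a placeholder for your MySQL user:

```shell
# Import the cleaned-up dump straight into MySQL.
# "db_user" and "replace_with_database_name" are placeholders.
mysql -u db_user -p replace_with_database_name < replace_with_database_name.sql
```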

Concrete5 custom theme cheat sheet


This is a cheat sheet for setting up a custom theme in the current release of the awesome CMS “Concrete5” (which at time of writing is 5.5). This assumes that you have a bunch of static html files and you have your design all up and running within a file system. These are the basic steps to go through when you come to “concrete-fying” your site.

If you haven’t checked out Concrete5 yet, you definitely should!

This is not meant to be a comprehensive guide, it’s just a quick step-by-step. It assumes knowledge of Concrete5, PHP and basic front-end development. If anyone finds it useful then great.

These are, in order, the steps I go through to get a basic HTML template up and running in Concrete5.

Basic Setup

Put all your html/js/css in a folder under the concrete5/themes folder

Create an additional file called “description.txt” in that folder; the first line is the title of the theme, the second line is a short description.

Copy/paste the most “typical” html file in your collection, rename it “default.php”. Then within default.php…
Paste this after opening head tag:
<?php Loader::element('header_required'); ?>
Paste this before closing body tag:
<?php Loader::element('footer_required'); ?>
If you have a copyright line, use this to auto-populate date:
<?php print date('Y') . ' ' . SITE; ?>
Any relative links to stylesheets, replace with this:
<?php print $this->getStyleSheet('css/your_style_sheet.css'); ?>
Any other relative links (e.g. images, javascripts) should have this pasted before the path
<?php print $this->getThemePath(); ?>
Inside the theme folder, create a new folder called “elements” and two blank php files called “header.php” and “footer.php”

Grab all content that is “common to the top” of every page on your site (typically everything down to and including the opening body tag) and paste it into “header.php”

Grab all content that is “common to the bottom” of every page on your site (typically javascripts and the closing body/html tags) and paste it into “footer.php”

Where the header used to be, place the first of the following lines, and where the footer used to be, place the second:
<?php include("elements/header.php"); ?>
<?php include("elements/footer.php"); ?>

If you have a jQuery link in your templates, delete it – Concrete5 puts one in for you via header_required, and including two copies of jQuery will cause conflicts.

Create editable sections

Replace global/local areas of editable content with the following (changing the string for each one):

<?php $a = new Area('Main'); $a->display($c); ?>
<?php $a = new GlobalArea('HeroText'); $a->display($c); ?>

Once they’re added, install the theme and check you have it all set up correctly. This is a good time to splat any bugs, as common ones will arise at this point. Particularly common ones are:

  • Your CSS styles are conflicting with concrete 5’s css files (this will produce effects like the editable hover block / edit interface looking a bit screwy)
  • Your javascripts conflicting with the Concrete5 javascripts. NOTE: make sure you haven’t included jQuery twice (check the console for errors).

Sorting out the nav

Add all the pages in as you want them, then set up the nav. To do this:

  1. Create a folder – (root)/blocks/autonav/templates
  2. Copy the file “view.php” from (root)/concrete/blocks/autonav/view.php and put it in your newly created folder
  3. Rename the file to something like “header_nav_template.php”
  4. Edit the file to put in whatever custom html you want (the file is very well commented)
  5. Because we don’t really want the user adding or deleting the nav, we can hard-code this block into the view using the following (replace with any options you want):

$bt = BlockType::getByHandle('autonav');
$bt->controller->displayPages = 'top';
$bt->controller->orderBy = 'display_asc';
$bt->render('templates/header_nav_template'); // render using your custom template

The setup is similar for breadcrumb-style navs, but instead you are overriding “concrete/blocks/autonav/templates/breadcrumb” (which you copy into the same folder as above). Then, to hard-code this into your view, you use:

$autonav = BlockType::getByHandle('autonav');
$autonav->controller->orderBy = 'display_asc';
$autonav->controller->displayPages = 'top';
$autonav->controller->displaySubPages = 'relevant_breadcrumb';
$autonav->render('templates/breadcrumb'); // render using the breadcrumb template

Set up your various page types

Copy/paste your “default.php” page and rename each copy to one of the page types you want within your site, e.g. “2-column.php”, “home.php”. You can make any amendments to each of these at this stage (e.g. change markup, add any additional editable content areas and so on).

Once you have done this for all your templates, you can pretty safely delete any remaining html files within the theme folder.

If you uninstall/reinstall your theme, Concrete5 will see these new files and will add in the necessary database rows to get them all set up.

Submitting a form using jQuery AJAX

This code will work for any scenario where you want to post a form back to the server asynchronously and then render a response. The use of the “on” method here (jQuery 1.7 and above only) delegates the event, so the handler stays attached even if the form element is replaced later. The scenario I used this in was an “add comment” type form, where the form would fade out on submit and then fade back in, either with a message stating that the comment was received successfully, or re-displaying the form with error messages if it wasn’t.

    $(document).ready(function() {
        $(document).on("click", "#formSelector", function() {
            var $form = $(this).parents('form');

            $.ajax({
                type: "POST",
                url: $form.attr('action'),
                data: $form.serialize(),
                error: function(xhr, status, error) {
                    //Actions to take on error
                },
                success: function(response) {
                    //Actions to take on success
                }
            });

            return false; //This stops the actual postback (some browsers will still try)
        });
    });

Holy crap, Facebook!

Anyone who wants a raw, balls-out example of the true reality of the semantic web, or Open Graph in action, look no further than this. I made a troll-baiting post about Call of Duty:

Any of you cod-mw3 fans (pronounced “codmore 3”) fancy playing a real man’s game? The Arma franchise is 50% off on Steam at the moment 😀

I should point out that in my post I never once mentioned “Call of Duty” by name. Facebook not only recognizes what I was on about, but also tells me that my friends are talking about it:

Anyone who thinks that this will not influence people’s buying habits is living in sodding Disneyland! If this had been a post about a game I was actually interested in buying, I would suddenly be assured of its awesomeness by seeing my friends talking about it too. That’s suddenly several people to talk to about it, all of whom could influence me to buy it.

This sort of thing may be old hat, but it’s the first time it’s happened to me personally, and frankly it’s awesome!