JavaScript – It’s not shit!

Or: How I Learned to Stop Worrying and Love the module

JavaScript, it would be fair to say, has taken a lot of flak over the years, and I would be lying if I said that yours truly hadn’t been a relatively prolific detractor pretty much since I first started using the language back in 2006. To me, JavaScript represented the “sloppy end” of web development.

JavaScript was where my application was no longer allowed to be beautiful. JavaScript was the “dumping ground” where the average code base ended up looking like the Wild West after only a relatively short incubation time. JavaScript was where lines upon lines of bespoke, nuanced code were added to hundreds of obscure, structureless script files, held together by dozens of “black box” libraries that some other sucker had suffered to create.

JavaScript was, by and large, something I tried to write as little of as possible. Nobody was more surprised than me to observe (albeit peripherally) the ascension in popularity of Node.js, with its promise of being able to write JavaScript “everywhere”. Further confounded was I by the subsequent rise of technologies such as Elasticsearch and MongoDB, which promised a JavaScript-like syntax right back at your data layer. To me, this sounded like the kind of thing that belongs in one of the deeper, darker circles of hell – perhaps a place to condemn those developers who don’t write unit tests.

Had the whole world gone mad?

However, I stand before you a changed man! I put it to you that JavaScript is not only “not shit”, but that the JavaScript ecosystem is now, more than ever, one of the most vibrant and exciting ecosystems in the development sphere.

So what changed my mind? To explain precisely where I used to stand, here is a quick history of JavaScript, and of my relationship with it.

A brief history of JavaScript

JavaScript was born in 1995, when Netscape, creators of the then-dominant “Navigator” browser (launched just one year earlier), had the idea of creating a “glue” language that would help web pages become more dynamic. The internet was still in its infancy at this stage, but crazy concepts such as “shopping online” were starting to come to fruition (the first iteration of amazon.com would go live that very same year).

Internet Explorer 3 (launched in 1996) was the first of Microsoft’s browsers to support JavaScript (via its own JScript implementation), and within five years Microsoft had toppled Netscape to make Internet Explorer 6 the world’s most popular browser. Open-source upstarts Mozilla then launched the Firefox browser (1.0 arrived in 2004), which also shipped with JavaScript and made Internet Explorer into a laughing stock among the web-browsing community. That is, until Google launched its Chrome browser in 2008 – once again shipping with JavaScript (by this point completely ubiquitous) – which would go on to dethrone them both.

Rewind a bit: in 1997 a specification known as ECMAScript (ES) was created – a specification that was actually based on JavaScript (retroactively, therefore, JavaScript is an “implementation of the ECMAScript specification”). The ECMAScript specification would continue to exist but would largely just sit in the background, doing nothing and remaining unknown to a large proportion of the development community. It wasn’t until 2015, with the advent of the 6th edition of ES, that significant changes were made to the syntax and a serious attempt was made to modernize the JavaScript language. The changes proved wildly popular with the developer community at large, and these days transpilers such as Babel have removed any concerns about browser compatibility that would previously have dogged an upgrade to the language.

And so a common theme emerges: that of a very fast-moving landscape. This is the nature of the internet, and JavaScript – good old dependable JavaScript – gets dragged along for the ride. In doing so, JavaScript itself has been infected with a touch of the same breakneck pace. There are always problems to solve with JavaScript – by the time the developer community has solved one, two others have emerged!

Enter the module

One of the biggest problems I faced in the early days of my JavaScript dabblage (over a decade ago now!) was this: I want to structure my files so that they are all neat and tidy, like my server-side code is, but this platform will not let me do it. There is so much cruft I have to write just to get simple operations nicely encapsulated – how do I share data between files? I know there’s loads of stuff I’m supposed to be doing to my JS code, such as minifying and concatenating and whatnot – how do I do all of that?

The way I eventually got around sharing data between files was to put a “god object” in the global scope – something like the sketch below. The way I got around the minification issue? Well… I just didn’t.
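
For anyone who never had the pleasure, the global god object looked something like this (a minimal sketch – the file and property names are hypothetical):

// loaded first – one big object dumped into the global scope
window.App = window.App || {};

// script-one.js (hypothetical file)
App.currentUser = { id: 42, name: "Dave" };

// script-two.js – silently depends on script-one.js having run already
App.greetUser = function () {
  return "Hello, " + App.currentUser.name;
};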

And the reason I didn’t is that the internet didn’t really solve these problems until relatively recently. The concept of a small, reusable “module” didn’t become truly ubiquitous until the appearance of Angular in 2009 and the Dojo loader, and it didn’t leave those frameworks until 2011, when RequireJS brought the Asynchronous Module Definition (AMD) to the masses. Around the same time, though, a competing module-definition specification (CommonJS, as used and popularized by Node.js) emerged, and thus, like buses, we had three solutions to the problem where once we had none.
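
Side by side, the two styles look roughly like this (a sketch – “./cart” is a hypothetical dependency, and each snippet would normally live in its own file):

// AMD (RequireJS): dependencies are declared up front and loaded asynchronously
define(["./cart"], function (cart) {
  return {
    checkout: function () { return cart.total(); }
  };
});

// CommonJS (Node.js): dependencies are required synchronously,
// and exports hang off module.exports
var cart = require("./cart");
module.exports = {
  checkout: function () { return cart.total(); }
};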

As if having three solutions wasn’t enough, some bright spark then came up with UMD (Universal Module Definition) as a way of unifying them – the concept being to create one module definition to rule them all. The reality, though, is as follows:

“Does it look like the user is using AMD? Okay, cool, then you are an AMD module. Unless they aren’t, in which case see if it looks like they’re using CommonJS. They’re not using that either? Fuck it then, put it in the global scope.”

That, ladies and gentlemen, is the best we have at the time of writing 🙂
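
In code, a typical UMD wrapper looks roughly like this (a sketch of the general pattern rather than any particular library’s boilerplate – the module name and contents are made up):

(function (root, factory) {
  if (typeof define === "function" && define.amd) {
    // Looks like AMD – register as an anonymous module
    define([], factory);
  } else if (typeof module === "object" && module.exports) {
    // Looks like CommonJS – export via module.exports
    module.exports = factory();
  } else {
    // Neither? Fall back to a global (the name here is made up)
    root.myModule = factory();
  }
}(this, function () {
  // the actual module body
  return {
    greet: function () { return "hello"; }
  };
}));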

Package managers and build tools

Bower, the “package manager for javascript”, was introduced only in 2012 and is already looking to be on the scrap heap – since the cool new thing is to allow your module to be used both server- and client-side, NPM is fast making Bower obsolete. More of that classy JavaScript dethroning.

Perhaps the most hostile dethroning of all, though, appears to be happening among build tools. Grunt appeared in 2013 and changed the face of building JavaScript: tasks such as concatenating and minifying JavaScript were suddenly trivial, but so were things like generating JS docs and running tests – Grunt was enabling us all to be better JavaScript developers.
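
By way of example, a minimal Gruntfile for concatenation and minification might look something like this (assuming the grunt-contrib-concat and grunt-contrib-uglify plugins are installed; the file paths are hypothetical):

// Gruntfile.js
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      // hypothetical source and output paths
      dist: { src: ["src/**/*.js"], dest: "dist/app.js" }
    },
    uglify: {
      dist: { files: { "dist/app.min.js": ["dist/app.js"] } }
    }
  });

  grunt.loadNpmTasks("grunt-contrib-concat");
  grunt.loadNpmTasks("grunt-contrib-uglify");

  // running `grunt` now concatenates and then minifies
  grunt.registerTask("default", ["concat", "uglify"]);
};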

Despite releasing v1.0 earlier this year and still being in active development, Grunt was largely abandoned in favour of gulp in 2015 – and gulp’s reign looks to be very short-lived too, as less than a year later the wind appears to be blowing in webpack’s direction.

Has all this made JavaScript better, then?

I started to look more closely into JavaScript basically because it had started to interest me. What was once a scrappy little scripting language I would go out of my way to avoid appeared to be going through something of a renaissance, and like most developers, I wanted to see what the fuss was about.

And suddenly the real reason I still hated JavaScript, in spite of the rest of the world, became obvious – all of these innovations, all of the problems being solved as quickly as they’re discovered, all of the tools and paradigms coming and then going… I just kinda missed them all! Working largely on monolithic applications had shielded me from the single-page application revolution, and my general avoidance of writing JavaScript at all had shielded me from build systems and package managers. The fact that all of these innovations came on so suddenly goes some way to explaining, if not excusing, how it was all able to pass me by.

The pace at which the JavaScript landscape is changing is to be utterly applauded – NPM, at the time of writing, hosts more packages than any other package registry (and is growing by an average of 400 new packages a day, four times the rate of its closest competitor, according to modulecounts.com).

This is where the ideas are thriving – and so it follows, this is the place to be.

From Rails to Node.js – observations

As Rails developers, we are somewhat spoiled when it comes to productivity. For every conceivable piece of work you might want to undertake, a developer somewhere will already have spent a lot of time figuring out the best way to do it, and in most cases will have written a nice little gem that takes all of the donkey work out of the implementation. This, combined with the beauty and elegance of the Ruby language, is the main reason that most Rails developers don’t want to code in anything but their beloved framework.

Everything else just looks like a lot of faffing, right?

So when I sat down to teach myself Node.js, the relatively new kid on the block (released in 2009, with ES6 support added only in September 2015), the first thing I tried to do was track down the equivalent packages on NPM that would allow me to enjoy the same levels of productivity that I did in Rails.

Ultimately I would realise that trying to find equivalents between these two platforms is counter to productive learning. They are different, and it is good that they are different, because it means the choice of platform becomes an informed one rather than just a personal preference. That said, there are a number of de facto Node packages that mirror the Rails ecosystem. Here’s a rundown (note: this is a living list and may be updated at any time):

Functionality            Ruby/Rails                  Node.js
Pagination               Kaminari                    Baked into Sequelize; for Mongoose use Mongoose Paginate
ORM (relational)         ActiveRecord                Sequelize
ORM (Mongo)              Mongoid                     Mongoose
Web frameworks           Rails, Sinatra              Express, Koa
Elasticsearch mapping    Chewy                       Mongoosastic (for Mongo)
Dev env vars             Dotenv                      Dotenv
Procfile management      Foreman                     Node Foreman
Logging                  Baked into Rails / stdlib   Winston, Morgan
TDD/BDD                  RSpec                       Mocha
WebSockets               ActionCable                 Socket.io

Then there are some packages that are really only applicable to the Node/JS landscape but which are just as indispensable. You can get a feel for these on the NPM site itself, which provides a list of the most-starred packages (a good indication of what the community is using). Some highlights, with a quick taste of a couple of them after the list, include:

  • Async – write cleaner asynchronous code
  • Moment – date manipulation / parsing
  • Underscore – Functional Programming library
  • Q – CommonJS Promises
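
For a flavour of what a couple of these look like in use (the values here are made up):

var moment = require("moment");
var _ = require("underscore");

// date parsing and manipulation with Moment
var nextWeek = moment("2016-06-01").add(7, "days").format("D MMM YYYY");

// functional helpers from Underscore
var names = _.pluck([{ name: "ada" }, { name: "bob" }], "name"); // => ["ada", "bob"]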

Conceptually, here are the main differences I have noticed between Rails and Node.js.

  • Node packages tend to be smaller and do less than Ruby gems. But there are more of them! The Node community really seems to have embraced the concept of breaking a problem down into the smallest possible parts. Ruby gems are often accused of being bloated (Rails most of all) – with Node packages it’s the opposite, with ‘lightweight’ and ‘micro’ often appearing in a package’s description.
  • The Node community is ridiculously un-opinionated. Whereas Rails will try to shoehorn you into doing things ‘the Rails way’, Node gives you no such boundaries. In my opinion this is both a curse and a blessing. Without the constraints of convention, we are free to choose the most appropriate architecture for a given problem. At the same time, your average Rails developer will join a new project already knowing roughly how the application will hang together, whereas with Node you have no idea what to expect!
  • That said, the Node community does have certain opinions that inform style and architecture decisions. These include conventions such as ‘error-first callbacks’, ‘use promises/streams if they are available’ and ‘avoid nested callbacks’ (there’s a small sketch of the error-first convention after this list).
  • The Node community appears to favour MongoDB over RDBMSs like Postgres or MySQL. I can understand why – it takes the concept of doing ‘JavaScript everywhere’ right back to the database layer! With disk space no longer coming at a premium, NoSQL has risen in popularity; however, relational databases are still well catered for by Node and remain the most popular storage solutions.
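
The error-first callback convention, for anyone who hasn’t met it, looks like this (a minimal sketch – the file being read is hypothetical):

var fs = require("fs");

// the callback's first argument is always the error (or null); the result follows
fs.readFile("./config.json", "utf8", function (err, contents) {
  if (err) {
    // handle or propagate the error before touching the result
    return console.error(err);
  }
  console.log(contents);
});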

This article will expand as I explore the Node.js landscape more fully – check back to stay up to date.

The stupidest thing I ever saw a developer do

So, this story comes up quite a lot – the stupidest thing I ever saw a developer do. I need to paraphrase a little because it’s been many years, but in a nutshell, we’d had a new guy start as a junior developer and he was working through bug tickets, with instructions to run solutions past another developer prior to checking them in. He was dealing with an issue where the backtrace read something like this:

undefined method `capitalize' for nil:NilClass (NoMethodError)

This is a relatively simple exception to explain and is probably, in my experience, the most common backtrace a Ruby developer will ever see. It usually means that someone has not coded defensively, or that something has been coupled to something else in a non-obvious way and somebody didn’t realise. You can easily reproduce the error yourself:

my_hash = {}
my_hash[:key].capitalize # my_hash[:key] is nil, so this raises the NoMethodError

Anyway, after working on it for a few hours our developer came to me with his solution. Here is what he had done:

class NilClass
  def method_missing(method_sym, *arguments, &block)
    return
  end
end

For non-rubyists, here is a translation of what that bit of code does:

Let’s change the meaning of nothing, so that whenever someone tries to do something to nothing, we just do nothing.

I am not entirely proud of my reaction at the time, which was basically to laugh at the guy, tell pretty much everyone in the office about what he’d done, and then continue to tell the story for many years to come. To his credit, this “fix” had actually, to the untrained eye, fixed the problem because the software was now behaving as expected.

Evidently, the developer in question had, upon seeing the backtrace, Googled it. He had found a Stack Overflow entry where someone else was having the same issue, and some helpful user had submitted the NilClass monkey patch as a joke answer to the question. Other helpful users had up-voted this answer, which led to our guy not identifying it as a joke.

I tried to track down a link to this Stack Overflow entry, but it was many years ago and I had no luck.

Like I said, I tell this story quite a lot – the most recent time was earlier today, in fact, and it got me thinking. I worked through the steps that this particular developer had taken:

  • He googled the error
  • He found a “solution”
  • Without understanding it, he stuck it in the codebase

And it dawned upon me that maybe I had been a little harsh on the guy, because that little trio of bullet points right there is one I have followed a whole bunch of times myself, especially when I was first getting started as a dev.

I never had a mentor, and regularly found myself completely out of my depth, being asked to do things I had no idea how to do. It was scary, it was stressful, and whilst being thrown in at the deep end does make you very good very quickly (provided you swim), it can lead to you becoming overly reliant on the internet for solutions. You become adept at “skimming over documentation”. It’s bad practice.

If this story feels kind of familiar to you, then know that you are not alone – almost every programmer I have ever worked with has at some point engaged in “Stack Overflow driven development”. Nowadays I always try to be aware of what I do not know – taking the time to properly understand what a solution means is not wasted time.

And if the guy who I laughed at ever reads this, please consider this an apology.

Ruby’s instance_exec method

instance_exec is a method you can use to change the “scope” of a block. One of the great advantages of doing this is that it can make your code more readable, particularly when writing a Domain Specific Language (DSL). So how does instance_exec work and why would you ever want to write a DSL?

instance_exec is pretty easy to demonstrate by way of a silly example. Let’s suppose you are running a cattery. For anyone who doesn’t like silly examples, you can think of the cats as some kind of unmanaged resource (for example, a database connection) and the cattery as a pool of those resources (e.g. a database connection pool). Any cats that manage to escape the cattery can therefore be considered a resource leak that will harm your application and ultimately lead to sleepless nights and long days.

It might actually be easier on stress levels to just think of them as cats.

First, we define a simple class to represent a “cat”:

class Cat
  def speak!
    puts "meow!"
  end
end

and then another class to define the cattery itself, which internally stores an array of the cats currently in the cattery as well as providing an interface to add a new cat. Additionally, the cat should express itself by “speaking” after it is added to the cattery:

class Cattery
  def initialize
    @cats = []
  end

  def add_cat(cat)
    @cats << cat
    cat.speak!
  end
end

Let’s try it out:

cattery = Cattery.new
cattery.add_cat(Cat.new)

=> "meow!"

Lovely stuff.

Now imagine you discover that your cats keep escaping, and you decide that the easiest way to stop this happening is to install a “door” on the cattery (assuming that the resident cats have not yet figured out how to open doors). So, you change your Cattery class accordingly, providing an interface that allows the door to be opened and closed. You further alter your “add cat” method so that it will not allow a cat to be added unless the door is open:

class Cattery
  def initialize
    @cats = []
    @door_open = false
  end

  def open_door
    @door_open = true
  end

  def close_door
    @door_open = false
  end

  def add_cat(cat)
    if @door_open
      @cats << cat
      cat.speak!
    else
      raise "Can't add a cat when the door is not open!"
    end
  end
end

Sweet. Now when you try adding a cat when the door is closed you get an error:

cattery.add_cat(Cat.new)
=> :in `add_cat': Can't add a cat when the door is not open! (RuntimeError)

cattery.open_door
cattery.add_cat(Cat.new)
=> "meow!"

Great! Nice and secure. Only one problem… the developer who wrote that last piece of code forgot to close the door afterwards (d’oh!). How do we ensure that the door always gets closed properly after a cat is added?

Maybe you could use the begin/ensure syntax (try/finally for non-rubyists) within the add_cat method? Well, if you go down that road then you’re breaking the single responsibility principle for that method – it’s called “add_cat”, so adding a cat is all it should really be doing… but now it’s responsible for both adding the cat AND opening and closing the door. The method would also become coupled to the door interface, which may cause future issues if we ever decide to change that interface or extract the door out into a class of its own. You might think that you could just remember to do a begin/ensure every time you add a cat:

begin
  cattery.open_door
  cattery.add_cat(Cat.new)
ensure
  cattery.close_door
end
=> "meow!"

But then you’re pretty much back in the boat you were in originally, where you were relying on programmers to remember to close the door after themselves… only this time you have twice as much code!

At this point, Rubyists will start considering blocks. You might find yourself thinking along these lines:

* We could create a method to safely open the door and close the door afterwards, yielding to a block.
* We could remove the open/close methods on the cattery so people need to use that method to add a cat.

Such a method might look like this:

class Cattery
  ...

  def safely_open_door(&block)
    begin
      @door_open = true
      yield self
    ensure
      @door_open = false
    end
  end
 
  ...
end

Purrrfect. This would allow you to do the following:

cattery.safely_open_door do |this_cattery|
  cattery.add_cat(Cat.new)
  this_cattery.add_cat(Cat.new)
end

And you could stop right there – the solution works and the cats are safe. However, the code is not beautiful. We have two names for the same object floating around, both inside and outside of the block, which is confusing. It looks messy. Wouldn’t it be nicer if we could, just within the scope of the block, consider ourselves to be in the “cattery domain”, where we could talk exclusively to the cattery without worrying about what’s going on outside of the block?

This is where instance_exec comes into play.

instance_exec changes the scope of the code within the block itself, and with it the value of self inside the block. In its current form, the block’s self is the top-level “main” object, and the block closes over the surrounding scope, which is why we are able to access the “cattery” variable.

What would be really nice is if this block were to be scoped to the cattery itself – any code within it would therefore be specific to the domain of dealing with a cattery. We can actually achieve this with one simple change to the code we have already:

class Cattery
  ...

  def safely_open_door(&block)
    begin
      @door_open = true
      instance_exec(&block)
    ensure
      @door_open = false
    end
  end
 
  ...
end

All we have changed is “yield self” to “instance_exec(&block)”. The block will still be called (you can even pass additional arguments to instance_exec if you want to, and they’ll be passed on to the block too). Making this small change allows us to finally write the code we want to:

cattery.safely_open_door do
  add_cat(Cat.new)
  add_cat(Cat.new)  
end

Or in one line:

cattery.safely_open_door { add_cat(Cat.new) }

Gorgeous.

So why would you ever want to do this? Well, the easiest way of demonstrating why you might want to create a DSL is to look at one you have already been using, maybe without even realising it. Try to imagine life without this syntax:

Rails.application.routes.draw do
  resources :products do 
    resources :comments
  end
end

Yup. The config/routes.rb file in a Rails application uses exactly the techniques that have been described here to create a language for you to talk about routing. If you output the result of “self” within the routing block you’ll find that it’s one of these:

 #<ActionDispatch::Routing::Mapper:0x000000040a4548>

Anyone who’s been in the game long enough to remember how routes worked in Rails 2 may recall that routing used to work like this:

ActionController::Routing::Routes.draw do |map|
  map.connect '/products', :controller => 'products', :action => 'index'
  map.connect '/products/:id', :controller => 'products', :action => 'show'
end

And you’ll be hard-pressed to find anyone who wants to go back to that syntax. In conclusion then, instance_exec can help you:

* Write neater code
* Write less code
* Create rich Domain Specific Languages for you and other developers to use

Ruby gotcha: setting default Date, Time and DateTime formats

If you’re using dates, times and datetimes throughout your application, you might find yourself wanting to specify a default format for each. The default formats come back as follows:

[1] pry(main)> Date.today.to_s 
=> "2014-10-28" 
[2] pry(main)> Time.now.to_s 
=> "2014-10-28 13:07:15 +0000" 
[3] pry(main)> DateTime.now.to_s 
=> "2014-10-28T13:07:19+00:00"

In a commercial web application, it won’t be long before you have a manager knocking on your door asking you to change this format to something a little more human-friendly. It is true that you have a strftime method that allows you to do this:

[1] pry(main)> Time.now.strftime("%H:%M") 
=> "13:41"

But if you’re thinking like a programmer, you won’t want to copy/paste that format all over your application; you’ll want to change it in one place and have that change adopted application-wide. In a Rails application you might reason that this could be done in an initializer:

# config/initializers/time_formats.rb 
Time::DATE_FORMATS[:default] = "%H:%M" 
Date::DATE_FORMATS[:default] = "%e %b %Y" 
DateTime::DATE_FORMATS[:default] = "%e %b %Y"

Nice and tidy! So let’s check that out in the console…

[1] pry(main)> Time.now.to_s 
=> "13:45" 
[2] pry(main)> Date.today.to_s 
=> "28 Oct 2014" 
[3] pry(main)> DateTime.now.to_s 
=> "13:45"

Oh dear! Although the Time and Date formats have worked correctly, it appears the DateTime format is broken and is only giving us back the time! This is the gotcha – when formatting itself, a DateTime effectively defers to Time and comes back with Time’s default format rather than its own. You can see DateTime’s stock to_s in the source code: http://www.ruby-doc.org/stdlib-1.9.3/libdoc/date/rdoc/DateTime.html#method-i-to_s

dt_lite_to_s(VALUE self) { 
  return strftimev("%Y-%m-%dT%H:%M:%S%:z", self, set_tmx); 
}

So how do we get around it?

Solution

There are probably a whole bunch of solutions to this issue, but one is to use one of Ruby’s more distinctive features and monkey-patch the DateTime class:

# config/initializers/time_formats.rb 
Date::DATE_FORMATS[:default] = "%e %b %Y" 
Time::DATE_FORMATS[:default] = "%H:%M" 
DEFAULT_DATETIME_FORMAT = "%e %b %Y %H:%M" 
class DateTime 
  def to_s 
    strftime DEFAULT_DATETIME_FORMAT 
  end 
end

Voilà:

[1] pry(main)> Time.now.to_s 
=> "13:52" 
[2] pry(main)> Date.today.to_s 
=> "28 Oct 2014" 
[3] pry(main)> DateTime.now.to_s 
=> "28 Oct 2014 13:53"

I’d be interested to hear about other solutions to this gotcha if anyone has any ideas.

Moving a WordPress site

If you ever find yourself in the unfortunate position of having to work with WordPress, you may have run into problems when deploying to live. Imagine you’ve got a WordPress site all set up locally and you’re ready to deploy to an expectant client. You might have used a custom theme, entered the content for the client and maybe diddled with the source files a bit, but it’s all working locally, so what could possibly go wrong? Surely this is just a case of changing the config file, FTPing up the scripts and restoring the database, right?

Erm… no.

Lots can go wrong.

  • You might find that once the site is live it still tries to direct you to localhost.
  • You might find you lose all your theme options if you’ve used a paid theme.
  • You might find that some of the content is gone, or hyperlinks within the content are still linking to localhost.


A lot of these issues are caused by hard-coded URLs going into the database, and these don’t automatically update when you switch to live. A simple find-and-replace on the SQL file won’t do it either, because some plugins and themes store their settings as serialized PHP data, which records the length of each string – change the length of a URL without going through the proper hooks and you’ll corrupt the data. Deploying a WordPress site to live *should* be a doddle, but things can go wrong. Below is the recipe we found works best when doing the first push to live.

Preparation

Make sure the site is working as you want it locally. Once you’re sure it’s working and connected to a database and so on, back up both the site and the DB and keep them safe as the next steps will alter both.

Preparing the database

Save yourself many headaches and get hold of a copy of WP-CLI, the WordPress command-line tool (http://wp-cli.org/) – installation instructions are on the site.

Once that’s installed, navigate your terminal into the root folder of the site you want to put live and enter the following command (remembering to add a port to “localhost” if you’re using a non-standard one, and replacing “whatever.com” with your live domain name):

wp search-replace 'localhost' 'whatever.com'

You will then see all the tables that got updated (make sure there are some numbers in there so you can be sure it really did find and replace the domains!). Next, there are two settings in the wp_options table. You can normally update these from the menu in the WordPress dashboard, but you can’t get to the menu once the site is live, because the site uses these settings to direct you to the homepage and so on – so you need to change them before going live. If you try to update them in the dashboard of your local version, it’ll break your local version too, because THAT will then try to redirect to the live URL. It’s a catch-22! WP-CLI helps again; these two commands will update those settings:

wp option update home 'http://whatever.com'
wp option update siteurl 'http://whatever.com'

Then back it up. That’s the database you’ll restore to live.

Edit the config

Just one last file to go. In the site’s root folder you’ll find a file called “wp-config.php”. There are four database-related constants in there that you need to update to their live equivalents: “DB_NAME”, “DB_USER”, “DB_PASSWORD” and “DB_HOST”. If you don’t already know these, your hosting provider should be able to tell you (or it’ll tell you while you’re setting it all up).

Good to go!

With all that done, just FTP the files up to where you want them and import your local database to live. Update DNS and you should be cooking!

I hate WordPress. The irony that this is a WordPress blog is not lost on me 🙂