Talking in public

14 Mar

Recently, I had the pleasure to be invited to talk at Code Harbour about “The Command Line”.

Code Harbour is usually based at The Workshop in Folkestone, a really cool new building featured quite heavily in the media for its indoor slide!

(If you’re in Kent, I’d highly recommend attending the next meetup; it’s a good mix of tech types with lively discussions and pizza!)

I was a little nervous accepting the invitation to speak as I don’t consider myself an “expert”. For this reason, I said as much in one of my slides, mainly to cover myself in case things went wrong. Interestingly, it didn’t really dawn on me until I started speaking that it doesn’t matter.

Luckily, I was in good company. I’d attended a few Code Harbour meetups in the past and I knew most of the guys so it wasn’t too traumatic.

I’ve often believed that to talk in front of anybody, you had to be an “expert”, or at least have some public speaking experience. For some reason, I’d always assumed that the people I’d seen at conferences, in person or in YouTube coverage, were at the top of their respective fields.

Turns out, to talk in public, you just need to step up & well… talk in public!

“When you are an expert, it’s hard to explain what you know. It’s hard to explain your methodology etc. This is a shortcoming.”
Angelina Fabbro: JavaScript Masterclass

Preparing for my talk, I found I was learning MORE about the command line, about things I use EVERY DAY, than ever before. I’d actually improved on the things I do daily, away from the place where I use them most!

This got me to thinking… if I want to learn a new subject… could I learn a little, then simply do a talk about it? How risky would this be?

Obviously I’m not suggesting you go out & sign up to the biggest conference you can find just so you can learn a subject, that’s just crazy… I just believe that if you are struggling to grasp a concept, or just want to give your learning a boost… do a talk, even if it’s among friends / family / work mates.

Talking in public will force you to re-frame the subject in such a way that your brain will get a cognitive “kick” and you will reap the benefits.

Good luck!

To defer or not to defer

18 Feb

In a recent exercise to help improve page load speeds, we went through some of our sites, adding the defer attribute to some of our script tags.

<script src="localFile.js" defer></script>

It’s worth noting that you should not add the defer attribute to any file or library that you depend on in any inline scripts. For example:

<script src="//" defer></script>
<script>
  $( document ).ready( function() {
    // Do some javascript here. This is a simple example
    alert( 'The DOM is now Ready' );
  } );
</script>

will fail, reporting that $ is not defined. This is easily fixed by removing the defer attribute from the jQuery script tag.
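Assuming the inline code only needs to run once the DOM is ready, one alternative sketch keeps defer and delays the inline code instead: the spec guarantees that deferred scripts have finished executing before DOMContentLoaded fires. (The jQuery path below is a placeholder, not a real URL.)

```html
<!-- "jquery.min.js" is a placeholder path -->
<script src="jquery.min.js" defer></script>
<script>
  // Deferred scripts have finished by the time DOMContentLoaded fires,
  // so jQuery is available inside this listener.
  document.addEventListener( 'DOMContentLoaded', function() {
    alert( 'The DOM is now Ready' );
  } );
</script>
```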

After adding the defer attribute across some of our pages, we started to notice some random behaviour.

We do have some scripts that depend on previously loaded scripts. This is why we chose defer rather than the async attribute, based on the theory that defer should execute the scripts in the order in which they are found in the document.

When defer was first introduced, many browsers implemented it erratically, although Firefox’s implementation was correct. Our testing showed that Firefox’s current implementation (as of Firefox 27.0.1) appears to load the deferred scripts in a random order at least one in every ten page loads, but Chrome and Safari seem to work as per the specification.

Our solution is simple: until this issue is resolved, make sure any scripts which are depended on later are not deferred.
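As a sketch of that workaround (the script names are placeholders, not our real files):

```html
<!-- jquery.min.js is depended on later, so it is NOT deferred -->
<script src="jquery.min.js"></script>
<!-- app.js depends on jQuery, but nothing depends on it, so defer is safe -->
<script src="app.js" defer></script>
```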

Google Analytics Callbacks

2 Jan

If you’re using classic analytics from Google (ga.js), you might not know that it actually provides a callback within the track event function.

Use the code below to set your callback:

_gaq.push(['_set', 'hitCallback', function() {
  // This function runs once the tracking hit has been sent
}]);

This callback will stay valid for any further pushes, but can be cleared with:

_gaq.push(['_set', 'hitCallback', null]);

Hope this helps

Using Google analytics to track ‘Call Us’ actions

2 Jan

Now that we’re all working in the ‘mobile first’ paradigm, the ‘Call Us’ button is almost a staple of every site.

Thankfully this is as easy as linking to tel:01234 567890, just like we do with mailto: links. But how do we track clicks on these links? Normally we’d attach an event listener to the click and process the tracking code there, before the browser launches the external application to deal with the original link.

If you’ve noticed that your tracking data doesn’t look right or you’re missing ‘Calls’ in your analytics, then you’ll want to keep reading.

The browser treats these like any normal URL link and starts the process of navigating away from the page, cancelling any outstanding network requests.

Not a problem if your chosen tracking service uses a synchronous, blocking process, but it will be if you’re using async services such as Google Analytics, because these network requests don’t get to finish posting before the browser cancels them.

So, how do we solve this problem? We process our tel: link in the callback feature provided by Google Analytics.

$( document ).on( 'click', '#callUsButton', function( event ) {
  // Stop the browser navigating away before the tracking hit is sent
  event.preventDefault();
  var originalURL = $( this ).attr( 'href' );
  // Navigate to the tel: link only once the event has been tracked
  _gaq.push( [ '_set', 'hitCallback', function() { location.href = originalURL; } ] );
  // category, action and label are whatever you normally track with
  _gaq.push( [ '_trackEvent', category, action, label ] );
} );
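One caveat: if the tracking request is blocked (by an ad blocker or a filtering proxy), hitCallback may never fire and the visitor never reaches the tel: link. A common safeguard is to also trigger the navigation after a short timeout, making sure it only runs once. Here is a minimal sketch of such a helper (callOnce is a made-up name, not part of any Google API):

```javascript
// Wrap a function so it runs at most once, and also fire it after a
// timeout in case the "real" trigger (e.g. hitCallback) never arrives.
function callOnce( fn, timeoutMs ) {
  var called = false;
  var wrapped = function() {
    if ( called ) { return; }
    called = true;
    fn();
  };
  setTimeout( wrapped, timeoutMs ); // fallback trigger
  return wrapped;
}
```

You would then pass callOnce( function() { location.href = originalURL; }, 1000 ) as the hitCallback instead of the bare function.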

If you’re using the newer Universal Analytics (analytics.js), the callback is set differently, so check the relevant documentation for details on how to use it there.

Grunt out your profanity

27 Nov

I spotted an interesting tweet a few weeks back where Ian Davis mentioned that “deviantART’s CSS broke for some people because one of their CSS files had f*** in a stylesheet comment”. On reading deviantART’s take on the situation on their blog, they noted: “The irony here is that we didn’t have to do anything to fix this bug”. The reason for this bold statement is that their LESS to CSS compilation strips out comments.

To give you some more background on why the CSS was blocked, we need to understand that some people’s access to the internet goes through web filtering: everything from companies running Websense on their company internet through to public workstations in libraries and schools. Many people’s access to the internet is filtered, and with CSS and JS not being compiled and the raw code being delivered to the user’s browser, any profanity in that code would be seen by the filtering software. If your CSS gets filtered, the user may be blocked from receiving it and would see the HTML of your site with no styles applied.

Now there are two reasons why I find this interesting. Firstly, I have read enough profanity in code over the years to know that it does not only occur in comments. You can easily have a rude CSS class which would not be removed by taking out comments, and you can have an offensive variable or function name in Javascript, which again would not be removed by any compilation that strips comments. The second reason my interest was piqued is because I had just returned from FOWA Conference, where I was fortunate enough to attend Addy Osmani’s workshop on “The Front-end Tooling Masterclass”. In this workshop he covered Yeoman, Grunt, and Bower, and I was keen to find an excuse to play with Grunt.

Testing assumptions
So my mission began. First, I wanted to prove that stripping comments did not prevent profanity from getting into production CSS or JS. The quickest way to do this was to start a new project following the grunt getting started guide. I copied the uglify example and created a test Javascript file with the following code in it:

// Some test JS to prove this comment gets stripped but the profanity does not.
var testFunc = function() {
  console.log( 'f**k' );
};
testFunc();

I then ran grunt on the terminal and my output was the following:

/*! grunt-playground 2013-11-27 */
var testFunc=function(){console.log("f**k")};testFunc();

As you can see, the comment is stripped out (so any words in comments would be removed), but my bad language in the console.log() remains in the file. Now, my test used uglify on a JS file, not LESS to CSS compilation, so for completeness I went ahead and npm installed the grunt LESS compiler. I then created the following less file:

// This comment should vanish
@color: #4D926F;

#rudeWord {
  color: @color;

  h2 {
    color: @color;
  }
}

Ran grunt again and the output was:

#rudeWord {
  color: #4d926f;
}
#rudeWord h2 {
  color: #4d926f;
}

As you can see, had “rudeWord” been some real profanity it would have remained in the file.

So onto a solution
As a solution, I first wanted to code a grunt profanity linter, but before doing this I thought I should check the grunt plugins list to see if such a thing already existed. It did not, but what did exist was a generic pattern linter. So, taking this as a base, I started to experiment. I npm installed grunt-lint-pattern and added it to my Gruntfile.js. At this stage my Gruntfile.js looks like this:

module.exports = function(grunt) {

  // Project configuration.
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    uglify: {
      options: {
        banner: '/*! <%= pkg.name %> <%= grunt.template.today("yyyy-mm-dd") %> */\n'
      },
      build: {
        src: 'src/<%= pkg.name %>.js',
        dest: 'build/<%= pkg.name %>.min.js'
      }
    },
    less: {
      options: {
        paths: ['css']
      },
      // target name
      src: {
        // no need for files, the config below should work
        expand: true,
        cwd: 'less',
        src: '*.less',
        dest: 'build',
        ext: '.css'
      }
    },
    lint_pattern: {
      files: {
        options: {
          rules: [
            {
              pattern: /rudeWord/,
              message: 'Profanity is not allowed.'
            }
          ]
        },
        // lint both the JS sources and the LESS files
        files: [
          {
            src: ['src/**/*.js', 'less/**/*.less']
          }
        ]
      }
    }
  });

  // Load the plugins that we need.
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-less');
  grunt.loadNpmTasks('grunt-lint-pattern');

  // Default task(s).
  grunt.registerTask('default', ['lint_pattern', 'uglify', 'less']);
};

Upon running this, grunt correctly detects the “rudeWord” in the less file and errors out:

Running "lint_pattern:files" (lint_pattern) task
Warning: Profanity is not allowed.
Use --force to continue.

Aborted due to warnings.

So how do we now filter actual real-world profanity, you’re asking? Good question: first we need a decent regular expression for our pattern. A quick google search later, I stumbled upon some good examples for detecting f**k and s**t.

A quick update to our Gruntfile.js to try out the s**t example, and a change to the console.log in our test javascript file, and we have a working solution:

Running "lint_pattern:files" (lint_pattern) task
Warning: Profanity is not allowed.
Use --force to continue.

Aborted due to warnings.

There are a few issues with our solution. Firstly, it does not tell us which line the profanity was found on, only which file it was found in. It also does not tell us which example of profanity was found. If we were checking for a full set of naughty words, we might want to know which one was detected in our source code. The second issue that niggles me is the Scunthorpe problem. Although the regular expressions we used from our earlier link are clever enough to detect variations on the words f**k and s**t, they don’t stop false positive detection of those words in the middle of another word. I am not sure that either of those two appear in the middle of valid English words, but the “c word” does appear in Scunthorpe, and there are many other examples that would produce false positives if we expanded our pattern to include the top 20 or 30 profanities in the English language.
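The boundary issue can be sketched with a harmless stand-in (“badword” below substitutes for real profanity):

```javascript
// A naive pattern matches inside longer words - the Scunthorpe problem.
var naive = /badword/i;
// Word boundaries avoid most mid-word false positives, at the cost of
// missing deliberately disguised variants.
var bounded = /\bbadword\b/i;

naive.test( 'thisbadwordhere' );   // true - a false positive
bounded.test( 'thisbadwordhere' ); // false
bounded.test( 'badword' );         // true
```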

So, I have presented one solution, but I am not 100% happy with it. When I get some time, I think I will go on to code a grunt-lint-profanity which would solve the issues I describe above and also come packaged with a good default set of tests to cover the main English words you would want covered by a profanity filter. Watch this space…

Watch out for the natives

7 Nov

Recently, whilst debugging a problem, I came across a library we were using that extended Javascript’s Array object (amongst other things) with a few handy methods.

It turned out that in Firefox v25 there was already a native “find” method on Arrays, and that the library we were using had this handy piece of code wrapped around it:

  if( !Array.prototype.find ) {
    // Declaration of the find method followed...
  }

Normally this would work out great as a “shim”: should the browser already implement that particular method, the code lets the browser do its thing; otherwise, we help out by writing the implementation for it.

Unfortunately, in our case, the library wasn’t using the same interface as the native “find” method (mainly because the library was written before any spec was drawn up for this Array method).

This meant that an error was getting thrown, as the browser’s “find” method expected the first argument to be a function, whereas the library’s interface was written to expect a string. Debugging this was a nightmare, as I was looking at the source of the library trying to understand why the errors were occurring.

In my opinion, we should not extend native Javascript objects (Array, Object, Date, String, Math etc.) and should instead do things the way Underscore does and namespace the code. This reduces the risk of future changes to native objects breaking your code, as well as making it a LOT easier for developers to debug things ;)
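A minimal sketch of the namespaced approach (myUtils is a made-up name; Underscore’s real equivalent is _.find):

```javascript
// Keep helpers under your own namespace rather than on Array.prototype,
// so a future native method with a different signature cannot collide.
var myUtils = {
  // Return the first element for which the predicate returns true.
  find: function( arr, predicate ) {
    for ( var i = 0; i < arr.length; i++ ) {
      if ( predicate( arr[ i ] ) ) {
        return arr[ i ];
      }
    }
    return undefined;
  }
};

myUtils.find( [ 1, 2, 3, 4 ], function( n ) { return n > 2; } ); // returns 3
```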

From here to AWS OpsWorks

1 Nov

Are you using AWS OpsWorks to manage your cloud application? As of last Monday (14th October), we are! In short, we’d finally had enough of putting up with our painful nine-step deploy process, manually handling load variances throughout the year and our pre-existing decentralised caching issue, so we spent three months on and off migrating our software and architecture to fit the Amazon model and, boom, here we are. Auto-scaling, auto-healing, centralised caching (ElastiCache), 1-click deploy goodness.

We first built our Node.js app over two years ago. In those dark times, getting node to do what we wanted at all was challenging enough, so much so that we got it into production without much thought on how we would manage things when they bailed, or how we would manage load when we hit our busy periods. Things just got done manually (not just me, right?). I remember too many occasions I’d be on iChat with my boss discussing rolling an extra box at 2100 on a Friday night as we were hitting CPU limits. You know how it works: the business is being serviced pretty well, so it’s hard to justify time to “make it nicer” for the dev team to manage. We didn’t even have time to centralise our caching :(

Then in February this year (2013), Amazon launched their beta OpsWorks service. It promised much, and although it was still in beta, we (Holiday Extras Shortbreaks) are ardent AWS users (if they provide a service we need, we’ll take it from them), so it was worth looking into. Admittedly we didn’t know any Chef (and still don’t), but we got by: we refactored our solution to work on the platform and spent a fair chunk of time testing and trying to understand what OpsWorks is actually doing throughout its lifecycle, how and when it scales, how we know when it’s struggling (AWS CloudWatch), and that sort of stuff.

Ultimately, all I wanted to be sure of was that if it went down, it would actually pick itself back up again. The answer to that is yes, yes it does. There’s not a lot of documentation around at this stage, so a lot of our work was trial and error. There was a lot of “what the heck is going on there” and “oh, so that’s how it works”, but we got there, and that’s the important bit.

Pre-OpsWorks, we were running an ELB with two (or more) small EC2 instances, caching in memory on-box (Redis) and watching it all with a third-party Zabbix tool. What we’re now rolling is very similar architecturally: an ELB, n boxes and an ElastiCache server (still Redis). So we haven’t really changed anything as far as the business is concerned, and we haven’t really changed anything from an IT perspective either, but we made the jump because it was the right thing for us to do as a team. We’ve already saved a few hours in recent deploys (something our QA team can manage now), and we’re no longer worried, ahead of the winter peak, about how much traffic is going through the boxes; it’s all just working. Yay, working!
