Limit the length of an input with jQuery

For instances where a textbox/textarea needs to be limited to a specific number of characters, one way of achieving this is with a bit of jQuery. Simply give the input a class (in this case, “maxThirtyChars”) and set a “maxChars” variable to the appropriate number. There will be tidier ways of doing this, which you can adapt to your specific situation, but the very basic version goes a little like this:

    $(".maxThirtyChars").keyup(function() {
        var maxChars = 30;
        if ($(this).val().length > maxChars) {
            $(this).val($(this).val().substr(0, maxChars));
            
            //Take action, alert or whatever suits
            alert("This field can take a maximum of 30 characters");
        }
    });

Remember to also validate on the server side though – client-side checks are trivially bypassed.
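
Here’s a minimal sketch of that server-side check, assuming ASP.NET Web Forms (the “txtComments” textbox and the button handler are hypothetical names, not from the original snippet):

// A sketch only - adapt to whatever stack you're actually on.
protected void Submit_Click(object sender, EventArgs e)
{
    const int maxChars = 30;
    string input = txtComments.Text ?? string.Empty; // txtComments is hypothetical

    if (input.Length > maxChars)
    {
        // Either trim to length or reject the request outright
        input = input.Substring(0, maxChars);
    }

    // ... only use "input" from this point on
}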


ICO gave me a cookie without asking!

I have made a previous entry registering my disgust at the ignorant, uninformed legislation that the EU is imposing on the use of cookies. Rather than re-covering old ground, let me sum it up for you in two words: utter crap.

“We’re supposed to warn people before giving them cookies?!”, I thought, “How the heck would that work? I’m going to scare away all of my customers.” I waited anxiously for someone to actually implement such a feature to give me a better idea of what is expected.

I was therefore very interested when I heard that the ICO website actually has one of these warnings on it! Fantastic, this’ll be implementation straight from the horse’s mouth! So let’s head over and see what they’ve done…

You can play along at home by visiting their site at http://www.ico.gov.uk/, which presently looks like this:

Yes! We have wording! Let’s see what we have here…

The ICO would like to use cookies to store information on your computer, to improve our website. One of the cookies we use is essential for parts of the site to operate and has already been set. You may delete and block all cookies from this site, but parts of the site will not work. To find out more about the cookies we use and how to delete them, see our privacy notice.

Followed by a check box asking me if I agree.

What’s this!? “One of the cookies we use is essential and has already been set”!? I headed straight to the browser settings to see what it was. After clearing all cookies and refreshing the page, I discovered it was the cookie that holds your session ID, deposited on my PC without my permission.

So we are allowed to deposit essential cookies? What the heck counts as essential?! Ironically, under ASP.NET (which they’re using) the session cookie is actually quite easy to get rid of by using the URL to achieve a cookieless session. This means that the one “essential” cookie they give you is actually the easiest to avoid using! Admittedly you do sacrifice oodles of security by using your querystring as a session tracker, as well as a lot of usability, so we’re back to the question of what “essential” actually means. You could argue that analytics tools are essential to the running of any successful e-commerce business, for example.

Let me save you the time: the guidelines DO NOT MENTION this clause currently, and as such the ICO presently does not comply with its own standards!

I am seriously considering drafting a letter of complaint!

By the way, those guidelines, whilst not really worth the paper they’re written on, are worth a look if you want a giggle. I can almost hear the twisted lament of web designers all across Europe at some of the design recommendations contained within:

One possible solution might be to place some text in the footer or header of the web page which is highlighted or which turns into a scrolling piece of text when you want to set a cookie on the user’s device.

Yeah, marquees are back baby!!!!1!!!

^^ that would be scrolling if this wasn’t wordpress 😦 anyway, please share if you care x

Simple C# Screen Scraping Proxy with jQuery

I was asked today how to screen scrape an external site using jQuery. The short version is: you can’t do it with jQuery alone. Browsers enforce a same-origin policy that prevents Ajax requests from going out to other domains.

You can achieve the effect in a number of ways. The most old school of these is using an iframe, but in most cases this just won’t cut it as you’ll need to be able to manipulate the returned HTML.

A better way is to code up a simple server-side proxy that does the scrape, and then point your Ajax request at that instead. Here’s an example in C#…

// Requires: using System.Net; (WebClient) and using System.Text; (UTF8Encoding)
using (WebClient client = new WebClient())
{
    string url = "http://www.google.com/";

    // Download the raw bytes of the remote page
    Byte[] requestedHTML = client.DownloadData(url);
    UTF8Encoding objUTF8 = new UTF8Encoding();

    // This line just writes the string straight back to the response, but you
    // could just as easily stick it in a string variable and manipulate it to
    // your heart's content!
    Response.Write(objUTF8.GetString(requestedHTML));
}

Let’s say you saved that as the code-behind of an otherwise blank page called “Ajax/scrape.aspx”. You’d then just need to use the jQuery “.load” command:

$("myDiv").load("Ajax/scrape.aspx");

and you’re there! Note that the “.load” command will cache by default, so if you need something more complex look up the “.ajax” command – or deal with the caching at the server, as sketched below.
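
If you’d rather stick with “.load”, one server-side option (my own suggestion, not part of the original post) is to make the proxy page itself uncacheable by sending no-cache headers before writing out the scraped HTML:

// In the code-behind of scrape.aspx (System.Web assumed)
protected void Page_Load(object sender, EventArgs e)
{
    // Ask the browser and any intermediate caches not to store this response
    Response.Cache.SetCacheability(HttpCacheability.NoCache);
    Response.Cache.SetNoStore();

    // ... WebClient scraping code from above goes here
}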

Multivariate and User testing – What they don’t tell you!

We’ve recently undergone some rather intensive user testing and are following it up with a similarly intensive multivariate test. For anyone new to the field, multivariate testing is a method through which you can test out various configurations of your website (different images, different text, different colours, etc.) in order to see which combination works best. This is not to be confused with A/B testing, where you have two or more completely separate pages and test those against each other: here we are talking about one page, with various sections which get switched out and in.

There are various tools available to implement this testing – some of which are free. By far the best free tool I’ve been exposed to (and the one I’m using for the current testing I’m doing) is Google Website Optimizer (GWO). If your company is anything like mine, they’ll basically entrust all of the consultation and advice-giving to an external marketing company who will sit there stating the obvious for days on end before letting you do any actual work. Assuming that part is out of the way, there are many things that you can really only learn by actually doing multivariate and user testing.

This is therefore a guide from a developer’s perspective, including the gotchas and some of the bitter truths you’ll need to face when you start a test. What this article will not do is shower you in little “secret tips” like many guides of this type profess to do. Without the aid of clairvoyance, it is impossible to know anything before you test. If it were possible, there would be no need for any of this. By all means go and read the psychic ramblings of some tight-jeaned hipster, but when you’re done, come back here for the truth 🙂

There will be loud voices

Multivariate testing is very, very interesting, and for this reason it will attract the attention of every marketing person who is involved. It is a very simple topic to grasp in principle, but is very deceptive in that respect. The bits that look hard, such as actually implementing the test itself in code, are actually very easy. The bits that look easy, such as deciding what to test and commenting on the results, are actually incredibly difficult. Herein lies the problem!

If you take the time to try and explain this difference, you’ll probably find that a queue of people forms who are more than willing to “take on” that mantle, naively believing that they possess some kind of inner knowledge that nobody else has which allows them to instinctively know about strategies and analytics. I’m not in any way suggesting you go to the mattresses with your marketing team, but anyone who steps up and immediately starts making suggestions based on hunches and feelings basically needs to be subdued asap!

The important thing to take away from this is that these strategies are all very well documented; even Google’s own site is an absolute goldmine for research. I’m not going to go into all that here, because the information is out there already. All I will say is that you should use these resources and make sure anyone involved uses them too.

Multivariate testing can take place over multiple pages

GWO is capable of running tests over multiple pages. Let’s say, for example, you wanted to test the logo on your site, and you had four different ideas for a design. The logo appears on every single page of your site, but the GWO setup wizard will explain everything in the context of this happening on one page.

All you need to do in this case is pick one page on which the logo (or whatever it is you’re testing) appears and do the setup on that page. As long as the javascript tags surround that section across your entire site, it’ll make it into the test. The tracking script should also appear on every page. The only one you don’t want to duplicate is the conversion script, which should only appear on your conversion page.

Exclude some IP ranges

In our organization, we have a call centre who take orders over the phone. When an order comes through, they place the order using our website. At all costs, you need to exclude these people from the tests! Let’s say you have one really good call centre employee (for the purposes of this demonstration, we’ll call him “Matthew”). Matthew is so good, he converts every single call he gets into a sale. Upon the first visit he makes to your new “multivariate-test enabled” site, Matthew is given a cookie indicating one of your test configurations.

Over time, this configuration appears to be winning – but it’s actually not. The customer on the end of the phone cannot see what Matthew sees, and Matthew is going to make the sale whatever the site looks like. Your data is essentially contaminated. Equally, you’ll be contaminating it yourself every time you visit the site for other development work (your data entry people, for example, will be given configurations which they may never use to book, but which are still contributing to your statistics).

The way to avoid this is to exclude the tracking and conversion scripts from your internal people. How you do this depends on your own organization’s infrastructure. I managed to achieve it in ASP.NET by wrapping the tracking and conversion scripts in a PlaceHolder control which showed/hid them based on IP address – see the sketch below. Remember to exclude the “home” IP address as well, otherwise you’ll be contaminating the data from your development machine too.
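
Here’s roughly what that looks like (a sketch only – the control ID and the internal IP prefixes are hypothetical examples, and the real GWO scripts go inside the PlaceHolder in the .aspx markup):

// In the .aspx markup (hypothetical ID):
//   <asp:PlaceHolder ID="phGwoScripts" runat="server">
//       <!-- GWO tracking/conversion scripts here -->
//   </asp:PlaceHolder>

// Hypothetical internal ranges: call centre, office and the dev machine
private static readonly string[] InternalIpPrefixes =
{
    "10.0.0.", "192.168.1.", "127.0.0.1"
};

protected void Page_Load(object sender, EventArgs e)
{
    string visitorIp = Request.UserHostAddress ?? string.Empty;

    // Hide the scripts from internal traffic so it never enters the test data
    foreach (string prefix in InternalIpPrefixes)
    {
        if (visitorIp.StartsWith(prefix))
        {
            phGwoScripts.Visible = false;
            break;
        }
    }
}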

If you do this, you’ll lose (internally) the ability to preview your combinations in GWO. A small price to pay for clean data.

Don’t run more than one test at a time

GWO will let you run multiple tests at once. Don’t do it! Imagine one test was to check out various logo designs, while another was to test out various basket pages. You have no way of knowing, from the results, how one of the tests has interacted with the other. For example, you might end up, purely by chance, with every customer who sees the good logo landing on the crap basket page, or vice versa, which will completely skew your results. You don’t want this!

Multivariate testing can take a long time

You can test multiple sections on your page, and each section can have multiple variations. You therefore end up with a number of “combinations” for the page (blue logo with yellow button, yellow logo with blue button…). Even if you keep it as simple as a test with six sections, each with just one variation, every section has two possible states (the original and the variation), leaving you with a whopping 2^6 = 64 separate combinations!

How long it will be before you see results very much depends on how much traffic you are getting. GWO itself includes an intelligent recommendation system, and my advice would essentially be to follow it to the letter. It can be very tempting to make assumptions early on – after a week it can look like you have a clear winner. LEAVE IT.

Tests can take months before any conclusive trends emerge. The process can be sped up through various methods. GWO, for example, includes a big-brother style eviction facility, where poorly performing combinations can be excluded from the test if their conversion rates become obviously worse than the rest. The main thing to take away is that if you are looking for “quick wins” then you shouldn’t be using multivariate tests to identify them – it takes time, and if you stop early you’ll gain nothing.

One piece of advice I was given a while back is “test little, test often”. I wouldn’t say that was a hard and fast rule, but if you do test lots, don’t test often.

It might not just be the design

Marketing people are often very, very quick to blame low conversion rates on the look/design of a website. While this can make a difference, it is not the only avenue worth exploring. If you’re finding that every design test you do results in little or no improvement, try testing things like images or, most importantly, copy.

The marketing people where I work are quite big fans of flowery language and coupling up every word with an adjective. They are also fans of waxing lyrical about how much of a saving you will make if you buy OUR product, and how WE want you to be satisfied – it’s all very self-obsessed and needlessly long. I managed to convince them to run a test against purely factual copy, which tells the customer just what they’re getting and doesn’t pad it out with extra words. The factual version is currently showing a 15% uplift – I would hypothesize that people therefore ARE reading the copy, and they JUST want to know what they’re getting.

Be prepared to be shocked

It’s a bitter pill to swallow, but sometimes you just need to accept that the “rubbish version” is the version you need to use. You don’t go into e-commerce to gain the admiration of your peers or to embarrass your competitors: you go in to sell more kit than they do. Along with the multivariate testing we undertook some user testing – basically this included a set of users coming in and using our site. Unbeknownst to them, we were sitting just up the hall watching their every move, and frankly we couldn’t believe some of the crap and abuse they were coming out with.

You are a web-person. You aren’t most people – most people are idiots. This forces us to the terrible conclusion that all of the beloved best practices and modern techniques that you spend many hours learning may need to be thrown in the bin. If a big ugly border, heavy images and comic sans sells more kit, you basically need to use them. Ignoring this fact is like pissing in dark trousers – you get a nice warm feeling but nobody notices, and you smell afterwards.

There will also be times when you just need to accept that what you changed (the button colour, the size of the image etc.) simply makes absolutely no difference at all. Multivariate tests do take a long time to yield definitive results, but I have experienced on many occasions that sometimes, the customer just doesn’t care.

Google might not be able to see your site

We had an issue with one of our sites where the GWO wizard basically went crazy and said it couldn’t see our sites. If you get this, just proceed on the assumption that it can. The way we got around it was to upload dummy text files containing the scripts to the Google server (although, if you do this, be sure you really have installed all the scripts correctly). In fact, in the case of most conversion pages you’ll want to do this anyway, as generally you can’t just “visit” the thank-you page.

Doing it this way around has no negative impact on the test at all.

Don’t expect miracles

Multivariate testing is an invaluable resource for improving your conversion rate – however, it does require a LOT of patience. You will hear many folk stories about web developers who increased their conversions by 20 or 30 percent by changing the colour of a button. In my experience, this is not normally the case. More realistically you’ll be looking at a 1 or 2 percent increase, which you then implement before moving on to the next test. Innovation can come over time; it doesn’t necessarily need to be an instant “boom” moment.

This is why it’s also important not to “peek” at the results. Make no mistake – when you first run the test, the first week will show massive improvements with a clear winner. These statistics should be disregarded at all costs – there is an element of luck in which users get which combinations, and some combinations will work for some users and not for others. In a nutshell, those combinations that appear to be streaking into the lead initially could just as easily end up being the absolute losers. This brings me back to the earlier point about not making any drastic decisions too early – you need to let the test settle down.

That’s not to say your 20/30 percent increase won’t happen, but if you think it has happened you need to be damn sure about it before taking action.

Be prepared for inconclusive results

While the online world of “influential authorities” seems to be falling over itself to tell you about the importance of testing, in reality you will probably find that most of your tests end up being inconclusive (or showing very little difference). One of the tests we performed recently was a copy test on a product page whose text was barely English – it was re-written to a very high standard and a test was performed. After 3 months there was very little between the variations, with the original ever so slightly in the lead. There are two conclusions we could draw from this:

  • Illegible, grammatically inaccurate copy doesn’t matter at all
  • Illegible, grammatically inaccurate copy doesn’t matter to our customers

For whatever reason, be it lack of traffic volumes or just that the customer is only focused on price, there will be cases where no-brainer tests (which marketing people will force you to implement all the time, by the way) will fail to perform.

Conclusion

I’m probably going to re-visit this article over time as I learn more, but hopefully this should assist anyone who is undertaking multivariate testing in a practical environment.

EU Anti Cookie Laws – Utter Nonsense

So I woke up yesterday, like many web developers, to the news that the oft-threatened European anti-cookie law is finally upon us after three years, and will be coming into force on the 25th of May. The Radio 4 show I was listening to delivered this news in the form of an interview with a typically uninformed governmental type who appeared to think that this law would be a massive pain in the backside for developers but that we “had no choice”.

Even the BBC article I just linked to doesn’t get all the facts right! Our own culture minister takes a bit of a smarmy attitude (quote: “we should not see any delay in action as a ‘get out of jail free card’”). It’s very clear that whichever bunch of suited monkeys came up with this ridiculous ruling also weren’t in possession of all the facts before making the decision. The UK, unfortunately, is bound as part of its EU agreement to enforce this law, otherwise it’ll be in quite a lot of trouble, so all eyes at the moment should be on the interpretation that our government implements in order to comply. Given the level of competence they’ve shown in recent years, confidence certainly isn’t high. It should come as absolutely no surprise to anyone that the industry is simply not prepared, because nobody with half a brain would ever think something this ridiculous would come to fruition.

Well, I’m here to just clarify that the “Pain” that this will apparently cause developers is absolutely MINUTE compared to the pain this will cause users. If you’re a developer, you’ll already know the scale of what we’re dealing with. We’re a resourceful bunch and will be able to handle whatever these cretins throw at us (heck, we do that at work every day right?).

Users, on the other hand, are stuck with this forever. Fortunately regular web users are also pretty resourceful and I dare say some kind of loophole will be discovered to get around whatever we end up with. The people that will probably be hit hardest will be the most vulnerable web users (i.e. the thickies that don’t even know what a browser is). So what have those users got to look forward to?

Well, contrary to what all these articles tell you, cookies are not evil things that infect your computer and steal personal information. They are tiny key-value pairs that store information to give you persistent state across the otherwise stateless web. They’re also used to power many commonly used web framework concepts, such as the session (where a little encrypted session-key cookie is placed on the client’s browser to give them a persistent session for 30 minutes or so). Decline cookies and it’s bye-bye “being logged in” and bye-bye “shopping cart”.
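
To make that concrete, here’s what one of these “tiny key-value pairs” boils down to in ASP.NET terms (a purely illustrative “lastVisit” cookie – the name is my own, not from any real site):

// Writing a cookie: one small key-value pair sent with the response
HttpCookie cookie = new HttpCookie("lastVisit", DateTime.UtcNow.ToString("o"));
cookie.Expires = DateTime.UtcNow.AddDays(30);
Response.Cookies.Add(cookie);

// Reading it back on a later request (null on the very first visit)
HttpCookie returning = Request.Cookies["lastVisit"];
if (returning != null)
{
    Response.Write("Welcome back! Last visit: " + returning.Value);
}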

Web analytics software that companies use to track website performance and improve usability / conversions is mostly also cookie-powered (including Google’s own solution and analytics leader Omniture). Yes, this software will certainly be used to track user behaviour in order to improve a website’s conversion rate, but this is no more unscrupulous than the “Nectar card” system which people use every day. I’ll be pretty interested to see how those companies deal with whatever happens.

To take this one step further – it is actually scientifically impossible for this not to be annoying for users. If the site itself needs to get permission to plant a cookie, and the user says no, then it’s going to have to ask again on every single page, because without planting a cookie there’s no way for the site to know that it has already asked you! Actually, that isn’t strictly true – in the absence of cookies, the last bastion of maintaining state becomes the utterly insecure and easily by-passable querystring.

And it’s worth pointing out as well that the querystring can be used to track user behaviour just as easily as a cookie can, and is far less private (ASP.NET developers – you can achieve this functionality by adding the “cookieless” attribute to your Session settings in web.config, as sketched below). In fact, banning cookies will only really “wound” the beast they are trying to kill. Pretty much the only thing that cookies give you uniquely is the ability to tell whether someone has visited your site before (and all the associated data that you might have given them last time they did).
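
For the ASP.NET crowd, a minimal sketch of that cookieless setup (the URL-embedded session ID shown is illustrative):

// In web.config:
//   <system.web>
//     <sessionState cookieless="true" timeout="20" />
//   </system.web>
//
// ASP.NET then embeds the session ID in every URL instead of using a cookie:
//   http://example.com/(S(lit3py55t21z5v55vlm25s55))/Default.aspx
//
// Code-behind can confirm that no session cookie was set:
protected void Page_Load(object sender, EventArgs e)
{
    bool viaCookie = Request.Cookies["ASP.NET_SessionId"] != null;
    Response.Write("Session " + Session.SessionID +
                   (viaCookie ? " (cookie)" : " (cookieless URL)"));
}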

It’s also scientifically impossible to actually get the user’s permission in a bulletproof way! Most browsers already have a “prompt me about cookies” setting, but apparently just turning this on won’t be enough (as already stated). They can’t really use javascript, because people might not necessarily have that turned on, and they can’t use a DOM element, because those can just be manipulated away. Placing the responsibility to ask permission for cookies in the laps of web developers has basically ensured that there is no bulletproof way of making sure the user gives that permission. Nice one!

If we’re lucky, the ban will be restricted to uses of cookies for certain things rather than being a blanket ban, but unfortunately this is also scientifically impossible. There is absolutely no way for anyone monitoring cookie usage to tell what a particular cookie does without seeing the accompanying server-side code (unless it’s obvious from the name). Unless it’s a blanket ban, then, this will be an unenforceable law.

Speaking of those organizations who will be enforcing this… how are they going to prove that a website didn’t ask users permission? Will we all now be expected to keep databases proving our innocence? And how on earth would we actually obtain this proof to begin with? It’s not like we can match a web request to an actual person (you certainly can’t get that sort of information from the IP address of your average web user).

I don’t mind going out on a limb here and saying that I personally think that this law will never happen or if it does happen it will be implemented in such a way that nothing will actually change for the most part. I will also put my hand up and admit that I have probably jumped straight to the “worst case scenario” when going through the implications. With any luck it’ll end up just being a mandatory note in the footer.

To all the privacy lobbies out there campaigning for this: Listen up. If you don’t want people knowing your IP address, don’t use the internet (your IP address is included in every request you make and there’s nothing you can do to stop it being there). If you don’t want people knowing your personal details, don’t type them into the internet. If you don’t want people knowing what you’re buying online, don’t buy things on the internet. Quit trying to wreck it for the rest of us. Focus your energies on campaigning against specific abuses of privacy, such as people selling your details to advertising firms or other such actions.

So I stand by my initial point: this is an unenforceable law dreamed up by a ridiculous group of people. If attempts are made to enforce it then I predict Europe-wide web rebellion! To agree to these laws would be to hand a whopping great e-commerce victory to the rest of the world. The only people ultimately harmed by the decision would be users, and I’ll be damned if I’m going to give my own users a rubbish web experience just for this.

prettyDate not working in IE

By far the simplest way of parsing dates from the Twitter API is to use John Resig’s prettyDate script. I found that this tended to have problems in IE when parsing Twitter-style dates. The fix was very simple though – just add the following line to the start of the script…

function prettyDate(time) {
    time = time.replace("+0000", "");
    // ... rest of the original function unchanged

Basically, the IE JavaScript engine was struggling with the “+0000” timezone part of the date string that Twitter returns (e.g. “Wed Aug 27 13:08:45 +0000 2008”), so we just get rid of it before parsing the date.

Exploring Website Accessibility: Part 2

Introduction

So in part 1 I created two websites, one of which adheres strictly to the various accessibility guidelines that have been published by numerous authorities, the other of which pretty much goes out of its way to ignore them all. In this part I’m going to go through exactly what makes the good version “good” and the bad version “bad”, and I’ll show you exactly what the consequences are of coding sites the bad way by showing the output from a screen reader. In part 3 I’m going to go through the correlation between accessible websites and other more well-practised web techniques such as SEO and the user experience.

So just a quick recap then, here is the code for both sites:

And here is what they look like when rendered:

We did briefly touch on the fact that, to the untrained eye, the bad version “looks better”. Unfortunately, I believe that this fact is chiefly responsible for all of the awful web coding that goes on in the world, and for the number of people who assume that knowing HTML keywords means they are suddenly a web designer. HTML is, of course, supposed to form the semantic layer of your site. It has absolutely no business “looking good” at all! This is often a bit of a tough sell to web newbies, but hopefully when we look at the results it should convince any non-believers 🙂

So here we go then: Where did the bad version go so wrong?

Noddy HTML code

The “bad version” should have had most professional web designers clawing at the walls. Web standards have completely gone out of the window and what we’re left with is a meaningless blob of text. Whoever coded it has clearly been focussing purely on how the site will look when rendered, rather than semantically analysing each element and selecting an appropriate tag to use. They have also taken the decision to use tables to form the basis of their layout (this is why the site “looks better” than the other version). Additionally, there are lists of items which aren’t decorated with a list tag, and liberal use of the line-break tag.

The reason that coding in this way causes an accessibility issue is that screen readers can also see HTML tags, and help their user by “announcing” that they are about to hear a list of items, or that they are about to hear some tabular data. Because the author of the bad version has thrown semantics to the wind, neither the screen reader nor the user will be able to tell whether they are hearing a list or just a plain old paragraph. Even worse, the site will be described to the listener as a table, which causes superfluous announcements that get in the way of the content. The visually impaired user doesn’t really give a hoot whether your site “looks fine”; they just want to hear the content, and the use of a table layout has wrecked their experience.

In addition, check out the difference in headings between the good version and the bad version. Headings are another element which screen readers will “announce”. This makes it clear to the listener that a new section is beginning. I had also previously mentioned that it is possible for screen reader users to pull up a list of the headings on the page. Each heading should therefore be descriptive of the section that it precedes, otherwise when they pull up their list it’s not going to be much good to them.

Access keys

The bad version hasn’t implemented access keys. As mentioned in the previous article, access keys allow a visually impaired user to use a keyboard shortcut to access certain links, and there are well-established conventions for which keys should perform which actions. The lack of implementation on the bad version means that the user would be required to either tab through the site or listen to the whole page being read until they find what they’re looking for. Access keys will also be announced by the screen reader.

The good version, on the other hand, implements access keys precisely to specification. In particular note that the key “4” takes the user straight to the label of the search form, and that key “s” skips over the navigation. The good version has also implemented a handy guide as the very first element on the page (meaning that this will be the first thing the user hears after the page title). At this point they are able to infer that this is an accessible site, and may choose to perform one of the common actions such as searching or contacting you.

There are arguments against using access keys though. The first is that apparently some screen readers utilize the same keystrokes and so it is possible to “override” the screen reader functionality with your website functionality, which is completely ridiculous. Hopefully screen reader developers and browser developers will resolve this between themselves.

The other argument against using access keys is that because they are so rarely used, they end up just getting in the way. This is also completely absurd and defeatist; how do you expect to change anything with an attitude like that?

Both of these arguments clearly point to issues with either the browser or the screen reader – I don’t see why web designers should need to code around these failings once again. It wouldn’t be too hard to just implement a “please don’t announce access keys” feature, surely? For this reason, the good version has implemented them, however I have been careful to make sure that all but access key 4 are announced BEFORE the skip navigation key – meaning that if the user skips the navigation then they won’t hear the access keys being announced.

Front loaded paragraphs and content

The screen reader will start by announcing the page title, and then parse the rest of the site as it appears in the HTML document. Let’s imagine, then, that the user is listening to the bad version of the website. Given that access keys haven’t been implemented, they don’t get to find out what the store actually sells until line 46 of the code, because there’s a whole bunch of marketing crap such as “Do you like cheap deals?” etc. before the site even mentions the types of products it provides. At best you’ll annoy your user, and at worst they’ll have gone somewhere else before they even get that far.

The good version, on the other hand, gets straight to the point and opens its welcoming spiel by specifying exactly what the site sells and where it sells to. At this point the user can continue listening to the site description should they want to. Also note that the “Navigation” appears after the main site content. This means that the user gets the “main content” read to them prior to the “navigation content”, which makes perfect sense as navigating away from a page is something you would generally do after you’d read it.

Alt text on images

The bad version doesn’t include alt text in the logo image, the good version does.

Images are another example of a tag that will be announced by the screen reader (JAWS uses the word “Graphic”), immediately followed by the alt text. Before I did much accessibility research, I used to think that putting hugely verbose alt text into an image tag was a great idea – describe the image in its totality so that someone listening to the description could picture it in their mind. I’ve since come to realise that this would probably be pretty annoying! Therefore, short snappy descriptions are best.

Marketing Spiel

Sometimes, as a web developer, you hear marketing types constantly mention certain words over and over again to the point that you wish they’d never learned them. Two such offenders are “Call to action” (meaning that the user should be encouraged to take an action once they’ve seen something funky like the price or the product name) and “Above the fold” (meaning things that are visible before the user is required to scroll). Now, there are appropriate uses for both of these; what I have a problem with is not their use but their over-use. Unfortunately when it comes to accessibility, the problem actually gets a lot worse.

The bad version contains three “calls to action”, one next to each of its products. I mentioned in the first article that visually impaired users tend to tab through elements on a page. When the user reaches the “Click here” link, the screen reader will say just that to them. With no understanding of the context, it’s unlikely that the user is going to follow the link, thus your call to action becomes a call to inaction!

The page title also goes on and on about the great deals that the user can expect from the site. Given that the first time a user sees that title is highly likely to be in a set of search engine results, the title has presumably been designed to stand out and be clicked on, as well as being keyword-rich. The good version, on the other hand, has a short, plainly descriptive title (which it should be, given that it’s what a screen reader will dictate before anything else). Here, tragically, we’ve hit upon a conflict between the interests of SEO and the interests of accessibility, which I’ll come on to more in the next part.

On to the results!

To show you exactly how the JAWS screen reader would interpret the two pages, I’m going to use a free Firefox plugin called “Fangs”. Fangs can provide us with three summaries:

  1. The output that the screen reader would dictate
  2. A list of headings on the page that a user could call up
  3. A list of links on the page that the user could call up.

Here we go then…

Result set 1: Screen reader output

This is what the screen reader would dictate for each of the sites. “Announcements” appear in bold.

The good version:

Page has four headings and thirteen links Home left double angle bracket My Simple Store dash Internet Explorer Heading level two Accessibility guide List of six items bullet Access key S to skip navigation and this guide bullet Access key four to perform keyword search bullet Access key one For home page bullet Access key six For help bullet Access key eight For Terms and conditions bullet Access key nine To Contact Us List end Graphic My Simple Store Logo List of four items bullet Link Home alt plus one bullet Link Contact Us alt plus nine bullet Link Terms and Conditions alt plus eight bullet Link Help alt plus six List end This page link alt plus s Heading level one Welcome to My Simple Store!! We sell a great range of this type of product in the location you’re looking for. We’ve made thousands of customers happy and hope to do the same for you. Heading level two Our products List of three items bullet Link Product one This product is awesome! bullet Link Product two This product is also awesome! bullet Link Product three Awesome, just like all the others! List end Heading level two Navigation Search our site colon Search colon Edit Enter Keywords Search Site button List of five items bullet Link Menu Item one bullet Link Menu Item two bullet Link Menu Item three bullet Link Menu Item four bullet Link Menu Item five List end

The bad version:

Page has one heading and twelve links Awesome Cheap products, fast delivery, Money Back Guarentee dash Internet Explorer Table with one column and four rows Table with two columns and one row Link Home Link Contact Link Terms Link Help Table end Table with two columns and one row Navigation Search colon Edit Search Site button Link Menu Item one Link Menu Item two Link Menu Item three Link Menu Item four Link Menu Item five Heading level one Welcome to our Website!! Do you like cheap deals? Do you like fast delivery? Do you like awesome quality? Then you’ve definately come to the right place! Check out our store and our amazing offers!!! These are the best products of this type in your location! Our products Product one ! This product is awesome! To view more details, Link Click Here! Product two ! This product is also awesome! To view more details, Link Click Here! Product three ! Awesome, just like all the others! To view more details, Link Click Here! Table end Table end

So which would you prefer? Remembering that with the first one you can just hit alt+s to skip the entire thing (right down to the “Welcome to My Simple Store” heading), I don’t think there’s any contest really.

Result set 2: Heading summary

This is what the user would hear if they called up a list of the headings on the page:

The good version:

  • Accessibility Guide
  • Welcome to My Simple Store!!
  • Our Products
  • Navigation

The bad version:

  • Welcome to our Website!!

Yeah…

Result set 3: The links summary

This is what the user would hear if they called up a list of links or, more importantly, if they tabbed through them.

The good version:

  • Home alt+1
  • Contact Us alt+9
  • Terms and Conditions alt+8
  • Help alt+6
  • Product 1
  • Product 2
  • Product 3
  • Menu Item 1
  • Menu Item 2
  • Menu Item 3
  • Menu Item 4
  • Menu Item 5

The bad version:

  • Home
  • Contact
  • Terms
  • Help
  • Menu Item 1
  • Menu Item 2
  • Menu Item 3
  • Menu Item 4
  • Menu Item 5
  • Click here!
  • Click here!
  • Click here!

Where’s your call to action now huh? 😛

Closing thoughts

So that’s what a screen reader would output, and I will go into more detail on that in the next part. I did just want to mention one thing before closing though: as a web developer you should always be coding for your user. What we’ve done here is attempt to make one site that will work both for visually impaired users and for those who aren’t. Even though the “good version” is very accessible, the ultimate solution is and will always be to create a separate version of your pages for each type of user; this way you can truly code each situation appropriately.