Making a Fast Website

Official Construct Post
Tom • 29 Feb, 2012

It’s always been important to make your website fast. Not only do visitors obviously prefer it, but it’s now well known that Google uses loading speed as a ranking metric. The initial page load of your website is perhaps the most important: the longer it takes, the more visitors will press back and find an alternative. A slow website is something that can frustrate visitors, so it’s important to remove that frustration from the equation.

Jakob Nielsen is an authority on usability and has studied response times extensively. He notes that response times of:

  • Up to 100 milliseconds is perceived as an instantaneous response.
  • Up to 1 second will be noticed by the user, but their flow of thought remains uninterrupted.
  • Over 10 seconds is also mentioned, but that shouldn’t apply to normal websites, which are the focus of this blog. If your site takes longer than 10 seconds to load, something is probably critically wrong!

He makes a point of noting that these numbers haven’t changed over the years. A lot of research related to the Internet is going to expire quite rapidly but the numbers for fundamental qualities like response times appear to be hardwired.

It’s hard to get a page load under 1 second, but it’s the sweet spot people should be aiming for. Under one second the user’s thought flow remains intact which is crucial if you are trying to sell the user an idea or a product. It’s also going to reflect well on your product and services – if the website is fast it’s likely your product is going to share similar qualities as well (especially if your product is the website).

If you’re still not convinced, there’s some interesting data out there if you dig around. Marissa Mayer (currently VP of Location and Local Services at Google) spoke at Web 2.0 about a test Google ran, in which they concluded that a 500ms increase in page load time led to 20% less traffic and revenue.

Greg Linden (ex Senior Manager & Principal at Amazon.com) ran A/B tests on Amazon where they delayed page loads in 100ms increments and found that “even very small delays would result in substantial and costly drops in revenue”. According to Kohavi and Longbotham (2007), every 100ms increase in load time decreased sales by 1%.

The effects aren’t just financial either according to some researchers. Slow web pages:

  • Lower perceived credibility (Fogg et al. 2001)
  • Lower perceived quality (Bouch, Kuchinsky, and Bhatti 2000)
  • Increase user frustration (Ceaparu et al. 2004)
  • Increase blood pressure (Scheirer et al. 2002)
  • Reduce flow rates (Novak, Hoffman, and Yung 2000)
  • Reduce conversion rates (Akamai 2007)
  • Increase exit rates (Nielsen 2000)
  • Are perceived as less interesting (Ramsay, Barbesi, and Preece 1998)
  • Are perceived as less attractive (Skadberg and Kimmel 2004)

(Source: http://www.websiteoptimization.com/speed/tweak/psychology-web-performance/)

So now we know it’s important…

Where do we start?

A good starting point when measuring your website’s speed is to establish the minimum overheads. A good way to calculate this is by measuring ping times to your server with this tool. A ping is a round trip to the server, and the response time varies with the distance between the server and the client as well as the quality of the network between them. Our server is located in New York, which is reflected in the chart below with a blazingly fast average round trip time of 5.5 milliseconds. Further away places like Europe see a ping of around 100 milliseconds, which for the distance is still very impressive. You’ll also notice that countries like China are the slowest: they have a much poorer miles-per-millisecond rating, probably due to a worse quality network. (The ‘Round Trip Distance’ figure is obviously a simplistic estimate, which will have an impact on the ‘Miles per ms’ column, but for the purposes of this chart it sufficiently illustrates the point and is nonetheless interesting to calculate!)

I’ve also added a theoretical physical minimum response time, based on the time taken to cover the distance at the speed of light down a fibre optic cable. The speed of light is 670,616,629 mph, but this drops to "only" around 445,000,000 mph in a fibre optic due to the refractive index of the glass. Obviously geographical obstacles, indirect routing paths and real network hardware will increase the time. However, it provides a useful theoretical minimum - there's no point trying to get your ping time below the direct fibre optic time, because it's impossible!

City | Average Response (ms) | Theoretical Minimum (ms) | Round Trip Distance (Miles) | Miles per ms (higher is better)
New York (USA) | 5.5 | 0 | 0 | -
London (England) | 77.2 | 56.5 | 6,981 | 90.4
Sydney (Australia) | 231 | 160.6 | 19,856 | 86
Moscow (Russia) | 125.4 | 75.4 | 9,326 | 74.4
Sao Paulo (Brazil) | 129.9 | 77.2 | 9,542 | 73.5
Beijing (China) | 315.3 | 109.2 | 13,500 | 42.9

It’s quite amazing when you think about it - a message being transmitted and returned over at least 7,000 miles of network in 77 milliseconds.

These are the inescapable minimum times it will take visitors from around the world to load your webpage. You have very little influence over these numbers.
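
If you’d rather sample round trip times from code instead of an online tool, .NET’s built-in Ping class in System.Net.NetworkInformation offers a quick way to do so. A minimal sketch, with the host name as a placeholder:

<xmp style="font-size:14px;padding:0;margin:0">using System;
using System.Net.NetworkInformation;

class PingCheck
{
	static void Main()
	{
		// Send a few pings and print the status and round trip time in milliseconds
		var ping = new Ping();
		for (int i = 0; i < 4; i++)
		{
			PingReply reply = ping.Send("www.scirra.com"); // placeholder host
			Console.WriteLine(reply.Status + ": " + reply.RoundtripTime + "ms");
		}
	}
}</xmp>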

Useful Free Tools

When working on speeding up your website there are a few excellent tools you can use for free that can help you measure your progress.

Pingdom Website Speed Test

http://tools.pingdom.com/fpt/

This is a great tool that I use a lot to measure our website’s loading time, with options to test from different locations as well. One thing worth noting is that social networking ‘share’ or ‘like’ boxes (like we have) will make the results appear a lot slower than they actually are. Some of the CDNs involved seem to have highly variable response times. They also often make AJAX requests that continue to run after your page has finished loading, and some tools will include this time when measuring your page load time. So when testing a website I tend to pick a page that doesn’t have any of these social buttons on it. This isn’t really cheating: I think the base page load time is the most important, and the delayed loading of social buttons is generally out of your control and not the meat of the content the visitor is most interested in.

YSlow

http://developer.yahoo.com/yslow/

Developed by Yahoo, this free tool lets you know what areas on your website can be changed to improve your page load time. It’s also very useful for showing you if you’ve set your caches up correctly! It can be installed as a Chrome extension too.

How to make your website faster

We’ve worked quite hard to make sure our website loads fast. The YSlow page describes a lot of techniques in great detail and is an excellent resource. I’m not going to try to write a replacement for YSlow’s guide as they are far more knowledgeable than I and go into far more detail, but instead I will just give an overview of what I consider the most important techniques and my experiences with implementing them.

The most obvious – page size

This is the most obvious factor, but it is often overlooked. The more data you have on your page, the longer it will take to transmit. This is partly why I’m generally against the use of a CMS (content management system) where possible. I’ve observed a lot of websites that are bloated with a lot of HTML and JS includes. Some are bloated to the point of approaching 1MB of raw HTML code, which is insanity. Our HTML5 game engine page is probably in the realm of ‘normal’, and the raw HTML code is only 15kb in size. If you hand control of your content over to a CMS you also lose a lot of control over the code, which can severely hurt your page size.

Serve your pages compressed

In IIS there’s a simple switch (under ‘Compression’) which allows you to send content from your server to the client in compressed form. There are virtually no downsides to using this on modern servers. The benefits are obvious: if we use YSlow to analyse our homepage and expand the CSS tab, we can see the 7.7k CSS file is sent gzipped at a total size of 1.7k. That’s about 22% of its original size, so much less data needs to be transmitted. Compression algorithms work excellently on text, especially highly repetitive text. CSS, HTML and JS by nature have a lot of repeating chunks inside them, which makes them compress very efficiently.

GZip compression is also well supported, and according to HTTP-compression.com ALL common web browsers support it. Internet Explorer has had support for this since version 4.0.

If your server isn’t an antique, it shouldn’t have any noticeable impact on performance either (except perhaps in some edge cases which I’ve yet to see or hear about).
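
For reference, the equivalent web.config settings look something like the sketch below, assuming the static and dynamic compression modules are installed (exact settings vary by IIS version):

<xmp style="font-size:14px;padding:0;margin:0"><configuration>
  <system.webServer>
    <!-- Enable compression for both static files and dynamic responses -->
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration></xmp>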

Put your Javascript at the bottom

Loading Javascript files will block other downloads on the page. The HTTP specification recommends that browsers download up to 2 resources in parallel from each hostname. (Edit: chrisacky on Hacker News correctly points out that a lot of modern browsers exceed this guideline. For example, Chrome and Firefox allow 6 connections per hostname.) However, when your browser is downloading JS it will block all other downloads, even if they are on different hostnames. Putting your Javascript at the top (in the head tag) can therefore create seemingly sluggish behaviour and a perceived slower loading time, since it takes longer for anything else to render on screen.

It’s best to put your script includes at the bottom of the page, just before the closing body tag. This could create some design problems for websites (again, another problem with CMSs) and may not always be as simple as just moving them, but it’s advisable where possible. The HTML5 games Construct 2 exports use this technique to ensure the game loads last, after the rest of the page is ready. If you specifically design your scripts so they can load in any order (not just the order they’re listed in the HTML), you can also look into using the async or defer attributes, but this can increase the difficulty even further in some cases.
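
As a rough illustration with placeholder file names, the includes sit just before the closing body tag, with defer on a script that can tolerate loading out of order:

<xmp style="font-size:14px;padding:0;margin:0">  <!-- Page content renders first... -->

  <!-- ...then scripts load last, just before the closing body tag -->
  <script src="//static1.scirra.net/js/common.js" type="text/javascript"></script>
  <script src="//static2.scirra.net/js/page.js" type="text/javascript" defer></script>
</body>
</html></xmp>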

Use Sprite Sheets

Every time your browser makes an HTTP request there are overheads in making the request itself. If you have a page with a dozen or so small icons, like this page: http://www.scirra.com/html5-game-engine, you are making a dozen or so HTTP requests with their associated overheads!

This is why it’s best to use a sprite sheet: http://static2.scirra.net/images/store-icons.png

All the images in one file means there’s only one HTTP request, so the cost of the overhead is only paid once. You can use this image as a background image for a div, for example as follows:

<xmp style="font-size:14px;padding:0;margin:0">.store-icon {
 width: 32px;
 height: 32px;
 background: url(http://static2.scirra.net/images/store-icons.png);
}

</xmp>

I’ve found that using inline CSS to position the background can be more maintainable than defining the position inside a CSS class. There is little difference though, so I wouldn’t think it matters either way.
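
For example, the class supplies the sheet and an inline style picks which icon to show (the offset here is hypothetical; each 32px step along the sheet is a different icon):

<xmp style="font-size:14px;padding:0;margin:0"><!-- Shows the icon that sits 64px along the sprite sheet -->
<div class="store-icon" style="background-position: -64px 0"></div></xmp>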

Sprite sheets are good for decorative elements. Their disadvantage is that they are very unfriendly for Google image search - but then again, who cares about that for these elements?

Sprite sheets work best for uniform elements - that is, images that have the same dimensions. These are easy to maintain. Beware if you start adding different sized images to your sprite sheets, it can become fiddly to maintain compared to standalone images.

Another benefit of sprite sheets is using them for mouseover images. If you have all your images and mouseover images in one sprite sheet, they all load at the same time, so when the mouseover image is needed it switches instantaneously. With a separate image file the user may experience a delay of a few hundred milliseconds as the mouseover image loads.
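
A sketch of the technique (the class name, file name and offsets are hypothetical): put the normal and mouseover states in the same sheet and simply shift the background position on hover.

<xmp style="font-size:14px;padding:0;margin:0">.buy-button {
  width: 100px;
  height: 40px;
  background: url(//static1.scirra.net/images/buttons.png) 0 0;
}

/* The mouseover state sits 40px further down the same sheet,
   so nothing new needs to be downloaded on hover */
.buy-button:hover {
  background-position: 0 -40px;
}</xmp>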

Cookieless Domains and Multiple Domains

If you’re hosting your images on the same domain as your webpage and you have cookies of some description (don’t forget sessions still use a cookie!), then with every request for image files, CSS files, JS and so on, the cookie data is also sent with the request. This adds to the amount of data that needs to be uploaded from the client.

We host almost all our static content on the domain Scirra.net, which doesn’t use cookies at all. This makes loading those resources faster than if they were hosted on Scirra.com.

As mentioned before, browsers are recommended to download only 2 files from each host in parallel. Multiple cookieless domains therefore allow more resources to download in parallel, improving your page load time. We’ve set up 4 subdomains, static1.scirra.net to static4.scirra.net.

Setting up these domains is remarkably simple. You create them on your domain and make them all point to the same directory on your server. Now if you have 12 static resources on a page, you can distribute them evenly over the static domains, which helps with parallel loading.
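
How you point the subdomains at the server depends on your DNS host. As a rough sketch, in a BIND-style zone file it is just a few records aliasing the same host (the target name is illustrative), with the subdomains then bound to the same site on the web server:

<xmp style="font-size:14px;padding:0;margin:0">; Illustrative zone entries: all four subdomains resolve to the same server
static1  IN  CNAME  www.scirra.net.
static2  IN  CNAME  www.scirra.net.
static3  IN  CNAME  www.scirra.net.
static4  IN  CNAME  www.scirra.net.</xmp>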

One important thing to note: if you load static1.scirra.net/images/construct2-splash.jpg on one page and then load it from the static2 subdomain on another page, the browser will re-download the image rather than use its cache! Ensure that once a resource is assigned to one subdomain, it is permanently served from that subdomain. Although it is accessible on all the subdomains, it should only ever be referenced from one, so the browser can cache the response.

To solve this issue on our website, I wrote a small function, `AllocateStaticPath`. When we want to put a picture on our website we put it in our code as:

<xmp style="font-size:14px;padding:0;margin:0"><img src="<%=CommonFunctions.AllocateStaticPath("/images/construct2-splash.jpg")%>"
width="672" height="359" /></xmp>

We then define the function:

<xmp style="font-size:14px;padding:0;margin:0">/// <summary>
/// Turns a path for a static resource into its related static URL on static[1-4].scirra.net.
/// </summary>
/// <param name="resource">Original URL (eg "/images/picture.png")</param>
/// <returns>Full URL to resource</returns>
public static string AllocateStaticPath(string resource)
{
	// Get the root static domain (//static$.scirra.net)
	string returnPath = Settings.CookielessDomainImagePrefix;
	resource = resource.ToLower();

	// Add missing slash if one doesn't exist
	if (resource.Substring(0, 1) != "/")
		resource = "/" + resource;

	var splitpath = resource.Split('/');
	var chars = splitpath[splitpath.Length - 1].ToCharArray();
	var seed = 0;
	for (int i = 0; i < chars.Length; i++)
		seed += chars[i];
	var random = new Random(seed);

	// Set the static ID subdomain
	returnPath = returnPath.Replace("$", random.Next(1, 5).ToString());

	return returnPath + resource;
}</xmp>

This function sums the characters of the file’s name to produce a seed value, then picks a random number from 1 to 4. Because the seed is the same for a given filename, it returns the same subdomain every time. The image tag above then renders with its assigned static URL, along these lines (the exact subdomain depends on the filename):

<xmp style="font-size:14px;padding:0;margin:0"><img src="//static3.scirra.net/images/construct2-splash.jpg" width="672" height="359" /></xmp>

As you can see, this is incredibly easy to maintain once it’s all set up, and it yields real benefits in page loading speed. You should notice that static resources are fairly evenly distributed across the static domains on each page. You may be unlucky and find a bias towards one domain on important pages like your homepage, but these can be manually tweaked.

We found that for our website 4 static domains seems to be most effective. You may find that you need more or fewer (though I would generally caution against more than 4, as you will start to suffer slowdowns from too many DNS lookups).

One interesting trick is not to specify `http://` at the start of your URLs for static elements, but simply a double forward slash `//`. This is a very useful and little-known URI feature that picks up the protocol of the current page. It is very handy if you’re switching between HTTPS and HTTP, as the resources load over whichever protocol the page uses and don’t throw security warnings!
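
For example (the file name is hypothetical):

<xmp style="font-size:14px;padding:0;margin:0"><!-- Loads over HTTP on HTTP pages and over HTTPS on HTTPS pages -->
<img src="//static1.scirra.net/images/logo.png" width="100" height="40" /></xmp>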

Always specify your image sizes

It’s all too easy not to add width and height attributes to every image. However, they are incredibly important for perceived loading time! If an image’s dimensions are not known to the browser rendering the page, then the browser will constantly resize various elements on the page as it works out how big each image actually is. By specifying the dimensions, the browser can reserve that fixed space as it renders the other elements, which stops content jumping around the page. Although this has no effect on actual page load time, it has a big impact on the usability of your website and on perceived loading time.
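
As a simple illustration (mirroring the splash image used earlier; the subdomain is illustrative):

<xmp style="font-size:14px;padding:0;margin:0"><!-- The browser can reserve a 672x359 box before the image has downloaded -->
<img src="//static1.scirra.net/images/construct2-splash.jpg" width="672" height="359" alt="Construct 2" /></xmp>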

External JS

It’s important to keep your Javascript in external files where possible, since it allows the browser to cache the scripts rather than download them on every page view!

It sounds pretty simple, but often it is not. Sometimes Javascript needs to contain variables that differ for each user, which makes it seem difficult to move it into external files. A lot of websites I’ve seen have huge swathes of Javascript code on their pages for this very reason. We solved this issue by rendering global variables directly to the page; our external Javascript files then reference these variables when needed. This allows those Javascript files to be cached.
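
A rough sketch of the approach, with invented variable and file names: a tiny inline script sets only the per-user values, and the cacheable external file reads them when it needs them.

<xmp style="font-size:14px;padding:0;margin:0"><!-- Rendered into the page, different for each user -->
<script type="text/javascript">
  var currentUserID = <%=UserID%>;
  var currentUserName = "<%=UserName%>";
</script>

<!-- Cached by the browser, identical for every user -->
<script src="//static1.scirra.net/js/site.js" type="text/javascript"></script></xmp>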

Caches

Check that browsers correctly cache your static resources. Aggressive caching is best, combined with an incrementing querystring parameter on the resource URL to force browsers to re-download the file when you change it. This helps prevent static resources being re-downloaded on every page view.
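
For example, a stylesheet reference might carry a version number that you bump whenever the file changes (the parameter name and path are just a convention):

<xmp style="font-size:14px;padding:0;margin:0"><!-- Changing v=12 to v=13 forces browsers to fetch the updated file -->
<link rel="stylesheet" type="text/css" href="//static1.scirra.net/css/site.css?v=12" /></xmp>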

Content Delivery Networks

We have not utilised a CDN yet, but we are looking into it. CDNs are well documented and incorporate a lot of the benefits described above of their own accord. They are geographically dispersed, which means there is less geographical bias in loading times for static resources.

Other tweaks

There’s a host of other tweaks you can make, depending on how obsessive you are about the speed of your page! It’s important to remember, however, that you reach a point where the time you put into new changes outweighs any gained benefit. It can also cost you a lot of time in future maintainability.

A Fast Server

We recently moved server, away from a 2 core 2.2GHz system with 2GB of RAM to a 4 core 3.2GHz server with 8GB of RAM. We also now use a licensed SQL Server, which removes the hardware restrictions of the Express edition, and we’ve noticed pages are loading a lot quicker and our CPU isn’t maxing out during peak times! When you’re nearing your server’s processing limits it’s important to upgrade to better hardware to give your site more breathing space for growth. More RAM also means more of your application and database can sit permanently in memory.

The slowest webpage in the world is a page that never loads. Although I appreciate the comedy value when people congratulate startups for breaking under the ‘Hacker News’ or ‘Reddit’ effect (when the site gains so much traffic it falls over), it really is a bit of a disaster for a startup: they miss out on a boatload of visitors and potential customers and press for the sake of saving a few extra dollars a month on server costs. It’s important to have a server that can cope with a sudden influx of traffic, so do upgrade when your normal daily load starts to approach its capacity. Don’t wait until it’s too late!

The Results

Below I’ve printed out the page load results between us and some competitors. A couple of points to note are that:

  • Pingdom happens to test from a NYC server, which is where our server is located; however, the other test locations show there is significant geographical bias in different regions
  • We’ve used the homepages of competitors as test pages because homepages are arguably the most important pages
  • Social buttons add a large amount of time to page loads. As these don’t form the meat of the content, we’ve stripped them from our page for the tests (no other competitor’s homepage has these social buttons).

Page | Page Size | Amsterdam, NL | Dallas, USA | NYC, USA | Average
scirra.com?t=1 | 530kb | 1.18s | 0.81s | 0.5s | 0.83s
gamesalad.com | 1.5mb | 2.45s | 0.72s | 1.1s | 1.40s
yoyogames.com | 1.4mb | 2.21s | 3.12s | 2.13s | 2.47s
stencyl.com | 747kb | 3.88s | 1.29s | 1.9s | 2.36s

Tests were run at 3.30pm GMT+1 29 Feb from Pingdom.com.

Why?

These performance tweaks might all seem like a time drain, but I firmly believe they aren’t. If your website loads faster than your competitors’, you have a distinct advantage. It may be hard to measure, but I’m convinced this advantage exists. Visitors will be more likely to visit more pages on your website, they will be more likely to enjoy the experience, and they will be less likely to lose their train of thought, which is valuable when pitching an idea or product to prospective customers and users.

Startups should be leading the field: they have the agility and knowledge to execute these techniques and gain another much-needed competitive edge!
