
08.01.2018

16 min read

Running WordPress entirely behind Amazon Cloudfront to achieve global sub-1000ms page load times

This article was updated on: 07.02.2022

This post was first written in 2018 and, whilst at the time of updating (January 2020) it is not wrong, there are now a number of other methods of speeding up your WordPress website.

We now recommend Kinsta web hosting for standard WordPress multi-page applications, as their bespoke caching layers are just as effective as those seen in this tutorial, and we also strongly advocate performant JavaScript Single Page Applications (SPAs) if you or your developers have the skillset.

Use Amazon as a full page cache solution and enjoy the benefits of high availability and global coverage – all whilst reducing your site’s perceived response time to a fraction of what it was – with this guide.

I’m sure I’m not alone when I go to spin up a new server and err on the side of caution when picking the instance size in Amazon’s dashboard. “Small” might be too small for even a modest B2B lead generation site like ours.

Our traffic is growing monthly, and we periodically run aggressive paid campaigns around asset downloads – so we need to be somewhere fit for the future that can handle traffic spikes and an increasingly global audience.

The aim of this exercise is to achieve a page load time under 1 second, which, as Google Developers note, is a reasonable target for mobile users to see the above-the-fold content. On desktop, we'd expect faster load times, so we aim to load the entire page within this time. We also want to achieve this result, within some margin, globally.

Amazon Web Services

There are plenty of services out there which can accomplish similar results to those shown here, but our preference is the Amazon Web Services platform as we’re familiar with it. In essence, the steps we need to accomplish are (don’t jump ahead – there’s some required setup below):

  1. Environment configuration
  2. Configuring your current setup
  3. Setting up Cloudfront
  4. Additional custom behaviours
  5. Local testing
  6. Going ‘live’ with your Cloudfront distribution

1. Environment configuration

Before we begin, be aware that many WordPress Cloudfront blog posts simply describe using Cloudfront as an asset-only, origin-pull CDN (content delivery network) mirror rather than as a full page cache. Under an asset CDN model, your web server is still handling every HTTP request to deliver pages. Conversely, with a full page CDN setup, all of your non-transactional HTTP requests are simply served a static/snapshot HTML version of your web page in a split second. The request doesn’t touch your server until customers start engaging with shopping carts or filling in forms (or you drop them a cookie – more on this later).

You don’t need to be using Amazon Web Services to host your website for this to work. But in the aim of keeping everything together, it’s worth pointing out you can host your website on Amazon’s EC2 servers (behind a load balancer if you wish) and you can also manage your DNS through Amazon’s Route 53 DNS service, too.

You will likely want an SSL certificate on your origin server, but you can also manage your public-facing SSL certificate via AWS’ Certificate Manager, which provides a good level of security and trust for your customers. We chose Let’s Encrypt as the TLS supplier for our origin server, but during this setup process Let’s Encrypt won’t work, as it needs a publicly reachable domain, so we used a premium SSL certificate we had that was about to expire to bridge the gap. Alternatively, you could launch insecurely on HTTP and then go through the upgrade process retrospectively (origin first, then Cloudfront as discussed below – you can then change the behaviour of the Cloudfront distribution to upgrade HTTP to HTTPS traffic for you).

2. Configuring your current setup

It’s a lot easier to make these changes on a duplicate server, rather than your live one, for obvious reasons – but I’ll leave the exact process for you to decide. Bear in mind some of the steps below will temporarily impact public site delivery.

You will need to ensure your origin server can be served at different URLs. We chose a subdomain to keep DNS management simpler. Let’s say we ensure our website is also addressable and accessible at example.impression.co.uk, which you can achieve by adding an additional alias to the server_name directive in your vhost configuration and then restarting the process. For NGiNX, this would look something like;

server {
    listen 80;
    server_name impression.co.uk www.impression.co.uk example.impression.co.uk;
    ...
}

Once this configuration has been changed, you are ready to set up your TLS/SSL certificate and Cloudfront distribution.

In AWS, select Certificate Manager and follow the intuitive steps. Remember this is for your main public www. domain. You’ll need to verify your ownership via a standard administrator verification email, but the process of creating a certificate is relatively painless. (I believe you can also import your own certificates if you need to, but we haven’t completed this ourselves just yet so cannot recommend this approach.)
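
If you prefer the command line, the same request can be made via the AWS CLI. A minimal sketch, assuming the www. hostname you want on the distribution and email validation (note that certificates used by Cloudfront must be created in the us-east-1 region):

aws acm request-certificate \
  --domain-name www.impression.co.uk \
  --validation-method EMAIL \
  --region us-east-1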

Cloudfront does allow you to extend the “Cache-Control” header to minimise your AWS bills, but there are a few settings you will want to make on your server before getting started. These are typical optimisations which you should consider rolling out regardless;

  • Enable gzip compression
  • Remove query string from static resources
  • Extend browser cache lifetime
  • Compress and concatenate CSS and JavaScript files

I’ve covered all of these briefly below for an NGiNX setup. Apache alternatives are available, though, and actually Google’s Pagespeed module for both Apache and NGiNX is very good at adding much of this for you.

Also, many WordPress plugins achieve the same, but we prefer less WordPress plugin code bloat where possible.

Enable gzip compression

By default, gzip compression is likely already enabled, but it’s always good to check. Smaller files mean less bandwidth transferred to your website visitors, and with Cloudfront serving this for you, it also translates to a small cost saving.

Check your nginx.conf file, located here: /etc/nginx/nginx.conf

The most important lines below are gzip on and gzip_types;

gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

If you needed to edit this file, ensure it’s saved, then test the configuration and reload the NGiNX process.
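
If you’re unsure of the exact commands, something like the following works on most modern distributions (a sketch assuming a systemd-managed NGiNX service):

sudo nginx -t
sudo systemctl reload nginx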

Remove query string from static resources

Some services will not cache files which are referenced with query strings in their URLs, because the query string could feasibly be making the file response dynamic in some way, or referencing a particular file version. There are a couple of simple WordPress functions.php drop-ins which can achieve this quickly for you – as with all WordPress code editing, please check this is a good fit for your needs on a local or staging environment first;

function impression_remove_asset_version( $url ) {
    $parts = explode( '?', $url );
    return $parts[0];
}
add_filter( 'script_loader_src', 'impression_remove_asset_version', 15, 1 );
add_filter( 'style_loader_src', 'impression_remove_asset_version', 15, 1 );

Extend browser cache lifetime

This is important because, without knowing how long you expect a resource to stay up to date, browsers will request it on every page load. Images and other media very rarely change once they’re uploaded, so include a snippet like this in your sites-available configuration and your users’ browsers will stop repeatedly requesting your files – which will save on your Cloudfront bills, too.

location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires 1y;
}

Compress and concatenate CSS and JavaScript files

With WP Bedrock, we use Laravel Mix in our build process to ensure we get fresh file names on each code compilation. This is a cache-busting way to publish new site content, but it may not be available to everyone, depending on your build process. What we would suggest is ensuring, in some way, that your CSS and JavaScript file names are versioned so that you can take advantage of this caching behaviour, too.

In addition, like many good build processes now, you should attempt to concatenate all similar file types, and also to compress them into smaller files. This will save initial bandwidth and also on request volume. We use Webpack at Impression, but Gulp, Grunt, Codekit, etc, are all good alternatives and they will achieve similar results.

Once you’re all set up, you should see the following in your dashboard:

Finally, before we set up the Cloudfront distribution, the other required step, should you wish to keep server-style logs, is to set up an empty S3 bucket. This is where the logs will be stored, as seen in the next image. The format is a little different to Apache/NGiNX server logs but it’s a good idea to get these stored ready for when they’re needed. If they are, then you’re likely going to want to download and concatenate them via a command line/terminal window as each compressed log file is tiny.
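
As an illustration, pulling those logs down and stitching them together might look something like this (the bucket name and prefix here are hypothetical):

aws s3 sync s3://example-cloudfront-logs/www/ ./cf-logs/
gunzip -c ./cf-logs/*.gz > combined-access.log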

3. Setting up Cloudfront

To set up Cloudfront, head to your Amazon Cloudfront dashboard, click “Create Distribution” and then select “Get Started” under the Web heading.

Enter your newly publicly accessible origin domain name, and either set or accept the default ID. I also like to set the Origin Protocol Policy to “match viewer” and then handle HTTP -> HTTPS upgrades at both the server and Cloudfront levels.

Origin headers

Next up, if you want to be extra secure, you can set headers here which are automatically appended to requests when they are sent to the origin. If you wanted to password protect your origin via htpasswd or similar, this is where you’d insert the username and password into the Authorization header in order to allow requests through freely.
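
For example, with Basic authentication the header value is just the Base64-encoded username:password pair – the credentials below are purely illustrative:

# Generate the value for a hypothetical deploy:S3cret pair
echo -n 'deploy:S3cret' | base64
# In the Cloudfront origin settings, add a custom header:
#   Header name:  Authorization
#   Header value: Basic <output from the command above>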

Default caching behaviours

Depending on your current (or planned) HTTPS policies, you may want to allow both HTTP and HTTPS, or upgrade all requests to HTTPS at the Cloudfront level. Typically we choose the upgrade option here. These are the default cache behaviours; you can override them with specific settings for specific URL path patterns when we come to add additional custom behaviours.

Whitelist headers

Within this same section, you can further specify your default header behaviour. We need to forward requests on to the origin server with the original hostname appended, for WordPress to handle the requests properly. To do this, select “Whitelist” for the value of “Cache Based on Selected Request Headers” and then select “Host” from the list as per this screenshot.

Whitelist cookies

WordPress relies on a number of cookies to function, which are listed below. Under “Forward Cookies” select “Whitelist” and paste in the list below. PHPSESSID is required to persist server sessions, and the others are for WordPress admin/logged-in sessions to maintain an authorised state.

  • PHPSESSID
  • wordpress_logged_in_*
  • wordpress_test_cookie
  • wp-settings-*

Global distribution?

Regarding price classes, you’ll have the option here to choose just how global your content needs to be. The options are clear from the image below – if you currently receive, or aspire to receive, global traffic then there’s only a small step up in cost to select “All Edge Locations” here.

TLS / SSL Certificates

Under the SSL section, select the SSL/TLS certificate which you generated earlier. If your certificate isn’t visible, use the reload button or check that the certificate has validated your ownership and generated successfully.

Access logging

Logging is also configured on this setup screen. Just Enable it from the checkbox and select the S3 bucket name which you set up earlier. Set a prefix if you intend to share this bucket with other sites or use it for other purposes, too.

4. Additional custom behaviours

In addition to the defaults set above, there are some additional cache Behaviours required for WordPress to function properly. All custom cache Behaviours are URL pattern based – so you can easily account for URLs like wp-admin, .jpg, and specific page URLs. Switch to the Behaviours tab and click Create Behaviour to get started.

The purpose of some of these additional rules is to allow HTTP requests to punch straight through the Cloudfront cache without being served incorrect, cached content along the way. The other rules are designed to force long lifetimes on asset files, for example JavaScript, CSS and media.

Disable caching

Create a new Behaviour and copy across the settings from my image below. I chose here to redirect all HTTP traffic to HTTPS, as that’s what our setup at Impression supports. I input the pattern discussed below and, most importantly for keeping this pattern out of the cache, set the option “Cache Based on Selected Request Headers” to All.

To complete the setup, ensure all WordPress related routes also allow for POSTing data in and click save. Next up, complete the same for each of the below if appropriate for your setup.

  • wp* (this covers all WordPress admin and JSON routes. It also covers framework routes and WordPress subfolder installs caused by frameworks, for example Bedrock.)
  • .well-known (this is a required route for Let’s Encrypt postbacks)
  • xmlrpc.php (As far as I’m aware, you can treat this as optional; however, deliberately leaving out this WordPress file will remove many of WordPress’s remote publishing capabilities)
  • purge* (we also use PHP-FPM caching on our servers as a backup, and because it integrates with our continuous deployment setup, so we allow for POSTing and GETing the /purge URLs too. You may not need this one.)

Extending cache lifetime

In contrast to the steps above, where we want to disable caching, for asset files and even the HTML pages we want to extend both their Cloudfront and browser cache lifetimes to much larger values.

Images, versioned code files (CSS, JavaScript) and other media, in theory, will not be changed – ever. So these files may persist in the cache.

To configure these settings for such files, proceed as above, but leave request types to only GET, HEAD. Under “Object Caching” select “Customise”, and then change the maximum and default TTL values to a very large number — something like 31536000 (1 year).

Also, set “Compress objects automatically” to “Yes” so that responses to requests carrying the “Accept-Encoding: gzip” header are compressed by Cloudfront.

5. Local testing

If you have completed your setup and have added your custom Behaviours, then you’re ready to confirm everything is working well. Whilst you could go straight ahead and hope, testing is in fact very simple to do via your hosts file. The reason is that the origin server is already visible to the Cloudfront distribution, so no further Cloudfront changes are required in order to verify success.

To get started, use Terminal (or a Windows equivalent) to get a valid IP address for your Cloudfront distribution. These can change over time, and in my experience don’t do so very frequently, but either way do not rely on them – always use a CNAME in a live environment.

$ dig +short d286mopxzp4hip.cloudfront.net
54.230.9.186
54.230.9.26
...
54.230.9.210

Once you have a valid IP, just add it to your local hosts file as per any other DNS live test and then, from a cookieless browser (or curl via Terminal), request a page you’d like to test. It’s likely that on the first request you won’t see a Cloudfront Hit header (as the file isn’t yet in the cache), but on the second you should see the file being served from the Cloudfront cache.
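
As a sketch, using one of the IP addresses returned by dig above (yours will differ), the hosts file entry simply maps your public hostname to that edge IP:

# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
54.230.9.186  www.impression.co.uk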

The headers you’re specifically looking for are shown below;

via: 1.1 d887f5d446ae9c64e0365e1b394dac21.cloudfront.net (CloudFront)
x-amz-cf-id: aIldEmX8N6PwXX9LfOiK3TUSAj1oCrOo_C9HiN-ZoV0zxgDF4SNCHw==
x-cache: Hit from cloudfront
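
A quick way to check for these from the command line, assuming the hosts file entry above is in place:

curl -sI https://www.impression.co.uk/ | grep -iE 'via|x-cache|x-amz-cf-id'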

Given your custom Behaviour settings, it’s also worth trying to log in to WordPress and checking that Cloudfront isn’t caching those logged-in pages. Test your contact forms, any shopping baskets and similarly dynamic functionality, too. Our rules above should account for most situations, but without looking at your specific setup we can’t guarantee that.

6. Going ‘live’ with your Cloudfront distribution

To go live, once tested, just change your public DNS settings on your name servers to point the www. subdomain to your Cloudfront distribution’s web address via a CNAME. If you don’t currently have a non-www to www redirection in place, use a freely available redirection service (a quick Google search will surface several) so that your origin’s IP address/identity stays concealed.
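
In zone file terms the record is a straightforward CNAME. A sketch using the distribution hostname from the dig example earlier:

www.impression.co.uk.  300  IN  CNAME  d286mopxzp4hip.cloudfront.net.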

If you are using Let’s Encrypt, now is a good time to run the certbot renew command via your command line, in order to double-check that requests to /.well-known are getting through to the origin OK.
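
A dry run is a safe way to do this without consuming rate-limited renewals:

sudo certbot renew --dry-run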

If, during testing or following a future deployment, you need to clear the Cloudfront cached version of an asset or page, you can do this under the Cloudfront Invalidations tab. Try to match the file as precisely as possible, as clearing global Cloudfront caches is expensive in resource terms, and there’s also a small monetary cost to you if used excessively.
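
Invalidations can also be issued from the AWS CLI – a sketch with an illustrative distribution ID and paths:

aws cloudfront create-invalidation \
  --distribution-id E1A2B3C4EXAMPLE \
  --paths "/wp-content/uploads/hero.jpg" "/services/"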

Full page WordPress Cloudfront caching results

Impression has been running on a very similar setup to this for a little while now and we’re very pleased with the results. We’ve managed to achieve near-consistent sub-1000ms response times via Pingdom Tools in our local test location (Sweden), and when we test across the Atlantic in New York we get very comparable results too. Here are our results;

If you have any feedback or suggestions, or comments of any kind, please leave them below. If you think we might be able to help you increase your website’s resilience and performance then please get in touch with our team. If you successfully implement this through our instructions, please post a screenshot like ours below!