Bastian starts by saying that PageSpeed Insights is not enough!
The Chrome team introduced the need to translate user experience into metrics. Examples include time to first paint (when the browser renders anything at all), time to first contentful paint (when it renders the first bit of content), and time to first meaningful paint. This last one marks the point at which the first ‘hero’ element appears, and it’s the metric we really need to care about, because that element is what visitors came to the page for.
We need to think about our templates in terms of the most important elements. How do we make sure that those specific elements load as fast as they can, without caring about the rest? If you want to figure this out for your own site, you can open the Performance panel in Chrome DevTools and see when each stage is rendered.
You can scale this with the Performance Observer. You take your regular Analytics data and add a snippet in manually (this only works in Chrome) – this allows you to see page performance in more detail. You can break these events down further by URL to figure out how fast hero elements are loading on any given URL.
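A minimal sketch of what such a snippet might look like, assuming the standard analytics.js `ga()` queue is already on the page (swap in your own tracking call as needed):

```html
<script>
// Observe paint timings and report them to Analytics as timing hits.
// Paint timing entries are currently only reported by Chrome.
if ('PerformanceObserver' in window) {
  new PerformanceObserver(function (list) {
    list.getEntries().forEach(function (entry) {
      // entry.name is 'first-paint' or 'first-contentful-paint'
      ga('send', 'timing', 'Paint Metrics', entry.name,
         Math.round(entry.startTime));
    });
  }).observe({ entryTypes: ['paint'] });
}
</script>
```

Because the timing hit carries the page URL, you can then segment these numbers by URL in your reports.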
GTM is a bit easier and involves just copying and pasting code, which can be a nicer way to do it.
You can combine your insights with Data Studio to report on them.
We need to talk about the critical rendering path to understand how to make pages and elements load faster. The critical rendering path is concerned with resources involved in loading the important part of the page – the bit that appears above the fold on the devices that you care about. What’s happening below the fold is much less important at first glance.
Briefly, there are two concepts to understand here. The first is the CSSOM (CSS Object Model), which is the map of all the CSS style information for the page in question. The browser combines the CSSOM with the DOM (built from the HTML) to render the page.
The browser has to download the stylesheets before it can put the CSSOM together. The problem is that the browser needs these resources before it can display anything. But if you look at what Google does, you won’t see a single CSS file requested. The reason is that it’s much easier to inline everything – that way the browser doesn’t need to make any external requests. However, this is awful for developers and really limits what they can do from a maintenance perspective. Instead, we can build two stylesheets: one for the critical view, and one for everything below the fold. This can still be a bit of a pain, but criticalCSS on GitHub makes it much easier to extract the critical styles for the resolutions you should be focusing on if you want to implement this. If you want to play around at first, try criticalcss.com.
To bring it all together, you inline your critical CSS, then load the non-critical CSS asynchronously with rel="preload" and apply it once it has downloaded using an "onload" handler. You also need a fallback for visitors with JavaScript disabled.
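The pattern described above can be sketched like this (file names are placeholders):

```html
<head>
  <style>
    /* Critical (above-the-fold) CSS, inlined at build time,
       e.g. extracted with a tool like criticalCSS */
    .hero { max-width: 100%; }
  </style>

  <!-- Non-critical CSS: preloaded, then applied once downloaded -->
  <link rel="preload" href="/css/non-critical.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">

  <!-- Fallback for browsers with JavaScript disabled -->
  <noscript>
    <link rel="stylesheet" href="/css/non-critical.css">
  </noscript>
</head>
```

The "onload" handler swaps the preloaded file into a live stylesheet, so the below-the-fold styles never block the first render.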
Is it worth the effort? Bastian ran a test on a standard WordPress site. The usual performance optimisations improved the basic install; going one step further and splitting the CSS as suggested made it 30-40% faster than the already-optimised version. That’s a significant improvement, so it’s definitely worth the effort.
Bastian also touches on images. The problem with images is that our common formats are old, and images account for a significant share of what pages load (around 60-70% of page weight). Because of this, Google came up with an alternative called WebP, but it never really picked up and isn’t supported across all browsers. You can still use WebP with on-the-fly replacement to make sure unsupported browsers (Edge, Firefox etc.) get a compatible format, but creating every variant manually isn’t really scalable.
There are tools out there which apply modern, smaller image formats on the fly (e.g. WebP and JPEG XR). If your site is image-heavy, these are worth playing around with.
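One markup-level way to serve WebP with a safe fallback (an alternative to server-side on-the-fly replacement) is the picture element – browsers that understand WebP pick the first source, everything else falls back to the JPEG. File names here are placeholders:

```html
<picture>
  <source srcset="/img/hero.webp" type="image/webp">
  <img src="/img/hero.jpg" alt="Hero image">
</picture>
```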
What you definitely shouldn’t do is use a massive background image on your site!
Finally, we get to fonts. Custom web fonts are pretty and look great, but they’re often really slow. About 70% of sites use custom web fonts, and the additional data causes render-blocking issues: the CSS font request stops rendering until the download has finished.
You’d think that you could load fonts asynchronously, which is a good idea in theory. But there are two problems: a nasty flash of invisible text (FOIT) or a flash of unstyled text (FOUT) as the browser swaps fonts in. Either way, it’s really bad for UX.
What you can do instead is use a fallback font that is styled almost identically to the intended web font. A tool called Font Style Matcher can help with this. There’s also new stuff to play around with for handling invisible text via the various font-display strategies. Before you decide on your solution, work out what you care about, then work from there.
If you can do one thing, use ‘font-display: optional’ in your CSS. The browser shows the fallback font straight away, downloads the web font in the background, and uses it (from cache) on subsequent page loads. AMP pages will implement this by default in the future.
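Here is a minimal sketch of what that looks like in a stylesheet (font names and file paths are placeholders):

```css
@font-face {
  font-family: 'MyWebFont';
  src: url('/fonts/mywebfont.woff2') format('woff2'),
       url('/fonts/mywebfont.woff') format('woff');
  /* 'optional': use the fallback if the web font isn't available
     almost immediately; the download still completes in the
     background, so the cached font is used on the next visit. */
  font-display: optional;
}

body {
  /* A matched fallback (see Font Style Matcher) keeps the
     layouts of the two fonts close, so any swap is subtle. */
  font-family: 'MyWebFont', Georgia, serif;
}
```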
You can see Bastian’s slides here: https://www.slideshare.net/bastiangrimm/web-performance-madness-brightonseo-2018
This post is one of 28 in our Brighton SEO 2018 collection
- Brighton SEO: Christoph C. Cemper – 20 Free SEO Tools You Should be Using
- Brighton SEO – Ways to definitely get links for your business
- Brighton SEO: 5 Truths The Gurus Won’t Tell You About Facebook Ads
- Brighton SEO: Alex Major – Comparison Shopping: The Future of Google Shopping Ads
- Brighton SEO Keynote – Live Google Webmasters Hangout with John Mueller & Aleyda Solis
- Brighton SEO – Killing giants and competing in the SERPs
- Brighton SEO: Jeroen Maljers – Hidden Messages: The Psychology Behind PPC & SEO
- Brighton SEO: Arianne Donoghue – The PPC Automation Revolution Is Coming
- Brighton SEO: Laura Hogan – Big Links for £0
- Brighton SEO: Nichola Stott – Speed metrics in context of the UK Top 5,000 websites
- Brighton SEO 2018: We need to talk about competitor campaigns
- Brighton SEO: Bastian Grimm – Web Performance Madness: Critical Rendering Path Optimization
- Brighton SEO: Rob Bucci – Featured Snippets From Then To Now, Volatility, & Voice Search
- BrightonSEO 2018: Fili Wise – Optimising for SearchBot
- Advanced & Practical Structured Data
- Brighton SEO: Gavin Bell – Amplifying Your Content With Facebook Ads
- Brighton SEO 2018: Craig Campbell – Risks and Rewards of PBNs
- Brighton SEO: Chelsea Blacker – Taming the Wild West of ASO
- Brighton SEO: George Karapalidis – Using machine learning and statistical models to predict revenue potential for search
- Brighton SEO: Barry Adams – Technical SEO in the Mobile First Indexing Era
- Brighton SEO: Kaspar Szymanski – Understanding Google Penalties by ex-Googler Kaspar Szymanski
- Brighton SEO: Mark Thomas – How much positive impact can crawl budget optimization have in a mobile first index era?
- Brighton SEO: Chris Liversidge – Using Machine Learning Technology To Build Audience-Led Analytics
- Brighton SEO: Emily Mace – Diagnosing Common Hreflang Tag Issues On Page & In Sitemaps
- Brighton SEO: Steve Rayson & Giles Palmer – How Metrics and Data Drive Advocacy Effectiveness
- Brighton SEO: Tom Anthony – Diving Into HTTP/2 – A Guide For SEOs
- Brighton SEO: Tom Pool – Command Line Hacks For SEO
- Brighton SEO: Eleni Cashell – How to Unleash The Power Of Unique Content