A core focus for technical SEO is how well Google can crawl our website, and whether it can access everything it should. As we constantly evaluate our site’s performance, we are trying to influence and control an output (SERP visibility). We can’t do that unless we control the input, which in this case is how our websites are built and structured.
So, how do we analyse how well Google (SearchBot) can crawl our site, and how it understands it? Whilst there are now hundreds of tools and pieces of software that can be used, Search Console still remains the best tool for truly accurate analysis, said Fili Wiese in his Brighton SEO talk.
How Google crawls the web
We know that Googlebot is extremely conservative: it does not want to crash your website, so as soon as it detects that it is slowing the site down, it will exit. Our objective is to maintain a website that is as efficient to crawl as possible, so that as much of our site is crawled as regularly as possible.
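This back-off behaviour can be sketched as a toy model. The threshold, URLs, and response times below are hypothetical illustrations, not Googlebot's actual algorithm or values:

```python
SLOWDOWN_THRESHOLD = 2.0  # seconds; hypothetical back-off limit

def conservative_crawl(pages):
    """Crawl pages in order, exiting as soon as one responds too slowly.

    `pages` is a list of (url, response_time_seconds) pairs standing in
    for real fetches. Returns the URLs that were actually crawled.
    """
    crawled = []
    for url, response_time in pages:
        if response_time > SLOWDOWN_THRESHOLD:
            # The site appears to be struggling: a conservative bot backs off.
            break
        crawled.append(url)
    return crawled

site = [("/", 0.3), ("/products", 0.8), ("/blog", 2.5), ("/contact", 0.4)]
print(conservative_crawl(site))  # the slow /blog page halts the crawl early
```

Note that everything after the slow page goes uncrawled, which is why a single slow template or endpoint can cost you coverage across the whole site.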
Crawlers first look to understand:
- HTTP & CSS
Google starts its crawl from the top down, but then moves to a random crawl.
Crawling via links
Links play a key part in Google’s crawl process. Why does Google like links?
- Links help Google manage crawl budget and suggest which pages need crawling to discover new content
- An internal linking structure is also important for crawl efficiency to help steer the bot to key pages throughout your site
- Breadcrumbs allow Google to crawl more efficiently
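The role internal links play in discovery can be sketched as a breadth-first traversal over a link graph. The graph below is hypothetical; the point is that any page no other page links to (an orphan) is never found by a link-following bot:

```python
from collections import deque

# Hypothetical internal link graph: page -> pages it links to.
link_graph = {
    "/": ["/products", "/blog"],
    "/products": ["/products/widget"],
    "/blog": ["/"],
    "/products/widget": [],
    "/orphan": [],  # no page links here, so link-based crawling never reaches it
}

def discover(start, graph):
    """Breadth-first crawl from `start`, following internal links only."""
    seen = {start}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in graph.get(page, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

print(discover("/", link_graph))  # /orphan is absent from the result
```

This is also why steering the bot matters: the closer a key page sits to the homepage in the link graph, the sooner a traversal like this reaches it.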
Over the next few months, Chrome will be moving fully to HTTPS. If you are going through a migration, it is recommended that you improve your current sitemap, paying attention to internal linking and hierarchical structure.
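As a rough illustration of keeping a sitemap in step with the site, a minimal XML sitemap can be generated programmatically. The URLs here are hypothetical, and real sitemaps often carry extra fields such as `lastmod`:

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Build a minimal XML sitemap string from a list of absolute URLs."""
    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(urlset, encoding="unicode")

# Hypothetical pages, listed to reflect the site's hierarchy.
pages = [
    "https://www.example.com/",
    "https://www.example.com/products",
    "https://www.example.com/products/widget",
]
print(build_sitemap(pages))
```

Generating the file from the site's own page hierarchy, rather than maintaining it by hand, helps keep the sitemap consistent with internal linking during a migration.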
Google has a mobile-specific crawler. With the introduction of AMP, do we want to maintain two code bases (responsive and AMP), or should we canonicalise to one? The choice is ours, but it is a consideration that needs to be made.
- Mobile first is key
- Optimise, optimise, optimise
- Audit regularly
This post is one of 25 in our Brighton SEO 2018 collection
- Brighton SEO: Christoph C. Cemper – 20 Free SEO Tools You Should be Using
- Brighton SEO – Ways to definitely get links for your business
- Brighton SEO: 5 Truths The Gurus Won’t Tell You About Facebook Ads
- Brighton SEO: Alex Major – Comparison Shopping: The Future of Google Shopping Ads
- Brighton SEO Keynote – Live Google Webmasters Hangout with John Mueller & Aleyda Solis
- Brighton SEO – Killing giants and competing in the SERPs
- Brighton SEO: Jeroen Maljers – Hidden Messages: The Psychology Behind PPC & SEO
- Brighton SEO: Arianne Donoghue – The PPC Automation Revolution Is Coming
- Brighton SEO: Laura Hogan – Big Links for £0
- Brighton SEO: Nichola Stott – Speed metrics in context of the UK Top 5,000 websites
- Brighton SEO 2018: We need to talk about competitor campaigns
- Brighton SEO: Bastian Grimm – Web Performance Madness: Critical Rendering Path Optimization
- Brighton SEO: Rob Bucci – Featured Snippets From Then To Now, Volatility, & Voice Search
- BrightonSEO 2018: Fili Wiese – Optimising for SearchBot
- Advanced & Practical Structured Data
- Brighton SEO: Gavin Bell – Amplifying Your Content With Facebook Ads
- Brighton SEO 2018: Craig Campbell – Risks and Rewards of PBNs
- Brighton SEO: Chelsea Blacker – Taming the Wild West of ASO
- Brighton SEO: George Karapalidis – Using machine learning and statistical models to predict revenue potential for search
- Brighton SEO: Barry Adams – Technical SEO in the Mobile First Indexing Era
- Brighton SEO: Kaspar Szymanski – Understanding Google Penalties by ex-Googler Kaspar Szymanski
- Brighton SEO: Mark Thomas – How much positive impact can crawl budget optimization have in a mobile first index era?
- Brighton SEO: Chris Liversidge – Using Machine Learning Technology To Build Audience-Led Analytics
- Brighton SEO: Emily Mace – Diagnosing Common Hreflang Tag Issues On Page & In Sitemaps
- Brighton SEO: Steve Rayson & Giles Palmer – How Metrics and Data Drive Advocacy Effectiveness