
29.11.2023

29 min read

The complete JavaScript SEO guide

In our complete JavaScript SEO guide, we uncover the important elements of JavaScript (also known as JS) that SEOs need to know about. We’ll talk about how Google crawls and indexes JavaScript, why it benefits users as well as SEOs and, most importantly, the details you’ve been looking for about JavaScript SEO best practices.

Whether you’re new to SEO and looking to familiarise yourself with the topic, or a seasoned developer after a fresh perspective, this guide has you covered.

What is JavaScript?

JavaScript is a programming language and, alongside HTML and CSS, one of the core technologies of the World Wide Web. It is commonly abbreviated as JS, hence the two capital letters within “JavaScript”.

The computer programming language JavaScript should not be confused with another language, Java, as they are very different. JavaScript has been noted as one of the fastest-growing languages in the world, and it’s unlikely that’s going to change anytime soon.

Modern web pages comprise three major components:

  • HTML – Hypertext Markup Language. Imagine this as the “skeleton” of the web page’s “physical form” – let’s use a zebra as an example. Just as the bones of a zebra’s skeleton provide structure, HTML provides structure to a site. It organises the static content on a site into headings, paragraphs, lists and so on.
  • CSS – Cascading Style Sheets. Picture this as the “fur” that covers the zebra’s skeleton. Much like a zebra has an instantly recognisable black-and-white appearance, CSS is the design, style and wow factor added to a website, making up the presentation layer of the page.
  • JavaScript – We have what looks like a zebra now, but it wouldn’t be a great deal of use in the savannah as just skin and bones. JavaScript provides the “muscle”, enabling movement and bringing interactivity to a website. JavaScript is either embedded in the HTML document within script tags (typically in the head or towards the end of the body) or linked to an external script file hosted elsewhere – see the short example after this list.
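As a small, hedged illustration of that “muscle” (the element IDs and file name below are hypothetical), the following script, included via a script tag, adds a simple piece of interactivity to a page:

```javascript
// A minimal sketch: this file would be referenced from the HTML document,
// e.g. via <script src="menu.js" defer></script> (the file name is illustrative).
document.addEventListener('DOMContentLoaded', () => {
  const button = document.querySelector('#menu-toggle'); // assumed element IDs
  const panel = document.querySelector('#menu-panel');
  if (!button || !panel) return; // nothing to enhance on this page

  button.addEventListener('click', () => {
    panel.hidden = !panel.hidden; // show or hide the panel without reloading the page
  });
});
```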

There is currently a wealth of JavaScript (JS) libraries and frameworks. Some that we work with here at Impression include jQuery, AngularJS, ReactJS and EmberJS. jQuery UI and Chart.js are good examples of the more visual libraries: Chart.js produces interactive charts, while jQuery UI offers a bank of user experience (UX) components such as the ‘datepicker’ inputs you see all over the web, scrollers and drag-and-drop. Then there is Node.js, a JavaScript runtime that runs outside the browser and is commonly used for real-time, server-side applications that allow pages to update in real time.

Put simply, just as muscles make a zebra run, JS is a programming language that makes webpages interactive and animated, bringing a page to life to engage the user.

Now that we have a rough understanding of what JavaScript is in relation to web pages, it helps to understand a little about how the language developed.

A short history of JavaScript

These eight facts might not win you your local pub quiz, but for almost 30 years, JavaScript has been integral to making the web an interactive and engaging place to be. 

  1. JavaScript was first invented in 1995 by Brendan Eich whilst working at Netscape.
  2. Internet Explorer 4 was the first browser to support JavaScript’s ES1 (ECMAScript 1 – version 1 of the ECMA-standardised JS). Please note: we are talking about support across all main browsers, rather than individual browsers, whose support for specific features will be higher or lower as each has its own priorities.
  3. For those not familiar with the earlier days of the internet, Netscape and Internet Explorer were competitors in the web browser market and competed with JavaScript too. Netscape eventually became Mozilla which released the Firefox browser in 2004. 
  4. ES4 was intended to be a collaborative version between the two companies; however, Microsoft dropped out and the version was ultimately skipped.
  5. Google rocked the market when it debuted its Chrome browser in 2008 and its V8 JavaScript engine that utilised just-in-time compilation (JIT) to outperform competitors. This meant other browsers needed to overhaul their engines for JIT too.
  6. By 2009, an agreement was made to combine all relevant work between the parties to help drive the language forward collaboratively, and ES5 was released that same year.
  7. By 2014, all major browsers supported JavaScript up to ES5; at the time of writing, ES6 is the latest version that all browsers fully support.
  8. JavaScript is currently on its 14th ECMAScript edition (ES14) and, whilst browsers are looking to support specific features from these newer releases, we may not see full support for the later versions for some time to come.

The important elements of JS you need to know

What is JavaScript crawling & indexing?

Google crawls a website using its ‘Googlebot’ crawler and indexes with the ‘Caffeine’ infrastructure. Each performs very different functions – Googlebot is all about discovery.

It’s a complex system, factoring in crawl queuing, scheduling and budget, URL importance, and server response time to name a few. But in essence, it finds all URLs and crawls them, discovering content and code present on a URL.

When the crawler finds URLs that have been changed or added since its previous visit, they are sent to the indexer. The indexer will try to make sense of the URL by analysing the content as well as assessing subprocesses like canonicalisation, page layout and more.

Importantly, the Web Rendering Service (WRS) within the indexer renders webpages and then executes JavaScript. You can see how Google’s WRS renders your page using the URL Inspection tool in Search Console (the successor to the old Fetch & Render feature).

The crawler and indexer work harmoniously. The crawler sends anomalies to the indexer, which in turn feeds new URLs (discovered sometimes by executing JavaScript) to the crawler. The indexer also helps prioritise URLs for the crawler, with more emphasis on high-value URLs that the indexer wants to crawl more frequently.

What is Ajax?

Asynchronous JavaScript and XML, known as AJAX, is a set of web development techniques combining, you guessed it, JavaScript and XML to create asynchronous web applications.

Asynchronous means “not existing or occurring at the same time”, so the web applications can communicate with a server without interfering with other functions or lines of code on a page by operating in the background.

Usually, all the assets on a page are requested and fetched from the server and then rendered on the page. AJAX can be implemented to update content or a layout without initiating a full page refresh. Usefully, as pages on a site often don’t differ greatly, using AJAX means only the assets that differ need to be loaded, which improves the UX.
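As a rough sketch of the pattern (the endpoint and element ID below are hypothetical), modern JavaScript typically uses the fetch API rather than the original XMLHttpRequest object to make these asynchronous requests:

```javascript
// A minimal sketch of the AJAX pattern using the fetch API.
async function loadLatestArticles() {
  const response = await fetch('/api/latest-articles'); // request data in the background
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  const articles = await response.json();

  // Update only the part of the page that changes, with no full page refresh.
  const list = document.querySelector('#latest-articles');
  list.innerHTML = articles
    .map((article) => `<li><a href="${article.url}">${article.title}</a></li>`)
    .join('');
}

loadLatestArticles().catch(console.error);
```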

A great example of this in action is Google Maps; the page updates as the user navigates without a full page reload.

What is the Document Object Model (DOM)?

Think of the Document Object Model (DOM) as the structure the web browser builds after receiving the HTML document in order to render the page.

It is essentially an application programming interface (API) for markup and structured data such as HTML and XML.

The HTML document is the first thing the browser receives. Following that, it will start analysing the content within and fetching additional resources, such as images, CSS, and JavaScript files. What forms from this parsing of information and resources is referred to as the DOM. It can therefore be thought of as an organised, structured interpretation of the webpage’s code. A “bridge” that connects web pages and programming languages.

While the DOM is a language-agnostic (unrestricted to a specific programming language or library) API, it is most commonly used in web applications for JavaScript and dynamic HTML.

As a result of dynamic HTML, the DOM is often very different from the initial HTML document. Dynamic HTML is the ability of a page to change its content depending on user input, environmental factors (such as time of day) and other variables, leveraging HTML, CSS and JavaScript.
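As a minimal sketch of dynamic HTML (the #greeting element is hypothetical), the snippet below changes what the DOM contains based on the time of day, so the rendered page no longer matches the raw HTML source:

```javascript
// The DOM is modified after the initial HTML has loaded, so what the browser
// (and Google's renderer) sees can differ from the original source code.
const greeting = document.querySelector('#greeting');
const hour = new Date().getHours();

if (greeting) {
  greeting.textContent = hour < 12 ? 'Good morning' : 'Good afternoon';
}
```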

From an SEO perspective, it’s important to understand the DOM, because it’s how Google analyses and understands webpages. Visually, the DOM is what you see when you “Inspect Element” in a browser.

What is JavaScript rendering?

Rendering is the process of fetching the relevant data to populate a page, along with the visual layout templates and components, and merging them together to produce HTML that a web browser can display. It’s here that we should introduce two fundamental concepts: server-side rendering and client-side rendering. It is critical for every SEO managing JavaScript websites to recognise the difference between the two.

The established approach, server-side rendering, involves a browser or a search engine bot (crawler) receiving HTML that describes the page exactly. The content is already in place, and your browser or search engine bot only needs to download the attached assets (CSS, images, etc.) to present the page as it was designed. As this is the traditional approach, search engines generally do not have a problem with server-side rendered content. Traditionally, websites that render like this would be programmed in PHP, ASP or Ruby and might use popular content management systems like Kentico, WordPress or Magento.

However, the more modern client-side rendering approach is very different, and many search engine bots have been noted to struggle with it. The initial load returns a largely blank HTML page, with very little content, to the bot. The included JavaScript code then sends a request to the server and uses the data it receives back to build and render the page.
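A simplified sketch of what that client-side build step can look like is shown below; the endpoint, element ID and markup are hypothetical, and real frameworks such as React or Angular abstract this away:

```javascript
// Client-side rendering in miniature: the initial HTML contains little more
// than an empty <div id="root"></div>, and everything the user (and Googlebot)
// sees is built here after a further request.
async function renderProductPage() {
  const response = await fetch('/api/products/123'); // hypothetical endpoint
  const product = await response.json();

  document.title = product.name; // even the title only exists once JS has run
  document.querySelector('#root').innerHTML = `
    <h1>${product.name}</h1>
    <p>${product.description}</p>
  `;
}

renderProductPage();
```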

How does Google crawl and index JavaScript?

Googlebot essentially processes JavaScript in three major stages:

  • Crawling
  • Rendering 
  • Indexing 

As shown in Google’s diagram, Googlebot places pages within a queue for their crawling and rendering processes. From here, Googlebot fetches a URL from the crawling queue and reads the robots.txt file to see whether the URL has been disallowed or not. 

Googlebot then parses the HTML response for links to other URLs and adds them to the crawl queue. Once Googlebot’s resources allow, a headless Chromium instance renders the page and executes the JavaScript, which we will discuss later. The rendered HTML is then used to index the page.

However, during Google’s 2018 I/O conference, they stated that JavaScript is now processed in two separate waves. These waves can take anywhere from a couple of days to a week to properly index JavaScript-heavy websites, but Google is actively working towards rendering in near real time. The diagram below displays the process Googlebot goes through when indexing.

Diagram: the JavaScript rendering process. URLs move from the crawl queue to the crawler; during processing, each URL is queued for rendering, and once rendered it is indexed.

As Google runs two separate waves of indexing, it is possible for some details to be missed during the indexing process. For example, if crucial title tags and meta descriptions are not server-side rendered, Google will not see them until the second wave, which can have negative implications for your organic visibility in the SERPs.

Why JavaScript is good for SEO (and users)

JavaScript is being used more and more across the web to provide more dynamic experiences to web users. This trend has been growing in recent years, with websites like Airbnb running almost entirely on JavaScript (in this case, the JavaScript library React).

While the experiences produced by JavaScript can be really good for users, the same cannot always be said for Google and other search bots that can struggle to render JS unless managed correctly – as we’ll go on to explain in this guide. The reality is that websites seeking to use JavaScript must consider Google in their implementation, ensuring their site can be crawled, rendered and indexed effectively. Google certainly wants to provide users with the best experience and so there is a gap to bridge here. 

You may hear that JavaScript is a “nuisance” to a website’s performance or that it’s holding back your website. In situations where you may hear comments like this, it may be due to someone not really understanding the technology or using it as a scapegoat for unexplained performance issues.

JavaScript is a very useful tool when utilised correctly, but it is by no means perfect. As with everything, it’s about picking the right tool for the job. Developers may prefer to work with a language they’re very familiar with rather than building something in another language that could perform better. If you’re working with a developer and are unsure whether they are using the right tool for the job, the best approach is always to ask questions with an open mind. Let’s look at some further considerations and challenges.

Can JavaScript be great for SEO and accessibility?

As long as there isn’t a script that is device-dependent (requires a mouse or keyboard) then a web page containing JavaScript will normally be fully accessible.

When JavaScript has been used to create visual interface elements, for example an image that changes when hovered over with a mouse cursor, that interaction requires a mouse or another pointing device (such as a laptop touchpad or drawing tablet), so that specific interaction wouldn’t be classified as accessible. However, if this does not affect the important content being displayed on the page, then it does not need to be.
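A hedged sketch of the alternative is shown below (the gallery elements are hypothetical): by pairing mouse events with keyboard and focus equivalents, the same JavaScript interaction stops being device-dependent:

```javascript
// Make a hover-driven interaction usable without a mouse by also responding
// to keyboard focus and the Enter key.
const thumbnail = document.querySelector('#gallery-thumbnail');

function showLargeImage() {
  document.querySelector('#gallery-large').src = thumbnail.dataset.largeSrc;
}

if (thumbnail) {
  thumbnail.tabIndex = 0;                                   // reachable via the keyboard
  thumbnail.addEventListener('mouseenter', showLargeImage); // mouse or touchpad users
  thumbnail.addEventListener('focus', showLargeImage);      // keyboard users
  thumbnail.addEventListener('keydown', (event) => {
    if (event.key === 'Enter') showLargeImage();            // explicit keyboard activation
  });
}
```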

JavaScript has the potential to optimise the visitor experience and improve accessibility, but it’s all based on how it’s used.

For SEO, the user is the focus and we want to ensure that everyone can access and view a page. This means the page should work and display the same important content and information whether a user has JavaScript enabled (the default) or disabled.

How JavaScript poses a challenge for SEOs

Unfortunately, many JavaScript frameworks are deployed without SEO considerations taken into account, which can have a drastic impact on the overall performance of the website.

Is JavaScript inefficient for crawling and indexing?

As previously mentioned, the main issue with JS is if bots are unable to find URLs and understand the site’s architecture. Crawling and indexing can become slow and inefficient.

On JavaScript-based websites where most or all internal links are not part of the HTML source code, the crawler will only find a limited set of URLs in the first instance. Then it has to wait for the indexer to render these pages before extracting new URLs to be crawled.

The evaluation of a site’s internal link graph, therefore, has to happen again and again as new URLs are extracted from JavaScript via the indexer rendering of a URL. With every new set of pages the indexer identifies in the site’s JavaScript code, the internal site structure is re-evaluated and the relative importance of pages can change.

Importantly this can lead to issues, whereby key pages are deemed unimportant due to a lack of internal link value. Likewise, relatively unimportant pages are misinterpreted as high value because there are plain HTML links pointing to them.

Further, because internal linking is a strong signal to search engines regarding the site’s architecture and the importance of pages, pages are crawled and rendered according to their perceived importance. You could, therefore, see Google investing more time crawling and rendering the wrong pages over the pages you actually want to rank.

As a resolution, internal linking should be implemented with regular anchor tags within the HTML or the DOM rather than using JavaScript functions to allow the user to traverse the site.

JavaScript’s onclick events are not a suitable replacement for internal linking. While certain URLs might be found and crawled, likely through XML sitemaps or in JavaScript code, they won’t be associated with the overall navigation of the site.
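As a hedged sketch of the difference (the selectors and URL below are hypothetical), a real anchor element gives crawlers something to follow, whereas navigation buried in an onclick handler does not:

```javascript
// Crawlable: a genuine anchor element with an href, added to the DOM.
const link = document.createElement('a');
link.href = '/category/running-shoes/';   // hypothetical URL
link.textContent = 'Running shoes';
document.querySelector('nav').appendChild(link);

// Not a suitable replacement: navigation hidden inside an onclick event.
// Users can still move around the site, but crawlers see no link to follow.
document.querySelector('#shoes-button').addEventListener('click', () => {
  window.location.href = '/category/running-shoes/';
});
```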

One issue to avoid is blocking search engines from your JavaScript.

This may be accidental, but however it happens, if search engines are blocked from crawling JavaScript, they will not see your site’s whole picture. When search engines don’t experience what the end user does, it can reduce appeal and negatively affect SEO.

The best way to resolve this issue is by providing search engines access to the resources they need by working with the website development team to determine which files should and should not be accessible to search engines.

One recommendation is pushState, which is supported by Google. It is a navigation-related method of the browser’s History API, used to change the URL shown in the address bar without triggering a full page load.

Simply, pushState updates the address bar URL, ensuring only what needs to change on the page is updated.

The best use is seen with infinite scroll (when the user scrolls to new parts of the page the URL will update). This means the user shouldn’t need to refresh the page, as the content updates as they scroll down while the URL is updated in the address bar.
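A minimal sketch of the idea is below; the endpoint, element ID and URL pattern are hypothetical, and the endpoint is assumed to return just the extra list items:

```javascript
// Infinite scroll paired with pushState: new content is fetched as the user
// nears the bottom of the page, and the address bar is updated to an
// indexable URL without a full reload.
let page = 1;
let loading = false;

window.addEventListener('scroll', async () => {
  const nearBottom =
    window.innerHeight + window.scrollY >= document.body.offsetHeight - 200;
  if (!nearBottom || loading) return;

  loading = true;
  page += 1;

  const response = await fetch(`/blog/page/${page}/`);    // assumed to return list items only
  const html = await response.text();
  document.querySelector('#post-list').insertAdjacentHTML('beforeend', html);

  history.pushState({ page }, '', `/blog/page/${page}/`); // update the URL in the address bar
  loading = false;
});
```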

What should SEOs look out for?

Internal linking

With internal linking being one of the webmaster’s top priorities, regular anchor tags should be used within the HTML or the DOM, as opposed to leveraging JavaScript functions such as onclick events, to allow users to traverse from page to page across the site. Internal linking is considered such a strong signal to search engines regarding the complete website architecture and the overall importance of pages.

Content accessibility

For content to be clear to both users and search engine bots, it must be indexable within the load event, live at an indexable URL and follow the same SEO best practices applied to HTML sites. In addition, in line with basic SEO practice, each piece of content on a website should be unique, both within the website itself and compared with other sources across the internet.

Obtainability

A number of search engines deploy headless browsing: software that can access web pages without displaying them to a user, passing the content of the page to another program running in the background. A headless browser helps to render the DOM to gain a better understanding of the user’s experience and the content on the page.

However, it’s important to understand how search engine bots execute and interact with your JavaScript, as getting this wrong can have huge organic consequences. An example of this is the globally renowned brand Hulu, which saw a significant visibility drop due to a coding problem and the way the site was serving JavaScript to Googlebot. For further information on this drop-off in traffic, read the Hulu.com JavaScript Fail case study.

Rendering

JavaScript can affect the overall rendering of a web page. Therefore, if something is render-blocking, this is likely to delay the page from loading as quickly as it has the potential to. As a rule of thumb, Google recommends completely removing or at least deferring any JavaScript that interferes with the loading of “above the fold” content on a website.

Above the fold refers to the part of the web page that is visible when the page initially loads. The subsequent portion of the page, which requires scrolling, is called “below the fold”. This applies across a range of devices, including desktops, mobiles, iPads and many more. To identify and remove render-blocking JavaScript, please refer to the section below that outlines key tools to use.
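As a hedged sketch (the file name is hypothetical), one common approach is to inject non-critical scripts only once the page has loaded; in many cases simply adding the defer or async attribute to the script tag achieves the same goal:

```javascript
// Defer non-critical JavaScript so it doesn't block above-the-fold rendering:
// the script is only requested after the load event has fired.
window.addEventListener('load', () => {
  const script = document.createElement('script');
  script.src = '/assets/js/chat-widget.js'; // below-the-fold, non-critical functionality
  document.body.appendChild(script);
});
```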

Single Page Application considerations

A single-page application (SPA) is a web application or website that loads a single HTML document and then dynamically rewrites it with the pieces you require, as opposed to loading an entire new page from the server for each navigation.

The SPA approach provides a fast loading time, uses less bandwidth and provides the user with a pleasant experience by making the application behave more like a desktop application. It should be noted that there are many different SPA framework options available, depending on the use of the application. These include React.js, Angular.js, Backbone.js, Preact, Next.js and many more.

When it comes to rendering SPAs, Google’s John Mueller has acknowledged that there can be many hurdles when Google attempts to crawl and index their content. Therefore, if you’re using an SPA, it’s recommended to test it repeatedly, using the URL Inspection tool, to understand what Google is able to pick up.

What happens if a JavaScript framework is deployed without taking SEO into consideration?

If a JavaScript framework is deployed without taking SEO into consideration, the worst-case scenario is that Google and other search engines are unable to read the content found on the website. If left in this state for too long, the website can be deindexed and no longer appear within search results, because Google thinks the pages on the site don’t contain any content and are therefore not relevant to specific search terms.

This sounds scary on its own, but if the majority of your business comes in digitally, this sort of issue will result in a drastic drop in business and conversions until it is resolved. Whilst we’re primarily talking about organic search in this article, if a page’s content can’t be seen by search engines, it could also prevent paid advertisements on that search engine from being shown, as the search engine will see the page as having no relevance to your target keyword and prioritise other advertisers that do.

Advertisements such as dynamic search ads (DSA) campaigns, which use content from the page to build the advertisement, won’t display at all if no content can be found.

This worst-case scenario assumes the JavaScript framework is working fine from a user’s perspective. User experience is a large part of SEO, and issues with the overall usability of a page will also affect the SEO of the page or site. If there are usability issues with the JavaScript framework, this can negatively affect the rankings of the affected page(s). Any JavaScript that considerably slows down the loading of a page can also affect rankings negatively, depending on the extent of the delay. User experience and speed issues can also discourage visitors from continuing to browse or from converting.

What common problems does Google find in JavaScript?

Internal Linking

As mentioned earlier in the article, if internal links are provided via JavaScript, the rendering process needs to occur before the crawler is told where the new links on the site are, resulting in a back-and-forth between the rendering process and the Googlebot crawler that slows down both crawling and indexing.

Content accessibility

If all the HTML and content is loaded in after the initial server response (client-side rendering), the rendering process can potentially detect no content on the page if rendering finishes before the content has loaded in.

When content isn’t rendered, you are serving less content to a search engine than to a user; this is where JavaScript can cause issues with organic rankings.

Accessibility

As also mentioned earlier in the article, if the JavaScript hasn’t been coded with accessibility in mind, for example being device-dependent (requiring a mouse or keyboard) to view important content on the page when it doesn’t need to be, it can affect the site’s ranking on Google.

How to make your JavaScript SEO friendly and follow best practice

Indexable URLs

Pages require indexable URLs that offer server-side support for each landing page. This includes each category, subcategory and product page. 

Not blocking JavaScript resources

Often websites will block resources they think have no importance to a search engine or a user. But if your page feeds content into the page via JavaScript, ensure the JavaScript resources that do this are crawlable and aren’t being blocked in robots.txt.
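As a hypothetical sketch of what to check for in robots.txt (the paths are illustrative), avoid directives that block the directories or file types your rendering depends on:

```
# Illustrative robots.txt sketch: block low-value areas if you need to, but do
# not block the JavaScript (or CSS) files Google needs in order to render pages.
User-agent: *
Disallow: /checkout/

# Rules like these would prevent rendering of JavaScript-driven content:
# Disallow: /assets/js/
# Disallow: /*.js$
```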

Descriptive metadata

With each individual page on the site having a specific focus and target, they should also include descriptive titles and meta descriptions to help search engine bots and users precisely detect what the page is about. Not only this, but it helps users determine if this is the most suitable page for their search query. 

Using your browser’s “Inspect” feature

Once the rendered HTML can be obtained and matches what Google expects from a traditional landing page, many of the factors impacting performance will resolve themselves.

To review rendered HTML and general JavaScript elements, Google Chrome’s Inspect feature can be used to discover information about the webpage that is hidden from the user’s normal view.

To discover the JavaScript files a page loads, such as those handling user behaviour when interacting with the page, you can find this information in the Sources tab of Inspect. If you loaded the desired page before opening the Sources tab, you may need to refresh the page whilst the Sources tab is open to see the full list of sources for that page.

Inspect is the more comprehensive way to view JavaScript-led sites, as it shows you what’s on the page right now; anything dynamically loaded in after the initial (server) response can be seen within Inspect – this is known as client-side rendering. Server-side rendering sends the HTML with the initial server response and is more search-engine friendly.

You can test whether your website uses client-side rendering by comparing view-source with Inspect. If all the HTML and content can be found in both view-source and Inspect, the website is using server-side rendering; if you can’t find the page’s content within view-source, it is using client-side rendering. There is also the potential for a mix, where important HTML and content are sent via server-side rendering but a more dynamic section of the page is delivered via client-side rendering. You can use Diffchecker for a more comprehensive view of what is different between view-source and Inspect.

An example of this can be seen on YouTube. Below is a screenshot of a video on the site’s homepage, which delivers personal recommendations based on your viewing history:

But when we looked at the source code and ran a search for “your writing process”, it didn’t find anything:

But when we run the same search in Inspect, we can see the video title, which indicates that the website uses client-side rendering and that this content is dynamically loaded in after the initial page load:

URL Inspection tool in GSC

The URL inspection tool allows you to analyse a specific URL on your website to understand the full status of how Google is viewing it. It provides valuable data on crawling and indexing, as well as further information from Google’s index, such as whether the page was indexed successfully or whether structured data errors are causing issues.

Improve the page loading speed

Google has indicated that page speed is one of the signals used by its ranking algorithm. A faster page also allows search engine bots to crawl more pages, helping with the overall indexation of a site. From a JavaScript point of view, making a web page more interactive and dynamic for users can come at a cost to page speed. To mitigate this, lazy loading is advisable for certain components, usually ones that aren’t required above the fold.
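A sketch of JavaScript-driven lazy loading is below; the data-src attribute is a common convention rather than a standard, and for images the native loading="lazy" attribute is often enough on its own:

```javascript
// Lazy load below-the-fold images: the real image URL is only requested once
// the placeholder element approaches the viewport.
const lazyImages = document.querySelectorAll('img[data-src]');

const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach((entry) => {
    if (!entry.isIntersecting) return;
    const img = entry.target;
    img.src = img.dataset.src; // swap in the real source on demand
    obs.unobserve(img);        // stop watching once loaded
  });
});

lazyImages.forEach((img) => observer.observe(img));
```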

How to use JavaScript for personalisation and SEO testing

The main thing to consider when looking to provide a personalised experience for visitors is that Google and other search engines are not fans of ‘cloaking’, which is where the main content served to a crawler like Googlebot differs from the content served to a user.

When personalising content one should look at only modifying highly visible items that don’t change the main purpose or content of the page. Highly visible items are things like headers, introductions or recommended/similar items.

Other items you could look to personalise include the imagery on the page. This could be based upon a visitor’s geo-location or, if they’re logged into the site, the personal preferences set on their account; an example would be using more suitable imagery for a user browsing with a dark mode preference.
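A hedged sketch of that dark-mode example is shown below (the element ID and image path are hypothetical); the main content of the page is left untouched:

```javascript
// Light-touch personalisation: swap the hero image for visitors whose browser
// reports a dark-mode preference, without changing the page's main content.
const prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
const hero = document.querySelector('#hero-image');

if (hero && prefersDark) {
  hero.src = '/assets/img/hero-dark.jpg'; // same subject, more suitable treatment
}
```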

Personalisation can end up slowing the website down if it has to run checks before delivering potentially different elements or imagery. To avoid the major pitfalls of personalisation and any decrease in speed performance, it’s best to avoid third-party plugins or rules for personalisation and, where possible, utilise a CDN to help deliver larger elements such as imagery while mitigating any performance decrease.

You may be asking yourself: is it worth it? This seems like a lot of work and consideration for some minor changes to a page. There is going to be a fair bit of work involved in determining the rules you will use, along with continuous testing to see how the personalisations perform, but the potential impact these personalisations can have on whether a visitor converts is extremely high.

JavaScript SEO Tools

With the endless capabilities JavaScript has, there’s an abundance of helpful tools available to assess and fix issues with JavaScript code. Here are just a few that you will find helpful as an SEO.

URL Inspection Tool

Found within Google Search Console, the URL Inspection Tool shows information on whether Google was able to crawl / index a page, whether any errors are occurring and why. 

Mobile-Friendly Test and Lighthouse

Google’s mobile-friendly tester provides information on how easy it is for a visitor to navigate your website on a mobile device, and on any issues preventing it from meeting the requirements to be classified as ‘Mobile Friendly’. However, this tool is being removed in December 2023.

For a more detailed description of any mobile-friendly issues or recommendations for the page, you can use the Lighthouse test found in Google Chrome’s DevTools, which you reach via Inspect and then the Lighthouse tab. Once a test has completed, navigate down to the mobile-friendly section, which has its own overall score out of 100 and any recommendations to improve mobile friendliness.

Page Speed Insights

Google’s Page Speed Insights tool (PSI) effectively details the mobile and desktop performance of a page. In addition to this, this tool also provides recommendations on how this can be improved.

If you are going down this list and have tried Lighthouse, you will notice that PSI and Lighthouse are similar; this is because PSI uses Lighthouse for its tests.

Lighthouse is more commonly used by developers as it provides in-depth recommendations and analysis of the page you’re currently on, whilst PSI will fetch the page and test it. This means that if you have a page in staging or testing that is not yet live and sits behind a login, PSI will likely be unable to test the page, whilst Lighthouse can, provided the user is past the login stage.

Page Speed Insights also provides Core Web Vitals (CWV) data on how the page has performed on average for real users, provided there are enough visits and enough data from user experiences to supply this information.

Site: Command

The site: command is one of the most direct tools to help see if Google has properly indexed your content. In order to do this, you can complete the following command in a Google search: 

site:https://www.example.com "text snippet or query"

Replace https://www.example.com with your website, or with a specific category or page if you only want to see pages within that category or that specific page.

The text snippet or query is optional. If you don’t know the URL of a specific page you want to test but know its overall topic or focus, you can enter a strongly relevant keyword for that page and the command will return the most relevant results found on your site for that keyword.

The two can be used in combination. If you know the page sits within a specific category and is about a specific topic, but you don’t know the URL off by heart, then:

site:https://www.example.com/category/ "specific topic"

If no “text snippet or query” is added to the site: command and you enter the root domain URL (often the homepage), the Google search results will provide a rough count of how many pages on the site are indexed in the search engine.

Diffchecker

Diffchecker is a useful tool that allows you to compare two text files and review the differences between them. This is especially useful for analysing a webpage’s original source code against its rendered code, delivering a comprehensive comparison of how the content has changed after being rendered.

Chrome DevTools

Chrome DevTools is a set of web developer tools built directly into Google’s Chrome browser. Chrome DevTools can help you edit pages and make quick style changes without needing a text editor, and it helps you discover problems quickly, which in turn helps you build better websites faster.

You can access this via Inspect, which we talked about earlier, or via the three-dot menu > More Tools > Developer Tools.

Crawlers

There are myriad crawlers available on the market that can crawl JavaScript or imitate Google’s crawling and rendering process. We have listed a couple of our favourites below:

Screaming Frog: A crawler that is constantly updated with new features, has lots of helpful usage guides on its site, allows custom extraction of specific elements on a site and is used by a very large percentage of SEOs within the industry.

SiteBulb: A crawler that is growing in popularity and provides easy-to-understand ‘hints’ which are recommendations on areas of improvement for the site’s SEO/visibility on search engines. 

BuiltWith

BuiltWith is a free website profiler that helps to discover what framework websites are being built with. Not only this, but it’ll also tell you about any integrations with a third-party website that the website has.

Get help with your JavaScript SEO

If you’d like to find out more about JavaScript SEO, or you’re experiencing crawling or indexation issues due to JavaScript code, contact our team of technical and JavaScript SEO specialists today.