
Googlebot can read JavaScript – how should SEOs react?

Traditionally, search engines have only read and rendered the HTML code of a website. This meant that optimizing the HTML code was what SEOs had to focus on. What does it mean for search engine optimization if the Googlebot is now also able to crawl and index JavaScript? We asked a few industry experts to find out.


Googlebot and JavaScript: What the experts say

To get a range of perspectives on the topic of Googlebot and JavaScript, we asked our experts the following questions:

  • Google says that Googlebot can crawl websites that are based on JavaScript – what challenges and opportunities do you see for SEOs?
  • What particular aspects should someone consider if they are planning a JavaScript website relaunch?
  • Which changes in terms of efficiency and accuracy do you expect to come from a web rendering update in Chrome?

And here come the answers.

 

Martin Tauber

Managing Partner, Marketing Factory GmbH

JavaScript-based websites offer great opportunities in terms of user experience because they are faster and more interactive to use.

However, the Googlebot still has difficulties interpreting JavaScript, which means development has to be extremely clean and rooted in close cooperation with the SEO team if unpleasant surprises are to be avoided.

 

Dominik Wojcik

Managing Director, Trust Agents

There are opportunities in that now you don’t have two separate programming worlds (e.g. for escaped fragments), letting you focus on clean code and a clean web environment. As long as developers consider progressive enhancement and develop their web applications accordingly, Google should be able to cope just fine.

There are, however, hidden challenges. Which framework is being used? Will there be client side rendering, or is it possible to implement server side rendering? Might it even be possible to implement isomorphic JavaScript? Is the JavaScript implemented internally or externally? As SEOs, we will have to do an incredible amount of testing and trying things out in order to ensure that Google is indexing and weighting our pages as we wish.

Before a relaunch, a careful decision should be made on the framework to be used. Crawlability and performance should both be considered. Ideally, a test environment should be created that makes it possible to test the current development from outside, if client side rendering is being used. That said, I would highly recommend also using server side rendering. This impacts the server performance, but should minimize risks. Above all, you really have to test, test and test, using fetch & render to see what the Googlebot finds, indexes and crawls.

If Google does finally switch to a Chrome version higher than V49, then we could use headless Chrome in combination with something like Rendertron to create test environments that let us simulate a setup similar to that of the Googlebot. This would help us better understand how and what Google can interpret. This would make things a lot easier for us SEOs 😉
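A minimal sketch of such a test setup, assuming Node.js with the puppeteer package installed (Rendertron would achieve something similar); the URL is a placeholder. It compares the raw HTML response with the DOM after headless Chrome has rendered the page, which approximates the gap between a non-rendering crawler and a rendering one:

const puppeteer = require('puppeteer');
const https = require('https');

// Fetch the raw HTML: roughly what a non-rendering crawler receives.
function fetchRawHtml(url) {
  return new Promise((resolve, reject) => {
    https.get(url, (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => resolve(body));
    }).on('error', reject);
  });
}

(async () => {
  const url = 'https://www.example.com/'; // placeholder URL
  const raw = await fetchRawHtml(url);

  // Render the same URL in headless Chrome and serialize the resulting DOM.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });
  const rendered = await page.content();
  await browser.close();

  console.log('raw HTML length:    ', raw.length);
  console.log('rendered DOM length:', rendered.length);
})();

Comparing the two outputs (length, links, headings, structured data) gives a first indication of how much of the page only exists after rendering.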

 

Bartosz Goralewicz

Co-Founder & Head of SEO, Onely

At the Searchmetrics Summit in November 2017, Bartosz Goralewicz from Onely spoke on the relationship between Googlebot and JavaScript:

 

Stephan Czysch

Founder & Managing Director, Trust Agents


We don’t want to have SEOs (or agencies) hearing people say, “By the way, we’re switching to JavaScript soon. Is there anything we have to think of in terms of SEO? Shouldn’t be, should there? But it’d be great if you could have a quick look before we go live with the new site on Monday.” This scenario would inevitably end in complete chaos. Bartosz [in the video above] provided a wonderful look at the subject of JavaScript and SEO.

As well as asking what Google can render, SEOs relaunching a website should look at what the bot can see and establish what is different from the old site. I recently dealt with a website where the complete internal linking system was messed up following a JavaScript relaunch, because the link logic of the old site wasn’t carried over. There were also hreflang issues. It is therefore essential to work with a checklist of desired “SEO features”. Additionally, you should ask what JavaScript rendering really means for your users: What kind of hardware are they using to access your website, and how will that affect load times? For more on this topic, I can recommend this article by Addy Osmani.
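A minimal sketch of such an internal-linking check, assuming Node.js with the puppeteer package installed; the URL and the list of old-site URLs are placeholders. It collects every link from the rendered DOM and flags old-site URLs that no longer appear:

const puppeteer = require('puppeteer');

// Placeholder paths exported from a crawl of the previous site.
const oldSiteLinks = new Set([
  '/category/shoes/',
  '/category/shirts/',
]);

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://www.example.com/', { waitUntil: 'networkidle0' });

  // Collect the pathname of every link present after rendering.
  const renderedLinks = await page.$$eval('a[href]', (anchors) =>
    anchors.map((a) => new URL(a.href).pathname)
  );
  await browser.close();

  const missing = [...oldSiteLinks].filter((p) => !renderedLinks.includes(p));
  console.log('Links from the old site missing after rendering:', missing);
})();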

 

Sebastian Adler

SEO Consultant, leap.de

Even with an improved ability to crawl JavaScript, Google will prefer pure HTML content because it takes up fewer resources. The question isn’t whether Google can read and render JS, it’s whether you can and want to take some of the work off Google’s hands. If my content can be read, works and loads quickly without JS, then that is still better for me.

The ability to render always depends on the technology behind it and, as Bartosz said (respect to him for all the effort he puts into his experiments and research!), you have to understand the technology fully if you are to make best use of it. The great opportunity here is in minimizing risks by providing important content as HTML and only using JS as it is intended: for additional features. The biggest difficulty is in finding errors if you do fully commit to JavaScript.

When relaunching a page, make sure the content you want to rank with works without JavaScript. This includes not just the main content, but also navigational elements. Many pages don’t have a menu when JS is deactivated. It makes sense not to include every single fancy feature, but to ask whether a function is really needed for your business and your target audience. What would the impact be if a certain feature didn’t work? And then run the relevant tests.
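A minimal sketch of that progressive-enhancement principle, with hypothetical element IDs: the product list and its links are assumed to be in the server-delivered HTML already, and the script only layers an optional filter on top, so nothing essential breaks if JavaScript fails or is ignored:

// Runs in the browser after the server-rendered markup has loaded.
document.addEventListener('DOMContentLoaded', () => {
  const input = document.getElementById('product-filter'); // hypothetical ID
  const items = document.querySelectorAll('#product-list li'); // hypothetical ID
  if (!input) return; // no filter field in the markup: nothing to enhance

  // Optional extra: hide list items that don't match the typed search term.
  input.addEventListener('input', () => {
    const term = input.value.toLowerCase();
    items.forEach((item) => {
      item.hidden = !item.textContent.toLowerCase().includes(term);
    });
  });
});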

Besides the fact that I don’t expect Google to communicate the web rendering update very well to webmasters, I expect the main thing that will change is the susceptibility to errors. Chrome and the frameworks develop really quickly, and with new versions, new bugs are likely to find their way into the WRS.

A few things are sure to be processed more quickly or rendered more cleanly. But the main problem stays the same: error-ridden code (from the point of view of the engine in use) cannot be interpreted. We have to find out how the engine interprets our code. During development, this changes the tools we have to use for debugging. But if you have your most important assets as quick-loading HTML (etc.) files, then you needn’t worry – you can concentrate on proper SEO work.

 

Björn Beth

Director of Professional Services, Searchmetrics


We have to differentiate between crawling and indexing. Google can crawl JavaScript, but it takes far more resources than crawling pure HTML. It is more problematic for the indexer, which renders the links (URLs) it receives from the crawler with the help of the web rendering service (WRS), in a similar way to Fetch & Render in the Search Console. To do this, Google uses its own Chrome browser (Version 41). With the help of the browser, it tries to create a Document Object Model (DOM) and interpret the page in the same way as it will be displayed in a browser. This can lead to problems: as tests run by Distilled and Bartosz Goralewicz have shown, Google cannot cope with errors in the code, and other large problems can occur when rendering, such as Google stopping rendering of the page after five seconds, as shown in tests conducted by Screaming Frog.
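A minimal sketch for checking that five-second concern on your own pages, assuming Node.js with the puppeteer package installed; the URL and the content selector are placeholders. It simply tests whether the main content has rendered within five seconds:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://www.example.com/', { waitUntil: 'domcontentloaded' });

  try {
    // Fail if the main content selector has not rendered after 5000 ms.
    await page.waitForSelector('#main-content', { timeout: 5000 });
    console.log('Main content rendered within five seconds.');
  } catch (err) {
    console.log('Main content was NOT rendered within five seconds.');
  } finally {
    await browser.close();
  }
})();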

Basically, JavaScript makes crawling and indexing far more complicated and creates a highly inefficient relationship between the two. If SEO is important for you, you should always make sure that a bot can read your pages as quickly and efficiently as possible.

Before relaunching from an HTML-based website to a JavaScript-based framework or library, you should ensure that server side rendering is included. For example, React comes with its own solution, called renderToString. This uses a browser-independent DOM interface that renders the JavaScript on the server, creates the DOM and sends it to the bot. AngularJS uses Angular Universal. This provides the client with everything important as pre-rendered HTML. The client then gets the JavaScript as it is required. You can, however, also work with headless Chrome on the server and send pre-rendered HTML to the bot.
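A minimal sketch of the renderToString approach, assuming Node.js with express, react and react-dom installed; the App component is a placeholder standing in for the real application:

const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');

// Placeholder component standing in for the real application.
function App() {
  return React.createElement('h1', null, 'Pre-rendered on the server');
}

const app = express();

app.get('/', (req, res) => {
  // Every client, bot or browser, receives finished HTML; the client-side
  // bundle can then take over in the browser if needed.
  const html = renderToString(React.createElement(App));
  res.send('<!doctype html><html><body><div id="root">' + html + '</div></body></html>');
});

app.listen(3000);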

Above all, I expect faster, more efficient rendering to come from Chrome 59, moving towards performance on a par with pure HTML. Only tests will tell if this really happens.

Crawl through the mud: Evaluate the health of your website

Analyze both the HTML and the JavaScript with Searchmetrics’ Site Structure Optimization, now including the JavaScript Crawler! Your benefits:

  • Crawl all relevant JavaScript frameworks, including Angular and React
  • Improve website performance through a prioritized list of technical issues
  • Compare crawls with and without JavaScript Crawling

Read more about our JavaScript Crawling

And what do you think?

That’s what these five experts think, but we have a lot more experts reading this blog. So what do you think about JavaScript? Have you already made changes on your websites? Have you already discovered anything interesting in your tests?