“Often, I see JavaScript being blamed when the problem is something else,” said Martin Splitt, search developer advocate at Google, describing the underlying issue behind many of the site errors he has run across. During the crawling and indexing session of Live with Search Engine Land, Splitt discussed the most common JavaScript-related issues that can hurt a site’s SEO and offered some ways to avoid them.

A big misconception about JavaScript is that it doesn’t work well for search engines. “Well, you could [have JavaScript work well for search engines] if your JavaScript wouldn’t be roboted, so we [Google] can’t access your JavaScript,” Splitt said.

Some SEOs and site owners that use external JavaScript files as part of their pages block Google from accessing that code via their robots.txt file, usually unaware of the consequences. Although this won’t break functionality for users, it will prevent search engines from fetching that JavaScript to render the page.
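The problem Splitt describes can be as simple as one overly broad rule. A hypothetical robots.txt illustrating the issue (the `/js/` directory name is an assumption, not from the article):

```text
# Problematic: blocks all crawlers from the scripts directory,
# so Googlebot cannot fetch the JavaScript it needs to render pages.
User-agent: *
Disallow: /js/

# Safer alternative: explicitly allow crawlers to fetch scripts.
# User-agent: *
# Allow: /js/
```

Google’s URL Inspection tool and the robots.txt Tester in Search Console can help confirm whether page resources are being blocked this way.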

“We do see people breaking websites for users, rather than for search engines,” said Splitt. Though indexable, these sites don’t provide a good user experience because they may send abnormally large amounts of data just to load a simple list of products, for example.

“Another thing that I see relatively often is that people rely on JavaScript to do things that you can do without JavaScript,” Splitt said, adding, “That’s not something that you need to inherently be careful about, it’s just something that I think is pointless.” Splitt’s go-to example of unnecessary JavaScript is using it in place of a standard HTML link. This can cause issues for Googlebot, which doesn’t interact with such features, and could result in it skipping over your links.
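The contrast Splitt draws might look like this in markup (the `goTo` handler name is a hypothetical illustration):

```html
<!-- JavaScript-only "link": Googlebot does not click elements,
     so it may never discover the target URL. -->
<span onclick="goTo('/products')">Products</span>

<!-- Standard HTML link: crawlable by default, and still
     stylable and scriptable if needed. -->
<a href="/products">Products</a>
```

Even when client-side routing is used, keeping a real `href` on an `<a>` element lets crawlers follow the link while JavaScript intercepts the click for users.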

Source: George Nguyen