
Category: Crawling and Robots

Google Posts Draft To Formalize Robots Exclusion Protocol Specification

On July 1, Google announced that it posted a Request for Comments to the Internet Engineering Task Force to formalize the Robots Exclusion Protocol specification. This comes after 25 years of the protocol serving as an informal standard for the internet. Google had this to say on its blog: “Together with the original author of the protocol, webmasters, […]

Robots.txt Tip From Bing: Include All Relevant Directives If You Have A Bingbot Section

Frédéric Dubut, a senior program manager at Microsoft working on Bing Search, said on Twitter Wednesday that when you create a specific section in your robots.txt file for Bing’s Bingbot crawler, you should make sure to list all the default directives in that section. “If you create a section for Bingbot specifically, all the default directives will […]
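A minimal robots.txt sketch of the point being made: once a crawler matches a more specific group such as User-agent: bingbot, it ignores the generic User-agent: * group entirely, so any directive you still want applied to Bingbot has to be repeated in its own section. The paths and crawl-delay value below are illustrative, not from the tweet.

# Generic group – used by crawlers without a more specific match
User-agent: *
Disallow: /private/
Disallow: /tmp/

# Bingbot-specific group – Bingbot reads ONLY this group, so the
# default directives above must be repeated here or they are lost
User-agent: bingbot
Disallow: /private/
Disallow: /tmp/
Crawl-delay: 5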

The First Steps Of Your SEO Audit: Indexing Issues

Indexing may be the first step of any SEO audit, but why? If your site isn’t being indexed, it’s basically not being read by Google and Bing. If search engines aren’t able to see it, there won’t be anything you can do to improve the ranking of your web pages. This is why it’s important to […]
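As a hedged sketch of how such a first-pass indexing check might be scripted, the Python below flags common indexability blockers for a list of URLs: a non-200 status, a noindex in the X-Robots-Tag header, or a noindex robots meta tag. The URLs and the check_indexability helper are hypothetical; a real audit would also consult robots.txt, canonicals, and Search Console coverage reports.

# Hypothetical sketch: flag common indexability blockers for a list of URLs.
import re
import requests

def check_indexability(url):
    resp = requests.get(url, timeout=10)
    issues = []
    if resp.status_code != 200:
        issues.append(f"non-200 status: {resp.status_code}")
    # Header-level noindex
    if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
        issues.append("noindex in X-Robots-Tag header")
    # Meta-level noindex (simple regex check; a full audit would parse the HTML)
    if re.search(r'<meta[^>]+name=["\']robots["\'][^>]*noindex', resp.text, re.I):
        issues.append("noindex in robots meta tag")
    return issues

for url in ["https://www.example.com/", "https://www.example.com/blog/"]:
    problems = check_indexability(url)
    print(url, "->", problems if problems else "looks indexable")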

What Are 3 Ways To Improve Link Equity Distribution And Capture Missed Opportunities?

In the SEO community, there is quite a bit of talk regarding link building. The process of link building can be quite tedious and time-consuming. The thing is, link building is more difficult now than ever, as the web grows in its demands of higher and higher standards for the quality […]

Did You Know Google Sees JavaScript Links That You Don’t?

Quite a few SEMs are aware that Google parses JavaScript and processes content within the DOM. We know this not only because Google told us, but also because it has been tested. Despite this being known information, there are tools that provide backlink data but only see classically formatted <a href> […]
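A minimal illustration of the distinction, with placeholder URLs: the first link exists in the raw HTML source, so any backlink or crawling tool will see it; the second only exists after the script runs, so tools that read source HTML without rendering will miss it, while Google’s rendering pipeline can still discover it in the DOM.

<!-- Classic link: present in the raw HTML source -->
<a href="https://www.example.com/resource">Resource</a>

<!-- JavaScript-inserted link: only present in the rendered DOM -->
<div id="nav"></div>
<script>
  var link = document.createElement("a");
  link.href = "https://www.example.com/hidden-resource";
  link.textContent = "Resource added at runtime";
  document.getElementById("nav").appendChild(link);
</script>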

Google Explains What “Crawl Budget” Means For Webmasters

Gary Illyes has written a blog post, called What Crawl Budget Means for Googlebot, that explains what crawl budget is, how crawl rate limits work, what crawl demand is, and what factors impact a site’s crawl budget. Gary explained that, for most sites, crawl budget is something that we wouldn’t normally have to […]

Have You Thought About Having Fun With Robots.txt?

In any industry, there’s always going to be that part of the job that seems to drag on and feels incredibly boring. In technical SEO, one of the most boring topics out there is robots.txt. For the most part, there isn’t any interesting problem that needs solving in the file, and […]

Is There A Horrifying Connection Between Malware, Google Search Console And AdWords?

When you get a security warning in Google Search Console (GSC), it can be a scary experience. You can get a warning for a variety of reasons, including your site being flagged for being hacked, or for serving malware or unwanted software. If something like this happens, security warnings in GSC can result in some […]

Study Shows 29% Of Sites Face Duplicate Content Issues & 80% Aren’t Using Schema.org Microdata

Recently, Raven Tools conducted a study that uncovered some major on-page elements being overlooked. One of the biggest offenders pointed out in the study was duplicate content. The results identified that 29 percent of websites have duplicate content, and 80 percent of websites don’t use Schema.org microdata. For more detail on […]
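For readers unsure what “Schema.org microdata” refers to, here is a small, hedged example of the kind of markup the study found most sites omitting; the item type, property names, and values below are placeholders, not taken from the study.

<!-- Illustrative Schema.org microdata for an article; values are placeholders -->
<article itemscope itemtype="https://schema.org/Article">
  <h1 itemprop="headline">Example Headline</h1>
  <span itemprop="author" itemscope itemtype="https://schema.org/Person">
    <span itemprop="name">Jane Doe</span>
  </span>
  <time itemprop="datePublished" datetime="2017-01-15">January 15, 2017</time>
</article>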

What Steps Can You Take To Find And Block Bad Bots?

As SEOs, we typically use log files to understand Googlebot behavior. But did you know that they can also be used to identify bad bots that are crawling your site? After all, finding out which bots are crawling your site is important, as these bots may be executing JavaScript, inflating analytics, consuming resources, and […]
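As a hedged sketch of one way this log analysis might look, the Python below reads a combined-format access log, counts hits per user agent, and checks whether clients claiming to be Googlebot resolve back to a Google hostname via reverse DNS, since fake Googlebots usually don’t. The log path, the threshold of what counts as “top” agents, and the exact verification policy are assumptions; a fuller check would also forward-resolve the hostname to confirm it points back to the same IP.

# Hypothetical sketch: surface suspicious crawlers from an Apache/Nginx
# combined-format access log and verify self-declared Googlebots via reverse DNS.
import re
import socket
from collections import Counter

LOG_PATH = "access.log"  # assumed path
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

hits_per_agent = Counter()
claimed_googlebot_ips = set()

with open(LOG_PATH) as log:
    for line in log:
        match = LINE_RE.match(line)
        if not match:
            continue
        ip, user_agent = match.groups()
        hits_per_agent[user_agent] += 1
        if "googlebot" in user_agent.lower():
            claimed_googlebot_ips.add(ip)

print("Top user agents by hits:")
for agent, count in hits_per_agent.most_common(10):
    print(f"  {count:>7}  {agent}")

print("\nVerifying IPs that claim to be Googlebot:")
for ip in claimed_googlebot_ips:
    try:
        host = socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        host = ""
    genuine = host.endswith(".googlebot.com") or host.endswith(".google.com")
    print(f"  {ip} -> {host or 'no reverse DNS'} ({'ok' if genuine else 'suspect'})")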