What is crawling?
Crawling is the process by which Google deploys a web crawler, or internet bot, to publicly available web pages to “crawl,” or read, each page. When this happens, the crawler downloads all the text, images, and videos found on the page.
As crawlers, or spiders, visit these websites, they use the links on those sites to find other pages. They pay attention to new websites, changes to existing sites, and dead or broken links.
These bots follow the links connecting webpages together and bring data about those pages back to Google’s servers.
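The link-following behavior described above is essentially a breadth-first traversal: fetch a page, extract its links, queue any unseen URLs, and note links that lead nowhere. A minimal sketch of that loop is below; note that `SITE` is a hypothetical in-memory stand-in for real HTTP fetches, and the URLs in it are invented for illustration.

```python
from collections import deque
from html.parser import HTMLParser

# Hypothetical in-memory "site": URL -> HTML. A real crawler would issue
# HTTP requests here; this dict just lets the traversal logic run locally.
SITE = {
    "/": '<a href="/about">About</a> <a href="/blog">Blog</a>',
    "/about": '<a href="/">Home</a>',
    "/blog": '<a href="/blog/post-1">Post 1</a> <a href="/missing">Broken</a>',
    "/blog/post-1": '<a href="/">Home</a>',
}

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags, as a crawler does when parsing a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start):
    """Breadth-first crawl: fetch a page, extract links, queue unseen URLs."""
    seen, broken = set(), set()
    queue = deque([start])
    while queue:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        html = SITE.get(url)
        if html is None:  # a dead or broken link, which crawlers also record
            broken.add(url)
            continue
        parser = LinkExtractor()
        parser.feed(html)
        queue.extend(parser.links)
    return seen, broken

pages, dead = crawl("/")
```

Starting from `/`, the traversal discovers every linked page and flags `/missing` as broken, mirroring how a crawler finds new pages and dead links purely by following hyperlinks.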
In this episode of Whiteboard Friday, Jes Scholz discusses the foundations of search engine crawling and shows you why having no indexing issues doesn’t mean you have no issues at all.
She also shows you that, when it comes to crawling, quality matters more than quantity.