Ever since the release of the Pigeon update, things have changed for better or worse, depending on what kind of website owner or operator you are. Since Pigeon, content has become more of a focal point for site owners, replacing tactics like keyword stuffing.
The value of the content itself has become a bigger point of interest for Google, and it seems the company is taking another step toward search results that rank pages not only by popularity, but also by the accuracy of the information they contain.
A team of research scientists at Google, according to a New Scientist report, has published a paper (PDF) that explains the idea of Knowledge-Based Trust (KBT). What is KBT? In a nutshell, it’s a different way of determining the quality of web pages: by measuring how accurate their content is.
“The quality of web sources has been traditionally evaluated using exogenous signals such as the hyperlink structure of the graph. We propose a new approach that relies on endogenous signals, namely, the correctness of factual information provided by the source. A source that has few false facts is considered to be trustworthy.”
According to the paper, Google could use an extraction process to pull facts from web pages and compare them against facts stored in a knowledge base. After the comparison has been made, KBT rewards pages that are found to be more accurate.
But what if a web page doesn’t have enough facts within its content? The paper suggests that KBT would look at other pages from the same site to determine an overall trustworthiness for that site.
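To make the mechanism concrete, here is a heavily simplified sketch of the idea in Python. The paper’s actual model is a multi-layer probabilistic estimate that also accounts for extraction errors; this sketch only illustrates the core intuition described above, where facts are (subject, predicate, object) triples, a page’s trust is the share of its triples confirmed by a reference knowledge base, and a page with too few facts backs off to a site-level average. The knowledge base contents and the `MIN_FACTS` threshold are invented for illustration, not taken from the paper.

```python
# Toy reference knowledge base of (subject, predicate, object) triples.
# These example facts are illustrative, not from the paper.
KNOWLEDGE_BASE = {
    ("obama", "born_in", "honolulu"),
    ("paris", "capital_of", "france"),
}

MIN_FACTS = 3  # hypothetical threshold for "enough facts" on a page


def page_trust(triples):
    """Fraction of a page's extracted triples confirmed by the knowledge base."""
    if not triples:
        return None
    correct = sum(1 for t in triples if t in KNOWLEDGE_BASE)
    return correct / len(triples)


def kbt_score(page_triples, site_pages):
    """Trust for one page, backing off to a site-wide average when facts are sparse."""
    if len(page_triples) >= MIN_FACTS:
        return page_trust(page_triples)
    # Not enough facts on this page: average trust over the site's other pages.
    scores = [page_trust(p) for p in site_pages]
    scores = [s for s in scores if s is not None]
    return sum(scores) / len(scores) if scores else None
```

For example, a page carrying a single (correct) triple would fall below the threshold, so its score would come from the site average instead of the page alone.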
The authors say their early tests of Knowledge-Based Trust have been promising. “We applied it to 2.8 billion triples extracted from the web, and were thus able to reliably predict the trustworthiness of 119 million web pages and 5.6 million websites.”
The idea behind KBT wouldn’t work uniformly across the internet, since there are websites and web pages that aren’t made to share facts, or that aren’t about entities found in a Knowledge Graph-style database.