On the Google Research blog, Google has published its search quality rater guidelines — the instructions contractors use to evaluate Google's search results — specifically for the Google Assistant and voice search results. They are similar to the web search quality guidelines; the difference is that there is no screen to look at when evaluating these results. Instead, raters evaluate the voice responses from the Google Assistant.
Google explained, “The Google Assistant needs its own guidelines in place, as many of its interactions utilize what is called ‘eyes-free technology,’ when there is no screen as part of the experience.” Google has designed machine learning algorithms to try to make the voice responses and answers “grammatical, fluent and concise.” Google said it asks raters to make sure that answers are satisfactory across several dimensions:
- Information Satisfaction: the content of the answer should meet the information needs of the user.
- Length: when a displayed answer is too long, users can quickly scan it visually and locate the relevant information. For voice answers, that is not possible. It is much more important to ensure that we provide a helpful amount of information, not too much and not too little. Some of Google’s previous work is currently in use for identifying the most relevant fragments of answers.
- Formulation: it is much easier to understand a badly formulated written answer than an ungrammatical spoken answer, so more care has to be placed in ensuring grammatical correctness.
- Elocution: spoken answers must have proper pronunciation and prosody. Improvements in text-to-speech generation, such as WaveNet and Tacotron 2, are quickly reducing the gap with human performance.
The seven-page guidelines are available as a PDF here.