Google Deploys Content Cops on Fake News

Google is asking its army of 10,000 content-monitor contractors to help rein in the amount of questionable content—including what many people are labeling these days as “fake news”—that crops up in search results.

These Google contractors, known as quality raters, have long been assigned to assess search results. What’s new is that they will now be asked to evaluate real search requests—including ones that may surface what updated Google guidelines describe as “upsetting-offensive” content—and then rate those results. The news was first reported by Google search news-tracking site Search Engine Land.

Google senior engineer Paul Haahr, who spoke to Search Engine Land, said that Google (GOOG) itself does not use the “fake news” term because it’s overly broad. The goal, he noted, is to ferret out information that is “demonstrably inaccurate.”

But for most media followers outside of Google, such inaccurate or false statements fall into the fake news category.


As the report noted, the quality raters’ findings do not directly change Google results. But their findings will be used to improve underlying search algorithms. That means their data “might have an impact on low-quality pages that are spotted by raters, as well as on others that weren’t reviewed.”

So, what constitutes “upsetting-offensive” content? Per the revised guidelines, it includes material that “promotes hate or violence against a group of people based on criteria including (but not limited to) race or ethnicity, nationality or citizenship.” Also included: racial slurs, animal cruelty, child abuse, and “how-to” information about human trafficking or violent assault.


In December, Fortune reported on Google’s plans to adjust search results about the Holocaust, which regularly led to neo-Nazi and Holocaust-denial sites. The Guardian has a story on how the guidelines could apply to content that denies the Holocaust, for example.

Google is not alone in dealing with the problem of racist, sexist, or just plain false content. Facebook (FB) and Twitter (TWTR), for example, must also deal with the influx of questionable material showing up on their sites. If they do nothing, they get slammed for lax standards. If they come down hard, they are accused of censorship.
