Google explains how Maps review moderation works
Google released an overview of how Maps reviews work. The overview includes details on how the company develops and enforces policies and the role of machines and humans in content moderation.
Creation and application of policies. “As the world evolves, so do our policies and protections,” Google said in the blog post. Regular reviews help Google adapt its policies to better protect businesses from abuse.
As an example, the company highlighted when governments and businesses began requiring proof of COVID-19 vaccination before entering: “We put additional protections in place to remove Google reviews that criticize a business for its health and safety policies or for complying with a vaccine mandate,” Google said.
Additionally, “once a policy is written, it is turned into training material — both for our operators and our machine learning algorithms,” the company said.
Review moderation powered by machine learning. User reviews are sent to Google’s moderation system as soon as they are submitted. If no policy violations are detected, the review can be live in seconds.
To determine whether a review might violate the policy, Google’s machine learning-based moderation systems evaluate reviews from several angles, such as:
- If the review contains offensive or off-topic content.
- If the account that left the review has a history of suspicious behavior.
- If there has been unusual activity at the location or business under review (for example, an abundance of reviews over a short period of time, or recent media attention that might motivate people to leave false reviews).
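Google has not published its actual moderation logic, but the signal types listed above can be illustrated with a toy pre-screen. Everything here — the field names, the blocklist, and the thresholds — is hypothetical, not Google's implementation:

```python
# Illustrative sketch only: combines the three signal types described
# above. All names and thresholds are hypothetical.
from dataclasses import dataclass

OFFTOPIC_TERMS = {"vaccine mandate", "politics"}  # hypothetical blocklist


@dataclass
class Review:
    text: str
    author_flag_count: int            # prior suspicious-behavior reports
    business_reviews_last_hour: int   # recent review volume at the business


def needs_human_review(review: Review) -> bool:
    """Return True if any signal suggests a possible policy violation."""
    text = review.text.lower()
    if any(term in text for term in OFFTOPIC_TERMS):
        return True  # offensive or off-topic content
    if review.author_flag_count >= 3:
        return True  # account has a history of suspicious behavior
    if review.business_reviews_last_hour >= 50:
        return True  # unusual activity at the business
    return False     # no violation detected; review can go live in seconds


print(needs_human_review(Review("Great tacos!", 0, 2)))  # False
```

A real system would use learned models rather than fixed rules, but the structure — score a new review against content, account, and venue signals before publishing — matches the process the article describes.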
The role of human moderators. Google’s human moderators are responsible for performing quality testing and additional training to remove bias from machine learning models. Training models on the different ways words and phrases can be used helps improve Google’s ability to detect fake reviews while reducing the chance of false positives.
Human moderators also review flagged content. When Google finds fake reviews, it removes them and, in some cases, also suspends the user accounts they came from.
Proactive measures. Google’s systems continue to analyze reviews and monitor for anomalous patterns even after reviews are posted. These patterns include groups of people leaving reviews for the same group of businesses, or a business receiving an unusual number of 1- or 5-star reviews in a short period of time.
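One of the patterns mentioned above — groups of accounts reviewing the same set of businesses — can be sketched as a simple co-occurrence check. This is an illustrative toy, not Google's detection system; the function name and threshold are invented:

```python
# Hypothetical sketch of post-publication anomaly monitoring: flag pairs
# of reviewers who have reviewed many of the same businesses.
from collections import defaultdict
from itertools import combinations


def suspicious_reviewer_pairs(reviews, min_shared=3):
    """reviews: iterable of (reviewer_id, business_id) tuples.
    Returns reviewer pairs sharing >= min_shared reviewed businesses."""
    seen = defaultdict(set)
    for reviewer, business in reviews:
        seen[reviewer].add(business)
    flagged = []
    for a, b in combinations(sorted(seen), 2):
        if len(seen[a] & seen[b]) >= min_shared:
            flagged.append((a, b))  # candidate coordinated-review group
    return flagged


# Two accounts that both reviewed the same four businesses get flagged.
data = [("u1", f"b{i}") for i in range(4)] + [("u2", f"b{i}") for i in range(4)]
print(suspicious_reviewer_pairs(data))  # [('u1', 'u2')]
```

At Google's scale this would be done with clustering over far richer signals, but the idea is the same: flag coordinated behavior that no single review would reveal on its own.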
Additionally, the human moderation team works to identify potential abuse risks before they materialize. “For example, when there’s an upcoming event with a large audience – like an election – we put in place heightened protections for venues associated with the event and other nearby businesses that people might be looking for on Maps,” Google said.
Why we care. Reviews are a local ranking factor. Understanding how Google handles reviews can help you comply with its policies and improve your business’s visibility.
While the information above isn’t new, it does give us an under-the-hood look at Google’s moderation system in more detail than the company previously shared.
RELATED: How Google and Yelp Handle Fake Reviews and Policy Violations