AI-Generated Reviews Flooding Online Platforms Are a Growing Concern
The emergence of AI tools capable of generating detailed reviews at scale has created a new wave of challenges for businesses, consumers, and online platforms alike.
While fake reviews have been a long-standing issue across sites like Amazon, Yelp, and Trustpilot, AI technology has significantly accelerated the creation and distribution of fraudulent reviews.
How AI is Changing the Fake Review Landscape
AI writing tools, such as OpenAI’s ChatGPT, have become powerful instruments in the hands of fraudsters.
These technologies allow users to quickly generate large volumes of reviews that may appear genuine, making it harder for both consumers and platforms to distinguish between legitimate and fake feedback.
In mid-2023, The Transparency Company, a watchdog group that tracks fraudulent online activity, began noticing a significant surge in AI-generated reviews.
The company’s findings, released in late 2024, revealed that nearly 14% of the 73 million reviews it examined across home, legal, and medical services were likely fake, with 2.3 million of those suspected to be entirely or partly AI-produced.
Source: The Transparency Company
Maury Blackman, an investor in tech startups, commented on the rise in AI-assisted fraud: "It’s just a really, really good tool for these review scammers."
AI and Its Impact on Consumer Trust
The rise of AI-generated reviews is particularly concerning for consumers, especially during peak shopping seasons like the holidays when reviews play a crucial role in purchasing decisions.
Major e-commerce platforms, including Amazon, have witnessed an increase in fraudulent reviews, some of which end up at the top of search results, despite their questionable authenticity.
Max Spero, CEO of Pangram Labs, a company focused on detecting fake reviews, shared that some AI-generated reviews on Amazon appeared “so detailed and well thought-out” that they could easily deceive shoppers.
Meanwhile, on Yelp, AI-generated reviews were often posted by users attempting to accumulate enough reviews to earn the “Elite” badge, which grants access to exclusive events and makes their profiles appear more credible.
What Are Companies Doing to Combat the Issue?
Companies are taking steps to address the rising problem of AI-generated fake reviews.
Major online platforms like Amazon and Trustpilot permit users to post AI-assisted reviews, as long as the reviews genuinely reflect their personal experiences.
Yelp, however, has taken a more cautious stance, requiring that users write their reviews themselves.
The Coalition for Trusted Reviews, which includes companies like Amazon, Trustpilot, Glassdoor, and Tripadvisor, acknowledges the potential misuse of AI but also sees it as an opportunity to counter fraudulent activities.
The group advocates for the development of better AI detection systems to ensure online reviews remain trustworthy.
Despite these efforts, experts believe that tech companies are not doing enough to tackle the scale of fake reviews.
Kay Dean, a former federal criminal investigator and founder of Fake Review Watch, pointed out,
“If these tech companies are so committed to eliminating review fraud on their platforms, why is it that I, one individual who works with no automation, can find hundreds or even thousands of fake reviews on any given day?”
The Federal Trade Commission Steps In
The U.S. Federal Trade Commission (FTC) has taken significant action against companies facilitating the creation of fake reviews.
In 2024, the FTC banned the sale or purchase of fake reviews and sued the company behind Rytr, an AI writing tool, for allegedly enabling the mass generation of fraudulent reviews.
The commission noted that some of Rytr’s users were producing hundreds or even thousands of reviews for businesses ranging from garage door repair services to sellers of counterfeit designer goods.
Although the FTC’s actions have raised awareness, tech platforms hosting these fake reviews are shielded from legal consequences.
U.S. law does not hold companies like Amazon or Google accountable for content posted by third parties on their platforms.
Spotting Fake Reviews: What Consumers Should Know
Consumers can protect themselves by being aware of several warning signs when reading reviews.
Overly enthusiastic or excessively negative reviews should raise suspicion, as should reviews that use repetitive jargon, such as a product’s full name or model number.
AI-generated reviews, in particular, often exhibit certain traits.
According to research by Pangram Labs, these reviews tend to be longer, more structured, and filled with “empty descriptors”—vague phrases like “game-changer” or “first thing that struck me.”
Balázs Kovács, a professor at Yale University, conducted research showing that people are often unable to tell the difference between AI-generated and human-written reviews.
Shorter texts, which are common in online reviews, can also deceive AI detection tools, making it harder to spot fraudulent content.
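The traits described above can be illustrated with a toy heuristic. This is only a sketch under stated assumptions: the phrase list, length threshold, and scoring rule below are invented for illustration and are not Pangram Labs’ actual detection method, which would rely on far more sophisticated models.

```python
# Toy heuristic (illustrative only): flag a review that is both
# unusually long and contains "empty descriptor" phrases of the kind
# associated with AI-generated text. The phrase list and threshold
# are assumptions made for this example.

EMPTY_DESCRIPTORS = [
    "game-changer",
    "first thing that struck me",
]

def looks_ai_generated(review: str, length_threshold: int = 120) -> bool:
    """Return True if the review trips both toy signals:
    word count above the threshold AND at least one empty descriptor."""
    text = review.lower()
    long_enough = len(text.split()) > length_threshold
    descriptor_hits = sum(phrase in text for phrase in EMPTY_DESCRIPTORS)
    return long_enough and descriptor_hits >= 1
```

A real detector would need to weigh many more signals; as the Yale research above suggests, even humans struggle to separate AI-written from human-written reviews, so simple phrase matching alone is easily fooled.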
The Growing Battle Against Fake Reviews
As the use of AI to generate fake reviews continues to rise, the responsibility lies not only with tech companies to improve detection systems but also with consumers to remain vigilant.
While some platforms are adapting to the new reality by refining their policies, experts argue that much more needs to be done to protect consumers from falling victim to deceptive online content.
As Kay Dean succinctly put it: “Their efforts thus far are not nearly enough.”