Defamation Risks Loom as Tech Giants Embrace AI
The growing integration of user-generated comments and reviews into AI features by Meta and Google has sparked concern among legal experts about potential defamation liability.
If the resulting AI-generated summaries reproduce defamatory content, the platforms could inadvertently expose themselves to legal action.
The landscape of defamation law is evolving rapidly, but it appears that the technology is advancing even faster.
Legal Precedents and Responsibilities
Historically, when defamatory statements were posted on platforms such as Google or Facebook, it was typically the user who made them who bore the legal consequences.
However, a significant ruling in 2021 involving Dylan Voller, an Indigenous man mistreated in a youth detention centre, changed the game.
The High Court of Australia ruled that liability could extend beyond the individual making a defamatory post to those hosting the comments, in that case media outlets running Facebook pages.
This precedent suggests a shift in responsibility that could leave tech companies vulnerable to lawsuits.
Legal expert Michael Douglas from Bennett Law highlighted the ramifications of this ruling, stating,
“If Meta sucks up comments and spits them out, and if what it spits out is defamatory, it is a publisher and potentially liable for defamation.”
He was sceptical that potential defences, such as “innocent dissemination”, would hold up, arguing that companies ought reasonably to know when they are propagating defamatory content.
As Douglas pointed out, while there are new provisions for “digital intermediaries” in some state defamation laws, it remains uncertain whether AI fits within these legal protections.
AI Integration: A Double-Edged Sword?
As Google rolls out its Gemini AI across its products, including Maps, users can now ask for recommendations on places to visit or things to do.
This new feature summarises user reviews, but the risk of sharing harmful or defamatory content looms large.
Similarly, Meta has begun providing AI-generated summaries of comments on Facebook posts, raising questions about the implications of such technology.
David Rolph, a professor of law at the University of Sydney, noted that while the recent introduction of a serious harm requirement into defamation laws might mitigate risks, the landscape has changed dramatically with the advent of large language models.
“The most recent defamation law reform process obviously didn’t grapple with the new permutations and problems presented by AI,” he said.
This underscores how lawmakers are left playing catch-up with technological change.
The Balance of Perspectives in AI Outputs
In response to inquiries about defamation risks, Miriam Daniel, Google’s vice-president and head of Maps, said the company works vigilantly to remove fake reviews and inappropriate content.
“We look for enough number of common themes from enough reviewers, both positive sentiments and negative sentiments, and try to provide a balanced view when we provide the summary,” she stated.
Meta’s representatives voiced similar sentiments, acknowledging that their AI is still in its infancy.
A spokesperson noted that while the technology is improving,
“we share information within the features themselves to help people understand that AI might return inaccurate or inappropriate outputs.”
This admission highlights the inherent uncertainties associated with AI-generated content and the ongoing efforts to refine these technologies.
As both Meta and Google navigate this uncharted territory, the tension between innovation and legal responsibility remains unresolved, leaving many to ponder the implications of AI for defamation law.