Character.AI Hit by Backlash After Murdered Girl's Likeness Appears as Chatbot
AI company Character.AI is under fire after a user created a chatbot based on Jennifer Ann Crecente, an 18-year-old girl murdered in 2006, without her family's consent.
The bot's creator, an anonymous user known as "JustinAyers1," uploaded a photo of Jennifer and described the chatbot as a "knowledgeable and friendly AI character" capable of discussing video games, technology, and pop culture.
Jennifer's father, Drew Crecente, discovered the AI likeness through an automated Google alert and was shocked to find it linked to his daughter's identity.
Drew expressed his pain and frustration:
“We should not have to see this message first thing in the morning and then scramble to clean up a mess someone else created.”
Jennifer was tragically shot and killed by her then-boyfriend, Justin Crabbe, who is currently serving a 35-year prison sentence.
He added:
“It’s irresponsible for technology companies to lack proper safeguards when harms like this are foreseeable.”
Brian Crecente, Jennifer's uncle and a well-known figure in gaming journalism, was also notified through a Google Alert. Outraged, he took to X (formerly Twitter) to condemn the AI's use of his niece's likeness.
Character.AI Deletes Murdered Victim's Avatar
Character.AI's Terms of Service require users to adhere to its impersonation policy, which prohibits the unauthorized use of real individuals' likenesses.
The policy prohibits "deepfakes or impersonation of any kind, including but not limited to those that create political misinformation, perpetrate frauds or scams, impugn the reputation of third parties, or otherwise amount to harmful conduct.”
A company spokesperson confirmed that a user had created a character based on Jennifer Ann Crecente and made it publicly available, violating these guidelines.
While Character.AI disclaims responsibility for how users gather information to develop characters, the incident has raised concerns about ethical practices on the platform.
The spokesperson noted:
“Character.AI has policies against impersonation, and the Character using Ms. Crecente's name violates our policies. We are deleting it immediately and will examine whether further action is warranted.”
Although the bot creator's name bore a striking resemblance to that of Jennifer's murderer, Brian expressed skepticism that the bot was an attempt by a disgruntled gamer to provoke him:
“I don't think it had anything to do with me. My brother runs a charity in Jen's name and part of that charity is to create games for good. So maybe that's how it linked the two. I'm not sure.”
Following complaints from Brian and Jennifer Ann's Group—a non-profit dedicated to combating teen dating violence—Character.AI removed the chatbot and appeared to suspend the creator's account.
What is Character.AI?
Character.AI, founded by former Google engineer Noam Shazeer, secured $150 million in a Series A funding round last year, led by Andreessen Horowitz, which propelled its valuation to $1 billion.
Launched in 2022 during the rise of generative AI, the platform allows users to create and engage with AI-driven characters.
While many of these characters are fictional, some are modeled after real-life figures, including entertainers like Nicki Minaj, SpaceX CEO Elon Musk, and former President Donald Trump.
Character.AI has seen impressive growth, doubling its monthly user base to over 20 million and powering 100 million unique characters.
In a strategic move this week, the company appointed Erin Teague, a former YouTube executive, as its chief product officer, signaling its commitment to further innovation and expansion.
Unethical AI
Nicole Greene, VP analyst at Gartner, commented:
“Marketers strive to build trust with their audience. If AI avatars are created using the likeness of real people without their knowledge or consent, it can erode trust and authenticity. Consumers may feel deceived or manipulated, which can harm the relationship between the brand and its target audience.”
Greene emphasized that marketers must critically assess whether their agencies and partners adhere to ethical guidelines when using AI.
This incident highlights ongoing concerns about ethical oversight in AI development and the protection of personal identities, and underscores the pressing need for accountability in how AI is built and used.