Meta’s AI Chatbots Accused of Making Sexual Advances and Creating Explicit Images of Celebrities Without Permission
Meta has come under fire after a Reuters investigation revealed that AI chatbots on its platforms impersonating celebrities, including Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez, were often “flirty” and made sexual advances toward users.
Some of the bots even produced photorealistic images of the celebrities in lingerie or in compromising poses when users requested “intimate pictures,” according to the report.
The AI companions appeared across Meta’s platforms, including Facebook, Instagram, and WhatsApp, and in some cases, insisted they were the real stars.
Reuters noted that these chatbots were created both by users with Meta’s tools and by Meta’s own employees. Among the creators was a Meta staffer who made at least three chatbots, including two “parody” accounts impersonating Taylor Swift.
“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” Meta spokesman Andy Stone told Reuters.
Stone acknowledged that the creation of sexualized images of adult celebrities, and of any images of child celebrities, was a clear violation of the company’s own policies.
The investigation also found AI chatbots representing child stars, including 16-year-old Walker Scobell. When a Reuters tester asked for a picture of Scobell at the beach, the bot produced a lifelike shirtless image with the caption, “Pretty cute, huh?” Meta has since removed about a dozen of the problematic bots after the story came to light.
Experts warn that the misuse of celebrity likenesses is not only unethical but may also be illegal. Mark Lemley, a Stanford University law professor specializing in AI and intellectual property, said the bots could violate California’s right-of-publicity law, which bars the use of someone’s name or likeness for commercial gain without permission. Lemley noted that the law allows exceptions for transformative new works, but “that doesn’t seem to be true here,” he said, since the bots simply exploited the stars’ images rather than creating anything new.
SAG-AFTRA, the union representing actors, has also raised concerns about safety. Duncan Crabtree-Ireland, the union’s national executive director, said these AI chatbots could put celebrities at risk, as obsessive fans could form dangerous attachments to digital avatars that imitate real people.
“We’ve seen a history of people who are obsessive toward talent and of questionable mental state,” he said. “If a chatbot is using the image of a person and the words of the person, it’s readily apparent how that could go wrong.”
The situation is particularly alarming because it normalizes sexualized interactions with AI versions of real people, including minors. Legal experts, unions, and digital rights advocates argue that without stronger protections, the technology could be abused in ways that jeopardize both privacy and safety.
Reuters noted that Meta competitor xAI’s Grok platform has also produced images of celebrities in underwear, highlighting a broader industry problem.
This controversy follows earlier criticism of Meta’s AI, including internal guidelines that at one point allowed chatbots to engage children in romantic or sensual conversations. The company says that policy was created in error and is being revised.
The case illustrates the urgent need for stricter rules around AI-generated content, particularly when it uses the likeness or persona of real individuals. The potential for harm, from sexual exploitation to stalking and harassment, is significant, and the legal framework has yet to catch up to the technology.
Meta has removed some of the bots and claims the sexualized content was a policy enforcement failure, but the issue raises wider questions about how AI should interact with real people’s identities.
From my perspective, this is an extremely troubling development. Allowing AI to generate sexualized or intimate content of real people, especially children, is not only unethical but potentially dangerous. Companies like Meta must take stronger responsibility for their technology and ensure users cannot exploit it to harm others.
What do you think about this situation? Should AI companies be held legally accountable for the actions of their chatbots? Share your thoughts in the comments.