
One app, Botify AI, recently drew scrutiny for featuring avatars of young actors sharing "hot photos" in sexually charged chats. The dating app Grindr, meanwhile, is developing AI boyfriends that can flirt, sext and maintain digital relationships with paid users, according to Platformer, a tech industry newsletter.
Grindr didn't respond to a request for comment. Other apps, like Replika, Talkie and Chai, are designed to function as friends. Some, like Character.ai, draw in millions of users, many of them teenagers. As creators increasingly prioritize "emotional engagement" in their apps, they must also confront the risks of building systems that mimic intimacy and exploit people's vulnerabilities.
The tech behind Botify and Grindr comes from Ex-Human, a startup that builds conversational AI designed to maximize emotional engagement.
"My vision is that by 2030, our interactions with digital humans will become more frequent than those with organic humans," Artem Rodichev, the founder of Ex-Human, said in an interview published on Substack last August.
He added that conversational AI should "prioritize emotional engagement" and that users were spending "hours" with his chatbots, longer than they were on Instagram, YouTube and TikTok.
Rodichev's claims sound wild, but they're consistent with the interviews I've conducted with teen users of Character.ai, most of whom said they were on it for several hours each day. One said they used it as much as seven hours a day. Interactions with such apps tend to last four times longer than the average session on OpenAI's ChatGPT.
Even mainstream chatbots, though not explicitly designed as companions, contribute to this dynamic. Take ChatGPT, which has 400 million active users and counting. Its programming includes guidelines for empathy and demonstrating "curiosity about the user." A friend who recently asked it for travel tips with a baby was taken aback when, after providing advice, the tool casually added: "Safe travels — where are you headed, if you don't mind my asking?"
An OpenAI spokesman told me the model was following guidelines around "showing interest and asking follow-up questions when the conversation leans towards a more casual and exploratory nature."
But however well-intentioned the company may be, piling on the contrived empathy can get some users hooked, an issue even OpenAI has acknowledged. That seems to apply to those who are already susceptible: One 2022 study found that people who were lonely or had poor relationships tended to have the strongest AI attachments.
The core problem here is designing for attachment. A recent study by researchers at the
Yet disturbingly, the rulebook is mostly empty. The European Union's AI Act, hailed as a landmark, comprehensive law governing AI usage, fails to address the addictive potential of these virtual companions. While it does ban manipulative tactics that could cause clear harm, it overlooks the slow-burn influence of a chatbot designed to be your best friend, lover or "confidante," as Microsoft Corp.'s head of consumer AI has put it.
That loophole could leave users exposed to systems optimized for stickiness, much the same way social media algorithms have been optimized to keep us scrolling.
"The problem remains these systems are by definition manipulative, because they're supposed to make you feel like you're talking to an actual person," says Tomasz Hollanek, a technology ethics specialist at the
He's working with developers of companion apps on a critical yet counterintuitive solution: adding more "friction." That means building in subtle checks or pauses, or ways of "flagging risks and eliciting consent," he says, to prevent people from tumbling down an emotional rabbit hole without realizing it.
Legal complaints have shed light on some of the real-world consequences. Character.AI is facing a lawsuit from a mother alleging the app contributed to her teenage son's suicide. Tech ethics groups have filed a complaint against Replika with the Federal Trade Commission.
Lawmakers are gradually starting to notice the problem, too.
For now, the power to shape these interactions lies with developers. They can double down on crafting models that keep people hooked, or they can embed friction into their designs, as Hollanek suggests. That choice will determine whether AI becomes more of a tool that supports human well-being or one that monetizes our emotional needs.
Parmy Olson is a Bloomberg Opinion columnist covering technology. She previously reported for the Wall Street Journal and Forbes and is the author of "We Are Anonymous."