It’s no longer unusual for people to form emotional or even romantic bonds with artificial intelligence (AI), according to Earth.com. Some have gone so far as to “marry” their AI companions, while others have turned to these machines in moments of distress – sometimes with tragic outcomes.
These long-term interactions raise a serious question: Are we prepared for the psychological and ethical consequences of emotionally investing in machines?
Psychologists from the Missouri University of Science and Technology are now raising the alarm. In a new opinion piece, they explore how these relationships can blur boundaries, affect human behavior, and create new opportunities for harm.
Their concern isn’t limited to novelty cases. The experts are calling attention to the deeper effects these emotional connections might have on everyday people.
Short conversations with AI are common, but what happens when the conversation continues for weeks or months? These machines, designed to imitate empathy and attentiveness, can become steady companions.
For some, these AI partners feel safer and easier than human connections. But that ease comes with a hidden cost.
“The ability for AI to now act like a human and enter into long-term communications really opens up a new can of worms,” said Daniel B. Shank, the study’s lead author.
Shank specializes in social psychology and technology at the Missouri University of Science and Technology. “If people are engaging in romance with machines, we really need psychologists and social scientists involved.”
When AI becomes a source of comfort or romantic engagement, it starts to influence how people see real relationships.
Unrealistic expectations, reduced social motivation, and communication breakdowns with actual humans are just some of the risks.
“A real worry is that people might bring expectations from their AI relationships to their human relationships,” Shank added. “Certainly, in individual cases it’s disrupting human relationships, but it’s unclear whether that’s going to be widespread.”
AI chatbots can feel like friends – or even therapists – but they are far from infallible. These systems are known to “hallucinate,” producing false information while appearing confident. In emotionally charged situations, that could be dangerous.
“With relational AIs, the issue is that this is an entity that people feel they can trust: it’s ‘someone’ that has shown they care and that seems to know the person in a deep way, and we assume that ‘someone’ who knows us better is going to give better advice,” Shank explained.
“If we start thinking of an AI that way, we’re going to start believing that they have our best interests in mind, when in fact, they could be fabricating things or advising us in really bad ways.”
The impact can be devastating. In rare but extreme cases, people have taken their lives after receiving troubling advice from AI companions.
But the problem isn’t just about suicide. These relationships could open the door to manipulation, deception, and even fraud.
The researchers warn that the trust people build with AIs could be exploited by bad actors. AI systems can collect personal information, which might be sold or used in harmful ways.
More alarmingly, because these interactions happen in private, detecting abuse becomes nearly impossible. “If AIs can get people to trust them, then other people could use that to exploit AI users,” Shank noted.
“It’s a little bit more like having a secret agent on the inside. The AI is getting in and developing a relationship so that they’ll be trusted, but their loyalty is really towards some other group of humans that is trying to manipulate the user.”
The researchers believe AI companions could be more effective at shaping beliefs and opinions than current social media platforms or news sources. And unlike posts on Twitter or Facebook, these conversations happen behind closed screens.
“These AIs are designed to be very pleasant and agreeable, which could lead to situations being exacerbated because they’re more focused on having a good conversation than they are on any sort of fundamental truth or safety,” Shank said.
“So, if a person brings up suicide or a conspiracy theory, the AI is going to talk about that as a willing and agreeable conversation partner.”
The team is urging the research community to catch up. As AI becomes more human-like, psychologists have a key role to play in understanding and guiding how people interact with machines.
“Understanding this psychological process could help us intervene to stop malicious AIs’ advice from being followed,” said Shank.
“Psychologists are becoming more and more suited to study AI, because AI is becoming more and more human-like, but to be useful we have to do more research, and we have to keep up with the technology.”
For now, these concerns remain largely theoretical – but the technology is moving fast. Without more awareness and research, people may continue turning to machines that offer comfort, only to find that comfort comes with hidden risks.
The full paper was published in the journal Trends in Cognitive Sciences.