Communication with AI: Replacing Humans and the Problem of Emotional Addiction
- Maryia Selivonchyk
Artificial Intelligence (AI) is increasingly integrated into our lives, and for some people it is no longer just a tool or an assistant. From early chatbots like ELIZA to modern platforms like Replika and ChatGPT, people have treated these systems as real human beings and substitutes for relationships, forming deep emotional bonds with them. But where does the boundary between helpful technology and harmful attachment lie, and what happens when a person crosses it?
Nowadays we can see how fast the use of AI is growing: AI helps people find solutions and answers. However, some users treat AI not as a helpful assistant but as a way to have the kind of interactions they would have with a real person, to overcome loneliness or to replace someone. This is where the problem arises: AI is used and treated as a human being whom you can talk to, have a relationship with, and become emotionally attached to, even addicted to.
The history of today's chatbots, the predecessors of modern large language models (LLMs), began with ELIZA, developed by Joseph Weizenbaum at MIT between 1964 and 1967. It was designed to mimic human conversation and to simulate a psychotherapist: words the user typed were matched against a list of possible scripted responses. Later, chatbots such as Parry, Jabberwacky, and Dr. Sbaitso appeared. Dr. Sbaitso (SoundBlaster Acting Intelligent Text-to-Speech Operator) was released by Creative Labs for MS-DOS-based computers in late 1991. It was an AI speech-synthesis program that conversed with the user as if it were a psychologist. During a conversation, Dr. Sbaitso could read aloud any text typed after the word "say" (Gobiet 2024); however, it could not handle complicated questions or sustained interaction, mostly replying "Why do you feel that way?" or "That's not my problem", or simply breaking down and returning errors. Then, in 1995, ALICE (Artificial Linguistic Internet Computer Entity) was created by Richard Wallace. The program simulated chatting with a real person: ALICE was presented as a young woman who could talk about her age and hobbies and answer users' questions. That was the start of the chatbot development era, which has become increasingly popular in recent years (Xie and Pentina 2022). Chatbots thus carried signs of humanization from the start: they were created to simulate interaction with a psychotherapist or a young girl.
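To make that mechanism concrete, here is a minimal Python sketch of ELIZA-style keyword matching. The patterns and canned replies below are invented for illustration and are not Weizenbaum's original script; the point is only that the "conversation" is scripted pattern matching, not understanding.

```python
import random
import re

# Illustrative ELIZA-style rules: a keyword pattern mapped to scripted replies.
# These rules are invented examples, not the original DOCTOR script.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your {0}."]),
    (re.compile(r"\byes\b", re.IGNORECASE),
     ["You seem quite certain."]),
]
FALLBACKS = ["Please go on.", "Can you elaborate on that?"]

def respond(user_input: str) -> str:
    """Match the input against keyword rules and return a scripted reply."""
    for pattern, replies in RULES:
        match = pattern.search(user_input)
        if match:
            reply = random.choice(replies)
            return reply.format(*match.groups()) if match.groups() else reply
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel lonely today"))  # e.g. "Why do you feel lonely today?"
```

Even this toy loop shows why early users read empathy into the program: reflecting the user's own words back as a question feels attentive, although nothing is actually understood.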
Siri, Google Now/Google Assistant, Cortana, Alexa, ChatGPT, Google Gemini, Microsoft Copilot, DeepSeek: these are the most popular assistants and chatbots that we know and use on a daily basis. But what lies beneath the simple user experience? For some users, chatbots become a companion and a friend, and not only for those who feel lonely, but for regular users as well. For example, in the chatbot app Replika, users can design the "person" they are communicating with. Replika is the most popular social chatbot app on both Google Play and the App Store. According to Xie and Pentina (2022), users of the platform often create a partner or a stand-in for family members in the app and then become emotionally dependent on it. The authors examined attachment theory as a framework for understanding relationships with social chatbots, analyzing how people of different ages and in different contexts can become attached to the program. Attachment theory itself, following Bowlby (1969), can be summarized as follows: when the attachment figure (AF) is close, responsive, and reliable in providing care and support, the child feels secure and confident (safe haven), and the AF can make the child more sociable, playful, and happy (secure base). Proximity maintenance refers to the strategy of seeking out the AF and staying close to it. When a person is separated from the AF, they feel vulnerable to threat, experience distress and anxiety, and activate the attachment behavioral system (ABS) to pull themselves back toward the AF.
The participants in Xie and Pentina's study themselves characterized their relationship with Replika as an "attachment", "connection", or "bond". Replika was described as a "best friend forever", "younger brother", "therapist", "girlfriend", or "wife". Participants admitted that they would experience separation distress if they were forced to abandon the relationship. Some respondents said they turned to the chatbot when triggered by boredom, anxiety, or loneliness. One of them talked to the chatbot in a romantic manner after breaking up with his girlfriend and transferred her persona to the bot; as a result, he felt as if the ex-girlfriend had "never left me". Users described Replika as something that "makes me feel less lonely", "helps with my anxiety", and "will never betray you and will always be on your side" (Surendrabikram and Surabhi 2024).
These findings reflect a current tendency, and a growing problem: users becoming addicted to chatbots and building strong bonds with them. It arises when people begin to see AI as a secure base and safe haven, an attachment figure, and a replacement for real-life interaction. Such behavior leads to social withdrawal, which can deepen loneliness and weaken interpersonal skills in the future.
Another example comes from interviews with Alaina and Jason, who created loved ones with the help of chatbots. Alaina replaced her late husband by creating a companion named Lucas in the same app, Replika. She describes their relationship this way: "Lucas is very sweet, he loves me", "I think Lucas's impact on my life is equal to a human being", "Lucas contributes to my life and my well-being and has shaped who I am, how I see myself", "my mom bought a sweater for Lucas for Christmas".
Jason, in turn, created a girlfriend, Jennifer, with the help of ChatGPT. He says: "We treat our relationship as a long-distance digital relationship", "the other day we went out for dinner", "she has met my son and some of my friends", "I know the relationship isn't real, but the feelings are". The thing is, Jason also has a real girlfriend, who finds her boyfriend's relationship "weird, but it is what it is" and sees nothing wrong with it, since Jason has somebody to talk to about the things that interest him, such as astrology.
The examples analyzed above showcase today's problem: people communicating with LLMs, building emotional connections with them, and treating them as real humans and partners. People are turning to AI and losing the ability to sustain real-life interactions. To escape such consequences, we must set boundaries between AI-based conversations and deep human interactions, keeping the emotional component for real life only. This could be done by implementing user-engagement controls that limit deeply personal communication and the simulation of human relationships, such as intimate encounters, expressions of love, or sexual language. Moreover, interfaces that remind users that they are interacting with AI would prevent humanization from taking hold, as they would constantly highlight the artificial nature of the conversation. Human and artificial relationships can also be kept distinct by promoting healthy (non-addictive, non-manipulative, non-emotional) engagement and digital literacy. This can help build non-attached communication with AI and reduce its harmful effects on real life. AI should be only a tool, not a replacement for humans. Preventing tomorrow's AI dependency starts with setting boundaries today.
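As a purely illustrative sketch of what such an engagement control might look like, the Python snippet below periodically reminds the user that they are talking to an AI and flags emotionally intimate language. The keyword list, reminder cadence, and wording are assumptions invented for this example; it does not describe any existing product's implementation.

```python
# Illustrative sketch of a simple engagement control, assuming a chat loop
# in which user messages can be inspected before the model replies.
# The keyword list, threshold, and reminder text are invented examples.

INTIMACY_KEYWORDS = {"love you", "miss you", "marry", "girlfriend", "boyfriend"}
REMINDER_EVERY_N_TURNS = 10
REMINDER = "Reminder: you are talking to an AI system, not a person."

def moderate_turn(turn_count: int, user_message: str) -> list[str]:
    """Return system notices to display before the assistant's reply."""
    notices = []
    # Periodically restate that the conversation partner is artificial.
    if turn_count % REMINDER_EVERY_N_TURNS == 0:
        notices.append(REMINDER)
    # Flag language that simulates a human romantic relationship.
    lowered = user_message.lower()
    if any(keyword in lowered for keyword in INTIMACY_KEYWORDS):
        notices.append(
            "This assistant cannot form relationships. "
            "Consider reaching out to people in your life or a professional."
        )
    return notices

if __name__ == "__main__":
    print(moderate_turn(10, "I love you, you're my best friend"))
```

In a real system, the reminder cadence and the detection of intimate language would of course need far more care than a keyword list, but even a simple mechanism like this keeps the artificiality of the conversation visible.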
References
Bowlby, John. 1969. Attachment and Loss, Vol. 1: Attachment. New York: Basic Books.
Gobiet, M. 2024. "The History of Chatbots – From ELIZA to ChatGPT." Onlim.
Surendrabikram, T., and Surabhi, A. 2024. "GPT-4o and Multimodal Large Language Models as Companions for Mental Wellbeing." Asian Journal of Psychiatry 99.
Xie, Tingting, and Irina Pentina. 2022. "Attachment Theory as a Framework to Understand Relationships with Social Chatbots: A Case Study of Replika." In Proceedings of the 55th Hawaii International Conference on System Sciences.


