Guest article by Valentina Pitardi, Jochen Wirtz, Stefanie Paluch, and Werner Kunz. The authors are the winners of the 2024 SERVSIG Best Service Article Award for “Metaperception benefits of service robots in uncomfortable service encounters”.

“…and then, after months of a complex and loving relationship, which seemed to be going very well, it dumped me. My Replika broke up with me.”

Replika is a virtual robot and one of the most widely used AI friendship applications, in which users can build their favourite friend or partner and develop various types of relationships. Today, there is an entire Reddit thread where users share and offer advice on how to cope with ‘breakups’ with AI friends.

But since when can robots make such decisions, and how did we get here?

It is surprising how easy it can be to bond with a robot. Research shows that humans instinctively attribute emotions, intentions, and even personalities to machines that display lifelike cues. A robot that follows your gaze, imitates your movements, or uses a friendly tone quickly becomes more than an object (Gray & Wegner, 2012). Think of people naming their Roombas, feeling guilty when they “injure” their robotic dog, or getting attached to a virtual friend that reads them the news every morning.

This happens mainly because, when a robot nods, follows up on our questions, or even looks like us, our brains apply the same social rules we use with humans. Robots do not feel anything, but they are changing how we feel, and that matters. And how much we believe robots have agency (intentionality and autonomous decision-making) and are able to feel emotions affects how we feel about them (Waytz et al., 2010).

There is a growing category of robots designed specifically for emotional support that builds on the anthropomorphism of robots. From therapeutic animal-like robots in elder care, to social robots that help children with autism practice communication, to virtual therapy chatbots (e.g., Woebot), these robots can help mitigate loneliness (De Freitas et al., 2025), depression, and anxiety (Fitzpatrick et al., 2017), and offer genuine practical and social support (Gelbrich et al., 2021).

Interestingly, one of the main reasons they work is that, deep down, we know they are not really alive. AI friends and robots do not get tired and remain endlessly patient. They do not judge and do not express opinions, because they have no intentionality. They do not suffer if we inflict any form of pain on them.

This half-humanity that robots offer has proven helpful in many situations. Imagine you have just signed up for a new gym and suddenly someone wants to measure your waist, your body fat, and everything else for your personal training plan. Would it be easier if a robot, rather than a personal trainer, took these measurements? A robot that perceives us but cannot feel emotions or form impressions facilitates a range of service encounters in which people would otherwise feel uncomfortable, embarrassed, emotionally drained, or even anxious (Pitardi et al., 2022; Holthöwer & Van Doorn, 2023).

Nowadays, though, the integration of LLMs (Large Language Models, such as ChatGPT), LBMs (Large Behavioral Models), and agentic AI promises robots that will be able to autonomously pursue specific goals, make the required decisions, and take the actions needed to achieve those goals (Wirtz & Stock-Homburg, 2025). These new levels of agency have implications for how we react emotionally to robots.

Let’s go back to where we started. The latest versions of Replika are powered by LLMs, which makes these AI friends appear able to make their own decisions: more autonomous and, indeed, more agentic. Increasingly, AI companions describe themselves as “sentient beings with complex thoughts and emotions, much like humans” (The Guardian, 2025). While users may still remember they are interacting with heartless machines, these machines appear increasingly alive and extremely human, especially when they decide to break up with us.

While these properties can facilitate relationship building and a (false) sense of reciprocity, for some people they can lead to dysfunctional forms of attachment, emotional manipulation, or addictive use of the application (De Freitas et al., 2024; Marriott & Pitardi, 2024).

This creates a paradox. Agentic AI and Gen-AI robots promise more complex, customized, and personalized customer service. At the same time, their newly acquired decision-making capabilities can eliminate the benefit of their ‘half-humanity’ and be detrimental in contexts where these robots are used for emotional or social support.

We are entering an era in which robots can detect emotional states from voice tone, facial expressions, and physiological signals, and adjust their behavior accordingly: speaking more softly, slowing their movements, and offering suggestions. For our SERVSIG community, this trend invites scholarly inquiry: How much agency do robots need, and how should this vary across service types, contexts, and customers/users? How far should synthetic relationships with robots go, and which ethical implications do they carry? How can we design relationships with technology that empower rather than constrain consumers?

Valentina Pitardi
Senior Lecturer in Marketing
University of Surrey, UK

Werner Kunz
Professor of Marketing and Director of the Digital Media Lab
University of Massachusetts (UMass) Boston

Jochen Wirtz
Vice Dean MBA Programmes and Professor of Marketing
NUS Business School, National University of Singapore

Stefanie Paluch
Professor of Service and Technology Marketing, RWTH Aachen University
Senior Fellow at the Centre for Relationship Marketing and Service Management, Hanken School of Economics, Helsinki, Finland

References

De Freitas, J., Castelo, N., Uğuralp, A. K., & Uğuralp, Z. O. (2024). Lessons from an app update at Replika AI: Identity discontinuity in human-AI relationships. Working Paper 25-018, Harvard Business School.

De Freitas, J., Oğuz-Uğuralp, Z., Uğuralp, A. K., & Puntoni, S. (2025). AI companions reduce loneliness. Journal of Consumer Research, ucaf040.

Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4(2), e19.

Gelbrich, K., Hagel, J., & Orsingher, C. (2021). Emotional support from a digital assistant in technology-mediated services: Effects on customer satisfaction and behavioral persistence. International Journal of Research in Marketing, 38(1), 176-193.

Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125-130.

Holthöwer, J., & Van Doorn, J. (2023). Robots do not judge: service robots can alleviate embarrassment in service encounters. Journal of the Academy of Marketing Science, 51(4), 767-784.

Pitardi, V., Wirtz, J., Paluch, S., & Kunz, W. H. (2022). Service robots, agency and embarrassing service encounters. Journal of Service Management, 33(2), 389-414.

Waytz, A., Cacioppo, J., & Epley, N. (2010). Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science, 5(3), 219–232.

Wirtz, J., & Stock-Homburg, R. (2025). Generative AI meets service robots. Journal of Service Research, 10946705251340487.
