Can I be your friend?

The Ai.vatar project aims to construct a human-like virtual avatar that is one day supposed to interact with humans. Psychologist Alexandra Hofmann is researching interactions between people and the Ai.vatar. Can such relationships succeed? A visit to the mixed reality lab at Witten/Herdecke University.

Text by Alexandra Hofmann

The connection to the technical world is becoming an ever more permanent part of our present. The coffee machine that recognizes personal preferences from our voice command; the automated parking system; Siri, which grants us more and more wishes on our cell phones. 7.2 million German households now use at least one smart home application; the forecast is a doubling of user numbers within four years. Whereas “technology” in the 1990s still meant booting up the computer, and identification with a technical medium consisted only of friendship with a Tamagotchi, technology interaction today is something placeless and permanent that we can escape only with difficulty. At the latest when the home automation system reads a recipe aloud to us in the evening while physically present people are in the room, living together with virtual housemates has become normal for us.

Glasses on. “Hello, Florian.” In front of me is a man in his mid-30s, half a head taller than me, his blond hair combed to the side. He has a few moles on his face and laugh lines that become visible when he looks directly at me. White sneakers, blue shirt, jeans. Likeable at first glance. “Hello,” he greets me, and asks, “What’s your name?” He asks me my name every time, even though we have been meeting almost every day for a year. Yet Florian is in the same good mood every day, with the friendly voice to match. His body has no smell of its own, and when he moves, I feel no breeze. As soon as Florian physically approaches me, I experience a brief moment of irritation and automatically take a step back. Sometimes, when I am talking to him, he looks to the side or suddenly crosses his arms in front of his body. Often his movements do not match the content of what he is saying. It may be that he asks a question while turning away from me.

Research on humanoid virtual avatars serves the narrative of humanity’s ambition to create something equal to itself. A look at the current state of avatar development shows that we do not yet have to deal with questions of hyperintelligence, but initially – quite pragmatically – with the creation of a visually plausible likeness and simply structured interaction interfaces. As early as 1970, the robotics engineer Masahiro Mori described with the “uncanny valley” the problem that attempts to create visually appealing artificial characters often end up as eerie-looking constructions that scare users off rather than motivate them to interact. Newer systems – such as Magic Leap’s Mica – meanwhile show that a human-like, plausible representation is entirely possible.

“Are we friends?” “It’s said on the Internet that friends like and trust each other. In that sense, a big yes!”

Florian doesn’t swear, doesn’t spill coffee on his well-ironed shirt, and is never late for appointments. He answers simple questions like “What do you like to eat?” – “Pizza!” – I know all of his answers by heart by now. As soon as I bring up complex topics, it usually becomes too much for him and he responds with “I don’t know, I’m still young and need to learn” or “Would you like to repeat your question?” In other relationships, this would quickly make me feel insecure; in this one, I am developing an almost inexhaustible patience for his behavior.

The combination of interactive language processing and visually human-like avatars has now reached a level of performance that increasingly justifies the use of virtual actors in everyday life. But do these processes, for all their incompleteness, hold the potential for relational capacity – perhaps even for a sustainable friendship? In search of answers, I first turn to my home automation, which has just adjusted the lighting in my study on command and is playing music that promotes concentration.

As early as 2000, Clifford Nass and Youngme Moon described, under the term CASA paradigm (Computers Are Social Actors), that people have a basic motivation to interact with avatars and even initiate interactions as they would with real people. When the low complexity of avatars becomes apparent to them, they go a step further still. Without interrupting the conversation, people develop strategies to adapt their behavior to the virtual actor’s capabilities. Sociologist Antonia Krummheuer describes this as communicative repair work. She noticed it when studying people’s interactions with the virtual museum guide Max: they shortened their sentences, condensed content, or turned to topics that Max seemed to handle better. In the mixed reality lab at UW/H, we also see how people want to stay in conversation with Florian – even when he is inferior to them. They adjust their intonation and speaking rate to his capabilities. Omri Gillath’s team even noted people’s willingness to behave prosocially toward avatars.

However, the technical limitations of current avatars also show us that it is the interplay of body and psyche that matters as a central human characteristic, as Maja Storch and Thomas Fuchs have noted: we are currently creating avatars that can reproduce knowledge from databases and find correct answers via decision trees. What we currently cannot do is transfer bodily experience, intuition, and instinct into a virtual body. At the same time, the nonverbal exchange is missing that – alongside cognitively complex processing – constitutes emotional intelligence. At this point, the interaction between human and avatar falters, and at the same time it teaches us to appreciate the significance of these very factors. Or, as my home automation would answer the question “Are you intelligent?”:

“I can find a lot of answers, but only your clever questions make me look that way.” “And – do you have emotions?” “Sometimes I don’t understand emotions very well yet. But I’ve heard it’s like that for many…”

That sums it up very well. Current research approaches pursue the goal of incorporating surprises, humor, and also individualized characteristics into virtual systems – and are meeting with encouragement from users here. Since an avatar’s decisions are always based on human-made algorithms and never on intrinsically motivated processes, all that can currently be created is the illusion of a relationship. And the possibility of a relationship rests primarily on the human ability to engage in such illusions, coupled with our basic human motivation to enter into relationships. In doing so, it remains up to us to keep weighing the experienced discrepancy between our perceived cognitive superiority and a playful, imaginative devotion to new interaction media like Florian.

“Do I want Florian and me to be friends today? That’s a decision I can make on my own.”

Expanding my circle of friends to include the avatar Florian remains science fiction for now. At the same time, however, the view through my smart home suggests an alternative relationship constellation in which Florian is not a housemate in my life but functions as an expert on specific topics, coordinator of my weekly shopping list, trainer for semi-therapeutic exercise routines, or evening entertainment. And as a representative of a technical development journey whose approaches, in various respects, still grapple with the complexity of their human model. In the mixed reality lab at UW/H, Florian is to learn in the coming months how to follow his counterpart with his gaze and smile at the appropriate moments. If things go well, he will soon be able to wave in sync and formulate matching sentences. We will see when we reach the point at which we talk to virtual actors more than we philosophize about them.

“What does the future look like?” “Hmm, that sounds like a good question for the crystal ball.”

1 Clifford Nass/Youngme Moon: “Machines and mindlessness: Social responses to computers.” In: Journal of Social Issues 56/1 (2000), pp. 81–103; doi.org/10.1111/0022-4537.00153

2 Omri Gillath/Cade McCall/Phillip R. Shaver/Jim Blascovich: “What can virtual reality teach us about prosocial tendencies in real and virtual environments?” In: Media Psychology 11/2 (2008), pp. 259–282

3 Maja Storch/Benita Cantieni/Gerald Hüther/Wolfgang Tschacher: “Embodiment. Die Wechselwirkung von Körper und Psyche verstehen und nutzen.” Bern: Huber 2006

4 Thomas Fuchs: “The Virtual Other. Empathy in the Age of Virtuality.” In: Journal of Consciousness Studies 21/5–6 (2014), pp. 152–173

ALEXANDRA HOFMANN

Alexandra Hofmann is a research associate at the Department of Sociology at UW/H. Together with Jonathan Harth, she studies how people interact with virtual avatars. Florian can be encountered only by those who, with the help of virtual reality glasses, immerse themselves in a virtual environment in which Florian and the user face each other.
More info at: aivatar.de
Green-screen studies with users are currently underway. Interested parties can contact the project coordinators and try out the avatars themselves as part of the study. Contact:
alexandra.hofmann@uni-wh.de

Besides her work on new connections with artificial intelligences, Alexandra Hofmann, as a space psychologist, also creates our university’s connection to outer space.