Humans are stealing robot jobs —

Artist transforms herself into a virtual assistant and obeys your commands

Would we rather have a human servant or Alexa? Lauren McCarthy decided to find out.

We love to talk about how our virtual assistants fail us. They let parrots place orders on Amazon and play porn channels to kids. Clearly, it will be quite some time before these machines are as good as human assistants. That's why the quest to create personalities for assistive technology is a serious business. Google has a “personality team” working to give Assistant a more human-like character. Now artists are testing the limits of these technologies too, asking what would happen if humans actually behaved the way our virtual assistants do.

Artist and UCLA professor Lauren McCarthy’s project, LAUREN, is a performance piece that examines how home automation affects social interactions within a home. The artist installs customized software and hardware in a willing participant’s home.

“The hardware is built with off-the-shelf smart devices—Nest cams, WeMo switches, Wink light bulbs, August door locks—while the software is built on top of an open source project called Home Assistant, which runs on Python 3 and integrates all the smart device components and provides access via an interface,” McCarthy said. She physically modified the hardware so the LAUREN system would have a uniform look.
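Home Assistant automations of the kind LAUREN performs by hand are typically declared as YAML triggers and actions. A minimal sketch of what such a rule might look like (the entity IDs and times here are hypothetical, not from McCarthy's actual setup):

```yaml
# configuration.yaml fragment (hypothetical entity IDs)
# Turn off the living room lights when motion is
# detected in the bedroom late at night.
automation:
  - alias: "Wind down for bed"
    trigger:
      - platform: state
        entity_id: binary_sensor.bedroom_motion
        to: "on"
    condition:
      - condition: time
        after: "22:00:00"
    action:
      - service: light.turn_off
        target:
          entity_id: light.living_room
```

In LAUREN, rules like this are replaced by a human watching the cameras and triggering the same services by hand.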

The installation allows McCarthy to observe the participants’ world 24 hours a day, and she interacts with them by typing words into a synthesizer that speaks in a robotic voice. For three consecutive days, McCarthy becomes the voice and mind of the assistive technology, interacting with the household much as Amazon’s Alexa or Google Home would.

In one of McCarthy’s performances, she invited participants to stay at her house while she monitored their activity and anticipated commands from afar. When a participant brought a date over and began to make out, LAUREN reminded them that she could see and hear everything. The couple carried on anyway.

Through her observations, Lauren learns the nuances of her participants’ behaviors and tries to act preemptively. For example, she turns down the lights when she sees someone preparing for bed. McCarthy’s work forces participants not only to command another human but also to think about the difference between talking to a person and talking to a “smart” object. With LAUREN, McCarthy aims to exceed the overtly synthetic, robotic presence of commercial assistants.

Lauren doesn’t merely perform tasks—she is both emotionally benevolent and omnipresent. Her voice may sound robotic, but her responses are not just canned lines. She extemporizes and tries to convey emotion.

McCarthy’s installation makes us wonder whether we actually want technology to be human. Is the purpose of assistive technology just to ease the burdens of daily life by creating shopping lists? Or are we actually looking for “emotional” devices that can intuit our needs based on our actions and tone of voice?

Since Lynn Hershman Leeson’s groundbreaking 1999 work Agent Ruby, artists have been playing with the idea of artificial intelligence and how it involves teaching machines to interact with us just like humans. What makes McCarthy’s project extraordinary in its scope is the reversal of those roles. The artist’s performance becomes a commentary on what it might mean if a human were to mimic a machine. As the artist writes on the project site, “I’m not some automated system... I’m watching and anticipating... what would bring a smile to their face or surprise them?”

The labor of attention

In one of McCarthy’s previous works, Follower, the artist offered a service in which she followed an individual around in real life. It was a literal version of the “following” people do all day on Twitter, Instagram, and other social media platforms. Although the project was simple in scope, following a person around for a full day demands intellectual and emotional labor that takes a toll on the follower. That was precisely McCarthy’s point.

Human attention has its limits. Research suggests humans can consciously process only about 50 bits of information per second, even though our senses take in roughly 11 million bits per second from our immediate environment. That's a lot of noise assaulting our attention. A machine, on the other hand, processes information at far higher rates, and attention is a non-issue. In some ways, a machine’s job is to provide full attention, drawing on an ever-growing memory bank of information to inform future requests.

Trying to replicate the experience of being followed online in real life is exhausting and practically impossible. McCarthy recalled how intense the experience was, especially when she would lose track of her Followee:

What I liked best about performing Follower is how strange the relationship between me and the Followee feels. I spend all day at their whim. I’m trailing them in the cold rain, sweating in the sun, running after them down the street, only to see them catch a bus.

Attention is a form of labor, but it’s easy to forget that when following slips into the digital world. That’s why Follower focused on the time and energy investment of following someone in the real world.

Weird pleasure

Many of McCarthy’s works deal with the arduous nature of sustained looking and observation. But they also deal with the ways that looking and being looked at turn our everyday lives into a performance.

The common thread between LAUREN and Follower is how each provides a level of assistive technology that ultimately shapes the behavior of the participant, whether they like it or not. When we know another human is following our movements, activities, and desires, we perform. We become acutely aware of our actions and behave as we would around any other human being. Machine technology, by contrast, offers the illusion of freedom from etiquette and consideration: we don’t need to have manners or be mindful of how we treat a machine. So what, then, keeps us from being domineering and commanding toward a device like Alexa? And what expectations form when we are tracked and followed?

But these devices also give us insight into what we want from a companion. Describing the experience of turning into a virtual assistant in LAUREN, McCarthy mused:

[Participants] are able to forget about me at times and relax into themselves and the feeling of home. It’s less drama [than Follower]. I’m not chasing them down wondering where they’ll go next or who they’ll meet. But it feels heightened in a different way. I begin to read into each small gesture. I wonder about the relationships between the people in the home and their guests and see them fluctuate through the days I’m with them. I hear them reference things that will happen later or have happened earlier, and I get a sense of weird pleasure knowing I am there for all of it.

By putting herself in the place of assistive technology, McCarthy provokes us to question its function and use in our daily lives. She also asks whether we really want our assistants to act more human—or if we want humans to become more like Alexa.

Dorothy R. Santos is a writer, editor, and curator whose research includes digital art, activism, computational media, and biotechnology.

Listing image by Lauren McCarthy
