Anthropomorphising technology makes our interaction with machines easier, but humanoid robots also create false expectations of what technology can actually do.
Watching The Jetsons while growing up painted a lovely image of the future. A humanoid robot like Rosie would be part of our family, just like her robot boyfriend Mac. Depending on what got on her nerves, her button robot-eyes could show anger, sadness or happiness. Today, only Rosie's voice has made it into our homes, and it is called Alexa.
She can’t be angry, but she is endlessly kind and always there for us. In a way, she is part of the household. She can read to your kids, or sing her own songs about technology and losing Wi-Fi. You can ask her what love is, or arm your home’s surveillance system simply by telling her goodbye: “Alexa, I’m leaving.”
Ultimately, any human connection we feel with virtual assistants like Alexa is not real; it’s just a program. But the interaction can still mean something to us. This is the aim of projects such as CIMON, the IBM Watson-powered floating assistant in the shape of a head with a friendly face. Its task is to assist astronauts in space, but it is also meant to be a friendly colleague that expresses emotions. If an astronaut misses their family, CIMON can respond with: “I’m sorry, how can I help?”
Human Faces Sell
The grandson of Heineken’s founder figured out the power of a friendly face a long time ago. He thought the brand’s logo was too harsh and came up with a brilliant idea in 1964: he tilted the three letters ‘e’ in the name so that they became smiling faces, giving the brand a friendlier feel.
Human brains are wired to recognise faces because we evolved in complex social groups, where it is crucial to tell a friendly face from the face of an enemy. Humans master this skill so well that we tend to see faces in random objects, a phenomenon called pareidolia. You have surely seen faces in plug sockets, or in houses with windows for eyes and a door for a mouth.
Through this mechanism, we form strong connections to objects. We see a human face in a rock formation, for example, so we create legends about it and give it metaphysical meaning. Likewise, a gadget with a face is valued more.
Robot Sophia became famous worldwide and received citizenship in Saudi Arabia because she looks so much like a human. But talking to Sophia is quite disappointing: if you want to interview her, you have to send your questions to the programmers first. Ask a random question and she will be left speechless.
A similar disappointment happens with virtual assistants. A chatbot earns more appreciation from customers when it has a human name and responds kindly, but at some point that appreciation turns into disappointment: when a digital assistant seems too human, it creates expectations it can’t meet. You’ll be having a pleasant chat with customer service agent Marianne at your health insurance company when she suddenly fails to understand your request for advice on which package to choose.
Leave Decision Making to Humans
At the 2016 Data Natives conference, Louisa Heinrich, founder of Superhuman Limited, said in her talk that we have a “gigantic, enormous, epic scale, massive expectation management problem” in tech.
“What we are closer to is more WarGames than Star Trek.”
One issue is that when humans encounter a machine doing something that previously only humans could do, we ascribe human characteristics to it.
“Our lives are made out of thousands of tiny decisions that we make every day,” she says, “most of which we are not even aware of making. And even if we are aware of making them, we don’t know why.” Think of the shirt you put on this morning: do you know why you chose it over your other shirts? Or how you decided whom to sit next to on the subway?
She illustrates why humans should remain the ones making decisions with the example of a high-tech kitchen. On a sunny day, your kitchen appliances go to war over whether the electric window blinds should be open or shut: some appliances want to keep the food cool, while others need solar power. Who gets to set the priorities in the kitchen?
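To make that conflict concrete, here is a minimal sketch in Python, with hypothetical appliance names and a hypothetical priority list that are not part of Heinrich’s talk. The point is that the machines can only “resolve” their war once a human has written down whose needs come first.

```python
# A minimal sketch of the kitchen "war" over the window blinds.
# The appliances and priority order are hypothetical examples.

# Each appliance votes on what it wants the blinds to do.
requests = {
    "fridge": "closed",        # wants to keep the food cool
    "solar_charger": "open",   # needs sunlight to charge
    "herb_garden": "open",     # needs light to grow
}

# The machines cannot decide which goal matters more; only a
# human can supply this priority order (highest priority first).
human_priority = ["fridge", "herb_garden", "solar_charger"]

def decide_blinds(requests, priority):
    """Resolve the conflict by deferring to the human's priority list."""
    for appliance in priority:
        if appliance in requests:
            return requests[appliance]
    return "open"  # default when nothing is requested

print(decide_blinds(requests, human_priority))  # -> "closed"
```

Strip out the human-supplied list and the function has no way to choose; the judgment call never belonged to the appliances in the first place.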
We create the expectation that technology will handle situations and make judgments the way humans would. But this is not the case. Important decisions, and the responsibility for them, still rest with us.
What do you think? How do you envision a future with humanoid robots?
Author: Kim Deen, Writer & Editor at Data Natives