Politeness

On Voice AI Politeness

Should voicebots be polite to us, their human users, and should we, the human users, be polite to our voicebots? I believe that the answer to the first is, ‘Yes, of course,’ and to the second is, ‘No, of course not.’  Here’s why.

First, voicebots are mere gadgets — or, when they are actually useful, mere appliances — and therefore don’t feel pain.  And since politeness has to do, first and foremost (although not exclusively), with minimizing the infliction of pain upon others, engaging in polite actions with voicebots is almost always absurd.  You are spending energy being tactful and careful to avoid hurting the feelings of something that has no feelings.

We humans, in sharp contrast, not only feel emotions, but feel them even when we shouldn’t, and even when we know that it’s senseless for us to train our emotional tentacles on such inanimate objects.  For instance, I may very well know that the toaster is an inanimate object, an appliance, a thing, but if for some reason it burns my toast, I will get angry.  It’s senseless for me to get angry at the toaster, but I will get angry still — and I will get angry at the toaster.  No doubt, I will also be angry at myself for not having thrown the toaster in the garbage, given that this was not the first time it had burned my toast, and at the company that built it, and even at the people who designed it, but still, the toaster itself will receive at least a bit, if not the brunt, of my ire.

Reminder of Manners

When the appliance behaves in ways that are singularly human — engaging in spoken, back-and-forth conversation — the media equation phenomenon kicks in at full force: it’s really very hard for us to refrain from falling into the behavioral and emotional patterns that we almost instinctively adopt when engaging a flesh-and-blood human being.  But it is important for us to remind ourselves that we are dealing with robots, not human beings, and we should do this for several reasons.

First, engaging in acts of politeness is expensive: before you speak, if your speech is what is called a Face Threatening Act (FTA), you need to think carefully about how to formulate your request so that you can minimize the impact of your FTA (your imposition on someone).  For instance, should you say, “What time is it?” or “Excuse me, do you have the time?” or “I’m really sorry, I left my phone in the other room.  Do you have the time by any chance?”  Figuring out which of these to say, and then saying the longer version rather than the shorter one, consumes time and energy.  Being polite to a robot by saying, “Excuse me, do you have the time?” or “I’m really sorry, I left my phone in the other room.  Do you have the time by any chance?” is clearly silly.  The only rational way to ask the bot for the time is by saying, “What time is it?”

Second, using polite formulations with voicebots (“Can you please,” “I was wondering if you could,” “Would you mind”) as well as politeness markers such as “please” and “thank you” risks cheapening the meaning of these expressions: are they mere habits, verbal tics, patterns of language that we utter whole cloth, devoid of real meaning, regardless of whether we are speaking with a human or a machine?  Clearly differentiating how we speak with a human from how we speak with a machine is probably a good idea, especially when we are engaging in conversations with these voicebots in front of children.  By doing so, we are communicating to our children that human beings are radically different from robots.  For instance, while a robot primarily exists to do something for you (to toast your bread), a human being does not exist to do something for you.  The plumber, when he comes to your home, is there to fix your plumbing, but he is first and foremost a human being, not a tool.  If your child witnesses you treating the machine as politely as you treat a human being, they may begin — paradoxically enough — to believe that a machine and a human being are creatures of the same species, and may come to treat human beings as machines in unintended ways.

Third, it is important to note that the politeness strategies one deploys with other human beings often relate to the power relations those human beings have with one another.  I am polite to you and you are polite to me to the extent that I can hurt you and you can hurt me back.  The ethic of “avoid hurting other people’s feelings no matter what” is an end in itself, yes, but it is also functionally useful: even if I didn’t believe in people as ends in themselves and really didn’t care about how my actions made others feel (that is, if I were a bad human being, or one with a low Emotional/Social IQ), I should at the very least be polite out of an abundance of caution.  I have come to learn that any human being, no matter how seemingly powerless now, can, if provoked enough, hurt me very badly; and even if they cannot hurt me right now, they have memory, and they may decide to hurt me in the future, once they have acquired the means to do so.  So, why not always be polite?

Why Be Polite?

But when it comes to robots — or voicebots in our case — could behaving politely towards them be teaching us to be afraid of them?  Fearing robots in and of itself may not necessarily be a bad thing — it may teach us to be very careful when dealing with them (maybe I shouldn’t give robots as much information as they ask of me).  But fearing them to the point where we begin to defer to them, because their politeness strategies nudge us to, takes us into dangerous territory.  A data-hungry company may deploy an impeccably polite voicebot, a British-accented butler, full of deference and stolid obeisance, who converses with us with such noble sophistication that, two turns into the conversation, we find ourselves in such politeness debt that rejecting an exquisitely phrased request from them feels like an act of rudeness on our part — and so we give them the piece of information they ask for.

This double asymmetry — the voicebot doesn’t feel pain while we, human beings, do, even when we shouldn’t, and the basic reality that we, the humans, are the ones on the winning side of the power equation with these glorified tools that make human noises and pretend to converse — gives us the following three basic tenets that should guide our Conversational Voice First design activities:

  1. The voicebot should never cause emotional pain to the user (for instance, by not letting them complete their sentence, by speaking at great length, or by using words the user doesn’t understand and making them feel stupid).
  2. The human user should never be forced to behave as if the voicebot is capable of feeling pain.
  3. The human user should never be forced to act politely as if the robot had power over the human and could, for instance, refuse to fulfill the user’s request because the user did not exhibit the requisite degree of politeness.

So then, when it comes to politeness, how should a human being behave with a voicebot, and how should a voicebot behave with a human being?  Find out in Part II.