trust deficit?

February 23, 2017

In 1942, author and professor Isaac Asimov introduced his Three Laws of Robotics, one of the best-known attempts to establish workable rules for integrating artificial intelligence, or AI, into society. Since then, many science fiction writers, philosophers, scientists, and others have grappled with the pros and cons of AI.*
Morgen Witzel discusses the tenuous nature of trust between humans and AI, an imperative for any meaningful relationship, in business or otherwise. If trust is lacking, can the potential of AI be fully leveraged?

Trust is essential in business, as indeed it is in society as a whole. Without trust, we are unable to build relationships with each other; without relationships there can be no cooperation, no collaboration, no organization, no markets. Trust is everything; without trust, there is nothing.
What is the key ingredient of success for every great leader? Trust. People are willing to accept a leader's commands and follow them because they trust the person giving them. No one ever achieved greatness by driving unwilling followers towards a goal they did not believe in. What is the most important factor behind every famous brand? Trust. We believe the promises the brand makes in terms of utility, quality, and safety.
Will the increasing use of AI create more trust in organizations? Or will it hamper trust relationships? Will we ever truly trust artificial brains, or the organizations that use them? Logic, based on both psychology and biology, suggests we will not. For organizations using or contemplating AI—especially in customer-facing roles—this is a major challenge. There is no doubt that AI is a potentially powerful tool, but it will have to be used with great care if the trust of key stakeholders such as customers and employees is not to be forfeited.

the function of trust
Before addressing this issue, we should stop to consider what trust is and how it works. Simply defined, trust is the belief that other people will (a) do what we expect them to do, and (b) not do anything that runs counter to our interests. They will not attack us, steal from us, or spread rumors behind our backs; instead, they will be there for us when we need them.
In her book Trust in Modern Societies, Barbara Misztal argues that trust is an essential part of community building. We will only accept into our communities those whom we feel we can trust; we will seek to repel those we cannot trust. Why is this so? Misztal says that one key factor is the element of predictability. We trust people when we know how they will behave. To do that, we need to know what social norms and values they hold, and what they consider to be right and wrong. If those values and norms are not congruent with our own, we become uncertain; we cannot predict how these people will behave, so we fear them. (As an aside, it is this kind of uncertainty about other people’s norms and values, and the fear it provokes, that is behind much racist behavior across the world, and is a major factor in the Islamophobia currently sweeping the West.)
Trust is essential in brand-building. Charles Babbage, better known as the pioneer of the computer, wrote of this in his book The Economy of Machinery and Manufactures when he described the thought process that goes through the consumer’s mind when buying simple commodities such as tea and salt. In former times, these were sold in bulk; a customer went into a shop and asked for a pound of tea, an assistant opened a bin and measured out a pound, wrapped it up, and sold it. The customer had no opportunity to inspect the goods, to tell what quality they might be or if they had been adulterated (which they frequently were). This meant uncertainty, which in turn led to lack of trust: how do I know I am getting what I am paying for?
The response, said Babbage, was for responsible producers to mark their goods so they could be clearly identified. A trademark or brand mark was an assurance of quality; I might not be able to open a packet of tea bags and inspect the tea itself, but if I see the Tetley or PG Tips brand on the label, I can be sure that I am indeed getting what I paid for. One 19th-century food producer, Henry Heinz, developed this concept to a high art, introducing the Heinz brand as a general symbol of quality. I might pay a penny extra, but I know this product will satisfy my needs, safely, because it has the Heinz brand on it.
The same goes for the people we work with. They do not have brands on their heads—though there is a lot of discussion these days about employer brands—but we make very similar decisions about whether we can trust people before we work with them willingly. Successful work teams are those that exhibit strong social bonds. Those bonds do not necessarily extend outside the workplace. We do not have to become best friends with our work colleagues; we do not even necessarily have to like them, but we do have to trust them. We have to know that they will do what they say they will do; we have to know that we can rely on them, especially in times of difficulty, and we must be certain they will do nothing to harm our own self-interest.
Over and above that, we must feel an affinity with them. To repeat, we do not have to like them as individuals, but there must be some sense of commonly shared values.
And here is where the problems start with AI. On the surface, it would seem there is a clear opportunity for AI to provide trust and reassurance. An AI program can be engineered to be absolutely reliable and predictable; we know exactly what it will do at any given moment in time. That should be comforting and reassuring. We know too that AI programs will always be there; a well-designed system—take, for example, some of the AI used by air traffic controllers—will have enough support elements to make sure it never goes offline. And AI programs can work with a precision impossible in humans to deliver the highest possible quality.
But how do we know that AI systems will always work in our best interest? Someone, some human, designs these systems; other humans pay for them and have them installed to carry out particular functions. Will the AI systems work in our interests, or in the interests of their designers and owners? So long as all interests are aligned, there is no problem, but the potential for exploitation by the unscrupulous is clear. On one level, then, our trust issue is not with AI itself, but with those who control it.
Secondly, where is the affinity? As noted, it is not enough just to be able to rely on someone to trust them; we must also feel some kind of connection with them. That is where biology comes in.

trust and the brain
One of the most powerful neurotransmitters in the brain is oxytocin, a neuropeptide sometimes known as the ‘comfort hormone.’ Medical science has long known that gestures of affection and friendship release greater quantities of oxytocin in the brain. A tone of voice, a physical gesture, a kind or reassuring word, even something so simple as saying ‘thank you’ causes the brain to react in several ways. One is an increase in pleasure; these gestures make us feel happy. Another is an increased sense of security; when someone is nice to us, we feel safe. And finally, there is a sense of reciprocity; we are more likely to respond with similar gestures of our own.
More recent research has shown high levels of connection between oxytocin and trust. When other people behave in ways that indicate we can trust them, we experience more oxytocin release, and we become more ready to trust in turn. The opposite is also true. If we perceive that other people are behaving in ways we consider to be untrustworthy, less oxytocin is released. We become unhappy, insecure, and less likely to reciprocate; indeed, we are more likely to be ‘turned off’ by these people and not willing to accept them into our affinity group. We may try to reject them or expel them.
There is of course an interrelationship between biology and psychology here. What we consider ‘trustworthy behavior’ depends very much on our knowledge of other people and their values. What might be normal in one culture can be different in another. Our decision about what is trustworthy behavior also depends on our knowledge of the other person as an individual. Are they really being erratic and unpredictable? Closer knowledge of them might indicate that they are simply eccentric, or have a few unusual personality traits that we can overlook.
In An Enquiry Concerning the Principles of Morals, David Hume noted that we find it much easier to build trust, or ‘sympathy’ as he called it, with people who are close to us. We can observe them over time, learn that these apparently aberrant—to us—traits are in fact harmless, and accept them as individuals. We learn first that they pose no threat to us, second that they are in some way likeable, and third that they can be trusted. Thus, we welcome them into our affinity group. People who are far away, on the other hand, cannot be observed and therefore cannot be so easily trusted. Hume noted that this is one reason why we are more likely to give charity to those we know, while the stranger is more likely to be turned away.

loving the machine
And so we come to the implications for AI. We have touched on two important ones already:

  • How do we know that AI will work in our interests, and not in the interests of its creators and controllers, quite possibly in ways that will be harmful to us?
  • How can we trust AI when we feel no affinity to it?

The first issue is not, strictly speaking, a matter of AI, but of the relationship between us and its controllers, and here the previously existing rules of trust apply. Let us pass quickly on to the matter of affinity. How can we trust something as impersonal as a machine? How can it possibly give us the gestures of affinity that we need in order to build trust?
This issue has long fascinated writers of science fiction; Isaac Asimov’s I, Robot is an example. The artificial intelligences in Robert Heinlein’s Friday do not even trust each other. In one telling passage, it is noted that airlines still pay pilots to sit in cockpits, even though the planes are flown by AI. The reason is that in the event of an emergency, the pilot will keep trying to the very end to save the aircraft, while the AI might conclude that everything possible has been done, and give up.
Two issues need to be considered: proximity and affinity. As David Hume pointed out long ago, we are more likely to trust someone if we see them regularly and can get to know them. The same is true of machines. People can, and do, develop relationships with machines. During the Second World War, bomber crews often gave nicknames to their aircraft and endowed them with personalities. It is possible to develop a relationship with a machine so long as it is reliable, delivers quality, and can be seen and understood.
The machine also has to be capable of making the gestures we require in order to release oxytocin in the brain and engender feelings of security. In short, the AI must have a ‘personality’ with whom we can interact. Siri, the ‘intelligent personal assistant’ used by Apple on its personal devices, is an example of this. Some Japanese AI makers have gone further and developed AI that is capable of simple conversation. These may sound like stunts, but they are in fact attempts—so far, with only limited success—to ‘humanize’ machines and turn them into something we might want to develop a relationship with.
Until we are able to develop personal relationships with AI, the great majority of us will continue to distrust it; and that means in turn that companies using AI in customer-facing roles will lose opportunities to develop relationships and add value to brands. Even if the AI is not actually in a customer-facing role, consumers’ awareness of its existence and role will still result in negative feelings. ‘Built by robots’ is a term of popular opprobrium, not approval. Trust is the key to everything, and until people trust AI, its utility will continue to be limited.

*https://hbr.org/2016/10/what-do-people-not-techies-not-companies-think-about-artificial-intelligence