A Question of Control

September 16, 2019

Morgen Witzel is a management historian, the author of 21 books, and a fellow of the Centre for Leadership Studies at the University of Exeter Business School.

New technologies always evoke fear—of the unknown possibilities they could hold and of the asymmetries of power they could create. It is not so much the newness of the technology that bothers us as the power it could lend to a select few. AI is no exception.

The sociologist Lewis Mumford had mixed views about the future. In his book Technics and Civilization, published in 1934, he foresaw the emergence of what he called the ‘neotechnic age’ when technology would be cheap and plentiful. The result, he believed, would be a levelling process in societies around the world. Technology equals power. When everyone had technology, everyone would be equal; no one would have more power than anyone else, and we would all have the freedom to be creative and develop our own lives as we wished.

His later view was quite different. In The Myth of the Machine, published in 1967, Mumford foresaw a world in which control of technology, and therefore of power, ended up concentrated in the hands of a few people who used that power to dominate the rest. People become nothing more than cogs in a machine, living in what Mumford calls the ‘megatechnic wasteland’. People and robots become interchangeable, both used and exploited to further the ends of the controlling few, and discarded when they are no longer needed. At the centre of the megatechnic wasteland, Mumford says, is ‘Organization Man—he who stands at once as the creator and creature, the originator and the ultimate victim, of the megamachine’.

Mumford was looking into the distant future, but today in 2019 that future is coming uncomfortably close. Advances in robotics, cybernetics, and artificial intelligence mean that machines have far more capabilities than we might have dreamed of even a generation ago. Never mind beating humans at chess; machines can now outwit us in a whole variety of ways. They can think faster than we can, and on more dimensions at once. There are even claims that researchers are on the verge of creating machines with emotional awareness. So where are we heading? For utopia, or dystopia? For a world of free technology, or the megatechnic wasteland?

The first thing we need to realize is that we do have a choice. We can choose what kind of world we want to live in, and we can make it so; all we need to do is exercise our free will. For all the talk of social media locking people in and creating vast networks of control and surveillance, I know many people who are not on social media, simply because it does not interest them. There is nothing there that they want. I also know people who rarely use mobile phones, even a few who do not own a mobile phone at all, again because they see no point in doing so. (I myself use my mobile phone rarely, and only give my number to a few people, on the grounds that it is not my preferred method of communication. If you want me, email me.) So, we need to decide what kind of world we want to live in.

If machines have feelings, if they can love and hate, feel empathy for humans, even form relationships with them, then the boundaries start to break down.

The machines are coming

However, in a seeming paradox, we cannot escape the march of automation entirely. Increasingly, services once provided by humans are being provided instead by machines, from self-service checkouts in shops to driverless trains. How much impact do these things have on our daily lives? In the case of driverless trains, very little; we rarely see or interact with our train driver. Shops are a different matter. Some shoppers—not all, but a significant portion—like the human contact. One of the reasons they go shopping is to have a social experience, and interaction with a salesperson or checkout clerk is part of that experience.

Different cultures also have different attitudes toward technology. Self-service checkouts are becoming increasingly popular in Britain, but when Tesco tried to introduce them in America the result was a failure. American shoppers, at least in the areas where Tesco launched its Fresh and Easy chain, are more gregarious and want to talk to the store staff. This brings up a point that is not often discussed, namely that the nature of the human-machine combination is likely to be quite different in different societies. Even in a world full of machines, culture will still have an influence.

But where will the line be drawn? What will remain the province of human activity? Where are the boundaries beyond which the machines will not go? In much of the debate about AI, we see an anxiety—a perfectly natural anxiety—that the machines are taking over. We want to define a safe space, a set of activities and feelings and emotions that machines cannot replicate and that will remain ours. This is why the prospect of emotionally aware machines causes such alarm. If machines have feelings, if they can love and hate, feel empathy for humans, even form relationships with them, then the boundaries start to break down. We are in Westworld territory, where nothing is safe from the machines.

Creativity is something that has been regarded as a safe space. No machine will ever be able to write like Rabindranath Tagore, or paint like Michelangelo. Is that true? Well, certainly it is not true yet. I have looked at some of the writing generated by OpenAI, the Elon Musk-backed project working on computer-generated writing, and it is quite poor. The individual sentences make sense as sentences, but there is no rhythm or narrative flow, and the internal logic quickly gets lost. OpenAI may, in time, be able to produce a machine that can write award-winning poetry, but why would anyone want to bother? Human beings can already write poetry, and as any poet will tell you, poets come extremely cheap; the royalties most of them receive are risible. The same goes for painting. Why spend tens of millions of dollars to develop a machine that can paint like David Hockney when we already have David Hockney?

That said, the boundaries around the creative space are porous, and machines can, and will, get through. Some conceptual artists are already using AI as part of their installations. That brings up a further point: do we necessarily need to see the relationship between human and artificial intelligence as a choice of either/or? Can we co-exist side by side, doing the same jobs?

Human beings have been making and using tools pretty much since the beginning of our evolution as a species, and AI is simply a smarter and more sophisticated tool.

Living with technology

Of course, we can. Human beings have been making and using tools pretty much since the beginning of our evolution as a species, and AI is simply a smarter and more sophisticated tool. If we stop being afraid of it and embrace it, then the possibilities are limitless. AI can help us cure illnesses, make better products, transport us to and from our destinations, and a thousand other things; and yes, in time, it might even help us to write better. Instead of guarding against the machines, we can even take them inside us. The cyberpunk author William Gibson imagined a world in which we interface with machines through implants hardwired into our brains, communicating directly with artificial intelligences. The day when this becomes possible may not be so far away. At Stanford University, the BodyNET project is already exploring ways in which technology can be implanted into the body, for example by having microprocessors woven into the skin.

Again, many people are terrified by this prospect. Visions arise of the cybermen from Doctor Who, or various other terrifying cyborgs from science fiction. In reality, we have been putting technology into our bodies for a long time, ranging from pacemakers to keep our hearts going to microtechnology delivering chemotherapy to cancerous cells. The door is already open, and many people have already walked through it.

Yet, like Mumford, we still persist in the view that technology is taking over, turning our world into a megatechnic wasteland. Why is this so? In order to chart a future and decide what role AI will play in our lives, we need to first decide why we are so frightened of it. The reasons are complex. There is, of course, the well-known human fear of change in general, especially change we do not understand, and any revolutionary new technology will always have its opponents, as Mary Shelley described so clearly in Frankenstein. Early automobiles were sometimes attacked and vandalized by people objecting to their noise and smells; television was perceived as an attempt to brainwash the masses, and the same is true of AI.

There is also economic fear; if technology begins replacing humans and putting them out of jobs, how will we live? How will we feed ourselves and put a roof over our heads? We have seen the spectre of technology-induced mass unemployment before; there is the famous example of the Luddites in the early nineteenth century, who smashed up factory equipment in the belief that it was putting people out of work. In fact, as journalist Paul Mason found when he studied the introduction of Jacquard punch-card machines into the French silk-weaving industry, in his book Live Working or Die Fighting, automation made production cheaper, lowering barriers to entry so more entrepreneurs became involved. The actual result was a net increase in employment. Managed carefully, the introduction of AI could have a similar effect.

Our real fear of technology is not directed at the technology itself but at the people behind the companies that design, own, and control it.

The fear of domination

At the bottom of it all, though, is the concern that technology will come to dominate our lives and reduce our freedom. In his novel Player Piano, Kurt Vonnegut posited a world where automation ruled the workplace and people were slaves of the machines. Eventually, the humans rise up and smash the machines in a gesture of defiance, reclaiming their freedom. The fear that technology will somehow cancel out our free will and reduce us to subordinate status is one of the major roadblocks to further advances. Even some of those involved in creating AI are worried about this. OpenAI, for example, has expressed concerns that its software could be used to create computer-generated fake news stories, which could then be spread across the internet.

But the problem here is not the technology. The problem is with the people who control it, or could potentially control it. The real revolt in Player Piano was not against the machines—they were merely symbols—but against the handful of oligarchs who controlled and directed them, just as the Luddites were rebelling not against the weaving looms, but against the entrepreneurs who owned them. Our real fear of technology is not directed at the technology itself but at the people behind the companies that design, own, and control it, and who, by and large, have not yet woken up to the responsibilities of their position. Facebook is a prime example. Facebook’s founders and executives were, and to a large extent still are, fervent believers that their project is a force for good in the world. Only reluctantly have scandals such as Cambridge Analytica and the Christchurch mosque murders forced them to realize that their technology can be put to malicious uses.

The real debate should be not about the role AI will play, but who will control it. If we can ensure democratization of control, as Sir Tim Berners-Lee attempted to do with the World Wide Web, we can choose our own future. If we let control slip into the hands of a neotechnic oligarchy, then we run the risk that they will choose our future for us—or, looking at Facebook, the even darker risk that the whole project will be hijacked by other, more fanatical groups who will use AI to help further their own agenda, at the expense of the rest of us. That really would be a megatechnic wasteland.

At the end of Player Piano, the men who smashed the machines sit down and begin to rebuild them, not because they have been told to do so but because they want to. The march of technology is inevitable and will not be stopped, because as well as being afraid of technology we are also fascinated by it. The critical issue is control. So long as we are masters of our own destiny then, with appropriate safeguards, we can combine human and artificial intelligence in ways that make sense to us. But if we let control slip into the hands of others, then the risk becomes real. It is up to us to decide what kind of future we want to live in.