In 1900, the UK had 3.3mn horses. These animals provided pulling power, transport and cavalry. Today, only recreation is left. Horses are an outmoded technology: their numbers in the UK have fallen by around 75 per cent. Could humans, too, become an outmoded technology, displaced by machines that are not just stronger and more dexterous but more intelligent, even more creative? The threat, we are told, is remote. Yet that is a matter of belief, not knowledge. Maybe machines could do much of what we need done better than we can, with the exception of being human and caring as humans do.
Yet even if no such revolution threatens, recent advances in artificial intelligence are highly significant. According to Bill Gates, they are the most important development since personal computers. So, what might be the implications? Can we control them?
The natural starting point is with jobs and productivity. A paper by David Autor of MIT and co-authors provides a useful analytical framework and sobering conclusions about what has happened in the past. It distinguishes labour-augmenting from labour-automating innovation. It concludes that “the majority of current employment is in new job specialities introduced after 1940”. But the locus of this new work has shifted: before 1980 it lay in middle-paid production and clerical occupations; since then it has been in highly paid professional occupations and, secondarily, in low-paid services. Thus, innovation has increasingly been hollowing out middle-income jobs.
Furthermore, innovations generate new kinds of work only when they complement jobs, not when they replace them. Finally, the demand-eroding effects of automation have intensified in the past four decades, while the demand-increasing effects of augmentation have not. None of this is very cheering, especially since overall productivity growth has been quite modest since 1980.
So what about the future? On this, an analysis by Goldman Sachs is both optimistic and sobering. It argues that the “combination of significant labour cost savings, new job creation, and a productivity boost for non-displaced workers raises the possibility of a labour productivity boom”. This would be similar to what ultimately followed the emergence of the electric motor and personal computer. The study estimates that generative AI, in particular, might raise annual growth of labour productivity in the US by 1.5 percentage points. The surge would be bigger in high-income countries than developing ones, though timing is uncertain.
Globally, it suggests, 18 per cent of work could be automated by AI, again with larger effects in high-income countries. In the case of the US, the estimated share of work exposed to AI ranges from 15 to 35 per cent. The most vulnerable jobs will be in office and administrative support, legal services, and architecture and engineering. The least exposed will be in construction, installation and maintenance. Socially, the impact will fall most heavily on relatively well-educated white-collar workers. The danger then is of downward mobility for the middle and upper-middle classes. The social and political impact of such shifts appears all too evident, even if the overall effect is indeed to raise productivity. Unlike horses, people will not disappear. They have votes, too.
Yet these economic effects are very far from the whole story. AI is a much bigger change than that. It raises deep questions of who and what we are. It might be the most transformative technology of all for our sense of ourselves.
Consider some of these wider effects. Yes, we might have unbribable and rational judges and better science. But we might also have a world of perfectly faked information, pictures and identities. We might have more powerful monopolies and plutocrats. We might have almost complete surveillance by governments and companies. We might have far more effective manipulation of the democratic political process. Yuval Harari argues that “democracy is a conversation, and conversations rely on language. When AI hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy.” Daron Acemoglu of MIT argues that we need to understand such harms before we let AI loose. Geoffrey Hinton, a “godfather” of AI, even resigned from Google so that he could warn freely of the technology’s dangers.
The problem with regulating AI, however, is that unlike, say, drugs, which have a known target (the human body) and known goals (a cure of some kind), AI is a general-purpose technology. It is polyvalent. It can change economies, national competitiveness, relative power, social relations, politics, education and science. It can change how we think and create, perhaps even how we understand our place in the world.
We cannot hope to work out all these effects: they are too complex. It would be like trying to understand the impact of the printing press in the 15th century. We cannot hope to agree on what is to be favoured and what is to be prevented. And even if some countries did agree, they would not stop the rest. In 1433, the Chinese empire halted attempts to project naval power. That did not stop others from developing naval power, and those others ultimately defeated China.
Humanity is Doctor Faustus. It, too, seeks knowledge and power and is prepared to make almost any bargain to achieve them, regardless of consequences. Even worse, it is a species of competing Faustuses, each seeking knowledge and power as he did. We have been experiencing the impact of the social media revolution on our society and politics. Some warn of its consequences for our children. But we cannot undo the bargains we have made. We will not halt this revolution either. We are Faustus. We are Mephistopheles. The AI revolution will roll on.