When the great and the good of Silicon Valley pitched up in Buckinghamshire for Rishi Sunak’s AI safety summit, they came with a simple message: trust us with this new technology. Don’t be tempted to stifle innovation by heavy-handed rules and restrictions. Self-regulation works just fine.
To which the simple response should be: remember 2008, when light-touch supervision allowed banks to indulge in an orgy of speculation that took the global financial system to the brink of collapse.
In the years leading up to the crisis, banks had developed products that were both lucrative and – as it turned out – highly toxic. The drive for profits trumped prudence. Only in retrospect were the dangers recognised of allowing the banks to mark their own homework. Financial regulation was subsequently tightened, but only after a deep recession from which the global economy has never fully recovered.
Sunak should learn from that experience. The focus at the Bletchley Park summit has been on the existential threat posed by AI – the risk that if left unchecked, the machines could lead to human extinction. That’s a worthy discussion point, especially given the rapid advances in creating super-intelligent machines. Elon Musk might be right when he says AI poses a “civilisational risk”.
But, as the TUC and others pointed out earlier this week, the focus on the longer-term challenges should not come at the expense of responding to a number of more immediate issues. These include the likely impact of AI on jobs, the increasing market dominance of big tech, and the use of AI to spread disinformation.
No question, AI could have massive social and economic benefits. By allowing workers to do their jobs more efficiently, it has the potential to raise productivity and growth. It could make for better medical diagnoses and treatment. It could raise agricultural yields and help in the battle against global heating.
But like all new technological breakthroughs, AI will also be disruptive, destroying jobs as well as creating them. There will be losers and winners. This is already an age of workplace insecurity and squeezed incomes. A savage shakeout of labour markets might well reinvigorate capitalism, but only at the expense of fanning populist anger. A world in which the gains from AI go disproportionately to employers rather than employees, and to big tech rather than smaller rivals, is not going to be sustainable.
The truth is that nobody knows for sure what the economic impact of AI will be. History suggests that previous technological advances have eventually created more jobs than they have destroyed, and replaced routine, labour-intensive tasks with better-paid supervisory and managerial work. But if the new generation of machines can think as creatively as humans, the historical precedent might not be that useful a guide to the future. Quite possibly, humans will be left with the low-paid, labour-intensive jobs – tending gardens or cleaning homes, for example – while machines do the jobs that have traditionally required human ingenuity and brain power.
One study from the US found large negative effects on jobs and wages in those parts of the country most heavily exposed to industrial automation in the 1990s and 2000s. A recent survey of European economists was notable for its lack of certainty about the likely impact of AI on growth and employment. There are plenty of estimates of the proportion of jobs that will be affected by AI, but these are simply informed guesses. The honest answer to the question likely to be on the lips of millions of workers – “Will AI make me better or worse off?” – is: “Sorry, we don’t know.”
That said, it is not too late to put in place measures that would tip the balance in favour of AI bringing net benefits. While there has been plenty of attention given to language models such as ChatGPT, the reality is that it will take years for economies to adjust to AI. There are huge sunk costs from past investment in plants and equipment that companies are not going to write off overnight.
The government should use the available breathing space to come up with a plan for how it is going to manage, supervise and regulate this fast-growing sector of the economy. As Carsten Jung, an economist with the IPPR thinktank, rightly says: “Self-regulation didn’t work for social media companies, it didn’t work for the finance sector, and it won’t work for AI.”
Far from stifling innovation, the right sort of regulation would help channel investment into the areas where AI could do the most good. Pharmaceutical companies are regulated to try to ensure new drugs pose no health threat, and the same principle should apply to AI. The government needs to provide clear direction on where the opportunities lie. Britain has a thriving AI sector, with needs and interests that are different to those of the US giants. Nurturing these companies will require UK-specific standards.
Given the high probability that many jobs will cease to exist, there needs to be stronger protection for those displaced, with investment in skills and training to make workers more adaptable. Competition policy should be robust enough to prevent a small number of mega-companies from exploiting their market power.
Finally, it is important to think about how to divide up the spoils from AI. This is already a highly unequal world and AI threatens to make it even more unequal by concentrating ever more wealth and power in the hands of a privileged minority. There is, once again, a danger that the pursuit of profit comes before the public good. As in 2008, that way lies disaster.