The challenge of regulating artificial intelligence is sometimes compared to the management of nuclear energy: there are valuable civil applications alongside terrifying military ones, and a credible risk of existential calamity if it all goes wrong. But nuclear weapons are expensive and hard to acquire. By contrast, AI can distribute awesome power at relatively low cost. This adds unprecedented complexity to the task facing attendees at an AI safety summit that Rishi Sunak is hosting this week at Bletchley Park.
The prime minister wants to position the UK as a global leader in the field. It is a creditable diplomatic endeavour, partly vindicated by the “Bletchley declaration”, in which 28 countries agreed to a sustained global dialogue on managing emerging AI risks. Significantly, both the US and China have signed.
But there are limits to how much control Britain can have over such a sensitive agenda, as was demonstrated in a speech by the US vice-president, Kamala Harris, delivered before the summit. Ms Harris announced the creation of an AI safety institute based in Washington, and a more western-focused political declaration on military uses of the new technology.
The intervention was a reminder that the US is a superpower, jealous of its technological primacy and disinclined to outsource discussion of matters pertaining to its vital strategic interests. Ms Harris’s speech also expressed an important difference of emphasis between the White House and Downing Street on the best approach to AI safety.
The prime minister’s declared preference is “not to rush to regulate”. This is an ideological choice – “a point of principle” – that government should always prefer measures that encourage innovation and avoid ones that risk stifling it. Mr Sunak queries the merit in laws “for things we don’t yet understand”.
The White House is not so sanguine. As Ms Harris put it: “We reject the false choice that suggests we can either protect the public or advance innovation. We can – and we must – do both. And we must do so swiftly, as this technology rapidly advances.”
Consistent with that urgency, President Joe Biden signed an executive order on Monday, mandating a wide range of controls to protect American citizens against cavalier or wilfully malevolent use of AI. It is a sprawling, detailed agenda covering the threat of cyberweapons proliferation, consumer protection, the prospect of AI machines undermining workplace rights, the risk that algorithms derived from biased datasets embed social inequalities in automated systems, and more.
The EU is developing a similarly broad law. But the British approach looks lackadaisical, despite Mr Sunak’s personal focus on the topic. His doctrinal aversion to economic intervention runs deeper than his interest in technology. His deference to the enterprising spirit of tech bosses trumps concern for the impact their products are already having on the frontline of digital disruption: on precarious jobs, in schools, and at the ballot box via electoral manipulation.
Mr Sunak seems rather too attuned to the libertarian ethos of Silicon Valley, where grandiloquent fretting about existential risk can be a substitute for practical real-world mitigation. The prime minister lacks the political urgency about regulation that rightly animates the White House. He has invested much personal authority in putting Britain at the forefront of this vital debate. And yet he looks as if he is being left behind.