EU ‘in touching distance’ of world’s first laws regulating artificial intelligence

The EU is within “touching distance” of passing the world’s first laws on artificial intelligence, giving Brussels the power to shut down services that cause harm to society, says the AI tsar who has spent the last four years developing the legislation.

A forthcoming EU AI Act could introduce rules for everything from homemade chemical weapons made through AI to copyright theft of music, art and literature, with negotiations between MEPs, EU member states and the European Commission over the final text coming to a head on Wednesday.

“Artificial intelligence does have a profound impact on everything we do and therefore it was time to bring in some safeguards and guardrails on how this technology will evolve for the benefit of our citizens,” said Dragoș Tudorache, a Romanian MEP and co-rapporteur of the parliamentary committee steering through the legislation, in an exclusive interview with the Guardian.

Speaking in his Brussels parliamentary office, Tudorache said: “I’m more optimistic than I am pessimistic about AI. I would be a pessimist if we did nothing about it.”

Tudorache said there was a chance he could get a final text agreed for the AI Act by Wednesday. It would then be formally adopted by parliament and, bar any hiccups, become law early next year.

“We are in touching distance,” he said. “A good 60-70% of the text is already agreed.”

Dragoș Tudorache, an MEP and co-rapporteur of the AI committee in the European parliament: ‘It means AI companies can’t wash away their responsibility.’ Photograph: Lisa O’Carroll/The Guardian

One of the remaining areas of contention is the use of AI-powered live facial recognition. Member states want to retain this right, arguing it is vital for security on borders but also to avert public disorder. But MEPs felt real-time facial recognition cameras on streets and in public spaces were an invasion of privacy, and voted to remove those clauses.

They also voted to remove the right of authorities or employers to use AI-powered emotional recognition technology already used in China, whereby facial expressions of anger, sadness, happiness and boredom, as well as other biometric data, are monitored to spot tired drivers or workers.

Tudorache hinted at a compromise in the making on all the remaining contentious areas.

“There is a plausible scenario that we keep talking until the middle of the night and close the file on 25 October,” he said.

The handful of subjects that have not yet been agreed were all “intrinsically linked” so there would be no opportunity for simple trading of text between the political interests, he said. Everyone would have to concede something in order for the surveillance clauses to get over the line in a package.

Apart from real-time surveillance concerns, one of the hottest topics preoccupying regulators is the unknown threats that AI could pose, threats that developers don’t even know about, such as the ability to create pathogens and other biohazards.

“You could grow your own little monster in your kitchen,” said Tudorache of AI’s capacity to give members of the public the tools to create biohazards.

The person who builds the bomb can be picked up by police under existing laws in all countries. But under the AI Act, the developer or owner of the AI tools will also be accountable and could be fined up to 6% of their revenue or banned from the EU entirely.

“It means AI companies can’t wash away their responsibility. They won’t be able to say: ‘Well, it is the user who is responsible for taking my model and doing something bad with it.’ If their AI model is capable of producing something that is illegal, then they will have legal responsibility for it,” said Tudorache.

He said that this would be a strong deterrent against the potential downsides of AI. The rules would “also help these companies”, he added.

“AI companies themselves say that they see their models as creating very serious risks, some of them to mankind. And I use their own words, not mine,” he said.


“There are now AI safety summits left, right and centre, one in London in the next two, three weeks,” he said, referring to an international AI safety summit to be convened by the UK’s prime minister, Rishi Sunak, next week, where organisers plan to delve into the challenge of regulating unknown threats.

The increased accountability and transparency that will be required under the AI Act “is not only an obligation that puts a burden on them, I also see it as a good opportunity for them to build confidence in their models” and with the public, Tudorache added.

Other elements of the act unlikely to be affected by the final round of negotiations include protection of the creative sector.

AI companies will have to submit lists of data sources to the European Commission as part of a regular reporting requirement, which Tudorache hoped would act as a deterrent to the use of data and creative content without recompense.

The idea is to enable musicians, scientific researchers or authors to easily see if their work has been plagiarised and give them legal protections.

The AI Act will also include obligations for tech companies to regularly publish data on the amount of electricity they consume amid reports it took thousands of computers six months to train ChatGPT.

“A training run eats a lot of energy and there is very little public data available to see what the overall toll is,” said Tudorache.

Ireland, one of the countries that does track energy usage, reported that electricity consumption by datacentres increased from 5% of the national total in 2015 to 18% in 2022.

“I want transparency on energy. Energy is an open market so the AI Act won’t stop energy use, but if there is an onus on companies to publish data on energy use, you can build awareness and shape public policy,” he said.
