Moderating horror and hate on the web may be beyond even AI

Way back in the mid-1990s, when the web was young and the online world was buzzing with blogs, a worrying problem loomed. If you were an ISP that hosted blogs, and one of them contained material that was illegal or defamatory, you could be held legally responsible and sued into bankruptcy. Fearing that this would dramatically slow the expansion of a vital technology, two US lawmakers, Chris Cox and Ron Wyden, inserted 26 words into the Communications Decency Act of 1996, which eventually became section 230 of the Telecommunications Act of the same year. The words in question were: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The implications were profound: from then on, you bore no liability for content published on your platform.

The result was an exponential increase in user-generated content on the internet. The problem was that some of that content was vile, defamatory or downright horrible – and yet the hosting site bore no liability for it. At times, some of that content caused public outrage to the point where it became a PR problem for the platforms hosting it, and they began engaging in “moderation”.

Moderation, however, has two problems. One is that it’s very expensive because of the sheer scale of the problem: 2,500 new videos are uploaded to YouTube every minute, for example, and 1.3bn photos are shared on Instagram every day. The other is that the dirty work of moderation is often outsourced to people in poor countries, who are traumatised by having to watch videos of unspeakable cruelty – for a pittance. The costs of keeping western social media feeds relatively clean are thus borne by the poor of the global south.

The platforms know this, of course, but of late they have been coming up with what they think is a better idea – moderation by AI rather than humans: vile content being detected and deleted by relentless, unshockable machines. What’s not to like?


There are two ways of answering this. One is via HL Mencken’s observation that “For every complex problem there is an answer that is clear, simple, and wrong.” The other is by asking a cybernetician. Cybernetics is the study of how systems use information, feedback and control to regulate themselves and achieve desired outcomes. It’s a field that was founded in 1948 by the great mathematician Norbert Wiener as the scientific study of “control and communication in the animal and the machine” and blossomed in the 1950s and 1960s into a novel way of thinking about the human-powered machines that we call organisations.

One of the great breakthroughs in the field was made by a British psychologist, W Ross Ashby. He was interested in how feedback systems can achieve stability and came up with what became known as “the law of requisite variety” – the idea that for a system to be stable, the number of states its control mechanism can attain (its variety) must be greater than, or equal to, the number of states in the system being controlled. In the 1960s, this was reformulated as the notion that for an organisation to be viable, it must be able to cope with the dynamic complexity of its environment.
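Put very roughly in symbols (a sketch of my own, not notation Ashby or this column uses): if V(E) is the number of distinct disturbances the environment can throw at a system and V(C) is the number of distinct responses its controller can muster, then keeping the system within acceptable bounds requires V(C) ≥ V(E). Only variety in the controller can absorb variety in the environment.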

Sounds theoretical, doesn’t it? But with the arrival of the internet, and particularly of the web and social media, Ashby’s law acquired a grim relevance. If you’re Meta (née Facebook), say, and have billions of users throwing stuff – some of it vile – at your servers every millisecond, then you have what Ashby would have called a variety-management problem.

There are really only two ways to deal with it (unless you’re Elon Musk, who has decided not even to try). One is to choke off the supply. But if you do that you undermine your business model – which is to have everyone on your platform – and you will also be accused of “censorship” in the land of the first amendment. The other is to amplify your internal capacity to cope with the torrent – which is what “moderation” is. But the scale of the challenge is such that even if Meta employed half a million human moderators, it wouldn’t be up to the task. Even so, section 230 would still exempt it from liability under the law of the land. Beating Ashby’s law, though, might prove an altogether tougher proposition, even for AI.

Stay tuned.


What I’ve been reading

Artificial realities
AI Isn’t Useless. But Is It Worth It? is a typically astute assessment by Molly White of the “innovation at any cost” strategy of Silicon Valley.

Yes, we Kant
Political philosopher Lea Ypi’s Kant and the Case for Peace is a thoughtful essay in the Financial Times.

Work in progress
A perceptive essay on the benefits of trade unions by Neil Bierbaum on his Substack blog: For Work-Life Balance, Look to the Labour Unions!
