At least some backlash against the internet has been down to one of the basest human instincts: self-interest. Incumbent journalists, politicians, and other public figures are disgruntled that cheaper communication has opened new channels, undermining their role as gatekeepers of public opinion.
The right sort of people have not shied away from slating those the US presidential candidate Hillary Clinton termed “deplorables”, nor their enablers among the Silicon Valley executives who have made billions from the internet economy over the past few decades.
So it was that the British comedian Sacha Baron Cohen inveighed against “the six people who decide what information so much of the world sees”, namely Facebook’s Mark Zuckerberg, Google’s Sundar Pichai, Alphabet’s Larry Page and Sergey Brin, YouTube’s Susan Wojcicki and Twitter’s Jack Dorsey.
These “high-tech robber barons … who care more about boosting their share price than about protecting democracy” were accused by Baron Cohen at an Anti-Defamation League conference of helping conspiracy theories to spread, enabling foreign interference in Western elections, boosting hate crime, and facilitating genocide.
In response, the Ali G creator called for more regulation of “the largest publishers in history”. They should refuse to publish political ads that lie. They should use algorithms to remove white supremacists from their platforms. And they should work with campaign groups like the Anti-Defamation League to guard against millennial ignorance.
This is all portrayed as a defence of the Age of Reason, as we’re entering a time when “knowledge is delegitimised and scientific consensus is dismissed”, according to Baron Cohen. But while his speech was amusing and even persuasive in places, on examination it was an attempt to stir moral panic so that a particular worldview can impose itself on others.
Problems overstated
Throughout his speech Baron Cohen made specific claims about the deleterious effects of social media. He claimed that “hate crimes are surging”, that “lies spread faster than truths”, and that fake news outperforms the real sort. And he added that “nothing generates more engagement than lies, fear and outrage”. This is all disputable.
In England and Wales, to take his home country, recorded hate crime rose by 10% between 2017-18 and 2018-19. (Crime statistics are recorded separately for the other parts of the UK.) The rise was widely reported when the figures came out in October, and older figures have long been a talking point in the Brexit debate, seen as a sign that rancorous debate is inspiring real-life violence.
The Office for National Statistics acknowledged spikes in hate crime occurred around the UK’s referendum on EU membership in 2016, as well as around terrorist acts in 2017. However, it argued that increases in hate crime records “have been mainly driven by improvements in crime recording by the police” as well as a “growing awareness of hate crime”.
Similarly, the Federal Bureau of Investigation’s (FBI’s) annual release of US hate crime statistics this year prompted some worried headlines. But the long-term picture of US hate crime, as documented in a report by the Center for the Study of Hate and Extremism at California State University, is more mixed.
The FBI data cited shows hate crime shuffling up and down, mostly between 5,500 and 8,000 incidents a year. It’s up if you look at the last five years, but down if you compare 2018 with much of the noughties. Notably, the introduction of social media networks in the mid-noughties is impossible to spot in the charts.
Baron Cohen is on firmer ground when he claims that Facebook “facilitated” the Rohingya genocide in Myanmar. A UN report on the subject had said: “Facebook has been a useful instrument for those seeking to spread hate, in a context where, for most users, Facebook is the internet.” While Facebook has agreed it failed to prevent its platform being used to incite offline violence, much of the UN report focused on specific cultural, legal and governance factors that prompted the genocide, including the role of military and security forces. That is to say Facebook was a conduit for existing prejudices, not the cause of them.
And on Baron Cohen’s claim that lies outperform truths, the online transcript of his speech linked to a study published in the journal Science in March 2018. The paper argues that false news and rumours spread faster and further on Twitter than true news, based on an analysis of around 126,000 stories tweeted by roughly 3 million people more than 4.5 million times. “When we estimated a model of the likelihood of retweeting, we found that falsehoods were 70% more likely to be retweeted than the truth,” the authors wrote.
But it is not clear from the paper whether retweets or other engagement equated to belief in the information shared. Indeed, an analysis of replies suggested that “trust” words were more likely to be associated with true rumours, while false rumours inspired surprise, corroborating the paper’s suggestion that tweets spread according to how novel they were.
A YouGov survey of 22 countries released in May showed widespread distrust of information on social media. Britain was the most sceptical country, with 83% of people trusting information on social media not much or not at all, compared with 70% of those polled in the US. Fake news might spread easily on social media, but that doesn’t mean people believe it.
Baron Cohen is correct that emotion drives engagement online, but this merely repeats what any advertiser, journalist or politician knows. Asking for news consumers to be emotionally disengaged is asking for news consumers who aren’t human. Banning all false information, even if that were technically feasible, would not change that.
Internet guardrails
Baron Cohen is right about some other things, though. Silicon Valley companies are, in a sense, the greatest publishers in history. In the case of Twitter, Facebook and YouTube, their content platforms are publishing tools with huge reach, a kind of digital printing press that can combine words, images, sound and video. People use those tools to publish all manner of things, including false information.
Another finding from the Science study was that humans were more prone to pushing fake news than the much-feared Russian bots. “Although the inclusion of bots, as measured by the two state-of-the-art bot-detection algorithms we used in our analysis, accelerated the spread of both true and false news, it affected their spread roughly equally,” the paper said. “This suggests that false news spreads farther, faster, deeper, and more broadly than the truth because humans, not robots, are more likely to spread it.”
What this evokes is not AI ganging up on us, but a low-level pamphlet war redolent of the American and French revolutions. Baron Cohen and his sympathisers want to regulate how people debate one another through rules imposed on social media companies.
And how would this work? Baron Cohen wants social media companies to kick Nazis off their platforms, much as a restaurant owner might eject a goose-stepper from their eatery. He wants elected representatives to have “some say” in how the platforms are run. More broadly, he wants social media companies to be held liable for the content they host, with CEOs even facing jail time in some instances.
Of course, social media networks are already subject to rules written by democratically elected politicians, and politicians can write more rules if they want. Google has been forced in the EU to de-list search links if a user successfully argues they are irrelevant or outdated, based on a court ruling citing existing European law. Likewise, social media networks have been pressed to remove terrorist content from their platforms.
But Baron Cohen glosses over some of the tradeoffs involved in implementing his regulatory programme. His flippant aside that politicians being caught up in anti-white supremacist algorithms “might not be a bad thing” comes as social justice activists argue for ever broader definitions of bigoted speech. In its sharpest interpretation, this is a call to de-platform people from almost ubiquitous internet services for questioning the dogmas of our time.
Ironically, Baron Cohen cites the “watershed” in British TV, the 9pm threshold before which adult content can’t be broadcast, as a model for the rules social media firms should have to abide by. Does he really think the whole world should be treated as children online?
He echoes this trend in his criticism of Zuckerberg’s willingness to host “a diversity of ideas”, including by allowing Holocaust denial on Facebook. Baron Cohen, like many critics of free speech, does not want a diversity of ideas: he wants a narrow subset of ideas, interpretations and information to be presented to the public, worrying that millennials’ widespread ignorance about Auschwitz leaves them unable to tell a lie from the truth.
“Democracy, which depends on shared truths, is in retreat, and autocracy, which depends on shared lies, is on the march,” Baron Cohen said. In fact, democracy is frequently a contest over what is true, what is uncertain, what is good, and much else besides. It is an argument.
The Baron Cohen view is that this cannot be allowed to continue without the guardrails of old media being reinstalled. I believe his fears are overblown, and his prescriptions risk censoring dissent and denying the potential of the most open media mankind has ever created. They should be resisted.