The combination of free speech and the internet should provide an unprecedented democratising effect on public discourse. After all, anyone with a decent idea can now reach out to millions of people worldwide, regardless of their wealth, respectability or social status. The potential for innovation is endless.
And yet, looking at the world today you would be hard-pressed to find a clear exemplar of this democratising effect. It appears that new technology has also created new forms of censorship. Control of public speech is now so subtle-fingered that it’s often hard to recognise as censorship or even detect when it’s happening at all.
To understand this new phenomenon, it’s worth taking some time to consider how social-media algorithms work and why they’ve become so important to our society.
Ideas spread through social networks and the fastest social networks are those found online, managed by large corporate platforms like Facebook, WeChat, Twitter and YouTube. These sites all curate what’s seen by the user into a ‘feed’. In order to create the feed, posts are ranked automatically based on numerous statistical parameters: the number of views, likes, comments and shares; the ratio of these quantities to each other; the upload date; the topics and tags assigned to the post; and so on. Network spread is accelerated by the number of followers of the poster and of the commenters and sharers. So far, this is common knowledge – but the algorithm doesn’t stop there.
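To make this concrete, here is a minimal sketch in Python of what such a ranking pass might look like. Every field, weight and formula here is invented for illustration – real platforms combine far more signals – but the shape is the same: each post is reduced to a number, and the feed is simply the posts sorted by that number.

```python
from dataclasses import dataclass

@dataclass
class Post:
    views: int
    likes: int
    shares: int
    comments: int
    age_hours: float
    author_followers: int

def rank_score(p: Post) -> float:
    """Hypothetical engagement score: interactions per view,
    weighted by the poster's reach and decayed by age."""
    engagement = (p.likes + 2 * p.shares + p.comments) / max(p.views, 1)
    reach = p.author_followers ** 0.5           # diminishing returns on follower count
    freshness = 1.0 / (1.0 + p.age_hours / 24)  # older posts sink down the feed
    return engagement * reach * freshness

def build_feed(posts: list[Post]) -> list[Post]:
    """The 'feed' is nothing more than posts sorted by score."""
    return sorted(posts, key=rank_score, reverse=True)
```

Nothing in this sketch is sinister in itself – it is ordinary recommendation logic. The point is that once every post passes through a scoring function, changing what people see is just a matter of changing the function.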
It’s a trivial piece of programming to scan each post for keywords and assign a score to the post according to its content. Some words are coded as ‘negative’ or ‘positive’, or linked to different emotions like anger, outrage, joy, pride and so on. Based on this score, the network can treat the post differently: it might be ‘throttled’ and shown to a disproportionately small number of accounts, or ‘boosted’ and shown to a large audience.
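A hedged sketch of that keyword-scoring step, with an invented word list and made-up thresholds – a production system would use far larger lexicons or a trained classifier, but the principle is identical:

```python
# Hypothetical sentiment lexicon; the words and weights are illustrative only.
WORD_SCORES = {
    "outrage": -2, "anger": -2, "disgrace": -1,
    "joy": 2, "pride": 1, "hope": 1,
}

def content_score(text: str) -> int:
    """Sum the scores of every recognised keyword in the post."""
    return sum(WORD_SCORES.get(w, 0) for w in text.lower().split())

def distribution(text: str) -> str:
    """Map a post's content score to a delivery decision."""
    score = content_score(text)
    if score <= -2:
        return "throttle"   # shown to a disproportionately small audience
    if score >= 2:
        return "boost"      # pushed out to a large audience
    return "normal"
```

A handful of lines, and every post on the network is sorted into the favoured, the tolerated and the suppressed – without a single human reading it.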
Instead of emotions, algorithms can also score posts on their political alignment with a range of contemporary pieties, such as racial or social justice, lockdown advocacy, or climate change. Individual accounts can then be given scores based on the type of posts they make, ensuring that the most egregious or inflammatory posters are quietly and gently smothered into irrelevance. Everything is automatic. No humans are involved. You, the poster, would have no idea whether censorship was happening or not.
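Account-level scoring is just as mechanical. The sketch below is purely illustrative – the threshold and the per-post scores are my own assumptions – but it shows how a running average of post scores can silently flip an account into a throttled state, with no human in the loop:

```python
from collections import defaultdict

class AccountScorer:
    """Keeps a running average of per-post alignment scores per account.
    Illustrative only: the threshold value is an arbitrary assumption."""

    def __init__(self, throttle_below: float = -1.0):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)
        self.throttle_below = throttle_below

    def record_post(self, account: str, post_score: float) -> None:
        self.totals[account] += post_score
        self.counts[account] += 1

    def average(self, account: str) -> float:
        n = self.counts[account]
        return self.totals[account] / n if n else 0.0

    def is_throttled(self, account: str) -> bool:
        # Accounts drifting below the threshold are quietly suppressed.
        return self.average(account) < self.throttle_below
```

Note that nothing here ever deletes a post or bans an account. The account simply stops reaching people, which is far harder to notice or prove.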
The mechanism described above need not be the exact approach used by Twitter, Facebook or any other site. Consider it an illustrative example of how an engineer like myself could easily build multilayered and highly sensitive speech control into the networks of public discourse – running a controlled speech environment that is, to all appearances, free speech.
Ultimately, all meaningful public discourse is now finely manipulated by the hidden algorithms of these social-media corporations. This is a reality of life in the 2020s. And with private companies manipulating public speech in these arbitrary and unaccountable ways, governments around the world are eager to get a slice of the pie.
Bearing the new algorithms in mind, consider how a government might suppress an idea that’s hostile to its interests. In the 1500s, the king’s men would march off to all the troublesome printing presses and intimidate the publishers with threats of vandalism, imprisonment or execution. It is against these weapons that the great Enlightenment arguments for free speech were constructed. Indeed, smashing up publishers was a risky move, creating martyrs and stirring opposition to absolute rule among the educated classes.
But in the 2020s, no such kerfuffle is necessary. State censorship has become astonishingly easy. The government need merely express its views to the management of a social-media company via their private channels, and every post sharing a particular idea will be throttled, demoted or blacklisted. Even if you can post the idea, the prominence of its spread has been hamstrung. It is thus the perfect crime, costing governments nothing, creating no martyrs and leaving opponents and their followers with paranoid doubts as to whether they were suppressed in the first place.
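In engineering terms, the entire intervention can reduce to a one-line change: adding a phrase to a deny-list that the ranking pipeline already consults. The names and numbers below are hypothetical, but they illustrate how cheap such suppression is:

```python
# Hypothetical deny-list consulted by the ranking pipeline. Adding one
# entry -- no raids, no court orders -- suppresses an idea network-wide.
BLACKLISTED_PHRASES: set[str] = set()

def visibility_multiplier(text: str) -> float:
    """Cripple the reach of any post containing a blacklisted phrase.
    The 0.05 multiplier is an arbitrary illustrative value."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in BLACKLISTED_PHRASES):
        return 0.05   # the post still exists, but almost nobody sees it
    return 1.0
```

The post is never deleted and its author is never notified; its score is simply multiplied down before the feed is sorted.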
Different governments achieve this in different ways. The US is a world leader in invisible censorship, helped by the fact that almost all major social networks are Silicon Valley entities (enjoying close ties to the US intelligence apparatus). The most visible incidents of US censorship on social media concerned sensitive information about the Biden family during the 2020 US elections, and the control of narratives surrounding the COVID-19 pandemic and lockdown measures.
Across the pond, the EU has passed into law a Digital Services Act (DSA), which came into effect last month (25th August 2023). The law empowers a large taskforce on disinformation, answerable directly to the European Commission, to immerse itself in public discourse control and censorship on all major social networks. Twitter is required to meet regularly with this taskforce and answer the Commission’s demands regarding ‘misinformation control’, or face fines and other sanctions from the EU.
Critics of the EU will note that the European Parliament is again sidelined by this troubling new institution. And like the GDPR of 2016, this is liable to become a global standard in the relationship between state institutions and the internet.
What terrible danger demands such a robust approach to information control, you might ask? The usual suspects appear in a list of disinformation trends compiled by the EU-funded fact-checking hub, EDMO:
- ‘nativist narratives’ and opposition to migration;
- ‘gender and sexuality narratives’ that cover trans issues;
- the ‘anti-woke movement’ that ‘mocks social-justice campaigns’;
- ‘environment narratives’ that criticise climate-change policies.
Each of these problem issues is subjective and political in nature. It appears that the EU is concerned with changing the views and opinions of its 450 million subjects to match the ‘social justice’ ideology of its leadership – which is precisely the opposite of democratic governance.
The arguments of classical liberal thinkers are outdated when it comes to combating this new form of censorship. It is true that whenever an idea is silenced, the community is made poorer by not having heard its voice – but can that argument be made with the same vehemence when the idea is merely muffled or massaged into a lower engagement ratio by a tangled web of hidden algorithms? Is there an essential ethical difference between government interference with public discourse through social-media algorithms and the interference of an agenda-driven Californian software engineer who happens to work at one of these companies? Most media outlets don’t even describe this process as censorship, after all: it’s just ‘content moderation’.
Proponents of subtle censorship will point to the numerous social goods that might conceivably come from light-fingered thought control on social media. These include the suppression of enemy state propaganda, the neutralisation of dangerous conspiracy theories, and the management of violent sectarian ideology that could cause social harm or terrorism. But aside from the foiling of vague and nebulous threats, whose impact can never be reliably predicted, it is hard to see what conceivable gain comes from surrendering our right of free public discourse to unelected state organs like the European Commission taskforce.
The danger we face is that our present situation could rapidly evolve towards the total engineering of public discourse on social media. Western governments have shown an alarming desire to create populations that are docile, disorganised and progressive-thinking, rather than trusting the democratic process to produce good ideas through argumentation and open debate. Subtle censorship on social media has the potential to nudge us into a dystopia, where people are only permitted to organise around an elite-approved set of curated ideas.