(Also published by PANDA and on the PANDA Substack)

We are living through an era in which freedom of expression is evaporating. Numerous scientists have been censored during Covid-19, having been labelled as spreaders of ‘disinformation’ or ‘misinformation’, whilst we are now seeing the emergence of online safety/harm legislation that might further curtail freedom of speech. But who has been responsible for the lion’s share of propaganda during Covid-19 and what is the cogency of legislation and doctrines aimed at censoring what is alleged to be ‘misinformation’ or ‘disinformation’?

It is now widely reported that, since the start of the Covid-19 event, there has been both extensive propaganda that has increased people’s fear levels, and censorship of oppositional voices. This has occurred via a combination of deliberate censorship, smear campaigns, incentivisation, and coercion. Specific examples have included corporate and NGO sponsorship of mainstream/legacy media, OFCOM guidelines limiting acceptable news content, ‘Big Tech’ censorship of expert opinion, smear campaigns, and coercion via mandates.

As if this were not enough, a recent article in the British Medical Journal advocated for even higher levels of authoritarian control over so-called ‘misinformation’ – or opinions that challenge the narratives promoted by authorities. The article reflects an academic orthodoxy that has become dominant since the beginning of the Covid-19 event. This orthodoxy has promoted the use of behavioural science techniques to immunise the public against alleged ‘conspiracy theories’ and ‘misinformation’ to ensure that citizens comply with the directives of governments and the World Health Organisation (WHO).

Of course, if it turns out that the Covid-19 orthodoxy was wrong, articles such as ‘Using social and behavioural science to support COVID-19 pandemic responses’ might end up reading as among the longest intellectual suicide notes in history for the academics involved.

One component of censorship involves the widespread adoption of concepts such as misinformation, disinformation, and malinformation, as well as the involvement of fact-checkers in day-to-day decision making. Misinformation is commonly defined as inaccurate information disseminated by mistake. Disinformation is inaccurate information circulated with the intention to mislead people. Malinformation refers to accurate information used in a way that misleads people. These three concepts have been widely employed in campaigns such as ‘Verified’, in which the UN aimed to counter Covid-19 misinformation, and a WHO campaign that asked people to report misinformation. A panoply of fact-checking NGOs, such as the Institute for Strategic Dialogue and NewsGuard, work with these concepts.

In practice, fact-checking may involve deciding on the veracity of opinions, arguments, and facts. One well-known fact-checking organisation, NewsGuard, will give a news site a negative rating when the site ‘repeatedly states as fact claims that are contradicted by an abundance of scientific evidence’ or when it ‘repeatedly promotes conspiracy theories that cannot be disproved but have no basis in fact and are contradicted by an abundance of credible evidence’. To make such a decision, the fact-checker may either draw on a source deemed to be authoritative, such as the WHO or a Government department, or make their own determination of the facts.

Although at first glance such a process might appear reasonable, when it is used to censor information or downgrade access to it, it falls foul of John Stuart Mill’s argument regarding the erroneous assumption of infallibility. In On Liberty Mill defends free speech on the grounds that:

“the opinion which it is attempted to suppress by authority may possibly be true. Those who desire to suppress it, of course deny its truth; but they are not infallible. … All silencing of discussion is an assumption of infallibility.”

In their recently published study of censorship involving academics during Covid-19, Shir-Raz et al (2022) invoke a similar point:

“Censorship of opposing or alternative opinions and views can be harmful to the public (Elisha et al 2022), especially during crisis situations such as epidemics which are characterized by great uncertainties, since it may lead to important views, information and scientific evidence being disregarded.”

The case of the Great Barrington Declaration illustrates Mill’s point well. Led by three eminent scientists – Professors Jay Bhattacharya (Stanford), Martin Kulldorff (Harvard), and Sunetra Gupta (Oxford) – the declaration advocated for a focused protection strategy that avoided society-wide lockdowns. Upon its publication in October 2020, Francis Collins and Anthony Fauci of the National Institutes of Health in the USA referred to the three scientists as ‘fringe epidemiologists’ and discussed the need for a ‘quick and devastating published take down of its premises’. Their caricature of these eminent scientists hardly squares with the long-established norms of scientific rigour. The authors were subsequently smeared and marginalised. It is notable that the BMJ article mentioned above actually cites two Byline Times articles authored by the same individual who claimed that the Great Barrington Declaration was a ‘gigantic fraud’ and who then wrote a number of articles discrediting it.

By 2022, however, events and substantial empirical evidence had vindicated the authors of the Great Barrington Declaration. Consequently, two of its authors are now taking legal action with respect to alleged co-operation between tech companies and the White House to censor their scientific arguments. In Mill’s terms, Fauci and Collins appear to have made an erroneous assumption of infallibility: they believed that they were right and the ‘fringe epidemiologists’ were wrong. They then acted to censor those views, only for subsequent evidence to indicate that it was they who were in error. Professor Bhattacharya is so confident that events and data now prove his position to have been correct that he has stated that:

“Censorship kills science, and in this case, I think censorship killed people.”

In addition, it is worth noting that John Stuart Mill also pointed out the importance of allowing false or incorrect ideas to circulate: ‘If wrong, they lose [through censorship], what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error’. To that one can add that censorship of ideas forces them underground, where they cannot be so readily exposed to challenge and correction.

Suppression of information and arguments, defined as misinformation or disinformation, has been widespread during Covid-19. Countless people, both credentialed experts and others, have become victims, with many social media platforms removing content and de-platforming individuals. ‘Big Tech’ has frequently co-operated with authorities in this process. The question now is whether this phenomenon is going to become worse with the implementation of legislation aimed at controlling so-called ‘online harm’. These legislative moves appear to be operating across many countries; for example, the Digital Services Act (DSA) is now in place in the European Union, and the Online Safety Bill in the UK.

One area of public debate has concerned the identification of speech that is harmful but which, in any other setting, would be considered legal. Lord Sumption has argued that such a move could cover an ‘almost infinite’ range of material. According to recent reports, the concept of ‘legal but harmful speech’ is to be dropped from the UK Online Safety Bill.

A more general concern is the identification of disinformation as a form of online harm. Legislation that conflates misinformation or disinformation with harm, and which then leads to the censorship of any information or opinions categorised as such, will potentially fall foul of Mill’s warning regarding the erroneous assumption of infallibility. Fact-checkers or authorities might end up censoring information believed to be incorrect and harmful, but which, as it subsequently turns out, was correct. The example of the Great Barrington Declaration described above is, of course, a case in point.

Currently, the UK’s Online Safety Bill is set to establish an ‘Advisory Committee on Disinformation and Misinformation’ under the Bill. At the same time, the EU’s 2022 Code of Practice on Disinformation is set to become a Code of Conduct within the framework of the Digital Services Act. The EU legislation also refers to the importance of supporting and drawing on fact-checkers. Legislation relating to mis- or disinformation that leads to social media companies further censoring content will make an already bad situation much worse, resulting in ever more frequent suppression of important, legitimate, and potentially correct information.

It is essential that lawmakers are aware of this obvious problem. As described above, esteemed scientists were censored because their arguments did not fit the narrative, only to be proven correct by subsequent events. There is a powerful case to be made that any legislative drives should robustly defend freedom of expression, rather than encourage corporate-funded fact-checkers to identify alleged ‘disinformation’ to be censored.

In the final analysis, it is likely that the rationale for controlling mis/disinformation across social media misdiagnoses the real problem. Arguably, the lesson of Covid-19 is that the biggest source of false or misleading information is governments and other authorities and their use of propaganda. Compounding this is the failure of mainstream/legacy media to adequately challenge and resist this propaganda. Drives to develop legislation that protects against online harms divert attention from the biggest and most influential sources of manipulative and deceptive information.

It must be remembered that these developments are taking place within a wider coercive context in which freedom of expression is being suppressed across democracies. This includes the drive to prosecute Julian Assange, the Wikileaks founder. It also includes proposed new laws that intend to punish doctors for spreading information that governments regard as false – which simply means holding a view of reality outside some pre-existing ‘consensus’ defined and controlled by governments.

In conclusion, legislation seeking to regulate online harms is likely to exacerbate the problems already created by the widespread adoption of the concepts of mis-, dis-, and malinformation, the growth of the fact-checking industry, and the censorship of information.

Insufficient protection of the core democratic principle of freedom of expression undermines open and rational debate in our public sphere. It grants authorities substantial power to control public debate. This in turn undermines rational, evidence-based policy formulation, enabling and reinforcing harmful and destructive policies.

Finally, as is arguably now being seen in relation to Covid-19, employment of propaganda and censorship can result in an enormous loss of public trust, especially when it transpires that people were misled, and that the earlier claims and advice of governments prove to have been erroneous.

(This article is developed from a talk delivered at the UK Houses of Parliament for the All-Party Parliamentary Group (APPG) Pandemic Response & Recovery, Thatcher Room, Portcullis House, Westminster, 14 November 2022.)

Suggested initial further reading, available online:

‘The Perils of Legally Defining Disinformation’, Ó Fathaigh et al, Internet Policy Review: Journal on Internet Regulation, 4 November 2021, 10(4).

‘Censorship and Suppression of COVID-19 Heterodoxy: Tactics and Counter-Tactics’, Shir-Raz et al, Minerva, 1 November 2022.

‘Whose Disinformation Is It Anyway? BBC vs Critical Academics’, Tim Hayward, Propaganda in Focus, 2022.

‘Counter-Disinformation Fails: Feedback from a Target’, Tim Hayward, Propaganda in Focus, 2022.

(Featured Image: “Three wise monkeys” by Anderson Mancini is licensed under CC BY 2.0.)

Author

  • Piers Robinson

    Dr. Piers Robinson is a co-director of the Organisation for Propaganda Studies, convenor of the Working Group on Syria, Media and Propaganda, associated researcher with the Working Group on Propaganda and the 9/11 Global ‘War on Terror’, and a member of PANDA and BerlinGroup21. He researches and writes on propaganda, conflict and media, and was Chair/Professor in Politics, Society and Political Journalism, University of Sheffield (2016-2019), Senior Lecturer in International Politics, University of Manchester (2010-2016), and Lecturer in Political Communication, University of Liverpool (1999-2005).