A Tweet Read Around the World
On August 21, 2021, the U.S. Food and Drug Administration (FDA) posted a now-infamous (and retracted) tweet: “You are not a horse. You are not a cow. Seriously, y’all. Stop it.” The tweet was a public admonishment to those considering the use of ivermectin — a well-known anti-parasitic drug with a long safety record — for the prevention or treatment of Covid-19. Though ostensibly humorous, the message carried the full weight of institutional authority and epitomized a growing phenomenon of the Covid era: the use of ridicule, institutional power, and epistemic gatekeeping to police dissent.
The tweet did more than mock the public. It launched an era in which debate was not merely discouraged but delegitimized, and in which dissent — especially, though not only, about scientific matters — was no longer something to be engaged, but something to be pathologized and even criminalized.
I believe that this tweet symbolized a major cultural, social, and political transformation. For this reason, and as we approach its fourth anniversary, I would like to revisit that moment. My goal is not to analyze ivermectin per se — a drug whose benefits and risks are beyond the scope of this piece, though well documented in the literature. Rather, I wish to interrogate the rhetorical and political machinery that turned a policy debate into a moral panic, and a segment of the public, including highly trained scientists, into “fringe” social actors and objects of scorn.
In doing so, this machinery legitimized a “state of emergency” that — as Italian philosopher Giorgio Agamben warned — made it permissible to perpetrate mass violations of fundamental rights and liberties in the name of protecting the “greater good,” science, and civilization itself. Notably, the pressure to comply continues to this day in the post-secondary sector in at least six US states — albeit predictably framed as necessary to “the protection of health and wellness of all students.”
What Motivated My Inquiry?
At the time of the FDA tweet, I was fully aware of the multiple scientific and ethical blunders — mounting vaccine harms, suppression of legitimate medical alternatives, and so on — committed en route to the launch of the global vaccination campaign. I had also noticed — how could I not have — a troubling pattern: terms such as “misinformation,” “vaccine hesitancy,” and “anti-vaxxer” were increasingly used as rhetorical weapons rather than analytical tools.
These terms, far from being neutral descriptors, assumed what they purported to demonstrate — that dissenting views were not only wrong, but irrational, dangerous, or malicious. So, informed by Carol Bacchi’s “What’s the Problem Represented to Be?” (WPR) approach, I began examining how dissent was framed, pathologized, and ultimately medicalized in official discourse. I analyzed a curated sample of articles published in the outlet The Conversation — an influential platform for academic commentary — and found that critics of the Covid response were often caricatured as threats to democracy, public health, or scientific rationality itself.
Although I had not originally used the work of Brian Martin, professor of social sciences (who also holds a PhD in Theoretical Physics) to frame that first analysis, I now see his work on the suppression of dissent in science as an indispensable lens for understanding what followed: de-platforming, professional retaliation, and the epistemic isolation of dissenting healthcare professionals. My current work incorporates Martin’s insights more explicitly and critically – a necessary evolution in response to the real-world escalation of suppression I documented.
Dissecting The Conversation: From ‘Hesitant’ to ‘Heretic’
My study of The Conversation — rejected by “critical” journals and ultimately included as a chapter in a book edited by dissenting medical practitioners and scientists — revealed a troubling continuum of moral labeling. At one end was the “vaccine hesitant,” presented as misguided but redeemable through “education” and “culturally appropriate” information. In the middle stood the concept of “misinformation,” a term used to mark communications as suspect — not because they were necessarily false, but because they deviated from the sanctioned narrative. And at the other end were the “anti-vaxxers” — portrayed as dangerous, irrational, and often malevolent.
But why were “anti-vaxxers” irrational — comparable to “snake-oil salesmen” whatever their scientific credentials? Because, according to authors in The Conversation, no reasonable person would ever question the sanctity of Covid vaccines — or, for that matter, of any vaccine dating back to Edward Jenner’s 1796 smallpox inoculation. As aptly noted in a recent book on “vaccine ideology,” a naïve reader of this academic outlet could be forgiven for concluding that vaccines, unlike any other pharmaceutical, are beyond reproach — and that any mention of risks, or even raising the question of risk-benefit balance, was, is, and will always be, nothing short of heresy.
Notably, however, none of the articles I identified and analyzed provided empirical evidence to substantiate the characterization of dissenters as asserting falsehoods or acting maliciously. Instead, they relied on ad hominem tropes, appeals to authority, and circular reasoning – e.g., “claims against official policy are misinformation because misinformation is anything against official policy”.
Incidentally, this sentiment has been echoed by YouTube’s “Community Guidelines” — which mandate the removal of “content that contradicts health authority guidance on treatments for specific health conditions” — and by the screening process of prestigious preprint outlets, which deem unfit for publication manuscripts with “content or conclusions that may cause harm to the public.” What harm? Well, for instance, reducing “compliance with critical public health measures.”
Be that as it may, the framing of the “problem” of misinformation in The Conversation conveniently absolved authors and institutions from engaging with the substance of dissenting arguments. It allowed policy failures and adverse outcomes to be blamed not on flawed assumptions or insufficient evidence, but on those who asked inconvenient questions.
Medicalization of Dissent and the Rise of the ‘Misinformation Expert’
For my subsequent study — “Trust us: We are the (Covid misinformation) experts” — I had assembled a small but enthusiastic team of students and young investigators, which made it possible to conduct a scoping review of the peer-reviewed literature produced by a growing industry of “misinformation experts” operating across the medical and social scientific fields.
In this review, I found a similar circular epistemology: misinformation experts claimed epistemic authority not by demonstrating domain-specific knowledge (e.g., in virology or epidemiology) but by asserting their expertise in identifying misinformation — a self-referential loop. Their publications rarely engaged the substance of the claims they sought to discredit and often failed to meet the basic standards of evidence they demanded from others.
Manufacturing Expertise: NGOs and the Suppression Apparatus
In a third layer of inquiry, we analyzed 42 documents produced by five prominent NGOs involved in shaping misinformation policy during the Covid response — including the Center for Countering Digital Hate and the Aspen Institute. These organizations, often lavishly funded by governments and private foundations, promoted sweeping policies to track, flag, demonetize, and deplatform voices that deviated from the official line.
Using once again Bacchi’s WPR framework, we showed that these organizations represented the problem of misinformation as the mere presence of noncompliant voices — dissenters whose refusal to conform exposed the limits of institutional control. The solution, predictably, was the expansion of surveillance and censorship powers that threaten science and democracy under the façade of defending both.
Conveniently, none of the documents engaged with alternative interpretations or the empirical claims made by those they sought to silence. Instead, they relied on appeals to “infodemiology” — a term popularized at the onset of the Covid event by the World Health Organization, which even holds international conferences to discuss this “scientific discipline.” For readers wondering what this “discipline” might be about, it was conceived at the turn of the 21st century as “the study of the determinants and distribution of health information and misinformation,” which would allow a cadre of experts — “infodemiologists” — to study misinformation “like a disease.” But why a “disease”? Well, because “unfiltered (health) information” would make solutions to problems “more difficult.”
What Dissident Canadian Healthcare Workers Saw (and Said)
As I wrote in “Missing Conversations”, if there was one group consistently erased from the official record, it was health workers themselves — mainly those who had not been persuaded by official policy. In a series of studies — including a critical policy analysis of the expert literature on “vaccine uptake” among health workers, and surveys and qualitative analyses of responses from such workers in Canada — my team and I documented the personal and professional experiences of close to one thousand workers affected by vaccine mandates.
These workers were not “hesitant” in the sense implied by the literature. Many had taken vaccines, under duress, only to experience adverse events. Others had refused them on scientific or ethical grounds — citing skepticism about transmission claims, the exclusion of natural immunity from accepted “vaccine exemptions,” and, importantly, the severe violation of the right to informed consent, which, according to at least one major association of Canadian physicians, excludes coercion by definition.
What participants in our studies shared was not ignorance, but insight. Their dissent was based on close clinical observation — of themselves, their colleagues, and their patients. These were not so-called “conspiracy theorists” — incidentally, another derogatory label that absolves those who deploy it of the responsibility of providing evidence. Instead, they were practicing professionals who had long been celebrated as “heroes,” and were now being recast as heretics.
Critically, the knowledge they offered was evidence-based — no less than the case studies that populate respected medical journals. It was grounded in observation, pattern recognition, and a sense of ethical responsibility. To dismiss it as “lived experience” in the pejorative sense – as something separate from scientific knowledge — is to embrace an epistemology that falsely partitions knowledge along a hierarchical typology. All empirical knowledge begins in experience. What distinguishes it as “scientific” is not its origin, but the rigour with which it is examined.
Suppression as a System of Power
This is where Brian Martin’s framework becomes indispensable. In his seminal 1999 article Suppression of Dissent in Science, Martin explains how institutional actors preserve legitimacy by silencing those who threaten an alleged consensus. Dissenters are not typically refuted — they are marginalized, discredited, or punished. Suppression takes many forms: denial of publication, loss of employment, professional censure, and reputational destruction.
In the Covid era, suppression has not been the exception but the rule. Physicians who have questioned mandates have lost their licenses. Nurses who have refused Covid injections have been terminated. Researchers who have raised safety concerns have been blacklisted or smeared as promoting “fringe theories”. The public has been told that these individuals are “anti-science,” even as their arguments draw on the same literature and logic that once defined the scientific method.
Is Loss of Trust the Real Problem?
The FDA’s tweet was not an aberration. It revealed a pattern as old as science itself: inconvenient evidence is managed not by addressing it, but by controlling who can speak. Authoritarian regimes do this openly; democracies cloak it in euphemisms about safety, civility, or “combatting misinformation.” The Covid era simply refined and expanded these tools.
This essay is not a lament for a “lost civility” — all too often meaning assent to elite discourse, an assent that likely never existed. It is a reminder that science — and democracy — are only as robust as the dissent they can tolerate. One cannot claim to practice science while punishing disagreement. One cannot ask younger generations to “speak truth to power” while denigrating those who do. And one cannot uphold ethics if consent is achieved through coercion.
The medicalization of dissent, under the guise of fighting misinformation, has made serious inquiry harder, not easier. When disagreement is framed as a threat, institutions become incapable of self-correction. Public trust is not restored through messaging campaigns, but through openness, accountability, and the willingness to admit error.
As August 21 approaches again, the question is not whether the public has trusted institutions, but whether institutions have acted in ways that deserve that trust. Indeed, the issue is not even whether the public trusts, but whether those in power govern transparently, allow dissent without punishment, and build policies that can withstand — rather than silence — scrutiny.
- *Republished from the author’s Substack @ https://claudiachaufanmdphd.substack.com/p/youre-not-a-horse-misinformation*
(Featured Image: “Centaur with Eros on its back at The Louvre” by nan palmero is licensed under CC BY 2.0.)