In the very near future, disinformation will target you autonomously in fractions of a second. As the quantum computing age draws near, Baudrillard’s notion of hyperreality will coalesce with Edward Bernays’s notion of crystallized public opinion: creating consensus through individualized reality creation. The deployment methods of the past will give way to an infrastructural inevitability: generative AI (GenAI), guided by algorithmic troves of personalized information, will be weaponized into personally targeted mis/disinformation campaigns that cost pennies on the dollar. Propaganda will no longer be an international foxtrot of moving actors and action but will become an autonomous force plowing into the synapses of the masses on micro scales. As this occurs, humanity’s long-smoking gun will be turned back on the gunman — obliterating the boundary between the real and its simulation through recursive, accelerated, and increasingly personal targeted information operations.
Your Data Used on You
This process has already begun and – for the time being – is being spat forth into the world at the speed of an AI prompt. According to the findings of Goldstein et al. (2024), generative AI is capable of “generating text that is nearly as persuasive for US audiences as content we sourced from real-world foreign covert propaganda campaigns.” In other words, the digital machines we created to help us are already being used to shape our ideas and actions. As public-facing models grow more capable, more advanced private models will likely follow. And this extends far beyond text-based modalities into the realms of GenAI-produced photo and video.
In fact, AI-generated propaganda is already being deployed en masse. The 2024 election in the United States was rife with examples flooding social media feeds and creating alternative narratives not only about the election but also about real-world events. In more recent world events, the conflict between Israel and Iran, and the infamous black bags being thrown from a window of the White House, have also produced increasingly potent misinformation across the web. AI-generated propaganda currently achieves varying degrees of virality. Because the technology is new and propagandists are still experimenting with ways of making it skulk effectively into people’s minds, it still follows a firehose approach: it is created with widespread reach in mind, aiming to hit as many people as possible in the hope that it lands with some of them. If attempts at virality are stymied, it fails. But what if AI-generated propaganda were tailored to individual users en masse? One of the key elements for potentially deploying tailor-made propaganda at scale is already part of how the internet-connected world works: Surveillance Capitalism.
Surveillance Capitalism
Surveillance Capitalism, a term coined by Harvard professor Shoshana Zuboff, refers to an economic paradigm in which extracted behavioral data is used not only to observe individuals for commercial reasons but also to forecast and influence their future actions. When deployed at scale, patterns of influence emerge: patterns that help seize the attention of consumers and serve as a means of crystallizing public opinion. And although this machinery is generally geared toward commercial ends, it can just as readily reinforce geopolitical ones.
To put it into a simple (yet whimsical) metaphor, imagine a gold-hoarding dragon. Now, let’s imagine the dragon’s gold as all the data points that make up your own and others’ online shadows. Consider all the user agreements you never read, the carelessly accepted cookies when you visit a website, the photos shared over social networking services (and perhaps your phone’s photo roll): the hundreds (probably thousands) of little pieces of “gold” you have left scattered across your own digital dwellings. The dragon certainly knows the value of its gold, but perhaps it wants to invest in a nice brooch or a cute little crown to lure more victims toward its hoard. So, the dragon sends gold off to a smithy. Let’s call the smithy Mr. Data Broker. It is his job to work this raw ore into something useful, finding just the right concoction of ores and gold pieces to forge for the dragon. Once it is made, he sells it back to the dragon and voilà – more people come and get gobbled up by the dragon, leaving even more gold for the cycle to continue. Now this is a rather silly example, but you should have the gist of it. So how does this play out in a real-world scenario?
Among the most famous cases is that of Cambridge Analytica, which faced steep fines and closure due to its involvement in selling detailed psychological profiles of voters. Cambridge Analytica relied on data collected from social media users via an online quiz (see the aforementioned cute little brooch). Participants were paid to take a personality test and to allow their data to be harvested. Along with the participants’ data, their online friends’ data was also harvested: creating a giant gold hoard (think Smaug’s hoard in the film adaptation of The Hobbit). This data was collected, analyzed, and used to build detailed psychological profiles, which were in turn used to create predictive algorithms that pushed hand-tailored political advertisements: giving just the right amount of push to lure people into other – perhaps more dangerous – dragon lairs. Once this information became public (via the whistleblower Christopher Wylie and investigative journalism), waves of mistrust and reform spread: governments began creating tighter restrictions on user data, and public distrust rippled through social media enterprises.
Out of the rubble of this event, it came to light that Cambridge Analytica was able to create detailed psychological maps of U.S. voters using up to 5,000 data points. Others have used far fewer data points to find, track, and de-anonymize the digital footprints of people in datasets.(1,2) To put this into perspective, researchers Luc Rocher, Julien M. Hendrickx, and Yves-Alexandre de Montjoye (2019) note that “99.98% of Americans would be correctly re-identified in any dataset using 15 demographic attributes.” It only takes a few pieces of gold you’ve left behind to identify you from the hoard. This is pertinent because data leaks onto the open web are becoming increasingly common – giving even more precious detritus for dragons to sniff out.
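To see why so few attributes suffice, consider a minimal sketch in Python. The toy population and attribute names below are invented for illustration – this is not Rocher et al.’s actual model – but the mechanism is the same: each added attribute splits the crowd into smaller and smaller groups until most people stand alone.

```python
import random
from collections import Counter

random.seed(42)

# A synthetic toy population. Real re-identification attacks use attributes
# like ZIP code, birth date, and gender; these columns are purely illustrative.
ATTRIBUTES = {
    "zip3": [f"{n:03d}" for n in range(100)],   # coarse location prefix
    "birth_year": list(range(1940, 2010)),       # 70 possible years
    "gender": ["f", "m", "x"],
    "vehicle": ["none", "sedan", "suv", "truck"],
    "marital": ["single", "married", "divorced", "widowed"],
}

def make_person() -> tuple:
    """Draw one synthetic person uniformly at random from each attribute."""
    return tuple(random.choice(values) for values in ATTRIBUTES.values())

population = [make_person() for _ in range(100_000)]

# For each prefix of k attributes, count how many people are the ONLY
# person in the population with that exact combination of values.
for k in range(1, len(ATTRIBUTES) + 1):
    counts = Counter(person[:k] for person in population)
    unique = sum(1 for person in population if counts[person[:k]] == 1)
    print(f"{k} attribute(s): {unique / len(population):6.1%} of people are unique")
```

Run on 100,000 synthetic people, the share of unique records climbs steeply with each added attribute. That is the dragon’s whole trick: combine enough innocuous pieces of gold and the pile points to exactly one person.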
In 2023 alone, data breaches affected over 350 million people. These leaks spanned governments, medical institutions, and commercial interests: all the places in which our digital footprints are left, the places of our likes and dislikes, our personal information, and hints at our respective personalities.(1, 2, 3) All of it is information the digital dragons of the 21st century can use to move bodies and minds – enough treasure to keep luring people for generations to come. And this treasure can continue to be stockpiled through the shift toward an increasingly digitally connected “control society.”
Societies of Control
In early 1990, Gilles Deleuze’s “Postscript on the Societies of Control” put forth the idea of a grand shift in the ways in which power could be exerted within a society. The tried-and-true method of “disciplinary control” relied on people feeling boxed in within the framework of their lives. Power was exerted through the familial structure, schooling, one’s field of work, and of course the institutional structures of religion and government. As Deleuze put it, “The individual never ceases passing from one closed environment to another” (p. 3). These closed environments kept people in line through the overbearing weight of authority, and the propaganda of such societies reflected this idea: the imagery of people watching from the shadows, the overwhelming crowds in propagandistic filmmaking all cheering under the same banner, and of course the idolization of symbols (including the idolization of leaders). All added to the height of the walls within which people were boxed in.
Deleuze (among several notable others) noted a shift in this paradigm as Western societies began to move toward what he considered a “society of control.” To put it simply: free-range societies in which the borders seemed invisible to those within them. Rather than being shuffled from one closed room to another, people within this societal structure were led to believe they could go wherever and do whatever they deemed worth their time. Of course, the catch is that these free-roaming animals can be and are channeled in very specific directions, for ends sometimes not so easily deciphered from within. With the borders spread wide and the walls knocked down, one may think one is free: and in a way, one is. But with this freedom comes the predicament of never really being “finished” with anything. Work, familial life, recreation, and the institutions that make them all possible merge and blur together, akin to Claude Monet’s impressionistic diffusion of forms: giving the semblance of a whole through the signaling of pieces to fall in where they fit.
Deleuze saw the writing on the wall, so to speak, of the emergent technologies of the late twentieth century and their implications for this process. As he notes, “Types of machines are easily matched with each type of society — not that machines are determining, but because they express those social forms capable of generating them and using them” (p. 6). With the heightened use of personal computers, the birth of mass internet communication, and eventually the ubiquity of social media, the channels upon which the herd treads became deeper and more easily followed. Deleuze’s vision of the future called him to caution “the young” that “It’s up to them to discover what they’re being made to serve” (p. 7) – a task that can be rather difficult in a hyper-connected media ecosystem. Who is the wizard behind the curtain, or the protector of the dragon’s hoard?
Today one’s data shadow can be hard to pin down. Nearly everything is connected digitally. From banking to medicine to social life, we are leaving data points for others to hoover up and weaponize. As James Brusseau (2020) notes, “Today, ‘control’ is no longer an abstract concept in political philosophy; it is localizable as specific technologies functioning where personal information is gathered into contemporary data commerce” (p. 2). And although it may begin with commercial interests (especially advertising), it is and will continue to be used as a form of control in political and social theaters en masse as well. It is worth noting that this data is not limited to what one does publicly or under an “anonymous” screenname. All online performances leave a mark, and with enough data and computational power, one’s own habits will be the thing that roots one out.
With so much data being produced both individually and collectively, many still feel a sense of anonymity when it comes to their data on the closed web, on their smartphones, or “secured” behind cybersecurity systems. Yet as seen in the prior section of this work, it takes a relatively small number of data points to identify someone from anonymized data. With this in mind, many have turned to encryption and “secured” mobile devices to remain anonymous. Through surveillance technologies such as Paragon Solutions’ Graphite and NSO Group’s Pegasus, however, data from these sources is also up for grabs. As reported in The Guardian, these technologies are already working in the real world: “By essentially taking control of the mobile phone, the user – in this case, ICE – can not only track an individual’s whereabouts, read their messages, look at their photographs, but also open and read information held on encrypted applications, like WhatsApp or Signal.” This greatly enhances the ability to capture and analyze personal information for the production of hand-tailored propaganda: adding even more alluring treasure from which to forge lures – treasure that was meant to be hidden away behind vaulted doors and algorithmic locks. But just as safecrackers and lockpickers of the past found their calling, the digital wizards of today and tomorrow will also rise to the challenge.
The move deeper into the grip of a “society of control” has already begun through the mechanisms of the Internet Age, especially in the realm of commerce. In the not-too-distant future, hyper-attuned forms of propaganda, driven by the combined power of AI and quantum computing, will make their mark on the world.
Quantum Computing and GenAI Share a Nightcap: Quantum Propaganda Is Born Nine Months Later
So how will this play out in the real world? With the increasing use of AI-based digital platforms in daily life (including large language models (LLMs) and other forms of GenAI), the ability to digitally influence individual realities is coming within reach. On an average day, an estimated 402.74 million terabytes of data come into existence globally, offering fresh opportunities for corporations, governments, and institutions to access, gather, sort, and deploy this information in strategic or malevolent ways. The problem lies in contemporary computing technologies’ inability to sift through so much information in a timely manner.
Roughly speaking, what would take a personal laptop a million years to calculate, a supercomputer can do in a year, and a quantum computer could do in a matter of hours. The difference lies in how they compute. Classical computers rely on binary code (ones and zeros) processed as bits, so solving a problem or running a program gets hashed out as a single solution. Supercomputers work around this by using parallel processing, working through multiple iterations of the same problem simultaneously. Yet one “answer” is still spat forth from the abyss. Computational power can accelerate the process, but with physical and political limitations on the hardware, the process may not get much faster or offer multiple opportunities for a range of potential outcomes.
Quantum computers, on the other hand, utilize the properties of quantum physics to surpass the binary option through the quantum bit (qubit). A qubit can hold the traditional zero or one, or exist in a superposition of both simultaneously. This allows for much faster computation, along with the ability to produce a larger range of potential answers to complex problems involving massive amounts of data: a serious advantage over classical computing.
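For readers who want the intuition made concrete, here is a minimal sketch in Python (assuming only numpy) that simulates a single qubit placed into superposition with a Hadamard gate and then measured. It is a toy classical simulation, not a claim about any real quantum hardware or vendor API, but it shows both ideas from the paragraph above: superposition of zero and one, and the exponential growth of the state space that gives quantum machines their edge.

```python
import numpy as np

# A single qubit as a 2-element state vector: [amplitude of |0>, amplitude of |1>].
ket0 = np.array([1.0, 0.0])

# The Hadamard gate places a definite |0> into an equal superposition of |0> and |1>.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
state = H @ ket0  # amplitudes: [0.707..., 0.707...]

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(state) ** 2
print(f"P(measure 0) = {probs[0]:.2f}, P(measure 1) = {probs[1]:.2f}")

# Simulate 1,000 measurements: roughly half collapse to 0, half to 1.
samples = np.random.choice([0, 1], size=1000, p=probs)
print(f"Observed frequency of 1: {samples.mean():.2f}")

# The scaling is the real story: n qubits require 2**n amplitudes to describe,
# which is why classically simulating even ~50 qubits becomes intractable.
print(f"Amplitudes needed for 50 qubits: {2**50:,}")
```

The final line is the one that matters here: each added qubit doubles the space of simultaneous possibilities a machine can weigh at once.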
Together, AI and quantum computing will play a symbiotic role in the propaganda of the future. AI is already a key player in personalizing social media feeds, advertisements, and recommendations based on previously acquired data. Combined with quantum computing, this will be taken to a whole new level: multiple models of you, built from your own data. Through these models, “what if” scenarios can be computed to determine which message (text, image, video, etc.) is most likely to nudge you in the chosen direction. What is more, the speed of this coupling will allow for instantaneous interaction in which the rhetorical methods of tone, style, and content can be adapted to channel you toward whatever direction has been deemed the objective. It can deploy pathos at the moment it guesses will be best, exert logos to help you see the “correct” path of action, or even build ethos and trust over time. Their forms will be just as malleable as their message: AI agents, synthetic online influencers, chatbots, personal assistants, and so on. They will be able to target whomever through whatever channel opens up as a vulnerability.
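The adaptive loop described above does not require quantum hardware to prototype; its skeleton is an ordinary online-learning algorithm. The sketch below is a hypothetical illustration in Python of an epsilon-greedy selector choosing among rhetorical framings based on simulated engagement feedback. The framings, response rates, and target are all invented for the example; a real operation would substitute models trained on harvested personal data.

```python
import random

random.seed(7)

# Hypothetical rhetorical framings a targeting system might choose among.
FRAMINGS = ["pathos", "logos", "ethos"]

# The target's "true" response rates, unknown to the selector. In the
# scenario above these would come from quantum-accelerated models of a
# specific person; here they are hard-coded purely for illustration.
TRUE_RESPONSE = {"pathos": 0.12, "logos": 0.05, "ethos": 0.22}

shows = {f: 0 for f in FRAMINGS}
clicks = {f: 0 for f in FRAMINGS}

def choose(epsilon: float = 0.1) -> str:
    """Epsilon-greedy: usually exploit the best-observed framing,
    occasionally explore the others to keep learning."""
    if random.random() < epsilon or not any(shows.values()):
        return random.choice(FRAMINGS)
    return max(FRAMINGS, key=lambda f: clicks[f] / shows[f] if shows[f] else 0.0)

# Simulate 5,000 message impressions with engagement feedback after each.
for _ in range(5_000):
    framing = choose()
    shows[framing] += 1
    if random.random() < TRUE_RESPONSE[framing]:
        clicks[framing] += 1

for f in FRAMINGS:
    rate = clicks[f] / shows[f] if shows[f] else 0.0
    print(f"{f:>6}: shown {shows[f]:4d} times, observed response {rate:.1%}")
# The loop converges on "ethos" -- the appeal this simulated target
# responds to most -- without ever being told so in advance.
```

The unsettling part is how little is needed: a feedback signal and a handful of variants are enough for the loop to converge on whichever appeal a given person responds to, and quantum-accelerated “what if” models would simply let that convergence happen before the first message is ever sent.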
Simulation of Shattered Realities
The combination of quantum computing and GenAI will reshape the reality of the people who fall victim to its lures. Currently (and historically), propaganda has relied on, at the bare minimum, an acorn of truth. Its logic runs: some event occurred, but here is how you should interpret it.
A well-trod example can be found among the World War II posters that captured the hearts and minds of the women of the United States. These ran on the ideas that a world war was occurring, that a labor force was necessary for the country and the war effort, and that by joining the workforce women were not only needed but might find labor outside the traditional roles of femininity personally empowering. All of these were true at the conception of the information campaign or became true through its performance.
The postwar years quickly retracted that promise (for a time) for many – though certainly not all – women. These analog campaigns moved slowly but were overwhelmingly successful. They relied on collective ideas of reality being implanted, shared, agreed upon, and then acted upon. This tried-and-true formula of the 20th century can still be deployed today (including through digital means) to varying levels of success.
Quantum propaganda of the future will side-step the collective nature of previous epochs of propaganda creation to focus on individuals. These individuals will then be collectivized through chaotic information ecosystems, and the inability to concur on reality will result in a collective shattering of reality. In short, a state of the “hyperreal” will become normalized for people under the persuasion of up-to-the-moment propaganda campaigns. The hyperreal, a concept coined by Jean Baudrillard, is a state in which the signs, symbols, and simulations of reality are replaced by others, creating a “copy” that appears more real than reality itself. Or as Baudrillard (2010) writes, “It is the generation by models of a real without origin or reality” (p. 1557). Through the combination of emergent quantum computing (parsing data at incredible speeds) and GenAI (transforming said data into rhetorically potent multimodal messages), a state of individualized hyperreality created en masse will become a real possibility. What is more, this process has certainly already begun with contemporary digital propaganda (Rodriguez, 2025).
As GenAI grants propagandists of varying scales of influence greater technological access, the notions of what is real and what could be real are already being blurred. One real-world example mentioned earlier hails from the Israel-Iran conflict in 2025, in which AI-generated photos and text, along with de-contextualized videos, spread across the Internet at thunderclap speed. These social media posts generated confusion across media ecosystems, both in the countries involved in the conflict and in others across the globe. Collectively, we are experiencing the beginnings of this phenomenon to a slow but steady drumbeat. Digital media has aided in generating misperceptions of political reality, leading to increasing polarization across the political spectrum and eroding levels of interpersonal trust. All of this is being increasingly aided by GenAI.
Propaganda of the future will continue to play off of Baudrillard’s concept of the hyperreal, eventually rising to a level of mimicry that alters the suspension of disbelief felt today and instead generates a sense of imaginary belief: an understanding of reality that contains not an acorn of truth but a synthetic tree of potential realities to choose from. This is a state Baudrillard ascribed to the epitome of the contemporary hyperreal, found most notably in – of all places – theme parks such as Disneyland. He states, “The Disneyland imaginary is neither true nor false; it is a deterrence machine set up in order to rejuvenate in reverse the fiction of the real” (p. 1565). The incoming barrage of digital media will mimic this phenomenon individually, but on a scale that is hard to fathom at the moment. Algorithmically tailored propaganda will blanket digital spaces, creating a fourth-order simulation: a world that exists only in and through the data and rhetoric of the individual and their interactions with advanced digital machines. It is the obfuscation of reality in which one’s own phenomenological experience is tinted and morphed by whoever controls the deterrence machine’s mechanisms: for commerce, for political ends, and for the crystallization of personal and public opinion in real time.
On the Horizon
Much of what has been discussed here is still in the process of becoming the digitally mechanized monster it promises to be. We are all still feeding the dragon’s hoard, Mr. Data Broker is still forging our gold into desirable and usable materials, and propagandists are adapting their methods and machines to reflect our increasingly tailored digital lives. The only remaining factors are the time-frames involved in the advancement of quantum computing, the cost of such mechanical wonders, and the infrastructure necessary to harness the tried-and-true skills of the propagandists. Such advancements are years off, but they are coming. A hundred-foot wave of hyperreality approaches from the horizon. The dinghy of the “real” we cling to is already springing leaks, and the chaos of the information ocean beckons us from below.
This technology could easily be used for malevolent means. But it could also lead to more effective methods of information dispersal, real-time information in emergencies, and adaptive education, among other specialized and tailored experiences … if adequate and well-considered guardrails are set in place. Though perhaps this may prove to be nothing more than a pipe dream. Instead, we may just have the opportunity to live an information nightmare – stuck between visions of The Matrix and The Truman Show – living in a false reality that divorces people from their natural state of kinship and communal advancement for clicks, votes, digital dollars, and the desires of the dragon.
References
Baudrillard, J. (2010). “Simulation and simulacra,” in The Norton Anthology of Theory and Criticism, 2nd Edn., ed. V. B. Leitch (New York, NY: Norton), 1557.
Brusseau, J. (2020). Deleuze’s postscript on the societies of control: Updated for big data and predictive analytics. Theoria, 67(164), 1-25.
Culnane, C., Rubinstein, B. I., & Teague, V. (2017). Health data in an open world. arXiv preprint arXiv:1712.05627.
Deleuze, G. (1992). Postscript on the societies of control. October, 59, 3–7. http://www.jstor.org/stable/778828
Goldstein, J. A., Chao, J., Grossman, S., Stamos, A., & Tomz, M. (2024). How persuasive is AI-generated propaganda? PNAS Nexus, 3(2), pgae034.
Rocher, L., Hendrickx, J. M., & de Montjoye, Y.-A. (2019). Estimating the success of re-identifications in incomplete datasets using generative models. Nature Communications, 10(1), 3069.
Rodriguez, M. (2025). The simulation of modern conflicts: Disentangling hyperreality and “fake news” in the Ukrainian and Palestinian contexts. Arab Media and Society.
Wack, M., Ehrett, C., Linvill, D., & Warren, P. (2025). Generative propaganda: Evidence of AI’s impact from a state-backed disinformation campaign. PNAS Nexus, 4(4), pgaf083.
Zuboff, S. (2019). The age of surveillance capitalism: the fight for the future at the new frontier of power. Profile Books. Kindle.
(Featured Image: “Artificial-Intelligence” via Pixabay, made available under the Pixabay License and marked with CC0 1.0.)