Parasocial Propaganda

Today, we face a meaningfully different class of social media platforms and problems from those that dominated the news cycle in the days of the original Zuckerberg hearings.

We now find ourselves in what might be called the “Social 2.0” era, typified by platforms like TikTok, which have profoundly reshaped how (and from whom) we get our information — and, as a result, how we are made vulnerable to new forms of manipulation.

Unlike earlier “Social 1.0” platforms (e.g. the traditional incarnations of Facebook and Instagram), which were built most fundamentally on connections with known individuals, Social 2.0 prioritizes content from unknown creators, surfaced by predictive algorithms. This shift has altered our social epistemology, conditioning us to increasingly trust strangers whose identities and motives are unclear in place of people we know and can generally hold accountable in our real lives.

At the heart of this epistemic transformation lies the concept of parasociality — the illusion of intimacy and long-term trust generated by repeated exposure to, and emotional resonance with, social media influencers. Crucially, Social 2.0 doesn’t require much exposure to begin generating a modicum of trust. On TikTok, for example, the majority of the content the algorithm serves us comes from strangers. It is fair to say the algorithm understands our proclivities better than we do, and it presents us with people we may be inclined to perceive as authentic and trustworthy. The result is that our brains are pummeled with the fleeting proclamations of total strangers about everything from politics to health to culture. Sometimes these polemicists reappear in our feeds. Other times, they disappear into the algorithmic abyss, never to be seen again.

Of course, this does not mean we blindly accept every statement from every stranger we encounter on TikTok — we remain capable of skepticism, doubt, and even outright distrust of the strangers who now populate our feeds. What has undeniably changed, however, is our comfort and familiarity with routinely encountering — and even instinctively agreeing with — complete strangers in fleeting interactions. This habitual engagement can subtly erode our vigilance over time, gradually making us more susceptible to manipulation, even if we’re not consciously aware of it.

Consider how false or misleading medical information spreads so effectively on platforms like TikTok. Users we’ve never met confidently dispense advice that contradicts established medical consensus on everything from vaccines to nutrition to mental health. While some audience members rightly apply a critical lens to these influencers’ scientifically unfounded claims, others accept them not because the influencers have any expertise, but simply because their presentations feel authentic, relatable, and emotionally compelling. Trust here is increasingly intuitive and wholly disconnected from traditional markers of credibility and reliability.

While the rise of influencers has kicked off a boom in all kinds of entertaining and stimulating content, it is hard to ignore the troubling implications of this shift. As Social 2.0 platforms have reshaped our informational diets, they’ve dramatically weakened traditional epistemic safeguards such as personal accountability, reputational risk, and relational trust.

In this new epistemic landscape, propaganda and other forms of false or misleading information flourish, driven not by expert deceivers but by ordinary people whose authority derives solely from algorithmic exposure.

Even more concerning is the potential for exploitation of this new epistemic landscape by malicious actors. Imagine a world (not too far away, as this AI model developed by researchers at TikTok’s parent company suggests) where AI can convincingly simulate trustworthy, relatable-seeming strangers to deliver carefully crafted propaganda tailor-made for each of us. Or a universe in which the foreign or domestic owners of social media platforms decide to leverage them for projects of societal manipulation. Without strong epistemic defenses, we are profoundly vulnerable to all kinds of subtle manipulation at massive scale.

Addressing these challenges does not necessarily mean abandoning Social 2.0 entirely. Instead, we must cultivate a heightened skepticism about the content algorithms surface and the incentives of those who produce it. The first steps are increased media literacy, algorithmic transparency, and deliberate efforts to rebuild trusted offline relationships. Above all, we must recognize that social media has evolved far beyond what most of us initially understood, and that it now exploits new vulnerabilities that threaten our grasp of reality.

Understanding and adapting to this quiet overturning of the epistemic order — this era of parasocial propaganda — is essential if we are to reclaim control over our attention and preserve our autonomy in an increasingly algorithmically mediated world.
