How Modern Manipulation Actually Works
From Freud’s nephew to the “AI will replace you” narrative
In 1929, Edward Bernays — nephew of Freud and often called the father of modern public relations — engineered one of the most iconic manipulations of the twentieth century.
At the time, American women were discouraged, even shamed, from smoking in public. For the tobacco industry, this meant half the population wasn’t buying. Bernays recognized the opportunity immediately and hired a psychoanalyst to uncover what cigarettes symbolized at an unconscious level.
The conclusion was simple and powerful: cigarettes represented power and male privilege. If women could be persuaded that smoking wasn’t merely acceptable but liberating, a cultural taboo could be transformed into a market.
So Bernays staged a spectacle.
On Easter Sunday, 1929, he recruited debutantes and fashionable young women to join New York’s Easter Parade. At a carefully timed moment, they pulled out Lucky Strike cigarettes and lit up — directly in front of photographers Bernays had alerted in advance. Reporters had already been fed the framing he wanted: these weren’t women smoking; they were women carrying “Torches of Freedom.”
The photographs spread across newspapers nationwide. The narrative was irresistible. Lighting a cigarette became an act of independence. Women weren’t breaking a social norm — they were breaking chains. Smoking rates rose. Sales soared.
This wasn’t Bernays’ first success, nor would it be his last.
Years earlier, he had helped the U.S. government sell American involvement in World War I — a deeply unpopular idea at the time — by reframing the conflict as a moral crusade to “make the world safe for democracy.” Later, he helped turn bacon and eggs into America’s default breakfast with a single headline: “4,500 physicians urge a hearty breakfast — including bacon.”
Different industries. Different goals. The same underlying method.
What Bernays understood was something unsettling about mass psychology: people don’t simply form opinions. They absorb them — often unconsciously, through symbols, stories, and emotional cues.
Now, nearly a century later, Bernays’ fingerprints are still visible — on our feeds, in political debates, across media, and even inside the workplace. But the machinery of manipulation has changed shape. It hasn’t merely grown more pervasive; it has grown more intimate.
Where Bernays crafted messages for a mass audience — a single narrative broadcast to millions — modern propaganda operates on an entirely different scale. Rather than offering a shared story, the system adapts itself to each individual, delivering a version of reality calibrated to their fears, desires, and preexisting beliefs.
And it begins by making people unsure of where they stand.
The Conditions That Make Manipulation Possible
Modern propaganda rarely begins by trying to convince people of a particular belief.
Its first move is more subtle—and far more corrosive: it weakens the conditions that make truth-seeking possible in the first place.
Think of it as preparing the soil.
When people feel grounded—when there’s a shared sense that facts can be checked, institutions can be trusted (however imperfectly), and disagreements can be worked through using evidence—manipulation has natural limits.
But destabilization removes those limits.
It does this by flooding the environment with more information than the mind can reasonably process, while simultaneously undermining confidence in any source that might help separate signal from noise.
Over time, this produces a profound psychological shift.
What once felt like a shared reality begins to fracture. People stop asking what is true and start asking whom to trust—and eventually, why trust anyone at all. Once that state takes hold—once the belief settles in that everything is manipulated, everyone is lying, nothing is knowable—the public becomes highly malleable.
Because when no truth feels solid, any narrative that offers certainty, belonging, or emotional relief becomes compelling.
This is the terrain modern propaganda prefers to operate on.
And there are five core mechanisms used, again and again, to create it.
1. Overwhelming with contradictory claims
Consider the early narratives around COVID’s origins.
Before evidence had time to emerge, the public was inundated with mutually exclusive explanations: lab leak, bioweapon, government cover-up, foreign sabotage, “it’s all media hype.”
The contradictions weren’t accidental. Their cumulative effect was to exhaust attention and erode confidence. Faced with too many competing explanations, many people stopped trusting any of them.
The result wasn’t a false conclusion—it was cognitive exhaustion. And eventually, paralysis.
2. Creating hyper-uncertainty
Scroll through any major platform today and you’ll notice it.
AI-generated videos, synthetic images, misleading headlines, decontextualized clips. Every claim seems to require verification. Every source feels questionable.
The message is not “this is false” but “nothing is reliable.” Over time, the effort required to distinguish truth from fabrication begins to outweigh the perceived benefit of trying at all.
3. Fracturing trust in institutions
Across multiple countries—among them the U.S., Brazil, and India—coordinated campaigns have targeted election systems themselves.
Claims circulate that voting machines are rigged, or that entire democratic processes are illegitimate. Even when courts reject these accusations, the damage lingers. The perception of institutional failure persists.
The goal is not to prove corruption in a specific case, but to plant the sense that no institution can be trusted at all.
4. Producing cynicism and fatigue
As scandals accumulate—some real, some exaggerated, some entirely fabricated—many people arrive at a familiar conclusion:
“Everyone is corrupt. Nothing will change.”
This cynicism is not neutral. It drains energy, discourages engagement, and makes withdrawal feel rational. And a public that no longer expects accountability is easier to manage.
At some point, scandal stops functioning as a warning and starts to feel like background noise. Corruption becomes something people factor in, not something they resist.
I’m currently visiting Brazil, and I’ve noticed this logic surfaces in every conversation about politics. When people are asked why they’re willing to vote for a candidate who has already been accused of stealing, the answer is strikingly consistent: “He steals, but at least he gets things done.”
5. Drowning factual content in noise
During elections or geopolitical crises, information channels are often flooded with volume rather than argument:
low-quality memes
misleading articles
troll replies and automated amplification
The intention is to saturate the feed. When every search result, timeline, or comment section dissolves into chaos, truth doesn’t disappear—it simply becomes harder to reach.
Once fragmentation is widespread, chaos becomes an opening.
Manipulative actors step into that opening by offering relief—a reality that feels stable enough to cling to when everything else feels uncertain.
What follows is a gradual loss of agency, as people outsource their search for truth.
The Manipulative Mechanisms
Now that the terrain has been prepared, manipulative actors can deploy their techniques to get what they want: mobilize crowds, increase profits, and sway public opinion.
At first, I considered writing a section focused solely on identifying these actors. But I quickly realized that doing so would turn into an entire essay on its own. So instead, I’ll keep this at a high level and focus on what matters more—the techniques themselves.
Here’s a short list of the actors most often pulling the strings:
Platform architects: those who decide what algorithms reward, amplify, and suppress.
Narrative engineers: PR firms, political consultancies, media strategists, and cultural trend forecasters who shape framing rather than facts.
Volatility merchants: media businesses built on retention; traders and betting platforms; and subscription models that monetize fear and uncertainty (insurance being a clear example).
Personal brands: individuals who convert attention into revenue through courses, sponsorships, or affiliations—and who, these days, seem particularly fond of betting platforms.
Power consolidators: Big Tech, politicians, global corporations, and financial institutions.
But the specific actors matter less than the methods they rely on. While the players change, the techniques remain remarkably consistent.
What follows isn’t an exhaustive list. It’s a pattern—a recurring set of moves that appear in different combinations, across platforms, industries, and political contexts, often in plain sight.
1. Emotional front-loading (the 3-second rule)
Most high-performing content no longer begins by explaining anything. It begins by triggering something.
The first line is engineered to produce an emotional spike before context or nuance has a chance to appear. Shock, threat, moral certainty—anything that activates attention in the first two or three seconds. Once emotion arrives first, cognition follows in a diminished role. The rest of the content doesn’t need to be especially rigorous. The audience is already reacting before they’ve had time to think.
You’ll recognize it in lines like:
“Nobody wants to admit this, but…”
“This is why you’re failing and don’t know it.”
“If you believe X, you’re being played.”
2. Identity locking (you’re either with me or against yourself)
Instead of addressing an argument, the content addresses identity.
Viewers are subtly told who they become by agreeing. Intelligence, awareness, and moral seriousness become conditional on alignment. Disagreement no longer feels like a difference of opinion; it feels like self-betrayal or moral failure. Once someone accepts the identity being offered, they begin defending the message on the creator’s behalf.
Common signals include:
“Smart people understand this.”
“This is obvious if you’re paying attention.”
“If you disagree, you haven’t done the work.”
3. Compression of complexity into certainty
Large, multifaceted problems are presented with clean, confident conclusions.
Tradeoffs disappear. Uncertainty is treated as weakness. And the message offers the relief of simplicity—one cause, one explanation, one takeaway. Nuance slows people down. Certainty keeps them moving.
It often appears as:
“The real reason X happened is Y.”
“Everything boils down to this one thing.”
“People complicate this, but it’s actually simple.”
4. Algorithmic pacing
This content’s rhythm is designed for machines, not readers.
Short sentences. Frequent line breaks. Visual spacing optimized for scrolling rather than reading. Momentum is prioritized over meaning, and the goal is to keep the reader moving before reflection has a chance to set in. The voice might even sound natural, but the pacing is artificial.
You can see it in:
one-sentence paragraphs stacked back to back
ideas broken mid-thought to maintain motion
content that’s easy to scroll but hard to recall
5. Engagement traps
There’s a running joke about Reddit: if you want the right answer, don’t ask the question—post a confidently wrong answer instead. Within minutes, people rush in to correct you—and in the process, provide the accurate information you were looking for.
The same dynamic shows up in content framed to provoke response rather than clarity, often in lines like:
“Most advice on X is wrong.”
“People aren’t ready for this conversation.”
“This changes everything.”
These mechanisms rarely appear in isolation. They stack and reinforce one another.
Once momentum takes over, a narrative can spread with remarkable speed, regardless of how grounded it is in reality.
Some stories are better suited to this machinery than others. One, in particular, activates these mechanisms with remarkable efficiency.
The “AI is going to replace you” Narrative
AI isn’t framed as a technology that will reshape certain kinds of work over time. It’s framed as an approaching force — a wave you either ride or drown under.
You’ve seen it everywhere:
“Your job won’t exist in five years.”
“Adapt fast or be obsolete.”
“If you’re not using AI for everything, you’re already falling behind.”
The emotional front-loading is immediate. Fear arrives before understanding. The question isn’t what will change, but whether you’ll survive it.
From there, identity locking takes over. The audience is sorted into categories: the enlightened early adopters, the doomsday advocates, and the naïve laggards. To question the framing is to reveal yourself as out of touch. Agreement becomes a marker of intelligence and foresight. Skepticism becomes self-sabotage.
At the same time, complexity collapses into certainty. Entire professions are declared finished in a sentence. No distinction between tasks and roles. No discussion of adoption curves, organizational friction, regulation, or human preference. A messy, uneven transformation is flattened into a single, confident conclusion: replacement is inevitable.
Half-truths do much of the work here. Yes, AI can automate tasks. Yes, some jobs will change dramatically. But crucial context is consistently missing: where AI struggles, how incentives shape adoption, how institutions respond, how long transitions actually take.
What’s also left out is how fragile the underlying reality still is: AI companies struggling to turn profits, slowing marginal gains in new models, and no real consensus on what AGI even means.
Big numbers reinforce the message. Exponential growth. Trillion-dollar markets. But beneath the spectacle, the tangle of AI deals among the same handful of tech giants suggests that what we may be seeing is, in fact, a bubble:
So let’s get this straight: OpenAI is now taking a 10% stake in AMD, while Nvidia is investing $100 billion in OpenAI; and OpenAI also counts Microsoft as one of its major shareholders, but Microsoft is also a major customer of AI cloud computing company CoreWeave, which is another company in which Nvidia holds a significant equity stake; and by the way, Microsoft accounted for almost 20% of Nvidia’s revenue on an annualized basis, as of Nvidia’s 2025 fiscal fourth quarter.
And most of what circulates on this topic is delivered with algorithmic pacing by influencers and media channels monetizing uncertainty.
So the result isn’t an informed public conversation about technology and work. It’s a narrative that offers clarity through fear and belonging through compliance.
What matters isn’t whether the claim ultimately proves true or false, but how believing it reshapes behavior in the meantime. When inevitability becomes the frame, people begin to abdicate their agency.
Gradually, a dangerous belief settles in: “there isn’t time to understand things deeply—only time to react.”
Confidence begins to erode. Taste and intuition are treated as unreliable. Even lived experience starts to feel less relevant.
For creatives, the question slowly shifts from “What do I want to make?” to “How do I stay relevant?” Just spend a few minutes scrolling and the consequences become obvious. Art turns into content. Writing becomes output. Even those who resist AI adoption start positioning their work defensively.
And with that, deeper questions fade from view:
What if the problem isn’t intelligence outpacing humanity, but speed overwhelming discernment? What if the real risk isn’t that machines become more like us, but that we slowly redesign ourselves to be more like machines?
Once the “AI is going to replace you” narrative governs how people think, something essential is lost: not jobs or relevance, but the ability to decide—deliberately and collectively—how our tools should fit into human life, rather than the other way around.
Reclaiming Our Agency
Once the machinery of manipulation becomes visible, narratives like “AI is going to replace you” lose their prophetic aura and begin to look like what they are: engineered.
But awareness, while necessary, is rarely sufficient.
The pace of life, the number of commitments we carry, and the sheer volume of messages competing for our attention make it easy to slip back into autopilot. Add to that the modern disciples of Bernays—endlessly optimizing for attention and market share—and it becomes clear how easily our own sense of truth can be outsourced.
That’s why I try to be deliberate about how I spend my time and how I allow my mind to engage with the world around me.
I create boundaries to filter out as much noise as possible. No TikTok. No Instagram. No notifications on my phone. No traditional news cycle pulling my attention toward urgency without understanding.
I also choose what I engage with intentionally. Books that invite reflection rather than demand agreement. Thinkers who leave space for independent thought instead of rushing toward conclusions. And I pay close attention to incentives—if someone is sponsored by a betting platform, for instance, I know I’m not the audience they’re actually serving.
Most importantly, I cultivate slow practices—ways of working that resist speed, allow for depth, and create room to think carefully about what actually matters.
Writing is the one I return to most. It’s where I slow down enough to notice what I believe, question what I’ve absorbed, and clarify what feels true. And it’s also where I feel a responsibility—not just to myself, but to others navigating the same terrain.
Because the forces of manipulation can be powerful.
But they only work with our compliance.

