Social media, dopamine, and AI know exactly how to hook you. But they can also help you defend yourself – if you know how to use them.
The Decision Was Made Before You Made It
Do you really choose who you vote for? Or was your decision "warmed up" over weeks of memes, clicks, TikToks, and videos with captions like "this is really happening"?
Today, elections aren't won by platforms or debates – they're won by feeds. That's where, day after day, you feed your brain content that reinforces your beliefs and delivers small shots of pleasure. It's not the electoral debate that generates dopamine – it's scrolling through Facebook.
The algorithm doesn't just know what you like. It knows when you're tired, when you're afraid of inflation, and when you feel anger toward "those others." It knows which words to use so you won't turn away – and which images will make you click further.
This isn't conspiracy theory. This is the business model of platforms that live off your attention and sell it to advertisers, politicians, and organizations.
Now imagine that one candidate understands this mechanism better. They hire specialists in microtargeting and behavioral science – people who can design a campaign at the neurological level. What do you do if your candidate tells the truth but doesn't click as well? Nothing. Because you're already losing.
Your Brain Doesn't Want Truth – It Wants Confirmation
Psychological research leaves no illusions: confirmation bias isn't a bug – it's the fundamental way the mind operates. Your brain prefers to feel safe rather than be objective.
Research from Stanford University (2020) shows that over 70% of people regularly reject information that doesn't fit their views, even when it's verifiable.
This works automatically: when you see material consistent with your worldview, you stop, click, feel satisfaction. When you see something contradictory – you ignore, avoid, or comment angrily.
Enter dopamine. The reward neurotransmitter. Every confirmation of beliefs is a micro-reward – a small impulse of satisfaction that makes you want more. Like scrolling, like nicotine, like gambling.
Over time, the brain starts "starving" for confirmation. That's why we fall into opinion addiction so easily. A TikTok user who hears "they'll take everything from you" every day will, after a month, believe it as fact. Because their brain has already become comfortable with it.
And if someone tries to say "that's not true" – resistance appears. Motivated reasoning. Defense system: "that's manipulation, leftist propaganda, fake news." Instead of opening up to new data, the brain defends its addiction.
Algorithms Feed on Emotions, Not Truth
Social media algorithms weren't designed to protect democracy. They were created to keep you at the screen as long as possible.
According to a 2018 Facebook report, posts that trigger anger generate 70% more engagement than neutral messages. This means negative, controversial, often manipulated content has a better chance of going "viral" than reliable analysis.
Here's how it works (see the sketch below):
- Outrageous post = more comments
- More comments = greater reach
- Greater reach = greater influence
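
To make the loop concrete, here is a tiny, purely illustrative sketch of an engagement-weighted ranker. The posts, weights, and scoring formula are invented for this example – this is not any platform's real algorithm – but the dynamic is the same: a feed sorted by reactions keeps promoting whatever provokes the most of them.

```python
# Illustrative only: hypothetical posts and hand-picked weights,
# not any real platform's ranking code.

posts = [
    {"text": "Calm, sourced policy analysis", "comments": 40, "shares": 10, "angry": 5},
    {"text": "THEY are coming for your pension!", "comments": 400, "shares": 250, "angry": 900},
]

def engagement_score(post):
    # Assumed weights: every reaction counts as engagement, anger included.
    return post["comments"] * 1.0 + post["shares"] * 2.0 + post["angry"] * 1.5

# Sort the feed purely by engagement: the outrage post ranks first,
# reaches more people, collects more reactions, and the loop repeats.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post['text']}")
```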
The result is the creation of information bubbles – environments where you only see what you already know and like. When everyone around you "knows" that elections are rigged – you start treating it as the norm.
A 2022 Nielsen report shows that 80% of users only use social media within their own ideological bubble. And this leads to radicalization. Because when you're surrounded by "your people," opposing voices don't exist.
Politics Without Platform: Feed Is Enough
Donald Trump perfectly understood the rules of this game. In 2020, his tweet about alleged election fraud reached over 100 million users.
In Brazil, Jair Bolsonaro spread fake news through WhatsApp, reaching 48 million people daily.
In Poland? The far-right Konfederacja party. Their pandemic-related content achieved 150% more engagement than neutral materials from other parties (Brand24, 2022). Their strength was emotion, memes, controversy.
Reach > content. Feed > platform.
Today, it's not political platforms that shape election results – it's memes, shorts, and viral videos. And those who know how to design this content have real influence on the future of entire nations.
Society in Breakdown Mode
When algorithms and the brain form an alliance – the entire society suffers.
Affective polarization – emotional hatred toward "the other side" – has reached record levels. A 2022 Pew Research Center report shows that 85% of Americans believe society has never been so divided.
The effects:
- Loss of common language – we don't talk, we label
- Loss of trust – in media, science, institutions
- Normalization of radicalism – extremes become everyday
- Political violence – from comments to Molotov cocktails
Capitol 2021. Brazil 2023. These aren't accidents – they're consequences.
But There's Another Side to the Algorithm
The AI that manipulates today can also protect. But it needs to be taught a different game.
Education as Infrastructure
Finland proved that education works: where students learn to verify sources, resistance to fake news is 60% higher than in the rest of Europe. This isn't a one-time campaign. It's a system.
Tools Must Be Usable
Not everyone will click on an analysis on a fact-checking website. But most will click on a large, simple tile saying "Will they really take away my pension?" and read a short answer. Design can be democratic – or elitist. Let's choose wisely.
AI as a Local Helper
Imagine a chatbot you could text on WhatsApp: "Will they close our school?" and get a simple, verified answer. Possible? Yes. Needed? More than ever.
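
As a rough sketch of what such a helper might look like – the fact base, topic matching, and source link below are all hypothetical, and a real version would need curated local data plus a messaging integration – the key design choice is the fallback: when the bot doesn't know, it says so and points to the source instead of improvising.

```python
# Hypothetical example: a tiny "verified local answers" bot.
# FACTS would be maintained by e.g. a municipality or local newsroom.

FACTS = {
    "close our school": (
        "No closure has been decided. The proposal is on the council agenda next month.",
        "https://example.gov/council-agenda",  # placeholder source
    ),
}

def answer(question: str) -> str:
    q = question.lower()
    for topic, (text, source) in FACTS.items():
        if topic in q:
            return f"{text}\nSource: {source}"
    # The honest fallback matters more than coverage.
    return "I don't have a verified answer to that – please check the official source yourself."

print(answer("Will they close our school?"))
```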
Local Trust > Celebrity Reach
We don't need another celebrity campaign. We need a teacher, a mayor, or a priest who says: "Don't believe even me. Check for yourself." That's a message that would land.
A Culture of "I'll Check"
This isn't a trend. It's civic competence. If we don't build a society that verifies – we'll just be the audience of our own manipulation.
The Way Forward
Polarization isn't the fault of one party. It's a side effect of technology we've handed over to emotions and interests.
But if the system works – that means it can also be reprogrammed.
AI doesn't have to manipulate. It can teach, correct, protect. But that depends on us – on whether we use it as a tool or ignore its potential.
The question isn't: will AI manipulate us?
The question is: who will teach their AI to work on our behalf?