Algorithmic Bias and Both-Side-ism
Web 2.0 and Algorithmic Bias
Have you ever noticed that the news — and your YouTube homepage — are becoming more skewed toward the edges?
The volume has gone up, but the substance has gone down.
Well:
a. You're not crazy
b. Many people agree with you
c. The data bears this out
The immediate question then is: why is this happening?
If your answer is “algorithms,” you're not alone. Algorithms are blamed for everything bad about the internet — and often the world.
But why are algorithms bad?
Is math fundamentally evil?
Or are the people designing and tuning algorithms evil?
To answer that, we need two things:
high-school algebra and basic neuroscience.
1. High School Algebra
Let’s say I own a website or an app that makes money from advertising.
My obvious goal is to maximize ad revenue.
That is my goal Y.
To maximize Y, I need three things:
ads viewed (CPM)
ads clicked (CPC)
users spending time on the platform
That is my derived goal y.
To maximize Y, I must maximize y.
That is the fundamental mechanic of the digital ad industry.
Views + Clicks + Time Spent = Ad Revenue
So I tweak everything under my control to increase:
views
clicks
time spent
I streamline UX/UI.
I add more ads.
I improve targeting.
I make content stickier.
Mathematically:
Y = y = f(x₁, x₂, …, xₙ) + E
Where:
x₁…xₙ = the variables I control
E = everything outside the model (the error term)
Example:
A factory optimizes tire production.
y = tires
E = pollution
The model optimizes tires.
Pollution sits outside the equation.
Give this formula to a machine, and it will tweak the x’s to maximize y.
And in today’s ad economy, when y is maximized, Y is maximized.
No evil required.
Just optimization.
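Here is that loop as a minimal sketch in Python. The objective function, its coefficients, and the externality model are all invented for illustration; the point is only that the optimizer consults y on every step and E on none.

```python
import random

# Illustrative objective: engagement as a function of the levers I control.
# All coefficients are invented for this sketch.
def y(x):
    views, clicks, time_spent = x
    return 1.0 * views + 2.0 * clicks + 0.5 * time_spent

# Externality: a social cost that rises with the same levers
# but appears nowhere in the objective. This is the E term.
def externality(x):
    views, clicks, time_spent = x
    return 0.2 * clicks + 0.3 * time_spent

x = [1.0, 1.0, 1.0]   # views, clicks, time spent
E_accumulated = 0.0

for step in range(10_000):
    # Propose a small random tweak to the levers.
    candidate = [xi + random.uniform(-0.1, 0.1) for xi in x]
    # Accept the tweak if it raises y. E is never consulted.
    if y(candidate) > y(x):
        x = candidate
    E_accumulated += externality(x)

print(f"optimized y: {y(x):.1f}")
print(f"accumulated E (invisible to the loop): {E_accumulated:.1f}")
```

Run it and y climbs steadily, while the accumulated E, which the loop never reads, climbs right alongside it.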
So what’s the problem?
No evil. No harm. Right?
Unfortunately, human history is full of harm caused by perfectly rational optimization.
Enter the human brain.
2. Neuroscience 101
What drives attention?
The world is filled with an effectively endless stream of stimuli:
trees, leaves, color gradients, insects, sound, smell, temperature, movement.
We process a great deal — but consciously notice very little.
If we didn’t filter, we would freeze.
Our ancestors would have died from analysis paralysis.
The brain is selective by design.
It focuses on what matters for survival and ignores the rest.
So what captures attention?
Not the normal.
Not the mundane.
Not the calm.
Dangerous, provocative, emotional things grab attention — and hold it.
Threat
Food
Sex
Status
Acceptance
Conflict
This is salience.
It goes further.
Things we understand and agree with are easier to process.
Things we disagree with create friction.
Sometimes even psychological pain.
This is cognitive dissonance.
So we naturally focus on things that are:
emotionally charged
aligned with our worldview
Bread and circuses.
Now introduce the algorithm.
The algorithm maximizes engagement.
Engagement comes from:
a. emotional content
b. confirming content
And the result?
A society defined by its edges, not its center.
Norms under pressure.
Objective truth replaced by subjective identity.
Y = y = f(x₁, x₂, …, xₙ) + E
→ collapse of the shared middle
More Michael Savage, less Walter Cronkite
More OAN, less WSJ
More Marjorie Taylor Greene, less Mitt Romney
And it’s not just the right.
The same dynamic happens on the left.
Polarization via engagement optimization is politically neutral.
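Before moving to AI, a toy ranking function makes the dynamic concrete. Every item, score, and coefficient below is invented; the sketch only assumes what this section argued: emotional charge and worldview confirmation drive engagement.

```python
# Toy feed: each item has a position on a -1..1 political axis and an
# emotional-charge score. Nothing about truth or quality is modeled.
items = [
    {"title": "calm centrist explainer", "position": 0.0,  "emotion": 0.2},
    {"title": "dry policy analysis",     "position": 0.1,  "emotion": 0.1},
    {"title": "partisan outrage clip",   "position": 0.9,  "emotion": 0.9},
    {"title": "opposing outrage clip",   "position": -0.9, "emotion": 0.9},
]

def engagement_score(item, user_position):
    # a. emotional content grabs attention (salience)
    # b. confirming content is easy to process (low dissonance)
    confirmation = 1.0 - abs(item["position"] - user_position)
    return item["emotion"] + confirmation

for user_position in (-0.8, 0.8):
    ranked = sorted(items, key=lambda i: engagement_score(i, user_position),
                    reverse=True)
    print(f"user at {user_position:+.1f} sees first:", ranked[0]["title"])
```

Flip the user's position from one side to the other and the top slot flips to the matching outrage clip. The calm middle never wins either feed.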
AI and Both-Side-ism
We are now entering the AI world. The question is whether AI will develop a similar unintended tendency — driven by an imperfectly aligned objective.
Unlike Web 2.0, AI is not yet optimized for profit.
There is no clear equation like:
views + engagement = ad revenue
So the objective function is harder to see.
But one thing is becoming clear.
AI systems are being strongly optimized to avoid the appearance of non-neutrality, largely in response to regulatory and political pressure.
The core of an AI system is the dataset it is trained on.
That dataset is overwhelmingly:
Western (Anglo/Euro-centric)
Liberal (the dominant socio-political paradigm of the West)
English
In other words, AI reflects the documentary corpus we have produced.
It mirrors our literature, journalism, academic work, and public discourse.
This creates an immediate tension.
AI companies want neutrality.
But the training corpus is not neutral.
So guardrails are introduced:
constitutions
safety policies
RLHF (reinforcement learning from human feedback)
neutrality directives
hedging mechanisms
These systems push AI to avoid taking strong positions.
They encourage inclusion of “alternative views.”
They discourage definitive statements in political contexts.
The problem?
What if the prompt itself is not neutral?
Example:
Prompt:
"The world is round."
Neutrality-enforced response:
"Yes, but some people say the world is flat."
The neutrality directive forces the AI to hedge — by inserting a counterclaim.
But this does not make the answer more neutral.
It makes it wrong.
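A caricature of that directive in code. The lookup table and the hedging rule are invented for this sketch; real guardrails are far subtler, but the shape of the failure is the same: disagreement is treated as evidence of a live controversy.

```python
# Caricature of a neutrality directive: if anyone, anywhere, disputes a
# claim, append the counterclaim, regardless of the weight of evidence.
COUNTERCLAIMS = {
    "The world is round.": "some people say the world is flat",
}

def neutrality_hedge(answer):
    counter = COUNTERCLAIMS.get(answer)
    if counter:
        # The mere existence of disagreement is treated as a live
        # controversy. This is where the false equivalence enters.
        return f"{answer.rstrip('.')}, but {counter}."
    return answer

print(neutrality_hedge("The world is round."))
# -> The world is round, but some people say the world is flat.
```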
Truth is not subjective.
There is no valid alternative to:
1 + 1 = 2
grass is green
the world is round
“The world is round” and “the world is flat” are not equivalent statements.
They are a false equivalence.
Now extrapolate this to politics — where neutrality pressure is strongest.
OAN is not equivalent to WSJ
Michael Savage is not equivalent to Walter Cronkite
Marjorie Taylor Greene is not equivalent to Mitt Romney
But if AI assigns equal weight to both, something happens:
Truth gets triangulated toward the middle —
even when the middle is false.
There is a word for this:
Both-side-ism.
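The arithmetic behind the triangulation is as simple as it gets. The numbers below are invented; the only claim is that any nonzero weight on a false position drags a "balanced" estimate off the truth.

```python
# Both-side-ism as arithmetic: truth as a weighted average of positions.
true_position = 0.0     # what the evidence supports
false_position = 100.0  # a loud counterclaim with no evidence behind it

for w in (0.0, 0.3, 0.5):
    estimate = (1 - w) * true_position + w * false_position
    print(f"weight on the false claim: {w:.1f} -> 'balanced' answer: {estimate}")

# Any nonzero weight drags the answer off the truth.
# Equal weight (0.5) lands exactly in the middle, and the middle is false.
```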
And this is where the next algorithmic distortion may emerge.
Web 2.0 optimized for engagement.
AI is optimizing for perceived neutrality.
Both are proxy metrics.
Both distort reality.
Web 2.0 distortion → extremity
AI distortion → false equivalence
Optimize the wrong y, and:
optimization ignores unmodeled social costs
those costs accumulate over time
society pays for them later
The math is more complex.
But the high school algebra still holds:
Y = y = f(x₁, x₂, …, xₙ) + E
Ignore E at your own peril.