Saved from Thinking? Living in an Algocracy
- Mar 25
Updated: Apr 11
by Florence Kim

The other day, I caught myself doing something I swore I’d never do: I opened ChatGPT to compare the programs of the Democrats and the Republicans. I wanted numbers—on education spending, healthcare, social safety nets. Within seconds, it spat out figures, bold and convincing. But something felt off. I double-checked a few stats. They were wrong.
And yet, it had felt right. Seamless. Credible. Efficient. It saved me time. It almost saved me from thinking.
Outsourcing Our Minds
That’s when it hit me: this is the beginning of what I’d call an algocracy—a system where power lies not with the people or even just with elected governments, but with those who produce or control the algorithms that mediate our choices, our knowledge, our very perception of the world.
We’re drifting into a world where AI doesn’t just serve power. It is power.
And I asked myself: do we leave AI governance to those who build it or to those who use it? Either way, we’re not in the room where the code is written, yet we live by its rules. And that room is where the decisions are made: what we see, what we believe, what we vote for, even what we write. That’s not a neutral tool. That’s an invisible regime.
The Algorithm Wears No Uniform
An authoritarian regime is bad enough when it has borders, a flag, a face. But an algorithmic regime? That’s borderless, faceless and—worst of all—dressed up as freedom.
It whispers: you’re in charge. You choose what to read, click, watch. Except you don’t. The choices have been chosen. The beliefs have been reinforced. The questions were never asked.
We’re told we’re more informed than ever. But are we really analyzing, debating, learning—when the algorithm does it for us?
Critical Thinking: A Civic Superpower
This is where critical thinking becomes a civic superpower. Media literacy isn't optional anymore. It's a necessity for survival in a digital world where fake certainty is served with perfect grammar and flawless confidence. We need to teach young people—and remind ourselves—how to interrogate sources, trace logic and detect manipulation.
And we need to stop romanticizing convenience. If everything is frictionless, nothing is questioned. And if nothing is questioned, power wins.
Policy Innovation For A Post-Truth Era
Because the new "medium" between citizens and governments is not the town hall or the newspaper, but a handful of for-profit tech giants fine-tuning the feed. The digital civic space? It's becoming a vacuum. We’ve called it public. But it’s private. We’ve called it civic. But it’s coded.
We need urgent policy innovation:
- A global regulatory framework to define and monitor algorithmic influence on democracy, with real teeth.
- An international body tasked with identifying and prosecuting algorithm-enabled crimes—think mass disinformation campaigns, algorithmic discrimination or voter suppression.
- Public investment in media literacy education, starting from early schooling and continuing throughout life.
- Mandatory transparency on how recommendation engines work and what data they use.
- Digital impact assessments for all large-scale AI deployments, just as we do with environmental and human rights risks.
We must also fund independent watchdogs—human-led, not AI-prompted—to monitor AI’s impact on civic life and foster a new generation of thinkers who can challenge both tech and authority.
Reclaiming Authentic Intelligence
Let’s be smarter than AI. Let’s use what we have left of authentic intelligence to feed it, rather than to be fed by it. The algocracy is dangerous not because it seizes power violently, but because it hands it back to us with a smile, then quietly takes it away. Recognizing it is step one. Resisting it—with tools only humans still have—is step two.
I asked ChatGPT to comment on this blog: “AI can calculate. But it can’t care. It can mimic truth. But it can’t defend it.”
Let’s not be saved from thinking. Let’s start thinking again.
This article is original work by Florence Kim. If you wish to quote or reference it, please attribute accordingly. For direct quotes, please cite '[article title]' by Florence Kim, aidvocacy.org, [consultation date]. For online references, kindly include a link to the original.