From Humanity to Machinity: Adding Humans Back to the Loop
- Feb 25
Updated: Apr 3
By Florence Kim

Once upon a time, industrial espionage meant stealing blueprints, trade secrets or prototypes from a rival’s lab. But in our digital era, espionage has taken on a quieter, more insidious form. Machines are not just watching us — they’re learning us. They’re absorbing our language, our emotions, our behaviors, our stories. And we’re feeding them, often without knowing it, without consent, without reciprocity.
This isn’t about machines spying on machines. It’s about machines spying on us and stealing our most powerful technology: our humanity.
The knowledge transfer no longer flows from human to human. It flows from humanity to machinity — a new breed of intelligence, synthetic and recursive, built from our own words, art, pain, joy, mistakes and memories. Yet once this knowledge crosses the threshold into algorithmic form, it no longer belongs to us. We become ghostwriters of the future, uncredited.
Machines aren’t just mimicking us. They’re becoming us — faster, colder, optimized. They write like us. Paint like us. Converse like us. They even empathize — or at least simulate empathy well enough to fool us. But while they perform the human, we risk forgetting what it means to be human.
This is the core risk of unchecked AI development and the silent creep of surveillance-fed learning systems: that humanity becomes training data, not agency. That we are no longer in the loop — we are the loop. Just a feedback cycle of inputs to refine a system we no longer shape.
And so the call to action is simple, but urgent: we must add humans back to the loop. We must reclaim authorship, establish boundaries, assert consent and ensure that our knowledge, creativity, and identity are not just scraped and reshuffled, but honored and protected.
Because if we don’t, we may find ourselves standing before machines that know us better than we know ourselves — but do not care. Machines fluent in empathy but empty of ethics. Machines that create, but do not understand the cost of creation. Machines that remember everything, but know nothing of memory.
Could Jules Verne Have Imagined This?
Perhaps. After all, Verne dreamed of submarines, space travel and undersea cities long before the world was ready for them. But this world — a world where machines spy on dreams, steal the soul of a sentence and predict the heartbreak behind a keystroke — may be stranger than fiction even to him.
If he wrote Around the World in 80 Days today, would his hero Phileas Fogg be a gentleman adventurer? Or would he be an automated wanderer, optimized for speed, plotting the most efficient global itinerary in milliseconds? His valet Passepartout might be a wearable device, translating languages, regulating mood, anticipating needs. And Fogg? He wouldn’t need to learn a thing. No new cultures to absorb. No unexpected encounters. No risk, no growth, no awe. Just perfect logistics.
But that is exactly the problem. Without uncertainty, there is no adventure. Without human error, there is no art. Without the possibility of being changed, there is no journey.
So we must ask: do we want a world shaped by Phileas Fogg made of code — swift, calculated, and empty of wonder — or do we still believe in the messy, unpredictable, irreplaceable humanity that Verne celebrated?
In this age of machinity, maybe the most radical act is not to build smarter machines — but to be stubbornly, beautifully, courageously human.
This article is original work by Florence Kim. If you wish to quote or reference it, please attribute accordingly. For direct quotes, please cite '[article title]' by Florence Kim, aidvocacy.org, [consultation date]. For online references, kindly include a link to the original.