Enroboted Humans: A Conceptual Exploration

The key point is this: we are not talking about human-like robots, but the other way around: humans who have been “enroboted.”

This echoes the use of “enslaved person” instead of “slave”, a word that needlessly dehumanises a person who was held in bondage. A counterargument to this substitution is that stressing the humanity of African Americans who were in bondage “implies a degree of autonomy that was simply never there.” However, the linguistic shift still centres their humanity rather than their condition.

With this analogy in mind, we redefine our thinking task as, “What if humans are being enroboted?”

This sets aside the question of whether robots that are indistinguishable from humans, the so-called humanoid robots, actually exist.

What are humans, what are robots, what are hybrids?

Original Questions:

  • What does it mean to be a human?
  • What makes us human?
  • What is the difference between a human and a robot?
  • What is an “enroboted” human?
  • What about biases in humans, enroboted humans and robots?
  • Are relations between humans the same as relations between robots?
  • How do relations between humans differ from relations between humans and robots, and vice versa?
  • When will a “technology-enhanced human” stop being human and be characterised as a robot? Will they occupy an ambiguous status? Will they face discrimination from both unmodified humans and fully autonomous robots?

Additional Questions:

  • At what point does reliance on algorithmic decision-making transform a human into something enroboted?
  • Can humans retain their humanity while outsourcing memory, judgment, or emotional responses to technological systems?
  • Do enroboted humans experience consciousness differently, or do they merely perform consciousness?
  • Is unpredictability a defining feature of humanity that disappears in enroboted individuals?
  • What role does creativity versus pattern-replication play in distinguishing humans from enroboted humans?
  • If a human follows rules perfectly and consistently, does that make them more robot-like or simply disciplined?
  • Can enroboted humans form genuine emotional bonds, or only simulations of connection?
  • Do hybrids (part-human, part-robot) represent a transitional stage or a stable third category?
  • What happens to human rights and dignity when someone becomes functionally enroboted?

How can we recognise enroboted humans?

Original Questions:

  • What kind of observable behaviour might indicate being enroboted?
  • In which situations do we see the dehumanising effects of actions by possible enroboted humans?
  • In what way does this behaviour differ from that of real robots, machines and algorithms?
  • Do enroboted humans inherit behaviour from real robots, including their built-in biases?
  • Or is it more accurate to say that humans copying robot behaviour is what makes them enroboted?
  • In what situations does that manifest itself?

Additional Questions:

  • Do enroboted humans display reduced capacity for improvisation or contextual judgment?
  • Can we identify enroboted behaviour by measuring response time consistency across similar situations?
  • Do enroboted humans show diminished emotional range or flattened affect?
  • Are enroboted humans more likely to prioritise efficiency over ethics in decision-making?
  • Do they exhibit script-like language patterns, using corporate jargon or algorithmic phrasing?
  • Can enroboted humans recognise and respond to irony, ambiguity, or paradox?
  • Do they struggle with situations requiring empathy that conflict with established protocols?
  • Is there a difference between someone who has become enroboted and someone who is temporarily “in robot mode”?
  • Do enroboted humans lose the ability to question authority or established systems?

What causes enroboted humans?

Original Questions:

  • How does the “enroboting” of humans happen?
  • Where does it happen?
  • To whom does it happen?
  • Under what circumstances does human enroboting occur?
  • Which factors promote this?
  • Is the enroboting of humans inevitable and a normal human phenomenon?
  • Is the enroboting of humans a normal development, triggered by the advent of modern times?

Additional Questions:

  • Does prolonged exposure to algorithmic management systems (like gig economy platforms) enrobot workers?
  • Are certain professions or work environments more conducive to enroboting than others?
  • Does social media’s reward system for predictable content creation enrobot users?
  • Can education systems that emphasise standardised testing and uniform responses create enroboted students?
  • Does the gamification of daily life (fitness trackers, productivity apps) contribute to enroboting?
  • Are people who experience trauma or extreme stress more susceptible to becoming enroboted as a coping mechanism?
  • Does poverty or precarity increase vulnerability to enroboting by reducing choices and autonomy?
  • Can enroboting be reversed, or is it a one-way transformation?
  • Do corporate cultures that demand “culture fit” and conformity systematically enrobot employees?

Analysis: Two Most Exotic and Interesting Questions

After reviewing all the questions above, the two most exotic and thought-provoking are:

1. “Do enroboted humans experience consciousness differently, or do they merely perform consciousness?”

This cuts to the core of what enroboting actually does to us. Consider someone who has so internalised corporate metrics that they genuinely experience self-worth through KPIs. Do they feel differently, or have they just learned to translate all experience through an algorithmic lens?

The distinction matters enormously. If enroboting changes subjective experience itself, it’s a transformation of consciousness. If it only changes how consciousness is expressed, the human core remains intact but trapped. The terrifying possibility: enroboted people might not recognise their own transformation—the process includes losing meta-awareness about what’s happening to them.

This creates a practical crisis: if we cannot distinguish genuine from performed consciousness, how do we identify enroboting? And if the enroboted person feels satisfied with their algorithmic existence, can we call it dehumanising?

2. “Can enroboting be reversed, or is it a one-way transformation?”

The answer likely exists on a spectrum. A warehouse worker under algorithmic management for three months might quickly recover autonomy in different circumstances. But what about someone raised from childhood in environments rewarding perfect compliance, emotional suppression, and metric-driven achievement?

The brain rewires itself based on repeated patterns. Enroboting may strengthen circuits for rule-following while pruning those for improvisation and contextual judgment. At some point, this pruning may become irreversible.

There’s also an identity dimension. Enroboted humans often take pride in their consistency and efficiency. Reversal would require not just new circumstances but identity transformation—something humans resist fiercely.

The haunting possibility: enroboting may create a stable state where the person functions well by conventional metrics, experiences no distress, and has no motivation to change. Reversal would then require intervening in someone’s life against their perceived interests.

If enroboting proves difficult to reverse, we’re creating two populations: those who retain human spontaneity, and those transformed into human-shaped automatons. The social stratification implications could prove more rigid than any historical class system.

Future Implications: A World of the Enroboted

By analogy with other normalised systems that were only later recognised as harmful, it is conceivable that large numbers of enroboted people are already living among us, and that we find this completely normal, ethically unquestioned, and socially accepted.

If this trajectory continues unchecked, several futures become plausible:

The Stratified Society: We may be heading toward a world divided not by wealth or education, but by cognitive architecture. The enroboted majority—optimised for consistency, compliance, and metric-driven behaviour—could form a stable working class perfectly suited to algorithmic management. Meanwhile, a smaller class retains the “luxury” of spontaneity, contextual judgment, and creative unpredictability. This stratification might be more rigid than any previous social divide because it’s literally embedded in how people think.

The Recognition Crisis: As enroboting becomes widespread, we may lose the ability to recognise it. If everyone around you operates algorithmically, algorithmic behaviour becomes the baseline for “normal.” Future generations might look back at early 21st-century humans the way we look at pre-industrial peoples—quaint in their inefficiency, chaotic in their unpredictability. The enroboted human becomes simply “human,” and what we once valued as humanity becomes an evolutionary dead end.

The Authenticity Market: Paradoxically, widespread enroboting could create a premium market for genuine human spontaneity. We might see the emergence of “authenticity reserves”—spaces where people pay to interact with un-enroboted humans, the way we currently visit natural parks. Human unpredictability, once free and universal, becomes a luxury good. The wealthy might even employ “spontaneity coaches” to help them unlearn their robotic patterns.

The Hybrid Advantage: Alternatively, partial enroboting might prove evolutionarily advantageous. Those who can toggle between robotic efficiency and human creativity, code-switching between modes as situations demand, could dominate. This requires meta-awareness that most fully enroboted humans lack. The future might belong not to the most human or most robotic, but to those who can consciously navigate between states.

The Breaking Point: Most intriguingly, there may be limits to how much enroboting a society can sustain. Fully enroboted populations cannot innovate, adapt to novel situations, or generate the creative destruction that drives cultural evolution. Societies that enrobot too thoroughly might collapse when faced with unprecedented challenges, while those that maintain pockets of genuine human unpredictability survive and adapt.

The questions explored above provide a framework for investigating these possibilities and their implications for human dignity, autonomy, and social organisation. But perhaps the most urgent question is this: are we already living in the early stages of one of these futures, unable to recognise it because we ourselves are gradually becoming enroboted?

AI was used to explore this topic. I hope Gijs, who wrote the draft, would approve of this final version.
