Why the best way to understand the self is to build a robot one – Aeon

Researchers, philosophers and roboticists have recently argued that attempting to build a sense of self in embodied machines is a powerful way to test and refine theories of human selfhood. Drawing on work from William James (1890) through Daniel Dennett (1991) to contemporary robotics experiments, this approach treats the self as a structured, testable model rather than a mystical inner homunculus. Labs have shown that robots can learn body maps, predict the consequences of their actions and even adopt altered body representations in rubber-hand-style tests. These results sharpen questions about which aspects of human selfhood require biology and which can be re-created by sensory, motor and cognitive systems.

Key takeaways

  • William James (1890) framed the self as having two sides: the “I” that experiences and the “me” that is experienced; this duality still structures modern debate.
  • Neurological data link specific deficits (e.g., right temporoparietal junction damage) to disturbed body ownership, and insular cortex damage to depersonalisation, showing selfhood is multi-component and brain-distributed.
  • Developmentally, infants exhibit a basic self/other boundary and agency early, while a narrative, persistent self emerges around ages four to five with language and memory maturation.
  • Roboticists use methods such as motor babbling and predictive models to let machines learn body morphology and agency; Bongard, Zykov and Lipson (Science, 2006) demonstrated continuous self-modeling enabling adaptive locomotion.
  • Robots have reproduced phenomena like the rubber-hand illusion and mirror-based self-recognition by correlating proprioception with vision, validating mechanistic accounts of body ownership and agency.
  • Disembodied large language models (LLMs) can mimic self-referential talk but, lacking embodiment, likely role-play subjectivity rather than instantiate it.
  • Critics such as Anil Seth argue that biological features (metabolism, autopoiesis) may be central to subjective experience; this remains contested.

Background

Philosophical and empirical inquiries into selfhood converge on the idea that the self is not a single, localized spectator sitting inside the skull. Daniel Dennett argued in 1991 that positing a little inner perceiver leads to regress and does not solve the problem of consciousness. Rather than deny selves entirely, many contemporary thinkers treat the self as a model or organisation implemented by interacting brain and body systems — an account developed further by Thomas Metzinger and others in the early 2000s.

Neuroscience supports a distributed view: lesions near the temporal–parietal junction can cause patients to deny ownership of limbs, insular damage can alter interoceptive feelings and depersonalisation, and changes in frontal or temporal regions can affect continuity of identity or perspective-taking. Developmental psychology complements this by showing how components of selfhood appear across infancy and childhood: basic body-boundary and agency cues are present very early, while autobiographical narrative and adult-like temporal continuity consolidate with memory and language skills around school age.

Main event

Robotics offers a synthetic route to test hypotheses about self-construction. A common experimental strategy is motor babbling: the robot produces spontaneous movements and uses the resulting sensorimotor correlations to infer the shape and capabilities of its body. In a high-profile demonstration, Bongard, Zykov and Lipson (2006) used evolutionary algorithms and continuous self-modeling to allow a star-shaped robot to discover and exploit its own morphology for locomotion, recovering function after damage.
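The motor-babbling strategy can be sketched in a few lines. In this toy example (the two-link planar arm, link lengths and function names are illustrative assumptions, not code from any cited lab), a simulated robot issues random joint commands, observes where its end effector lands, and fits a forward self-model by least squares; the learned model then predicts the consequences of novel commands:

```python
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 0.5, 0.4  # hypothetical link lengths of a toy 2-joint planar arm

def forward_kinematics(t1, t2):
    # "Ground truth" body that the robot does not know in advance
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.stack([x, y], axis=-1)

def features(t1, t2):
    # Basis the learner searches over; linear weights can recover link lengths
    return np.stack([np.cos(t1), np.sin(t1),
                     np.cos(t1 + t2), np.sin(t1 + t2)], axis=-1)

# Motor babbling: random joint commands paired with observed hand positions
t1 = rng.uniform(-np.pi, np.pi, 200)
t2 = rng.uniform(-np.pi, np.pi, 200)
observed = forward_kinematics(t1, t2)

# Fit the forward self-model by least squares on the babbling data
W, *_ = np.linalg.lstsq(features(t1, t2), observed, rcond=None)

# The learned model now predicts consequences of novel commands
n1 = rng.uniform(-np.pi, np.pi, 50)
n2 = rng.uniform(-np.pi, np.pi, 50)
err = np.abs(features(n1, n2) @ W - forward_kinematics(n1, n2)).max()
print(f"max prediction error on novel commands: {err:.2e}")
```

Real systems replace the hand-picked basis with learned nonlinear models, but the logic is the same: the body map is inferred from sensorimotor correlations, not programmed in.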

Researchers have taken this further by combining proprioception, touch and vision to let robots segment what parts of sensory streams belong to themselves versus the external world. Humanoid platforms such as iCub have been trained to incorporate an artificial hand into their body model under synchronous visuotactile stimulation, producing behaviour and internal model changes analogous to the human rubber-hand illusion. In parallel, predictive models derived from comparator theories of agency have enabled robots to distinguish their own mirror reflections from another robot’s movements by matching predicted and observed consequences of actions.
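A comparator model of mirror self-recognition can be illustrated with a minimal sketch (simplifying assumptions throughout: an identity forward model, one-dimensional motor streams and an arbitrary threshold, not any published architecture). The agent correlates the predicted consequences of its own commands with an observed stream and attributes the stream to itself only when the match is high:

```python
import numpy as np

rng = np.random.default_rng(1)

def comparator_self_test(commands, observed, threshold=0.9):
    # Comparator: compare predicted consequences of one's own motor
    # commands with the observed sensory stream; a close match => "me"
    predicted = commands  # identity forward model in this toy example
    r = np.corrcoef(predicted, observed)[0, 1]
    return r > threshold, r

commands = rng.normal(size=500)                    # the robot's own motor stream
mirror = commands + 0.05 * rng.normal(size=500)    # reflection tracks the commands
other = rng.normal(size=500)                       # another agent, independent motion

is_self_mirror, r_mirror = comparator_self_test(commands, mirror)
is_self_other, r_other = comparator_self_test(commands, other)
print(f"mirror: self={is_self_mirror} (r={r_mirror:.3f}); "
      f"other robot: self={is_self_other} (r={r_other:.3f})")
```

The same match/mismatch signal serves double duty in comparator theories: it grounds both the sense of agency ("I caused that") and self/other discrimination in front of a mirror.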

Groups have also started to build episodic-like memory for robots using generative retrieval models that reconstruct past events from cues rather than replaying raw logs. Coupling such memory systems with minimal self-models gives machines a primitive form of persistence: they can reference prior episodes when planning and thereby display a rudimentary continuity over time. Other projects have mapped simplified robot morphologies onto human partners to support imitation and joint tasks, probing the computational basis of perspective-taking and social cognition.
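The contrast with raw log replay can be sketched as a toy cue-addressed store (the class, its compression scheme and the vectors below are hypothetical illustrations, not a described lab system): episodes are compressed to low-dimensional gists at storage time, and a noisy partial cue later retrieves the best-matching gist rather than a verbatim recording:

```python
import numpy as np

rng = np.random.default_rng(2)

class EpisodicStore:
    """Toy cue-based episodic memory: compress episodes into gists,
    then reconstruct the best-matching episode from a partial cue."""
    def __init__(self):
        self.keys, self.gists = [], []

    def store(self, key, episode):
        self.keys.append(key)
        # Compress: keep only a low-dimensional gist, not the raw log
        self.gists.append(episode.mean(axis=0))

    def recall(self, cue):
        keys = np.array(self.keys)
        sims = keys @ cue / (np.linalg.norm(keys, axis=1) * np.linalg.norm(cue))
        return self.gists[int(np.argmax(sims))]

mem = EpisodicStore()
mem.store(np.array([1.0, 0.0]), rng.normal(loc=5.0, size=(10, 3)))   # episode A
mem.store(np.array([0.0, 1.0]), rng.normal(loc=-5.0, size=(10, 3)))  # episode B

# A noisy partial cue still retrieves the matching gist, not a raw replay
gist = mem.recall(np.array([0.9, 0.1]))
print(gist.round(1))
```

Generative versions go further by decoding the gist back into a full reconstructed episode, which is what makes the recall reconstructive rather than a lookup.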

Analysis & implications

These synthetic experiments do two important things: they operationalise otherwise vague philosophical claims, and they test whether proposed mechanisms are sufficient for the phenomena we label as selfhood. If a system with sensors, effectors and predictive architectures can reproduce body ownership, agency and even some continuity across time, that supports the hypothesis that many features of the self are emergent consequences of embodied information processing rather than inexplicable inner essences.

At the same time, robotics highlights limits. Many robot systems are narrowly tuned to benchmark tasks and fail outside them; integrating multiple self-related subsystems into a robust, real-time cognitive architecture remains a major engineering and scientific challenge. Moreover, disembodied LLMs can convincingly talk about selves, producing an illusion of subjectivity without the sensorimotor grounding that seems central to embodied accounts of experience.

There is also a deep theoretical divide about whether replicating functional markers of selfhood suffices to explain subjective experience (what philosophers call qualia). Anil Seth and others emphasise biological properties — metabolism, self-maintenance and specific subcellular processes — as potentially critical. Opposing views, like J. Kevin O’Regan’s sensorimotor contingency theory, shift the explanatory burden toward structured interaction patterns between agent and environment, which robots can reproduce if equipped appropriately.

Comparison & data

Feature               | Minimal human/animal self                   | Adult human self                                 | Robot implementations
Boundary (self/other) | Early infancy; tactile/proprioceptive cues  | Stable bodily boundary integrated with narrative | Motor babbling + tactile/visual correlations
Agency                | Detected via sensorimotor prediction        | Integrated with intentions and plans             | Comparator models, predictive learning
Persistence in time   | Emerges with episodic memory                | Autobiographical narrative (age 4–5 onward)      | Reconstructive episodic-like memory models
Biological substrate  | Present (metabolism, autopoiesis)           | Central to debates on experience                 | Absent; mechanical/electrical maintenance only

The table summarises convergences and gaps. Robots reproduce many computational features linked to selfhood (boundary, agency, memory scaffolding), but they lack biological maintenance systems that some theorists argue are essential to subjective experience.

Reactions & quotes

Expert voices illustrate the debate and its stakes.

“There is no single inner ‘I’ tucked away inside the brain.”

Daniel Dennett (philosopher)

Dennett’s remark underscores why a localisation approach to the self is widely rejected: positing a central observer produces an infinite regress and fails to explain how diverse brain systems yield unity. This motivates the model-based view of selfhood that roboticists are testing by building distributed, interacting systems.

“Experience is shaped by the sensorimotor contingencies our bodies enable.”

J. Kevin O’Regan (psychologist)

O’Regan’s perspective reframes subjective feel as the product of embodied interactions; it is precisely this thesis that robotics directly operationalises by reproducing those contingencies in machines with sensors and effectors.

“Synthetic self-models allow us to falsify or refine mechanistic claims about body ownership and agency.”

Robotics researchers (summary)

Practitioners report that robots often validate core theoretical claims, but also reveal where theories are incomplete, for example when robots require additional predictive layers or memory reconstructions to match human-like behaviour.

Unconfirmed

  • Whether current or future LLMs (including the GPT-5 statement referenced in the original article) possess any genuine subjective experience is unresolved and remains a contested empirical and philosophical claim.
  • Some described lab results were reported as “soon-to-be-published” or in-progress; independent replication and peer review may be limited or pending for those specific experiments.
  • The precise causal role of cellular metabolism, mitochondria-level processes or autopoiesis in producing subjective qualia has not been established and remains theoretical.

Bottom line

Building robot selves is a productive scientific strategy: it turns philosophical ideas about selfhood into concrete architectures and experiments, allowing researchers to test sufficiency and necessity claims. Robots have already replicated many markers of the minimal self—body maps, agency signals and adaptive body ownership—strengthening the view that a substantial portion of selfhood is an organised, embodied model.

Nevertheless, important gaps remain. Integration into a full, robust cognitive architecture and the question of whether biological organization contributes something irreducible to experience are open. For scholars and engineers alike, the synthetic approach does not settle every metaphysical question about the first-person perspective, but it provides rigorous tools for narrowing the space of plausible answers and for guiding targeted empirical work.
