What happens when robots begin to ask ‘who am I’?

We need a sense of self not only when we are taking action, but also when we are anticipating the consequences of potential actions, whether our own or someone else’s.

Given that we want to incorporate robots into our social world, it’s no wonder that creating a sense of self in artificial intelligence (AI) is one of the ultimate goals for researchers in the field. If these machines are to be our carers or companions, they must inevitably have an ability to put themselves in our shoes. While scientists are still a long way from creating robots with a human-like sense of self, they are getting closer.

Researchers behind a new study, published in Science Robotics, have developed a robotic arm with knowledge of its physical form – a basic sense of self. While this is still far from a human-like sense of self, it is nevertheless an important step.

There is no perfect scientific explanation of what exactly constitutes the human sense of self. Emerging studies from neuroscience show that cortical networks in the motor and parietal areas of the brain are activated in many contexts where we are not physically moving. For example, hearing words such as “pick” or “kick” activates the motor areas of the brain. So does observing someone else acting.

The hypothesis emerging from this is that we understand others as if we ourselves were acting – a phenomenon scientists refer to as “embodied simulation”. In other words, we reuse our own ability to act with our bodily resources in order to attribute meanings to the actions or goals of others. The engine that drives this simulation process is a mental model of the body or the self. And that is exactly what researchers are trying to reproduce in machines.

The team behind the new study used a deep learning network to create a self-model in a robotic arm from data gathered during random movements. Importantly, the AI was not fed any information about its geometrical shape or underlying physics; it learned gradually as it moved around and bumped into things – similar to a baby learning about itself by observing its hands.
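To make the idea concrete, here is a minimal sketch of how such a self-model might be learned from random movement data. The network architecture, dimensions and the format of the recorded data are assumptions for illustration only, not the actual code or model from the study:

```python
# Illustrative sketch: learning a simple "self-model" from random movements.
# The network sizes and data format are assumptions, not the study's design.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 4, 4          # e.g. joint angles and motor commands

# Forward self-model: given the current state and an action, predict the next state.
self_model = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, STATE_DIM),
)
optimizer = torch.optim.Adam(self_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_on_babbling_data(transitions, epochs=10):
    """Fit the self-model on (state, action, next_state) tuples gathered
    from random movements - no geometry or physics is given to the network."""
    states, actions, next_states = (torch.tensor(x, dtype=torch.float32)
                                    for x in zip(*transitions))
    for _ in range(epochs):
        predicted = self_model(torch.cat([states, actions], dim=1))
        loss = loss_fn(predicted, next_states)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return loss.item()
```

The key point the sketch captures is that the model learns purely from observed consequences of its own movements, rather than from a hand-built description of the arm.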

It could then use this self-model – containing information about its shape, size and movement – to predict the outcomes of future actions, such as picking something up with a tool. When the scientists made physical changes to the robot arm, the contradiction between the robot’s predictions and reality triggered the learning loop to start over, enabling the robot to adapt its self-model to its new body shape.
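The adaptation described above can be pictured as a simple monitoring loop, building on the sketch further up. The error threshold, buffer size and helper functions here are hypothetical, chosen only to illustrate the idea of prediction error triggering relearning:

```python
# Illustrative sketch of the adaptation loop: when reality diverges from the
# self-model's predictions (e.g. after the arm is physically altered),
# the model is retrained. Threshold and buffer size are assumed values.
SURPRISE_THRESHOLD = 0.05      # assumed tolerance on prediction error
recent_transitions = []

def step(state, action, observed_next_state):
    """Compare the self-model's prediction with what actually happened;
    a large error means the body no longer matches the model."""
    with torch.no_grad():
        inp = torch.tensor([state + action], dtype=torch.float32)
        predicted = self_model(inp)[0]
    error = loss_fn(predicted,
                    torch.tensor(observed_next_state, dtype=torch.float32)).item()

    recent_transitions.append((state, action, observed_next_state))
    if error > SURPRISE_THRESHOLD and len(recent_transitions) >= 100:
        # Start the learning loop over on freshly collected data,
        # adapting the self-model to the robot's new shape.
        train_on_babbling_data(recent_transitions)
        recent_transitions.clear()
    return error
```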

While the present study used only a single robotic arm, similar self-models are also being developed for humanoid robots through a process of self-exploration (dubbed sensorimotor babbling) – inspired by studies in developmental psychology.