A key challenge for robotics researchers is developing systems that can interact with humans and their surrounding environment in situations that involve varying degrees of uncertainty. While humans continuously learn from their experiences and perceive their body as a whole as they interact with the world, robots do not yet have these capabilities.
Researchers at the Technical University of Munich have recently carried out an ambitious study in which they tried to apply "active inference," a theoretical framework that unifies perception and action, to a humanoid robot. Their study is part of a broader EU-funded project called SELFCEPTION, which bridges robotics and cognitive psychology with the aim of developing more perceptive robots.
“The original research question that triggered this work was to provide humanoid robots and artificial agents in general with the capacity to perceive their body as humans do,” Pablo Lanillos, one of the researchers who carried out the study, told TechXplore. “The main goal was to improve their capabilities to interact under uncertainty. Under the umbrella of the Selfception.eu Marie Skłodowska-Curie project we initially defined a roadmap to include some characteristics of human perception and action into robots.”
In their study, Lanillos and his colleagues tried to gain a better understanding of human perception and then model it in a humanoid robot. This proved to be a very difficult task, as many details of how humans process sensory information (visual, tactile, etc.) are still unknown. The researchers drew inspiration from the work of Hermann von Helmholtz and Karl Friston, particularly from the theory of active inference, one of the most influential constructs in neuroscience.
“In essence, we propose that the robot is continuously approximating its body using its imperfect learned models,” Guillermo Oliver, another researcher involved in the study, told TechXplore. “The algorithm, based on the free-energy principle, presents perception and action working for a common goal: to reduce the prediction error. In this approach, action makes sensory data better correspond to the prediction made by the inner model.”
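Concretely, the principle can be reduced to two coupled gradient updates that descend the same prediction error. The sketch below is a deliberately minimal, one-dimensional illustration of that idea, not the authors' actual implementation: the quadratic error terms, the unit precisions and the assumption that the motor command directly shifts the sensed value are all simplifications chosen for clarity.

```python
def free_energy_step(mu, a, y, goal, dt=0.05, sigma_y=1.0, sigma_p=1.0):
    """One Euler step of (simplified, 1-D) active inference.

    mu   -- internal belief about the body state
    a    -- motor command (here: a velocity acting on the body)
    y    -- current sensory observation (assumed to increase with a)
    goal -- desired state, encoded as a prior on the belief
    """
    eps_y = (y - mu) / sigma_y      # sensory prediction error
    eps_p = (goal - mu) / sigma_p   # deviation from the desired state
    mu += dt * (eps_y + eps_p)      # perception: fit the model to the senses
    a -= dt * eps_y                 # action: change the world so the senses
    return mu, a                    #         match the model's prediction
```

The point of the design is that perception and action are not separate modules: both updates shrink the same error term, one by revising the belief, the other by moving the body.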
Lanillos, Oliver and Prof. Gordon Cheng were the first to apply active inference to a real robot. Until then, active inference had only been tested theoretically or in simulations that were partially biased by the simplifications of the models used.
Their approach tries to reproduce humans' ability to adapt their actions (e.g., their gait) in particular situations, for instance, when approaching a metro escalator and suddenly discovering that it is broken or out of service. The perception and control algorithm developed by Lanillos, Oliver and Cheng replicates a similar mechanism in robots.
For example, in a reaching task in which the robot needs to touch an object, the model generates an error between the current and the desired hand location, which triggers an action toward the object. Equilibrium (i.e., error minimization) is reached when the robot's hand and the object are in the same location.
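Under the same simplifying assumptions as the sketch above (a one-dimensional hand under velocity control, unit precisions, illustrative noise levels), a short simulation shows that equilibrium emerging: the object's location enters as a prior on the belief, the resulting prediction error drives the motor command, and both errors vanish once hand and object coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
x, goal = 0.0, 1.0        # true hand position; object location (illustrative)
mu, a, dt = 0.0, 0.0, 0.05

for _ in range(2000):
    y = x + rng.normal(0.0, 0.01)   # noisy proprioceptive observation
    eps_y = y - mu                  # sensory prediction error
    eps_p = goal - mu               # error w.r.t. the desired hand location
    mu += dt * (eps_y + eps_p)      # perception pulls the belief toward the goal
    a -= dt * eps_y                 # action chases the now-biased belief
    x += dt * a                     # the hand moves under the motor command

print(f"hand at {x:.3f}, belief {mu:.3f}, target {goal}")  # all converge to ~1.0
```

At equilibrium both error terms are near zero: the belief, the sensed hand position and the object coincide, which is exactly the minimization described above.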
"This approach is rare in the robotics community, but it provides tractability, allows sensory information from different sources to be combined, and permits tuning the reliability of each sensor depending on its precision," Oliver said.
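The precision weighting Oliver mentions can be illustrated with a standard inverse-variance fusion step; the sensor roles and noise levels below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuse(estimates, variances):
    """Precision-weighted fusion: weight each estimate by 1/variance."""
    precision = 1.0 / np.asarray(variances, dtype=float)
    weights = precision / precision.sum()
    return float(weights @ np.asarray(estimates)), float(1.0 / precision.sum())

# Two noisy estimates of the same hand position, e.g. vision (trusted more)
# and proprioception (noisier); the numbers are made up for the demo.
mean, var = fuse(estimates=[0.52, 0.40], variances=[0.01, 0.09])
print(f"fused estimate: {mean:.3f}, fused variance: {var:.4f}")
```

An unreliable sensor (large variance, hence low precision) is automatically down-weighted, which is what makes this kind of model robust to noisy or partially occluded observations.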
The researchers applied their algorithm to iCub, an open-source cognitive humanoid robot developed as part of another EU-funded project, and evaluated its performance in tasks that involved dual-arm reaching and active head tracking. In these tests, the robot performed advanced and robust reaching behaviors, as well as active head tracking of objects in its visual field.
"The humanoid robot was able to perform robust dual-arm reaching and visual tracking of an object using the same mathematical model," Oliver said. "With this type of algorithm, we would like to change the current view of the input-output perception pipeline (e.g., state-of-the-art neural networks) by enforcing the idea of closed-loop perception, where forward and backward passes are processed online, and by including action as another inevitable variable."
Lanillos, Oliver and Cheng are the first to implement a model based on the free-energy principle on a real humanoid robot. Their findings show that such models can be validated in real-world settings, and that their advantages can be analyzed in the presence of noisy sensory information, occlusions, or only partial information. The researchers now plan to apply their model to other robots to test its generalizability.
“In the long term, we want to enable the development of artificial agents with the same capabilities of body adaptation and interaction as humans,” Lanillos said. “Meanwhile, we are developing new bio-inspired artificial intelligence algorithms. In the future, we will also use this model to investigate body-ownership and agency, and who knows, we might someday enable self-recognition in machines.”