Sensorimotor Representation:
Advancements in Learning Models for Robots

Half-day Workshop @ IEEE RAS/EMBS 10th International Conference on Biomedical Robotics and Biomechatronics (BioRob 2024)

September 1st, 2024 - 2pm to 6pm CEST - Heidelberg, Germany

Abstract

Recent advances in AI and computational neuroscience have introduced new concepts, methods, and architectures for learning sensory representations and motor control, fostering robot capabilities in unstructured environments. These learning models span a wide spectrum of methodologies, from deep learning architectures and probabilistic models to brain-inspired frameworks such as spiking neural networks.

Rooted in the intricate neural processes of the human brain, such methodologies endow robots with the remarkable capacity to learn from experience, discern patterns in sensory data, and execute precise motor responses. Notably, cognitive architectures for motor control extend this exploration into decision-making and action execution, drawing inspiration from human cognition. By providing an organized blueprint for learning and executing motor commands with efficiency and adaptability, these architectures mimic the processes observed in biological systems, where sensory information seamlessly translates into precise motor actions.

The core motivation behind this workshop lies in comprehending and replicating these intricate processes, leveraging insights from the evolving field of neuroscience to advance the field of artificial intelligence and robotics. By providing a platform for comprehensive exploration, this workshop aims to contribute to the ongoing discourse on the convergence of AI, neuroscience, and robotics, pushing the boundaries of our understanding of sensory representation and motor control.

Organizers

Egidio Falotico

Tenure-track Assistant Professor

BRAIR Lab @ The BioRobotics Institute, Sant'Anna School of Advanced Studies, Italy

Elisa Donati

Scientific Researcher

Neuromorphic Cognitive Systems @ Institute of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland

Matej Hoffmann

Associate Professor

Humanoid and cognitive robotics @ Department of Cybernetics, Czech Technical University in Prague, Czech Republic


Topics

  • Sensory representation
  • Motor control
  • Artificial Intelligence
  • Neuroscience
  • Robotics

Invited speakers (confirmed)

Pablo Lanillos

Neuro AI and Robotics Group, Cajal International Neuroscience Center, Spanish National Research Council (CSIC), Spain

Talk: Predictive coding with spikes: towards efficient world models for perception and control

Abstract. Computational theories of brain sensorimotor integration at the algorithmic or functional level, such as predictive coding and active inference, try to explain cognition from first principles. They have revolutionized the way we model both humans and robots. However, there is still a big leap between these mathematical models and real neurons, which use spikes to communicate. This talk describes a novel approach to model perception and control using spiking neural networks that adhere to predictive coding ideas and shows a pathway to develop efficient world models for behavior generation for robotic and biomechatronic systems.
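To make the predictive-coding idea behind the talk concrete, here is a minimal sketch (not the speaker's implementation; the linear generative model, learning rate, and step count are illustrative assumptions). A latent state is inferred by gradient descent on the prediction error; spiking implementations replace these continuous updates with spike-based communication between neurons.

```python
import numpy as np

def g(mu, W):
    """Generative model: predict an observation from latent state mu.

    A linear mapping is assumed here purely for illustration.
    """
    return W @ mu

def infer(y, W, n_steps=200, lr=0.05):
    """Infer the latent state explaining observation y.

    Repeatedly computes the prediction error and descends it with
    respect to the latent state, the core loop of predictive coding.
    """
    mu = np.zeros(W.shape[1])
    for _ in range(n_steps):
        eps = y - g(mu, W)      # prediction error
        mu += lr * (W.T @ eps)  # gradient step on the squared error
    return mu
```

With a well-conditioned `W`, the inferred state converges to the one that generated the observation; precision weighting and hierarchical layers, central to full predictive-coding models, are omitted here for brevity.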

Bio and Research Activities

Pablo Lanillos is a principal investigator at the Spanish National Research Council in Spain and the Donders Institute for Cognition in the Netherlands, researching neuroscience-inspired artificial intelligence and machine learning approaches for perception and action.

His research focuses on variational learning, probabilistic deep learning, predictive coding, and active inference. His goal is to develop algorithms that allow robots to perceive and act with their body as humans do, and at the same time to disentangle how animals construct their self-representation through sensorimotor learning.

Chiara Bartolozzi

Event-Driven Perception for Robotics group, Italian Institute of Technology (IIT), Italy

Talk: Neuromorphic technologies for robotic applications, from sensing to control

Abstract. Since the first prototypes of neuromorphic vision sensors and computing devices, part of the community has focused its efforts on deploying neuromorphic devices in practical applications, exploiting their intrinsic compression, low latency, high temporal resolution, and high dynamic range.

The quest to find the best strategy to exploit event-driven sensing and spike-based computing is still open, but a lot of progress has been made. In this talk, I’ll describe possible approaches towards the development of neuromorphic perception for robots and the relevance of doing so in embodied agents. I’ll discuss the relevance of the development of neuromorphic sensing for touch and other modalities.

Bio and Research Activities

Chiara Bartolozzi is a tenured senior researcher at the Istituto Italiano di Tecnologia. She earned a degree in Engineering (with honors) at the University of Genova (Italy) and a PhD in Neuroinformatics at ETH Zurich, developing analog subthreshold circuits for emulating biophysical neuronal properties in silicon and modelling selective attention on hierarchical multi-chip systems. She is currently the principal investigator of the Event-Driven Perception for Robotics group, mainly working on the application of the "neuromorphic" engineering approach to the design of sensors and algorithms for robotic perception.

Chiara has participated in a number of EU-funded projects: she coordinated the H2020 MSCA-ETN "NeuTouch" and the FP7 FET "eMorph", and is PI in the VOJEXT, APRIL, and PRIMI Research and Innovation Actions. As leader of the educational activities of the coordination and support action NEUROTECH, she co-organised the Neuromorphic Colloquium, a series of online events building up educational material for the next generation of neuromorphic researchers. She is on the scientific board of the Capocaccia Workshop on Neuromorphic Intelligence and serves as an editor for npj Robotics, IOP Neuromorphic Computing and Engineering, Frontiers in Neuroscience, IEEE JETCAS, and IEEE TCAS-I.

She is an IEEE member, actively supporting the CAS and RAS societies, and chair of the WiCAS committee. In 2020, she was general chair of AICAS 2020, the IEEE conference on circuits and systems for efficient embedded AI.

Yulia Sandamirskaya

Cognitive Computing in Life Sciences, Zurich University of Applied Sciences (ZHAW), Switzerland

Talk: Shaping attractors for cognitive behaviour on a neuromorphic chip

Abstract. In this talk, I will present a new brain-inspired neural network architecture for behavior generation and autonomous learning in robots. The architecture is based on the network motifs of cortical and sub-cortical recurrent loops in animal brains and enables continual learning driven by self-generated match-mismatch signals in a "consistent coding" architecture. I will relate this work to recent neuromorphic computing frameworks.

Bio and Research Activities

Yulia Sandamirskaya is the Head of the Research Center "Cognitive Computing in Life Sciences" at the Zurich University of Applied Sciences (ZHAW) and leads the Neuromorphic Computing Group there. She has previously led the Applications research team of the Neuromorphic Computing Lab at Intel and the Neuromorphic Cognitive Robots group at the Institute of Neuroinformatics of the University of Zurich and ETH Zurich. She holds a Dr. rer. nat. degree in Neural Computation from the Ruhr-University Bochum and a Physics degree from the Belarusian State University. Her research interests lie in the development of brain-inspired algorithms and systems to enable autonomous service robots in human-centered environments.

Mehdi Khamassi

ACIDE, Centre National de la Recherche Scientifique (CNRS), France

Talk: Uncertainty, non-stationarity and context in world models for robots

Abstract. An important current challenge for reinforcement learning (RL) robots is to learn world models that predict the effects of their actions and support planning in a diversity of situations. One major difficulty in this context is to automatically detect when the same action results in uncertain effects, or in distinct effects in different contexts. This prompts the agent to learn distinct world models and switch between them depending on the context, which can be useful for both non-social tasks (navigation, object manipulation) and social tasks (where other agents' behaviour can differ between contexts).

In this talk, I will present a series of robotics experiments on learning and switching between world models depending on the context. I'll illustrate how non-stationarity and change-point detection can be approached in terms of model switching, and how uncertainty can be monitored and used for model creation, model switching, and model merging. I'll finally show how this can be applied to deep probabilistic model-based RL, where we used a Bayesian last layer and derived an analytical solution to disentangle model uncertainty and aleatoric uncertainty. This is illustrated with simulations of a 3-DoF robotic arm in a target-reaching task with abrupt changes in the dynamics function, together with efficient storage of distinct learned world models. Moreover, this is achieved at a drastically reduced computational cost compared to existing methods based on intensive Monte Carlo sampling, thus showing improved sample efficiency.
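To give a flavour of context-dependent model switching, here is a minimal, hypothetical sketch (not the speaker's method; the linear world models, error threshold, and placeholder initialization for new models are all illustrative assumptions). Each stored model is scored by its prediction error on the latest transition; a new model is created when no stored model explains the transition well enough.

```python
import numpy as np

def predict(model, x, a):
    """One-step prediction of a linear world model x' = A x + B a."""
    A, B = model
    return A @ x + B @ a

def select_or_create(models, x, a, x_next, thresh=0.5):
    """Pick the stored model that best explains the transition (x, a) -> x_next.

    If even the best model's prediction error exceeds the threshold, a
    change point is assumed: a fresh model is appended (here a placeholder
    identity model; a real system would fit it online) and its index returned.
    """
    errs = [np.linalg.norm(x_next - predict(m, x, a)) for m in models]
    best = int(np.argmin(errs))
    if errs[best] > thresh:
        models.append((np.eye(len(x)), np.zeros((len(x), len(a)))))
        return len(models) - 1
    return best
```

Monitoring the same per-model errors over time would also support the merging of redundant models, another ingredient mentioned in the talk; that step is omitted here for brevity.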

Bio and Research Activities

Mehdi Khamassi is a research director employed by the Centre National de la Recherche Scientifique (CNRS), and working at the Institute of Intelligent Systems and Robotics (ISIR), on the campus of Sorbonne Université, Paris, France. He has a double background in Computer Science and Cognitive Neuroscience. He is editor-in-chief for Intellectica and serves as associate editor for several other journals. His main topics of research include decision-making and reinforcement learning in robots and humans, the role of social and non-social rewards in learning, and ethical questions raised by machine autonomous decision-making. His main methods are computational modelling, design of new neuroscience experiments to test model predictions, analysis of experimental data, design of AI algorithms for robots, and behavioural experimentation with humans, non-human animals and robots.

Martin Butz

Cognitive Modeling, Department of Computer Science and Department of Psychology, Faculty of Science, University of Tübingen, Germany

Talk: Extracting Entities and Events from Sensorimotor Dynamics

Abstract. Our minds continuously attempt to explain away our perceptions for interacting with our environment in a goal-directed manner. We thereby infer the presence of objects as well as their interactions with each other and the rest of the environment. To do so, we learn predictive, generative world models. But what is the structure of these models? How are they learned? Multidisciplinary evidence suggests that we segment our world into event-predictive conceptual structures and embed these events into contexts. I selectively introduce some of our recent neuro-cognitive models (Bayesian and generative recurrent artificial neural networks) along these lines and identify critical inductive learning and processing biases. These models have the potential to progressively close the gap between conceptual world models and embodied sensorimotor experiences, and may lead to the development of fully grounded strong artificial intelligence.

Bio and Research Activities

Martin Butz is a professor at the Department of Computer Science and the Department of Psychology at the Faculty of Science, University of Tübingen, Germany. His main background lies in computer science and machine learning. His interdisciplinary research agenda integrates the fields of cognitive and developmental psychology, computational neuroscience, robotics, linguistics, and cognitive science, as well as, more recently, parts of the geosciences. His current main research foci include learning conceptual, compositional, causal structures from sensorimotor experiences in humans and artificial systems, as well as developing machine learning algorithms for understanding atmospheric dynamics and hydrological processes. He has published three monographs, numerous edited books and special issues, and more than 200 peer-reviewed conference and journal articles.


Program (2pm - 6pm CEST)

Time | Topic | Speaker
2pm - 2.10pm | Welcome and Introduction | Organizers
2.10pm - 2.40pm | Talk: "Neuromorphic technologies for robotic applications, from sensing to control" | Chiara Bartolozzi
2.40pm - 3.10pm | Talk: "Predictive coding with spikes: towards efficient world models for perception and control" | Pablo Lanillos
3.10pm - 3.40pm | Talk: "Shaping attractors for cognitive behaviour on a neuromorphic chip" | Yulia Sandamirskaya
3.40pm - 4.10pm | Break (poster session and video loop projection) |
4.10pm - 4.40pm | Talk: "Uncertainty, non-stationarity and context in world models for robots" | Mehdi Khamassi
4.40pm - 5.10pm | Talk: "Extracting Entities and Events from Sensorimotor Dynamics" | Martin Butz
5.10pm - 5.50pm | Discussion with experts | All
5.50pm - 6pm | Best contribution award and Conclusion | Organizers


Call for Contributions

Link for submission - Deadline: August 9, 2024 - 5pm CEST

We invite young researchers and students to submit an extended abstract, up to two A4 pages in PDF format, along with a 2-minute video introducing their research. The video can be provided via a YouTube link or by forwarding a file (less than 10MB). All abstracts should follow the standard IEEE conference page layout.

Authors of selected abstracts will be invited to present their work with a poster (A0 size, portrait) during the break. Videos will be projected on a loop in the workshop room.

Best Contribution Award: 150 USD

The workshop is supported by the IEEE RAS Technical Committee on Cognitive Robotics.

If you have additional questions, please contact: egidio.falotico@santannapisa.it