
ReScaLe - Responsible and Scalable Learning For Robots Assisting Humans


AI-based robots are expected to support numerous tasks in our society, for example by assisting people in everyday life or by making production processes more efficient. Despite rapid advances in research, however, they are not yet part of our everyday lives. To make them better suited for everyday use, the ReScaLe project, led by Prof. Dr. Joschka Bödecker, will on the one hand address the remaining technical challenges in the field of machine learning. On the other hand, it will also consider social, ethical, and legal aspects in order to strengthen trust in these systems.

Robots learn tasks from humans through demonstration

Innovative machine learning methods will enable ReScaLe robots to learn tasks from human demonstrations. To make this learning efficient, ReScaLe is developing new approaches that minimize the number of demonstrations required. The research project will introduce novel unsupervised and self-supervised deep learning methods that require only a small amount of annotated data. Further methods will help the deep learning models deal with uncertainty, improving data efficiency even further.

At the same time, ReScaLe will pave the way for responsible AI and robotics applications grounded in human rights, taking an integrated multi-level approach that considers ethical and legal normative requirements in conjunction with risks to core rights and interests, as well as user-oriented design requirements. Specially tailored participatory outreach activities accompany the project to promote community acceptance and enable bidirectional communication with researchers.

Researchers from the fields of computer science, ethics, human-machine interaction, law, mathematics and robotics are participating in ReScaLe. The project strengthens the profile field "Data Analysis and Artificial Intelligence" of the University of Freiburg and will be located in the research building "Intelligent Machine-Brain Interfacing Technology" (IMBIT).

Part A: Methods for robot skill learning in human-centered environments through imitation and reinforcement learning

In this part of the project, we will investigate robot skill acquisition through imitation and reinforcement learning in human-centered environments. The robot used in this context consists of a mobile base with a height-adjustable arm. It is further equipped with multiple RGB-D cameras to perceive the environment. We will set up a real-world kitchen environment in which the developed learning methods will be evaluated on tasks such as setting a table, tidying up, or sorting groceries into cupboards. Hence, the required robot skills will range from simple pick-and-place actions to more complex trajectory-dependent actions such as opening cabinets or handing over objects.
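Learning a skill from demonstrations, as described above, can be illustrated with a minimal behavioral-cloning sketch. This is not project code: the linear policy, the toy two-dimensional state space, and the synthetic "expert" demonstrations are all assumptions chosen for illustration. The idea is simply that a policy is fitted to recorded state-action pairs from a demonstrator.

```python
import numpy as np

def behavioral_cloning(states, actions, lr=0.1, epochs=500):
    """Fit a linear policy a = s @ W to demonstration pairs via
    gradient descent on the mean squared error (illustrative only)."""
    W = np.zeros((states.shape[1], actions.shape[1]))
    for _ in range(epochs):
        pred = states @ W
        W -= lr * states.T @ (pred - actions) / len(states)  # MSE gradient
    return W

# Hypothetical demonstrations: the expert always moves halfway toward
# the origin, i.e. action = -0.5 * state (the "skill" to be recovered).
rng = np.random.default_rng(0)
demo_states = rng.normal(size=(200, 2))
demo_actions = -0.5 * demo_states

W = behavioral_cloning(demo_states, demo_actions)

def policy(state):
    """Imitated policy: predicted action for a given state."""
    return state @ W
```

Real robot skills such as opening a cabinet require far richer policies (e.g. deep networks over camera images), but the fitting principle is the same.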

Part B: Generalization of skills across different tasks and environments using meta-learning techniques

In this part of ReScaLe, we will set up multiple additional distinct kitchen environments to develop and evaluate transfer learning techniques. Among these different households, the robot tasks will be similar but have to be carried out with different objects, different contexts, and different goals. We will set up multiple identical robot platforms, such as the one described in Part A, that will learn and train skills in parallel in the different environments while sharing their experience.
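The idea of several identical robots training in parallel while sharing their experience can be sketched with a common replay buffer. The `SharedReplayBuffer` class and the toy transitions below are hypothetical stand-ins, not the project's actual architecture: each robot appends its transitions to one pooled buffer, and every learner samples training batches from the combined experience.

```python
import random
from collections import deque

class SharedReplayBuffer:
    """One buffer that several robot learners append to and sample from,
    a simplified stand-in for distributed experience sharing."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions drop out first

    def add(self, robot_id, state, action, reward, next_state):
        self.buffer.append((robot_id, state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform sample over everyone's experience, regardless of origin.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

buffer = SharedReplayBuffer()
# Three hypothetical robots in different kitchens log their transitions.
for robot_id in range(3):
    for step in range(5):
        buffer.add(robot_id, state=step, action=0, reward=1.0, next_state=step + 1)

batch = buffer.sample(8)  # each learner trains on the pooled experience
```

In practice the transitions would carry sensor observations and the buffer would be distributed across machines, but the pooling principle is the same.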

Part C: Assessment of uncertainty in learned models, risk management, and ethical and legal basis and implications

This part of the ReScaLe project, led by Prof. Dr. Silja Vöneky and Prof. Dr. Thorsten Schmidt, aims to lay the foundations for risk-sensitive and safe human-robot interaction by improving the uncertainty estimates of deep learning models. We will properly quantify what these models do not know, especially in the context of sequential decision-making problems, which are relevant to all of the learning procedures considered within ReScaLe. These improved uncertainty estimates will enable better risk management approaches and permit procedures to govern the interaction of AI-driven robots with humans. Towards this end, we will analyze the legal and ethical requirements for a responsible research and innovation approach to developing AI-based robotics systems, and generate empirical data through participatory ethics research with different stakeholder groups. Eventually, we will integrate the findings into a multi-level framework for responsible AI and robotics that can serve as a blueprint for legally and ethically aligned research in this field.
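One common way to quantify what a learned model does not know is to train an ensemble and use the disagreement among its members as an uncertainty estimate. The following sketch is an illustrative stand-in, not the project's method: the bootstrap-trained polynomial regressors and the synthetic data are assumptions. It shows the key property such estimates should have, namely that predictive spread grows far from the training data.

```python
import numpy as np

def fit_ensemble(x, y, n_models=10, rng=None):
    """Fit an ensemble of degree-2 polynomial regressors, each on a
    bootstrap resample of the data. Their disagreement serves as a
    simple uncertainty estimate (illustrative only)."""
    if rng is None:
        rng = np.random.default_rng(0)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(x), len(x))  # bootstrap resample
        models.append(np.polyfit(x[idx], y[idx], 2))
    return models

def predict_with_uncertainty(models, x_query):
    preds = np.array([np.polyval(m, x_query) for m in models])
    return preds.mean(axis=0), preds.std(axis=0)  # prediction, uncertainty

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 100)             # training inputs near the origin
y = x**2 + 0.05 * rng.normal(size=100)  # noisy observations
models = fit_ensemble(x, y, rng=rng)

_, std_in = predict_with_uncertainty(models, np.array([0.0]))   # inside the data
_, std_out = predict_with_uncertainty(models, np.array([5.0]))  # far outside it
```

A risk-sensitive robot controller could use such a spread to defer to a human or fall back to a safe default whenever the uncertainty exceeds a threshold.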

Part D: Technology acceptance research and outreach

This part of the ReScaLe project will facilitate a mutual transfer of knowledge between scientific and non-scientific actors. In this part, (preliminary) results of the ongoing research process of all WPs converge and are disseminated to society in innovative formats of public outreach and knowledge transfer. The aim is to foster highly open-minded and sincere discussions between researchers and the public on the societal and ethical implications of assistive AI systems and to provide approaches of reverse communication from society to academia. The participatory exchange will provide the basis for developing a novel model of technology acceptance related to human-robot interactions.


Prof. Dr. Silja Vöneky
Public Law, International Law,
Comparative Law and Ethics of Law
Prof. Dr. Joschka Bödecker
ReScaLe Coordinator
Dr. Noor Awad
Machine Learning
Jun.-Prof. Dr. Josif Grabocka
Representation Learning
Prof. Dr. Frank Hutter
Machine Learning
Dr. Philipp Kellmeyer
Neuroethics and AI Ethics
Prof. Dr. Oliver Müller
Philosophy with a focus on the present and technology
Prof. Dr. Thorsten Schmidt
Mathematical Stochastics
Jun.-Prof. Dr. Abhinav Valada
Robot Learning

Sabrina Livanec
Nexus Experiments
Technology Acceptance Research


Research Assistants

Alisa Pojtinger

Nora Hertz  


PhD Projects

Hertz, Nora: Der menschenrechtliche Schutz neuronaler Aktivität - Menschenrechtliche Anforderungen an die Regulierung von Neurotechnologien als nicht-medizinische Anwendungen

Pojtinger, Alisa: A Public International Law Perspective on the Regulation of Social Robots: A Comparative Analysis of EU, UK and US Regulatory Approaches



Publications

N. Hertz, The Right to Freedom of Thought in Germany, in: Bethany Shiner and Patrick O'Callaghan (eds.), The Cambridge Handbook on the Right to Freedom of Thought (forthcoming 2024).

L. Londoño, J.V. Hurtado, N. Hertz, P. Kellmeyer, S. Voeneky, A. Valada, Fairness and Bias in Robot Learning (Proceedings of the IEEE, May 2024)

D. Feuerstack, D. Becker, N. Hertz, Die Entwürfe des EU-Parlaments und der EU-Kommission für eine KI-Verordnung im Vergleich. Eine Bewertung mit Fokus auf Regeln zu Transparenz, Forschungsfreiheit, Manipulation und Emotionserkennung (ZfDR 4/2023)

N. Hertz, „Neurorechte“ – Zeit für neue Menschenrechte? Eine Neubetrachtung des Menschenrechts auf Gedankenfreiheit, FIP 2/2023 

S. Vöneky and T. Schmidt, Regulating AI in Non-Military Applications: Lessons Learned, in: Geiss/Lahmann (eds.), Research Handbook on Warfare and Artificial Intelligence (forthcoming July 2024).



Talks and Presentations

N. Hertz, Workplace Brain Surveillance, University of Oxford, Algorithms at Work and Future of Tech & Society Discussion Group, 23.05.2024

N. Hertz, Operationalising the Human Right to Freedom of Thought in the Context of Neurotechnologies, University of Oxford, Bonavero Institute of Human Rights and Newcastle Law School Graduate Seminar, 08.05.2024

N. Hertz, The Human Rights Protection of (Mental) Autonomy, Internal Workshop Meeting, IMBIT Freiburg, 07.02.2024

A. Pojtinger, Biden’s Executive Order on AI and the EU AI Act - A Comparison, Internal Workshop Meeting, IMBIT Freiburg, 07.02.2024 

N. Hertz, Mental Integrity and Social Chat Bots. A Legal Perspective, FRIAS Responsible AI Lecture & Discussion, 01.02.2024

S. Vöneky/F. Hutter, Adaptive Zukunft?! Neue Technologien, KI, Ethik und Recht - Herausforderungen für das 21. Jahrhundert, Presentation, Science meets Politics, Stuttgart, 31.01.2024

S. Vöneky, Responsible AI, Law and Ethics: Ideal World versus Real World Approaches? Presentation, Marburger Vorträge (via Zoom), University of Marburg, 24.01.2024

N. Hertz, The Human Rights Protection of Mental Integrity, Internal Workshop Meeting, IMBIT Freiburg, 29.11.2023 

N. Hertz, Online Manipulation - Social Media, Attention Economy and the Law, UWC Freiburg, Global Affairs Series, 08.11.2023

A. Pojtinger, The Concept of Human Dignity in International Law, Internal Workshop Meeting, IMBIT Freiburg, 04.10.2023

N. Hertz, The Prohibition of Manipulative AI and Emotion Recognition Systems in the EU Draft AI Act, Internal Workshop Meeting, IMBIT Freiburg, 04.10.2023

A. Pojtinger, A Public International Law Perspective on the Regulation of Social Robots. A Human-Rights-Based Approach to Regulation, Internal Workshop Meeting, IMBIT Freiburg, 09.08.2023

A. Pojtinger, Risk Governance in International Environmental Law: Uncertainty and the Precautionary Principle, Internal Workshop Meeting, IMBIT Freiburg, 14.06.2023

N. Hertz, The Human Right to Freedom of Thought, Internal Workshop Meeting, IMBIT Freiburg, 17.05.2023

N. Hertz, Positive Human Rights Obligations, Internal Workshop Meeting, IMBIT Freiburg, 19.04.2023



Press and Media

ChatGPT und Co.: KI Tools in der Forschung, video of the interdisciplinary panel discussion with Prof. Dr. Vöneky, 23.10.2023 (subtitles in English/German)

Statement for the Science Media Center on the risks of current AI research, 04.04.2023

Brauchen wir eine KI-Pause?, mdr Wissen, 04.04.2023

"Man kann Technologie nicht mit Verboten aufhalten": Diskussion um ChatGPT-Regulierung in Deutschland, Handelsblatt Online, 03.04.2023

ChatGPT & Co.: Arbeitspause für mächtige Sprachmodelle - Streit unter Experten, heise online, 02.04.2023

Brauchen wir eine KI-Pause?, Frankfurter Allgemeine Zeitung, 01.04.2023

S. Vöneky, Interview "Moratorium Künstliche Intelligenz - Lässt sich künstliche Intelligenz in Bahnen lenken?", Deutschlandfunk Kultur, 30.03.2023