
AI-Trust (2020-2024)

Interpretable Artificial Intelligence Systems for Trustworthy Applications in Medicine


About

AI-Trust is an inter- and transdisciplinary research project under the auspices of the Freiburg Institute for Advanced Studies and its Saltus! Research Group "Responsible Artificial Intelligence". The project is funded by the Baden-Württemberg Foundation.

The super-convergence of digital technologies - big data, smart sensors, artificial neural networks for deep learning, high-performance computing and other advances - enables a new generation of 'intelligent' systems, often subsumed under the notion of the 'AI system'. This megatrend is profoundly transforming all sectors of society, from education, industrial production and logistics to science and health care, and poses real and imminent ethical and legal challenges. The research group AI-Trust examines the challenges of Interpretable Artificial Intelligence Systems for Trustworthy Applications in Medicine.

While deep learning methods promise substantial performance gains in various application domains, the solutions they provide are not readily comprehensible to human users. This black-box character is acceptable in some domains, but for medical applications transparency seems necessary so that clinicians can understand the decisions of a trained machine learning model and ultimately validate and accept its recommendations. In recent years, a number of methods have been developed to provide more insight into the representations that these networks learn. So far, however, there has been no systematic comparison of these methods. Especially for automated EEG diagnosis, it is not clear which of them provides the most helpful information for the practitioner under which circumstances. Furthermore, it is unclear to what extent these methods serve the overarching goals of interpretability, explainability, and comprehensibility, and thus how they may foster trust in the 'AI system'.
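
To make the kinds of methods at issue concrete, the following minimal sketch shows one widely used post-hoc interpretability technique, gradient-based saliency, applied to a toy EEG classifier in PyTorch. It is purely illustrative and is not the project's DeepEEG system; the network architecture, channel count, and sample length are hypothetical placeholders.

import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    # Toy 1D-CNN standing in for a deep EEG classifier (hypothetical architecture).
    def __init__(self, n_channels=21, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=11, padding=5),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

model = TinyEEGNet().eval()
# Random stand-in for a 21-channel EEG segment of 1000 samples.
eeg = torch.randn(1, 21, 1000, requires_grad=True)

logits = model(eeg)
logits[0, logits.argmax()].backward()  # gradient of the predicted class score

# Saliency map: which channels and time points most influenced the prediction?
saliency = eeg.grad.abs().squeeze(0)   # shape: (channels, samples)
print(saliency.mean(dim=1))            # coarse per-channel relevance

Such saliency maps are only one family of attribution methods; systematically comparing them with alternatives such as layer-wise relevance propagation or perturbation-based analyses, and assessing what clinicians actually gain from each, is precisely the open question described above.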

Our research project AI-Trust follows an 'embedded ethics and law' approach that investigates the ethical and legal challenges of a deep learning-based assistive system for EEG diagnosis (the 'DeepEEG' system) throughout the research and development phase in order to provide normative guidance for the development of the system. This approach is intended to exemplify how including ethical and legal expertise in the development of 'AI systems' in medicine can help leverage AI's innovation potential while ensuring responsible and trustworthy 'ethics-and-law-by-design' development. More generally, this will demonstrate how the enormous societal challenges posed by 'AI systems' can be framed, and the problems they raise solved, from both a conceptual (philosophical, ethical, legal) and a technical perspective.

Subprojects


SP1: Transdisciplinary Conceptual Foundations: Interpretability, Explainability and Trustworthiness of DeepEEG as an AI system
(PIs: Prof. Dr. Oliver Müller, University of Freiburg, Dr. Philipp Kellmeyer, University of Freiburg Medical Center, Prof. Dr. Silja Vöneky, University of Freiburg)


SP2: An interpretable deep-learning-based assistive system for EEG diagnosis (PIs: PD Dr. Tonio Ball, University of Freiburg, Jun.-Prof. Dr. Joschka Boedecker, University of Freiburg, Prof. Dr. Wolfram Burgard, University of Freiburg)

SP3: Ethical, Legal and Societal Analysis of the AI-based Assistive System
(PIs: Prof. Dr. Silja Vöneky, Dr. Philipp Kellmeyer, Prof. Dr. Oliver Müller)

Press

ChatGPT und Co.: KI Tools in der Forschung, video of the interdisciplinary panel discussion with Prof. Dr. Vöneky, 23.10.2023 (subtitles in English/German)

Statement for the Science Media Center on risks of current AI research, 04.04.2023

Brauchen wir eine KI-Pause?, mdr Wissen, 04.04.2023

"Man kann Technologie nicht mit Verboten aufhalten": Diskussion um ChatGPT-Regulierung in Deutschland, Handelsblatt Online, 03.04.2023

ChatGPT & Co.: Arbeitspause für mächtige Sprachmodelle - Streit unter Experten, heise online, 02.04.2023

Brauchen wir eine KI-Pause?, Frankfurter Allgemeine Zeitung, 01.04.2023

S. Voeneky, Interview "Moratorium Künstliche Intelligenz - Lässt sich künstliche Intelligenz in Bahnen lenken?", Deutschlandfunk Kultur, 30.03.2023

Selected Publications and Presentations

S. Vöneky/F. Hutter, Adaptive Zukunft?! Neue Technologien, KI, Ethik und Recht - Herausforderungen für das 21. Jahrhundert, Presentation, Science meets Politics, Stuttgart, 31.01.2024

S. Vöneky, Responsible AI, Law and Ethics: Ideal World versus Real World Approaches? Presentation, Marburger Vorträge (via Zoom), University of Marburg, 24.01.2024

D. Becker/ D. Feuerstack, Der neue Entwurf des EU-Parlaments für eine KI-Verordnung - Analyse der wesentlichen Neuerungen gegenüber dem Entwurf der Kommission, MMR 2024, 22.

D. Feuerstack/D. Becker/N. Hertz, Die Entwürfe des EU-Parlaments und der EU-Kommission für eine KI-Verordnung im Vergleich - Eine Bewertung mit Fokus auf Regeln zu Transparenz, Forschungsfreiheit, Manipulation und Emotionserkennung, ZfDR 2023, 421

D. Becker, Der Kommissionsentwurf für eine KI-Verordnung - Gefahr für die Wissenschaftsfreiheit?, ZfDR 2023, 164

D. Feuerstack, Künstliche Intelligenz und Menschenrechte - Vorgaben in Bezug auf bestehende Risiken beim Einsatz von KI-Systemen in der Medizin, ZfDR 2023, 184

D. Feuerstack, Menschenrechtliche Vorgaben an die Transparenz KI-basierter Entscheidungen und deren Berücksichtigung in bestehenden Regulierungsansätzen, OdW-2/2022

S. Voeneky/P. Kellmeyer/O. Müller/W. Burgard (Eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives (CUP 2022)

B. Essmann/O. Mueller, AI-Supported Brain-Computer Interfaces and the Emergence of 'Cyberbilities', in Voeneky/Kellmeyer/Müller/Burgard (Eds), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives (CUP 2022), 427-443

P. Kellmeyer, 'Neurorights': A Human Rights-Based Approach for Governing Neurotechnologies, in Voeneky/Kellmeyer/Müller/Burgard (Eds), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives (CUP 2022), 412-426

T. Schmidt/S. Voeneky, Fostering the Common Good: An Adaptive Approach Regulating High-Risk AI-Driven Products and Services, in Voeneky/Kellmeyer/Müller/Burgard (Eds), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives (CUP 2022), 123-149

T. Schmidt/S. Voeneky, Neue Wege zur adaptiven Regulierung von Hochrisiko-KI-Technologien: Schutz von Rechten und Gemeinwohl, published as FIP 3/2021

S. Voeneky, Key Elements of Responsible Artificial Intelligence – Disruptive Technologies, Dynamic Law, OdW-1/2020

S. Voeneky, Human Rights and Legitimate Governance of Existential and Global Catastrophic Risks, in Voeneky/Neuman (Eds), Human Rights, Democracy, and Legitimacy in a World of Disorder (CUP 2018), 139-162