Volkswagen Foundation funds joint project with Saarland University

New Collaborative Project on AI Decisions

Eva Schmidt is Junior Professor for Theoretical Philosophy at TU Dortmund University.

How can we make sure that machines behave morally? This is a question that Junior Professor Eva Schmidt from TU Dortmund University is exploring in a joint project with Saarland University. The Volkswagen Foundation is funding Schmidt’s research and that of a doctoral researcher with around €160,000.

Big data, digitalization, deep learning – new technologies raise new questions. How, for example, should an autonomous car “decide” when it is forced to choose whom to run over? Should it hit the pensioner rather than the teenager? The one has already lived their life; the other has their whole life ahead of them. Which decision is the right one for the machine in such a situation?

This question sits at the point of friction between artificial intelligence (AI) and society. Within its funding initiative “Artificial Intelligence – Its Impact on Tomorrow’s Society”, the Volkswagen Foundation has approved a total of around €12 million for cross-disciplinary and transnational research into the responsible further development of AI systems. The foundation is supporting eight interdisciplinary and international research alliances spanning the social and technical sciences: researchers from law, linguistics, and the social sciences have joined forces with colleagues from computer science, medicine, philosophy, and cybersecurity. The projects are scheduled to run for three to four years; the Dortmund project will start in late spring.

How can we better understand decisions made by AI systems?

Junior Professor Eva Schmidt from the Institute of Philosophy and Political Science at the Faculty of Humanities and Theology is addressing precisely this question. She has been investigating the topic for quite some time: What opportunities does AI offer? Where do the risks lie? And, more specifically: Why is it important that we understand the decisions made by AI systems? How does the explainability of AI systems foster, for example, trust and responsibility?

Schmidt’s project “Explainable Intelligent Systems” (EIS) brings together computer science, philosophy, psychology, and law. It is concerned with the explainability of AI-based decisions and thus with one of the key questions raised by the use of AI systems in society. Involved in the project alongside Eva Schmidt from TU Dortmund University are Junior Professor Lena Kästner, Professor Georg Borges, Professor Ulla Wessels, Dr. Markus Langer, and Professor Holger Hermanns, all from Saarland University in Saarbrücken. “Among other things, we’re looking at the overarching question: How can and should intelligent systems be designed so that they deliver explainable recommendations?” says Eva Schmidt. She is assisted at TU Dortmund University by Sara Mann, who will complete her doctoral degree within the project.

As part of her research on explainable AI, Eva Schmidt has lectured in Hannover, Zurich, and Cambridge, as well as, most recently, online at the “Usability in Germany” conference on the topic “Autonomy and Justice in the Framework of Artificial Intelligence”. In collaboration with the Leverhulme Centre for the Future of Intelligence in Cambridge, TU Delft, and Leibniz University Hannover, the EIS project is organizing the workshop series “Issues in Explainable AI”. The Dortmund workshop in the series is scheduled for autumn 2022.

Further information on “Explainable Intelligent Systems”

Contact for further information: