
Verification of Neural Networks - And What It Might Have to Do With Explainability

Start:
End:
Venue: Internationales Begegnungszentrum (IBZ), Emil-Figge-Str. 59, 44227 Dortmund
Event type:
  • Hybrid event
  • English Language Events
  • Lecture
Modern AI can be used to drive cars, to decide on loans, or to detect cancer. Yet, the inner workings of many AI systems remain hidden – even to experts. Given the crucial role that AI systems play in modern society, this seems unacceptable. But how can we make complex, self-learning systems explainable? What kinds of explanations are we looking for when demanding that AI must be explainable? And which societal, ethical and legal desiderata can actually be satisfied via explainability? The interdisciplinary lecture series presents the latest research on these and related topics and invites exchange with researchers, students, and the interested public.

The talk is the second lecture of the 4th installment of the interdisciplinary lecture series “Explainable AI and Society”, organized by the research project “Explainable Intelligent Systems” and funded by the Volkswagen Foundation. Explainable AI and Society is a hybrid, interdisciplinary lecture series on the societal impact of (explainable) artificial intelligence. The lectures are open to researchers, students and the interested public.

The lecture can be attended both online and in person at the IBZ of TU Dortmund University. To register, send an e-mail with the subject “Registration” to sara.mann@tu-dortmund.de, stating which lecture(s) you would like to attend and whether you will attend online or in person.

Upcoming Lectures

14 December 2023, 6.15 p.m. (CET): Anne Lauber-Rönsberg, TU Dresden (law):
“A Legal Perspective on Explainable AI: Why, How Much and For Whom?”

18 January 2024, 6.15 p.m. (CET): Gudela Grote, ETH Zurich (psychology):
“Organizing AI: How to shape accountable AI development and use”

More information on the lecture series