Ethics and Accountability of Algorithmic Decision Making Systems
Abstract
The new "sexiest job on earth", according to former Google CEO Schmidt, is the "Data Scientist". As data scientists, we are entitled to crawl through data, finding patterns that can predict the future. Our tool set is huge and grows every day, foremost with methods from machine learning. In many cases, the questions to be solved by our learning systems are clear cut, and so is the quality measure by which we can evaluate whether the systems are good enough to be applied. However, neither is the case when we build systems to predict future human behavior or to classify current human behavior. For this, 1) intricate social concepts have to be quantified ("operationalization"), 2) it is often unclear how to define a "good decision", and 3) it is hard to observe whether the system, embedded in a social system, will actually improve the latter or not. While these problems are often discussed under the term "ethics of algorithms", I will argue that a large part of them is actually a question of accountability. As a community of computer and data scientists, we will have to make sure that we only decide on those parts of these systems for which we are trained - and include the expertise of psychologists, sociologists, lawyers, and politicians where this is not the case. I will present a framework that helps to sort these two aspects and thus to avoid mistakes in building learning algorithmic decision making systems.
CV
Prof. Dr. K.A. Zweig is head of the Algorithm Accountability Lab in the Department of Computer Science at the TU Kaiserslautern. There, she has designed the unique field of study called Socio-Informatics, which is concerned with modelling, analyzing, and predicting the interaction between software and individuals, groups, or society at large. In her research, she focuses on questions of Data Science Literacy and Algorithm Accountability. Currently, she is developing a quality-assurance process that makes learning algorithmic decision making systems more accountable. She has won several awards (e.g., the ars legendi Fakultätenpreis Informatik und Ingenieurswissenschaften 2017; recipient of a Theodor Heuss Medal 2018) and consults media authorities, churches, and ministries on all of the above. She is also a member of the Enquete Commission on "Artificial Intelligence" that advises the German parliament on how to best guide the transformations in society caused by new AI technologies.