Deutsches Institut für Japanstudien
Attitudes Toward Facial Analysis AI: A Cross-National Study Comparing Argentina, Kenya, Japan, and the USA

Location

Online and DIJ Tokyo (access)

Registration Information

This is a past event. Registration is no longer possible.


    September 18, 2024 / 6 pm (JST) / 11 am (CEST)

    Chiara Ullstein, Technical University of Munich

    Computer vision AI systems represent one of the most radical technical transformations of our time. Such systems are given unparalleled epistemic power to impose meaning on visual data despite the inherent semantic ambiguity of such data. This becomes particularly evident in computer vision AI that interprets the meaning of human faces, as in face recognition or emotion expression systems. Despite scientific, social, and political concerns, facial analysis AI systems are also widely deployed in Japan, for example, for training employees to show certain facial expressions. This talk presented findings from a study of public perceptions of facial analysis AI across Argentina, Kenya, Japan, and the USA. We developed a vignette scenario about a fictitious company that analyzes people’s portraits using computer vision AI to make various inferences about them based on their faces. The study revealed similarities in justification patterns but also significant intra-country and inter-country diversity in responses to different facial inferences. For example, participants from Argentina, Japan, Kenya, and the USA disagreed strongly over the reasonableness of AI classifications such as beauty or gender, but largely agreed that inferences of intelligence and trustworthiness are unreasonable. Adding much-needed non-Western perspectives to debates on computer vision ethics, the results of the study suggest that, contrary to popular justifications for facial classification technologies, there is no “common sense” facial classification that accords simply with a general, homogeneous “human intuition.” This talk presented joint work with S. Engelmann, O. Papakyriakopoulos, Y. Ikkatai, N. Arnez-Jordan, R. Caleno, B. Mboya, S. Higuma, T. Hartwig, H. Yokoyama, and J. Grossklags.

    Chiara Ullstein is a PhD candidate at the Chair of Cyber Trust at the Technical University of Munich with Prof. Jens Großklags. With a background in Politics and Technology, she explores public participation in the development and regulation of AI applications. A special research interest of hers is the comparative, cross-national study of public perceptions of AI, specifically of facial analysis AI, for which she applies both qualitative and quantitative methods. She is currently a visiting researcher at the University of Tokyo with Prof. Hiromi Yokoyama.