Benchmark Dataset Dynamics, Bias and Privacy Challenges in Voice Biometrics Research
🔬 Research Summary by Anna Leschanowsky, a research associate at Fraunhofer IIS in Germany working at the intersection of voice technology, human-machine interaction, and privacy. [Original paper by Casandra …]
Target specification bias, counterfactual prediction, and algorithmic fairness in healthcare
🔬 Research Summary by Eran Tal, Canada Research Chair in Data Ethics and Associate Professor of Philosophy at McGill University. He studies the epistemology and ethics of data collection and data use in scientific …
Tell me, what are you most afraid of? Exploring the Effects of Agent Representation on Information Disclosure in Human-Chatbot Interaction
🔬 Research Summary by Stephan Schlögl, a professor of Human-Centered Computing at MCI - The Entrepreneurial School in Innsbruck (Austria), where his research and teaching particularly focus on humans’ interactions with …
People are not coins: Morally distinct types of predictions necessitate different fairness constraints
🔬 Research Summary by Corinna Hertweck, a fourth-year PhD student at the University of Zurich and the Zurich University of Applied Sciences, where she works on algorithmic fairness. [Original paper by …]
From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models
🔬 Research Summary by Shangbin Feng, Chan Young Park, and Yulia Tsvetkov. Shangbin Feng is a Ph.D. student at the University of Washington. Chan Young Park is a Ph.D. student at Carnegie Mellon University, studying …