🔬 Research Summary by Vahid Ghafouri, a Ph.D. student in Telematics at IMDEA Networks Institute working on the application of NLP to measure online polarization and radicalization. [Original paper by Vahid … [Read more...] about AI in the Gray: Exploring Moderation Policies in Dialogic Large Language Models vs. Human Answers in Controversial Topics
Benchmark Dataset Dynamics, Bias and Privacy Challenges in Voice Biometrics Research
🔬 Research Summary by Anna Leschanowsky, a research associate at Fraunhofer IIS in Germany working at the intersection of voice technology, human-machine interaction, and privacy. [Original paper by Casandra … [Read more...]
Target specification bias, counterfactual prediction, and algorithmic fairness in healthcare
🔬 Research Summary by Eran Tal, Canada Research Chair in Data Ethics and Associate Professor of Philosophy at McGill University. He studies the epistemology and ethics of data collection and data use in scientific … [Read more...]
Tell me, what are you most afraid of? Exploring the Effects of Agent Representation on Information Disclosure in Human-Chatbot Interaction
🔬 Research Summary by Stephan Schlögl, a professor of Human-Centered Computing at MCI - The Entrepreneurial School in Innsbruck (Austria), where his research and teaching particularly focus on humans’ interactions with … [Read more...]
People are not coins: Morally distinct types of predictions necessitate different fairness constraints
🔬 Research Summary by Corinna Hertweck, a fourth-year PhD student at the University of Zurich and the Zurich University of Applied Sciences, where she works on algorithmic fairness. [Original paper by … [Read more...]