Collect, Measure, Repeat: Reliability Factors for Responsible AI Data Collection
🔬 Research Summary by Oana Inel, a Postdoctoral Researcher at the University of Zurich, where she works on the responsible and reliable use of data and investigates the use of explanations to provide transparency for …
Outsourced & Automated: How AI Companies Have Taken Over Government Decision-Making
🔬 Research Summary by Grant Fergusson, an Equal Justice Works Fellow at the Electronic Privacy Information Center (EPIC), where he focuses on AI and automated decision-making systems within state and local …
Algorithmic Harms in Child Welfare: Uncertainties in Practice, Organization, and Street-level Decision-Making
🔬 Research Summary by Devansh Saxena, a Presidential Postdoctoral Fellow at Carnegie Mellon University’s Human-Computer Interaction Institute. He studies sociotechnical practices of decision-making in the public …
The Ethics of AI Value Chains: An Approach for Integrating and Expanding AI Ethics Research, Practice, and Governance
🔬 Research Summary by Blair Attard-Frost, a PhD Candidate and SSHRC Joseph-Armand Bombardier Canada Graduate Scholar at the University of Toronto’s Faculty of Information. [Original paper by Blair Attard-Frost …
AI in the Gray: Exploring Moderation Policies in Dialogic Large Language Models vs. Human Answers in Controversial Topics
🔬 Research Summary by Vahid Ghafouri, a Ph.D. student in Telematics at IMDEA Networks Institute, working on applying NLP to measure online polarization and radicalization. [Original paper by Vahid …