🔬 Research Summary by Andrew W. Reddie, Sarah Shoker, and Leah Walker. Andrew W. Reddie is an Associate Research Professor at the University
Self-Consuming Generative Models Go MAD
🔬 Research Summary by Josue Casco-Rodriguez and Sina Alemohammad. Josue is a 2nd-year PhD student at Rice University. He is interested in
From OECD to India: Exploring cross-cultural differences in perceived trust, responsibility and reliance of AI and human experts
🔬 Research Summary by Vishakha Agrawal, an independent researcher interested in human-AI collaboration, participatory AI and AI
Demystifying Local and Global Fairness Trade-offs in Federated Learning Using Partial Information Decomposition
🔬 Research Summary by Faisal Hamman, a Ph.D. student at the University of Maryland, College Park. Faisal's research focuses on Fairness,
Acceptable Risks in Europe's Proposed AI Act: Reasonableness and Other Principles for Deciding How Much Risk Management Is Enough
🔬 Research Summary by Dr. Henry Fraser, a Research Fellow in Law, Accountability, and Data Science at the Centre of Excellence for Automated
Open-source provisions for large models in the AI Act
🔬 Research Summary by Harry Law and Sebastien A. Krier. Harry Law is an ethics and policy researcher at Google DeepMind, a PhD candidate at
FeedbackLogs: Recording and Incorporating Stakeholder Feedback into Machine Learning Pipelines
🔬 Research Summary by Matthew Barker, a recent graduate from the University of Cambridge, whose research focuses on explainable AI and human-machine
The path toward equal performance in medical machine learning
🔬 Research Summary by Eike Petersen, a postdoctoral researcher at the Technical University of Denmark (DTU), working on fair, responsible, and robust
Adding Structure to AI Harm
🔬 Research Summary by Mia Hoffmann and Heather Frase. Dr. Heather Frase is a Senior Fellow at the Center for Security and Emerging
On the Challenges of Using Black-Box APIs for Toxicity Evaluation in Research
🔬 Research Summary by Luiza Pozzobon, a Research Scholar at Cohere For AI where she currently researches model safety. She's also a master's student
On the Creativity of Large Language Models
🔬 Research Summary by Giorgio Franceschelli, a second-year Ph.D. student at the University of Bologna working on Generative Artificial Intelligence,
Supporting Human-LLM Collaboration in Auditing LLMs with LLMs
🔬 Research Summary by Charvi Rastogi, a Ph.D. student in Machine Learning at Carnegie Mellon University. She is deeply passionate about addressing