🔬 Research Summary by Dasha Pruss, a postdoctoral fellow at the Berkman Klein Center for Internet & Society and the Embedded EthiCS program at Harvard University. Dasha’s research focuses on algorithmic …
International Institutions for Advanced AI
🔬 Research Summary by Lewis Ho, a researcher on Google DeepMind’s AGI Strategy and Governance Team. [Original paper by Lewis Ho, Joslyn Barnhart, Robert Trager, Yoshua Bengio, Miles Brundage, Allison Carnegie, …
Participation and Division of Labor in User-Driven Algorithm Audits: How Do Everyday Users Work Together to Surface Algorithmic Harms?
🔬 Research Summary by Sara Kingsley, a researcher at Carnegie Mellon University and an expert in A.I. system risk assessments, having built A.I. auditing tools and red-teamed multiple generative A.I. systems for …
Regulating AI to ensure Fundamental Human Rights: reflections from the Grand Challenge EU AI Act
✍️ Column by Jesse Dinneen, Olga Batura, Caecilia Zirn, Sascha Donner, Azad Abad, and Florian Loher. Photo credits: Darya Shramko. Overview: In this column, we report on our experience as one of the teams …
Never trust, always verify: a roadmap for Trustworthy AI?
🔬 Research Summary by Lionel Tidjon, PhD, Chief Scientist & Founder at CertKOR AI and Lecturer at Polytechnique Montreal. [Original paper by Lionel Tidjon and Foutse Khomh] Overview: Bringing AI …