🔬 Research Summary by Sara Kingsley, a researcher at Carnegie Mellon University and an expert in AI system risk assessments who has built AI auditing tools and red-teamed multiple generative AI systems for … [Read more...] about Participation and Division of Labor in User-Driven Algorithm Audits: How Do Everyday Users Work together to Surface Algorithmic Harms?
“Customization is Key”: Four Characteristics of Textual Affordances for Accessible Data Visualization
🔬 Research Summary by Shuli Jones, a recent MIT MEng graduate in Computer Science and currently a software engineer at Google. [Original paper by Shuli Jones, Isabella Pedraza Pineros, Daniel Hajas, Jonathan Zong, and Arvind …
A Holistic Assessment of the Reliability of Machine Learning Systems
🔬 Research Summary by Anthony Corso, Ph.D., Executive Director of the Stanford Center for AI Safety, who studies the use of AI in high-stakes settings such as transportation and sustainability. [Original paper by …
Algorithms as Social-Ecological-Technological Systems: an Environmental Justice lens on Algorithmic Audits
🔬 Research Summary by Bogdana Rakova, a Senior Trustworthy AI Fellow at Mozilla Foundation and previously a research manager on a Responsible AI team in consulting, leading algorithmic auditing projects and working closely …
Never trust, always verify: a roadmap for Trustworthy AI?
🔬 Research Summary by Lionel Tidjon, PhD, Chief Scientist & Founder at CertKOR AI and Lecturer at Polytechnique Montreal. [Original paper by Lionel Tidjon and Foutse Khomh] Overview: Bringing AI …