The Unequal Opportunities of Large Language Models: Revealing Demographic Bias through Job Recommendations
🔬 Research Summary by Abel Salinas and Parth Vipul Shah. Abel is a second-year Ph.D. student at the University of Southern California. Parth is a second-year master’s student at the University of Southern …
Fairness
Fair allocation of exposure in recommender systems
🔬 Research Summary by Virginie Do and Nicolas Usunier. Virginie Do is a former PhD student at Meta AI (Facebook AI Research) and PSL University. Nicolas Usunier is a research scientist at Meta AI (Facebook AI …
Ghosting the Machine: Judicial Resistance to a Recidivism Risk Assessment Instrument
🔬 Research Summary by Dasha Pruss, a postdoctoral fellow at the Berkman Klein Center for Internet & Society and the Embedded EthiCS program at Harvard University. Dasha’s research focuses on algorithmic …
Participation and Division of Labor in User-Driven Algorithm Audits: How Do Everyday Users Work together to Surface Algorithmic Harms?
🔬 Research Summary by Sara Kingsley, a researcher at Carnegie Mellon University and an expert in A.I. system risk assessments who has built A.I. auditing tools and red-teamed multiple generative A.I. systems for …
Algorithms as Social-Ecological-Technological Systems: an Environmental Justice lens on Algorithmic Audits
🔬 Research Summary by Bogdana Rakova, a Senior Trustworthy AI fellow at the Mozilla Foundation and previously a research manager on a Responsible AI consulting team, where she led algorithmic auditing projects and worked closely …