🔬 Research Summary by Nathaniel Dennler and Queer in AI. Nathan is a Ph.D. candidate at the University of Southern California and a member of the Queer in AI organization; their personal work is in adapting robot … [Read more...] about Bound by the Bounty: Collaboratively Shaping Evaluation Processes for Queer AI Harms
Fairness
Transferring Fairness under Distribution Shifts via Fair Consistency Regularization
🔬 Research Summary by Bang An, a Ph.D. student at the University of Maryland, College Park, specializing in trustworthy machine learning. [Original paper by Bang An, Zora Che, Mucong Ding, and Furong …
Fairness Uncertainty Quantification: How certain are you that the model is fair?
🔬 Research Summary by Abhishek Roy, a post-doc at the Halıcıoğlu Data Science Institute, UC San Diego. [Original paper by Abhishek Roy and Prasant Mohapatra] Overview: Designing fair Machine Learning (ML) …
On the Impact of Machine Learning Randomness on Group Fairness
🔬 Research Summary by Prakhar Ganesh, an incoming Ph.D. student at the University of Montreal and Mila, interested in studying the learning dynamics of neural networks at the intersection of fairness, robustness, privacy, …
A hunt for the Snark: Annotator Diversity in Data Practices
🔬 Research Summary by Ding Wang, a senior researcher in the Responsible AI Group at Google Research, specializing in responsible data practices with a specific focus on accounting for the human experience and …