🔬 Research Summary by Shangbin Feng, Chan Young Park, and Yulia Tsvetkov. Shangbin Feng is a Ph.D. student at the University of Washington. Chan Young Park is a Ph.D. student at Carnegie Mellon University, studying …
Fairness
From OECD to India: Exploring cross-cultural differences in perceived trust, responsibility and reliance of AI and human experts
🔬 Research Summary by Vishakha Agrawal, an independent researcher interested in human-AI collaboration, participatory AI, and AI safety. [Original paper by Vishakha Agrawal, Serhiy Kandul, Markus Kneer, and Markus …
Demystifying Local and Global Fairness Trade-offs in Federated Learning Using Partial Information Decomposition
🔬 Research Summary by Faisal Hamman, a Ph.D. student at the University of Maryland, College Park. Faisal’s research focuses on Fairness, Explainability, and Privacy in Machine Learning, where he brings novel foundational …
The path toward equal performance in medical machine learning
🔬 Research Summary by Eike Petersen, a postdoctoral researcher at the Technical University of Denmark (DTU), working on fair, responsible, and robust machine learning for medicine. [Original paper by Eike …
The Unequal Opportunities of Large Language Models: Revealing Demographic Bias through Job Recommendations
🔬 Research Summary by Abel Salinas and Parth Vipul Shah. Abel is a second-year Ph.D. student at the University of Southern California. Parth is a second-year master’s student at the University of Southern …