Bias and Fairness in Large Language Models: A Survey
🔬 Research Summary by Isabel O. Gallegos, a Ph.D. student in Computer Science at Stanford University, researching algorithmic fairness to interrogate the role of artificial intelligence in equitable …
Benchmark Dataset Dynamics, Bias and Privacy Challenges in Voice Biometrics Research
🔬 Research Summary by Anna Leschanowsky, a research associate at Fraunhofer IIS in Germany working at the intersection of voice technology, human-machine interaction and privacy. [Original paper by Casandra …
From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models
🔬 Research Summary by Shangbin Feng, Chan Young Park, and Yulia Tsvetkov. Shangbin Feng is a Ph.D. student at the University of Washington. Chan Young Park is a Ph.D. student at Carnegie Mellon University, studying …
The Unequal Opportunities of Large Language Models: Revealing Demographic Bias through Job Recommendations
🔬 Research Summary by Abel Salinas and Parth Vipul Shah. Abel is a second-year Ph.D. student at the University of Southern California. Parth is a second-year master’s student at the University of Southern …
A hunt for the Snark: Annotator Diversity in Data Practices
🔬 Research Summary by Ding Wang, a senior researcher from the Responsible AI Group in Google Research, specializing in responsible data practices with a specific focus on accounting for the human experience and …