Does diversity really go well with Large Language Models?
✍️ Column by Sun Gyoo Kang, Lawyer. Disclaimer: The views expressed in this article are solely my own and do not reflect my employer's opinions, beliefs, or positions. Any opinions or information in this article …
Fairness
A Sequentially Fair Mechanism for Multiple Sensitive Attributes
🔬 Research Summary by François Hu & Philipp Ratz. François Hu is a postdoctoral researcher in statistical learning at UdeM in Montreal. Philipp Ratz is a PhD student at UQAM in Montreal. [Original …
Towards User-Guided Actionable Recourse
🔬 Research Summary by Jayanth Yetukuri, a final-year Ph.D. student at UCSC advised by Professor Yang Liu. His research focuses on improving the trustworthiness of machine learning models. [Original paper …
Bias Propagation in Federated Learning
🔬 Research Summary by Hongyan Chang, a sixth-year Ph.D. student at the National University of Singapore whose research focuses on algorithmic fairness and privacy, particularly their intersection, and who is also invested in advancing …
Intersectional Inquiry, on the Ground and in the Algorithm
🔬 Research Summary by Liam Magee, a digital and urban sociologist. Liam’s current work examines the interface between generative AI and human psychosocial experience. [Original paper by Shanthi Robertson, Liam …