✍️ Column by Sun Gyoo Kang, Lawyer. Disclaimer: The views expressed in this article are solely my own and do not reflect my employer's opinions, beliefs, or positions. Any opinions or information in this article … [Read more...] about Does diversity really go well with Large Language Models?
Core Principles of Responsible AI
Open and Linked Data Model for Carbon Footprint Scenarios
🔬 Research Summary by Boris Ruf, an AI researcher at AXA, focusing on algorithmic fairness and digital sustainability. [Original paper by Boris Ruf and Marcin Detyniecki] Overview: Measuring the carbon … [Read more...] about Open and Linked Data Model for Carbon Footprint Scenarios
A Sequentially Fair Mechanism for Multiple Sensitive Attributes
🔬 Research Summary by François Hu & Philipp Ratz. François Hu is a postdoctoral researcher in statistical learning at UdeM in Montreal. Philipp Ratz is a PhD student at UQAM in Montreal. [Original … [Read more...] about A Sequentially Fair Mechanism for Multiple Sensitive Attributes
Towards User-Guided Actionable Recourse
🔬 Research Summary by Jayanth Yetukuri, a final-year Ph.D. student at UCSC advised by Professor Yang Liu; his research focuses on improving the trustworthiness of machine learning models. [Original paper … [Read more...] about Towards User-Guided Actionable Recourse
LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI’s ChatGPT Plugins
🔬 Research Summary by Umar Iqbal, an assistant professor at Washington University in St. Louis researching computer security and privacy. [Original paper by Umar Iqbal (Washington University in St. Louis), … [Read more...] about LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI’s ChatGPT Plugins