🔬 Research Summary by Andy Zou, a second-year PhD student at CMU, advised by Zico Kolter and Matt Fredrikson. He is also a cofounder of the Center for AI Safety (safe.ai). [Original paper by Andy Zou, Zifan … [Read more...] about Universal and Transferable Adversarial Attacks on Aligned Language Models
LLMCarbon: Modeling the end-to-end Carbon Footprint of Large Language Models
🔬 Research Summary by Ahmad Faiz, a Master's student in Data Science at Indiana University Bloomington. [Original paper by Ahmad Faiz, Sotaro Kaneda, Ruhan Wang, Rita Osi, Parteek Sharma, Fan Chen, and Lei … [Read more...] about LLMCarbon: Modeling the end-to-end Carbon Footprint of Large Language Models
Engaging the Public in AI’s Journey: Lessons from the UK AI Safety Summit on Standards, Policy, and Contextual Awareness
✍️ Column by Connor Wright, our Partnerships Manager. Overview: The Montreal AI Ethics Institute is a partner organization of the Partnership on AI (PAI). Connor attended their UK AI … [Read more...] about Engaging the Public in AI’s Journey: Lessons from the UK AI Safety Summit on Standards, Policy, and Contextual Awareness
Distributed Governance: a Principal-Agent Approach to Data Governance – Part 1: Background & Core Definitions
🔬 Research Summary by Dr. Philippe Page, who trained as a theoretical physicist and built a career in international banking before focusing his energy on the next generation of the internet as a Trustee of the Human Colossus … [Read more...] about Distributed Governance: a Principal-Agent Approach to Data Governance – Part 1: Background & Core Definitions
Collect, Measure, Repeat: Reliability Factors for Responsible AI Data Collection
🔬 Research Summary by Oana Inel, a Postdoctoral Researcher at the University of Zurich, where she works on the responsible and reliable use of data and investigates the use of explanations to provide transparency for … [Read more...] about Collect, Measure, Repeat: Reliability Factors for Responsible AI Data Collection