🔬 Research Summary by Ismael Kherroubi Garcia, trained in business management and philosophy of the social sciences. He is the founder and CEO of
Humans are not Boltzmann Distributions: Challenges and Opportunities for Modelling Human Feedback and Interaction in Reinforcement Learning
🔬 Research Summary by David Lindner, a doctoral student at ETH Zurich working on reinforcement learning from human feedback. [Original paper
Beyond Bias and Compliance: Towards Individual Agency and Plurality of Ethics in AI
🔬 Research Summary by Megan Welle Brozek and Thomas Krendl Gilbert. Megan is the CEO and co-founder of daios, a deep tech AI ethics startup,
Towards Responsible AI in the Era of ChatGPT: A Reference Architecture for Designing Foundation Model based AI Systems
🔬 Research Summary by Dr Qinghua Lu, the team leader of the responsible AI science team at CSIRO's Data61. [Original paper by Qinghua Lu,
Democratising AI: Multiple Meanings, Goals, and Methods
🔬 Research Summary by Elizabeth Seger, PhD, a researcher at the Centre for the Governance of AI (GovAI) in Oxford, UK, investigating beneficial AI
Can ChatGPT replace a Spanish or philosophy tutor?
✍️ Column by Connor Wright, our Partnerships Manager. Overview: ChatGPT has been the language model on everyone's lips since its launch.
Aging with AI: Another Source of Bias?
✍️ Column by Marianna Ganapini and Myriam Bergamaschi. Dr. Marianna Ganapini is our Faculty Director and Assistant Professor in Philosophy at
Is ChatGPT for everyone? Seeing beyond the hype toward responsible use in education
✍️ Column by Marianna Ganapini, Pamela Lirio, and Andrea Pedeferri. Dr. Marianna Ganapini is our Faculty Director and Assistant Professor in
Can LLMs Enhance the Conversational AI Experience?
✍️ Column by Julia Anderson, a writer and conversational UX designer exploring how technology can make us better humans. Part of the ongoing Like
System Safety and Artificial Intelligence
🔬 Research Summary by Roel Dobbe, an Assistant Professor working at the intersection of engineering, design, and governance of data-driven and
A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning
🔬 Research Summary by Siobhan Mackenzie Hall, PhD student in the Oxford Neural Interfacing group at the University of Oxford. Siobhan is also a
A Hazard Analysis Framework for Code Synthesis Large Language Models
🔬 Research Summary by Heidy Khlaaf, an Engineering Director at Trail of Bits specializing in the evaluation, specification, and verification of