Responsible Generative AI: A Reference Architecture for Designing Foundation Model-based Agents
🔬 Research Summary by Dr. Qinghua Lu, leader of the Responsible AI science team at CSIRO's Data61 and winner of the 2023 APAC Women in AI Trailblazer Award. [Original paper by Qinghua Lu, Liming …
Safety and Security
Beyond Empirical Windowing: An Attention-Based Approach for Trust Prediction in Autonomous Vehicles
🔬 Research Summary by Zhaobo Zheng, a scientist at Honda Research Institute USA, Inc. [Original paper by Minxue Niu, Zhaobo Zheng, Kumar Akash, and Teruhisa Misu] Overview: Trust in autonomous driving is …
Exploiting Large Language Models (LLMs) through Deception Techniques and Persuasion Principles
🔬 Research Summary by Sonali Singh, a Ph.D. student at Texas Tech University working on large language models (LLMs). [Original paper by Sonali Singh, Faranak Abri, and Akbar Siami Namin] Overview: This paper …
Deployment corrections: An incident response framework for frontier AI models
🔬 Research Summary by Joe O’Brien, an Associate Researcher at the Institute for AI Policy and Strategy, focusing on corporate governance and accountability around the development and deployment of frontier AI …
Risky Analysis: Assessing and Improving AI Governance Tools
🔬 Research Summary by Kate Kaye, a researcher, author, award-winning journalist, and deputy director of the World Privacy Forum, a nonprofit, non-partisan, public-interest research group. Kate is a member of the OECD.AI …