🔬 Research Summary by Rohith Kuditipudi, a third-year Ph.D. student at Stanford University advised by John Duchi and Percy Liang. [Original paper by Rohith Kuditipudi, John Thickstun, Tatsunori Hashimoto, and …]
Safety and Security
GenAI Against Humanity: Nefarious Applications of Generative Artificial Intelligence and Large Language Models
🔬 Research Summary by Emilio Ferrara, a professor in the Thomas Lord Department of Computer Science at the University of Southern California. [Original paper by Emilio Ferrara] Overview: This paper delves …
A Case for AI Safety via Law
🔬 Research Summary by Jeff Johnston, an independent researcher working on envisioning positive futures, AI safety and alignment via law, and Piaget-inspired constructivist approaches to artificial general …
AI Deception: A Survey of Examples, Risks, and Potential Solutions
🔬 Research Summary by Dr. Peter S. Park and Aidan O’Gara. Dr. Peter S. Park is an MIT AI Existential Safety Postdoctoral Fellow and the Director of StakeOut.AI. Aidan O’Gara is a research engineer at the …
AI and Great Power Competition: Implications for National Security
🔬 Research Summary by Arun Teja Polcumpally, a Technology Policy Analyst at the Wadhwani Institute of Technology Policy (WITP), New Delhi, India. [Original paper by Eric Schmidt] Overview: This research …