🔬 Research Summary by Rohith Kuditipudi, a third year Ph.D. student at Stanford University advised by John Duchi and Percy Liang. [Original paper by Rohith Kuditipudi, John Thickstun, Tatsunori Hashimoto, and … [Read more...] about Robust Distortion-free Watermarks for Language Models
Core Principles of Responsible AI
Bias Propagation in Federated Learning
🔬 Research Summary by Hongyan Chang, a sixth-year Ph.D. student at the National University of Singapore who focuses on algorithmic fairness and privacy, particularly their intersection, and is also invested in advancing … [Read more...] about Bias Propagation in Federated Learning
GenAI Against Humanity: Nefarious Applications of Generative Artificial Intelligence and Large Language Models
🔬 Research Summary by Emilio Ferrara, a professor in the Thomas Lord Department of Computer Science at the University of Southern California. [Original paper by Emilio Ferrara] Overview: This paper delves … [Read more...] about GenAI Against Humanity: Nefarious Applications of Generative Artificial Intelligence and Large Language Models
A Case for AI Safety via Law
🔬 Research Summary by Jeff Johnston, an independent researcher working on envisioning positive futures, AI safety and alignment via law, and Piaget-inspired constructivist approaches to artificial general … [Read more...] about A Case for AI Safety via Law
Balancing Transparency and Risk: The Security and Privacy Risks of Open-Source Machine Learning Models
🔬 Research Summary by Dominik Hintersdorf & Lukas Struppek. Dominik & Lukas are both Ph.D. students at the Technical University of Darmstadt, researching the security and privacy of deep learning … [Read more...] about Balancing Transparency and Risk: The Security and Privacy Risks of Open-Source Machine Learning Models