🔬 Research Summary by Rohith Kuditipudi, a third-year Ph.D. student at Stanford University advised by John Duchi and Percy Liang. [Original paper by Rohith Kuditipudi, John Thickstun, Tatsunori Hashimoto, and … [Read more...] about Robust Distortion-free Watermarks for Language Models
Bias Propagation in Federated Learning
🔬 Research Summary by Hongyan Chang, a sixth-year Ph.D. student at the National University of Singapore, whose research focuses on algorithmic fairness and privacy, particularly their intersection, and who is also invested in advancing … [Read more...] about Bias Propagation in Federated Learning
Balancing Transparency and Risk: The Security and Privacy Risks of Open-Source Machine Learning Models
🔬 Research Summary by Dominik Hintersdorf & Lukas Struppek. Dominik & Lukas are both Ph.D. students at the Technical University of Darmstadt, researching the security and privacy of deep learning … [Read more...] about Balancing Transparency and Risk: The Security and Privacy Risks of Open-Source Machine Learning Models
Faith and Fate: Limits of Transformers on Compositionality
🔬 Research Summary by Nouha Dziri, a research scientist at the Allen Institute for AI working with Yejin Choi and the Mosaic team on understanding the inner workings of language models. [Original paper by Nouha … [Read more...] about Faith and Fate: Limits of Transformers on Compositionality
Mind your Language (Model): Fact-Checking LLMs and their Role in NLP Research and Practice
🔬 Research Summary by Alexandra Sasha Luccioni and Anna Rogers. Dr. Sasha Luccioni is a Research Scientist and Climate Lead at Hugging Face; her work focuses on better understanding the societal and environmental … [Read more...] about Mind your Language (Model): Fact-Checking LLMs and their Role in NLP Research and Practice