🔬 Research Summary by Faisal Hamman, a Ph.D. student at the University of Maryland, College Park. Faisal’s research focuses on Fairness, Explainability, and Privacy in Machine Learning, where he brings novel foundational …
Open-source provisions for large models in the AI Act
🔬 Research Summary by Harry Law and Sebastien A. Krier. Harry Law is an ethics and policy researcher at Google DeepMind, a PhD candidate at the University of Cambridge, and a postgraduate fellow at the Leverhulme …
FeedbackLogs: Recording and Incorporating Stakeholder Feedback into Machine Learning Pipelines
🔬 Research Summary by Matthew Barker, a recent graduate of the University of Cambridge whose research focuses on explainable AI and human-machine teams. [Original paper by Matthew Barker, Emma Kallina, …
On the Challenges of Using Black-Box APIs for Toxicity Evaluation in Research
🔬 Research Summary by Luiza Pozzobon, a Research Scholar at Cohere For AI, where she currently researches model safety. She is also a master’s student at the University of Campinas, Brazil. [Original paper by Luiza …
On the Creativity of Large Language Models
🔬 Research Summary by Giorgio Franceschelli, a second-year Ph.D. student at the University of Bologna working on Generative Artificial Intelligence, Reinforcement Learning, and creativity. [Original paper by …