Montreal AI Ethics Institute

Democratizing AI ethics literacy

Technical Methods

Can Large Language Models Provide Security & Privacy Advice? Measuring the Ability of LLMs to Refute Misconceptions

January 18, 2024

🔬 Research Summary by Arjun Arunasalam, a 4th-year Computer Science Ph.D. student at Purdue University researching security, privacy, and trust on online platforms through a human-centered lens. [Original paper by … [Read more...] about Can Large Language Models Provide Security & Privacy Advice? Measuring the Ability of LLMs to Refute Misconceptions

Writer-Defined AI Personas for On-Demand Feedback Generation

January 18, 2024

🔬 Research Summary by Karim Benharrak, a first-year CS PhD student at the University of Texas at Austin, where he designs, develops, and evaluates interactive AI systems to unlock the collaborative potential of Human-AI … [Read more...] about Writer-Defined AI Personas for On-Demand Feedback Generation

Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings.

January 14, 2024

🔬 Research Summary by Stephen Fitz, an Artificial Intelligence scientist working in the areas of Neural Networks, Representation Learning, and Computational Linguistics. [Original paper by Stephen … [Read more...] about Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings.

Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models

January 14, 2024

🔬 Research Summary by Leyang Cui, a senior researcher at Tencent AI Lab. [Original paper by Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, … [Read more...] about Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models

Experimenting with Zero-Knowledge Proofs of Training

January 1, 2024

🔬 Research Summary by Guru Vamsi Policharla, a computer science PhD student at UC Berkeley. [Original paper by Sanjam Garg, Aarushi Goel, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Guru-Vamsi … [Read more...] about Experimenting with Zero-Knowledge Proofs of Training




About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.


  • © MONTREAL AI ETHICS INSTITUTE. All rights reserved 2024.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.