
Promoting Bright Patterns

July 26, 2023

šŸ”¬ Research Summary by Hauke Sandhaus, a Ph.D. student in Information Science at Cornell Tech researching wicked design problems in Human-AI Interaction to create an ethical future of automation.

[Original paper by Hauke Sandhaus]


Overview: User experience designers face increasing scrutiny and criticism for creating harmful technologies, leading to pushback against unethical design practices. While clear-cut harmful practices such as dark patterns have received attention, trends toward automation, personalization, and recommendation present more ambiguous ethical challenges. To address potential harm in these “gray” instances, we propose the concept of “bright patterns” – persuasive design solutions that prioritize user goals and well-being over users’ immediate desires and business objectives.


Introduction

Have you ever considered the subtle ways that user interfaces shape your online behavior? Many of us are aware of ‘dark patterns’ — manipulative design techniques that favor business objectives over user needs. Think of ‘roach motels,’ where users are easily drawn into a service but find it hard to get out. 

But have you ever wondered about the other side of the coin? In my recent paper, “Promoting Bright Patterns,” I delve into a subtler, less explored, yet equally influential phenomenon: designs that prioritize user well-being and long-term goals. The digital world is not simply black and white; many services play a multifaceted role. Recommendation algorithms may guide us to enriching educational content or entertaining videos, but they might also trap us in misinformation rabbit holes or foster addictive behaviors.

This is where ‘bright patterns’ come in — acting as a ‘band-aid’ to mitigate such negative impacts. A classic example is the screen time limits or usage reminders implemented by platforms like TikTok. Although TikTok’s business model thrives on user engagement, these features are designed to keep users from overusing the service — a clear case of prioritizing users’ well-being over immediate business gains.

Key Insights

Defining Bright Patterns

The crux of our paper was to establish a working definition of ‘bright patterns’ based on several competing definitions of ‘dark patterns.’ Unlike their sinister cousins, bright patterns are defined as their antonym: user interface elements designed to promote behavior aligned with users’ genuine goals rather than their immediate desires or business objectives. While good design practices and the absence of dark patterns lay the foundation for ethical design, bright patterns go a step further. They actively prioritize users’ well-being and long-term satisfaction, even if it means resisting short-term business gains.

Bright Patterns in Action

To make this concept more tangible, we gathered a range of examples where bright patterns have been implemented effectively. For instance, consider how certain platforms offer users screen time limits or usage reminders. At first glance, this may seem at odds with a business model that thrives on user engagement. However, these features are designed to keep users from overusing the service — a clear example of bright patterns prioritizing users’ well-being over immediate business gains.
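As a concrete illustration, the sketch below shows how a usage-reminder bright pattern might be wired into application logic. It is a minimal, hypothetical example, not taken from the paper or from any platform’s actual implementation; the UsagePolicy and SessionState types, the threshold values, and the nudge text are all assumptions made for illustration.

```typescript
// Minimal sketch of a usage-reminder "bright pattern" (hypothetical example).
// The types, thresholds, and messages below are illustrative assumptions,
// not the paper's specification or any platform's real API.

interface UsagePolicy {
  dailyLimitMinutes: number;       // the user's own daily screen-time goal
  reminderIntervalMinutes: number; // how often to nudge once the goal is passed
}

interface SessionState {
  minutesUsedToday: number;
  minutesSinceLastReminder: number;
}

// Decide whether the interface should surface a well-being nudge.
// The nudge is informative and dismissible: it supports the user's own goal
// rather than blocking them, which is what keeps it on the "bright" side.
function shouldRemind(policy: UsagePolicy, state: SessionState): boolean {
  const overLimit = state.minutesUsedToday >= policy.dailyLimitMinutes;
  const dueForNudge =
    state.minutesSinceLastReminder >= policy.reminderIntervalMinutes;
  return overLimit && dueForNudge;
}

// Example: a user who set a 60-minute goal and has been scrolling for 75 minutes.
const policy: UsagePolicy = { dailyLimitMinutes: 60, reminderIntervalMinutes: 15 };
const state: SessionState = { minutesUsedToday: 75, minutesSinceLastReminder: 20 };

if (shouldRemind(policy, state)) {
  // In a real product this would render a dismissible in-app prompt.
  console.log("You have passed your daily screen-time goal. Take a break?");
}
```

The design choice that matters in this sketch is that the limit is set by, and can be changed by, the user, so the pattern serves the user’s stated goal rather than imposing the designer’s.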

The Bright Patterns Repository

Recognizing the need for a dedicated space to discuss and visualize the concept of bright patterns, we established a website: brightpatterns.org. Here, designers, researchers, and anyone interested in ethical design can find a growing repository of bright pattern instances and join the conversation around this user-centric design approach.

Towards Ethical Design Practices

Our exploration into the world of bright patterns is just beginning. The goal is to advocate for ethical design practices in our increasingly digital lives. While the fight against dark patterns continues, our work shines a light on the other side — the potential of design to be a force for good, a beacon guiding us toward a user-centric digital landscape.

Between the lines

The rise of bright patterns in user interface design is an interesting development that raises the question: Why are companies implementing these patterns? Are they earnestly dedicated to ethical practices or simply trying to avoid regulatory scrutiny? Or could their products be so densely populated with dark patterns that bright patterns act as a necessary counterbalance?

Another aspect worth examining is the somewhat paternalistic nature of these bright patterns. They ostensibly protect users, but should designers carry that responsibility? Or is it essential to allow users the freedom to navigate their own digital experiences?

Philosophical questions about when bright patterns are appropriate parallel ongoing discussions in AI ethics. Much of the tuning that makes AI models behave more ethically happens behind closed doors. Companies often don’t disclose these efforts, leading to claims of “woke” AI. Is it acceptable to prompt AI systems in certain ways, or even manipulate data, potentially favoring certain ideologies under the guise of ethical adjustments?

Finally, we need to contemplate the role of design interventions. Are we overestimating their impact or underestimating their potential? As design plays a crucial role in user experience, these are important considerations moving forward.

