Montreal AI Ethics Institute

Democratizing AI ethics literacy


Building Bridges: Generative Artworks to Explore AI Ethics

July 21, 2021

🔬 Research summary by Ramya Srinivasan, AI Researcher at Fujitsu Research of America Inc.

[Original paper by Ramya Srinivasan and Devi Parikh]


Overview: The paper outlines ways in which generative artworks could help narrow the communication gaps between different stakeholders in the AI pipeline. In particular, the authors argue that generative artworks could help surface different ethical perspectives, highlight mismatches in the AI pipeline, and aid in visualizing counterfactual scenarios and non-western ethical perspectives.


Introduction

A picture is worth a thousand words!

Indeed, visuals are extremely effective in conveying complex concepts in an accessible manner: they transcend language barriers, stimulate engagement, trigger critical thinking, and leave lasting imprints in the minds of observers. Backed by this understanding, the authors posit that generative artworks (i.e., artworks created by AI systems) could come in handy in educating AI scientists about potential pitfalls in the design, development, and deployment of AI systems. To substantiate their argument, the authors lay out four potential pathways through which generative artworks could be leveraged to educate AI scientists about AI ethics, namely: 1) visualizations of different ethical viewpoints, 2) visualizations of mismatches in the AI pipeline, 3) visualizations of counterfactual scenarios, and 4) visualizations of non-western ethical perspectives.

Key Insights

Below is a brief description of each of the four pathways through which generative artworks could aid in enhancing AI ethics.

Visualizations of different ethical perspectives: Different ethical theories emphasize different principles in decision making, and can thus shed light on varying viewpoints relevant in a given context. For example, in utilitarian ethics, the emphasis is on maximizing the well-being of all stakeholders, which is not necessarily the case in deontological ethics, where the emphasis is on following laws and regulations. Thus, even within a single problem setting, there can be diverse viewpoints about what is right, fair, just, or appropriate. To enhance AI ethics, it thus becomes important to educate AI researchers and developers about these diverse viewpoints and thereby aid reflexive design. Generative artworks could serve as powerful visualization tools to surface such diverse perspectives. For example, through generative artworks it may be possible to visualize the compounded adverse effects of an AI decision on an individual’s life, reflecting a consequentialist ethical perspective.

Visualizations of mismatches in the AI pipeline: Computational systems involve quantitatively modeling abstract concepts, or constructs, which may or may not be observable. Furthermore, there may be unobservable factors that affect the constructs themselves. Consider, for example, a construct such as “skill” or “ability”, which is relevant across many applications such as hiring and admissions. These constructs can be influenced both by innate potential specific to the individual and by other factors such as socio-economic status. Thus, a mismatch can be introduced even before a construct is measured. Generative artworks could aid in visualizing such mismatches. For example, it may be possible to highlight differences in the measurement of similar constructs, thereby aiding AI researchers and developers in understanding system behavior. Consider an AI-based hiring use case. Suppose one of the features used in making the decision measures the social skills of the candidate. In this regard, one might expect the constructs “self-esteem” and “confidence” to be related. Visualizations of the AI system’s behavior under different scenarios could reveal whether it treats these constructs similarly, i.e., whether it exhibits “convergent validity”, which refers to the degree to which two measures of constructs that theoretically should be related are in fact related.

Visualizations of counterfactuals: Generative artworks could also aid in visualizing counterfactual situations, which in turn can benefit reflexive design by fostering empathy. Counterfactual thinking can engender empathy by enabling one to see situations through another person’s eyes. Thus, situations that may be irrelevant in one person’s context but relevant in another’s can be understood via such counterfactual visualizations. Generative artworks could be used as tools to visualize the consequences of AI decisions so that AI researchers and developers, who may not themselves be affected by a decision, can empathize with the impacted population and redesign their systems for the better.
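The counterfactual idea can be made concrete with a toy probe: change one attribute of an input and compare the model's decisions. The decision rule and data below are entirely hypothetical; in a real study one would probe a trained model, and a generative artwork would render the diverging outcomes.

```python
# Sketch of counterfactual probing: alter one attribute of an applicant and
# compare outcomes. The rule and data are hypothetical illustrations.

def toy_loan_model(applicant: dict) -> str:
    """A deliberately problematic toy rule that penalizes one zip code."""
    score = applicant["income"] / 1000
    if applicant["zip_code"] == "10455":
        score -= 20
    return "approved" if score >= 50 else "denied"

applicant = {"income": 65_000, "zip_code": "10455"}
counterfactual = {**applicant, "zip_code": "10001"}  # change only one attribute

print(toy_loan_model(applicant))       # -> denied
print(toy_loan_model(counterfactual))  # -> approved
```

Diverging outcomes on otherwise identical inputs are exactly the kind of scenario a counterfactual visualization could surface for researchers who are not themselves affected by the decision.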

Visualizations of non-western perspectives: Generative artworks can serve as visualizations of the social, cultural, and economic differences that exist across geographies. For example, through generative artworks it may be possible to highlight different viewpoints regarding fairness based on local context, such as social practices, religious beliefs, and economic status. By training generative models on data from across cultures and examining the latent visualizations, it might also be possible to see how everyday practices (e.g., dress, food) and objects (e.g., furniture, houses) vary across cultures, thereby shedding light on local contexts that can be valuable in AI system design.

Between the lines

The ideas postulated in the paper offer promise in that they can open up new avenues for reflexive design and facilitate introspection. Generative artworks could be especially beneficial in highlighting counterfactual scenarios: given that such visualizations may not exist in the real world, they could shed light on new and latent perspectives. That said, for surfacing non-western perspectives and viewpoints grounded in various ethical theories, existing artworks could also be used. Also, as the authors acknowledge, generative artworks could themselves be biased, so it is necessary to employ these tools mindfully. The paper does not, however, discuss the ecological costs of generative artworks. Given that generating artworks requires significant computational resources, there is a tradeoff between ecological cost and educational benefit that calls for further analysis.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.