Montreal AI Ethics Institute
Democratizing AI ethics literacy


Exploiting The Right: Inferring Ideological Alignment in Online Influence Campaigns Using Shared Images

May 28, 2023

🔬 Research Summary by Amogh Joshi, a high school researcher at the Inf.Eco Lab at the iSchool of the University of Maryland.

[Original paper by Amogh Joshi and Cody Buntain]


Overview: The US Department of Justice and others have alleged foreign interference in US politics through campaigns of malevolent online influence. Our recent work assesses the visual media shared by disinformation accounts from four of these online influence campaigns: Iran, Venezuela, Russia, and China. Our models demonstrate consistencies in the types of images these campaigns share, especially as they pertain to political ideologies, with each campaign tending toward conservative imagery.


Introduction

Investigations into foreign influence campaigns show that these efforts span various online platforms and modalities, from text to images. However, most research in this domain focuses on the text shared in these campaigns, leaving open questions about the role and use of media in manipulating online audiences.

One such question is whether inauthentic accounts present a consistent persona across these modalities: creating media may be more time- and resource-intensive than sharing links to news and amplifying other accounts, yet evidence shows multimedia drives more engagement than text sharing. How inauthentic actors use these modalities affects both how easily they can be detected and the quality of the online information ecosystem.

Key Insights

Identifying Specific Image Types Shared in a Political Context

Images can generally be separated into several types: for example, by their primary subjects, such as humans, objects, and landscapes, or by the circumstance or event in which they are shared. To understand imagery in a political context, we develop a similar categorization of images shared by US politicians.

Images are a unique modality in that they can contain many features. To extract the most important features from the images in our dataset, we use a pre-trained deep learning model (ResNet50) to create dense embeddings that characterize these images. ResNet50 was originally trained on the ImageNet image-classification dataset, whose categories span everyday objects to animals.

These embeddings are then used to train a model to cluster these images into eight types, which we broadly describe as belonging to the following categories: text-based documents (as in cluster 0), infographics and advertisements (as in clusters 6 and 7), Americana-style patriotic imagery (as in cluster 1), and various types of images featuring people (in clusters 2, 3, 4, and 5). Further analysis of these types can be found in our work at https://arxiv.org/abs/2110.01183.
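The clustering step described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the random array stands in for real ResNet50 embeddings (which would be 2048-dimensional average-pooled features, one row per image), and the dataset size is arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for ResNet50 embeddings: in practice these would be
# 2048-dimensional features extracted from each image.
embeddings = rng.normal(size=(500, 2048))

# Cluster the images into k=8 types, mirroring the paper's eight clusters.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(embeddings)
labels = kmeans.labels_  # cluster id (0-7) assigned to each image
```

Each image is then described by the cluster it falls into, and each account by the distribution of its images across the eight clusters.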

Eight clusters of images, using the ResNet50 pre-trained model. 

Correlating Imagery With Ideology

After identifying unique imagery types, we next determine whether a correlation exists between these image types and political ideology. To this end, we fit a linear regression model to the proportions of images each US congressperson shares across the eight clusters, along with their corresponding political ideologies, using well-established ideology measures. The regression reveals significant correlations between certain types of imagery and ideology: document-style imagery, infographics, and photos of groups of people correlate with more liberal congresspeople, while patriotic and Americana-style imagery is associated with more conservative ones.
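The regression setup above can be sketched as follows. The data here is synthetic: the Dirichlet draws stand in for real per-politician cluster proportions, and the coefficient vector and ideology scores are hypothetical placeholders, not the paper's measured values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_politicians, n_clusters = 200, 8

# Each row: the fraction of one politician's images falling in each cluster.
proportions = rng.dirichlet(np.ones(n_clusters), size=n_politicians)

# Hypothetical ideology scores (negative = more liberal, positive = more
# conservative), generated from made-up per-cluster coefficients plus noise.
true_coefs = np.array([-0.8, 0.9, 0.1, -0.3, 0.2, 0.0, -0.5, -0.4])
ideology = proportions @ true_coefs + rng.normal(scale=0.05, size=n_politicians)

model = LinearRegression().fit(proportions, ideology)
# The sign of each fitted coefficient indicates whether sharing more of that
# image type is associated with a more conservative (+) or liberal (-) score.
```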

With these results demonstrating a significant connection between visual presentation and ideological position, we next use these methods to assess the ideology of images used by foreign influence campaigns, extrapolating our conclusions about images shared in US political contexts.

Images shared by influence campaigns with predicted conservative (red-border) and liberal (blue-border) ideologies. The flag in the top-right corner represents the country where the account originated.

We develop a random forest regression model using the image types we have collected: for each influence campaign account, we sample its images, extract features, and obtain the distribution of its images across the clusters. Since this model collapses the input features to a small set of types, we also use a model trained directly on the raw embeddings as a consistency check.
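A minimal sketch of the random forest step: train on per-politician cluster distributions and ideology scores, then predict a score for an influence-campaign account from its own image distribution. As before, all values here are synthetic placeholders, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Synthetic training data: per-politician cluster distributions (rows sum
# to 1) paired with hypothetical ideology scores.
X_train = rng.dirichlet(np.ones(8), size=200)
y_train = X_train @ np.array([-0.8, 0.9, 0.1, -0.3, 0.2, 0.0, -0.5, -0.4])

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# For one influence-campaign account: compute the share of its sampled
# images falling in each cluster, then predict an ideology score.
account_distribution = rng.dirichlet(np.ones(8), size=1)
predicted_ideology = rf.predict(account_distribution)
```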

Influence Campaign Ideology Predictions

For this work, we present a slightly refined version of our raw-embeddings model: we use embeddings from EfficientNetB0, a newer image characterization model, and we retain only the image types that correlate with political ideology (i.e., we discard images from clusters with no significant ideological correlation), as determined by our k=8 clustering model. The model reveals that each of the four influence campaigns leans moderately conservative, with most accounts, especially in the Venezuelan and Russian campaigns, exhibiting a conservative ideology.

Between The Lines

This research demonstrates a correlation between the types of imagery shared by foreign influence campaigns and a conservative ideology. Our approach shows the viability of using image characterizations alongside raw features when assessing imagery in a political context.

A notable limitation of a model-based feature extraction approach, however, is modeling bias toward certain ideological positions. The off-the-shelf, pre-trained models used here were trained in separate, apolitical contexts (i.e., image classification), suggesting the need for models trained in political contexts for a more suitable analysis of political discourse. Furthermore, additional analysis shows that our regression models, though performing well overall, underperform for conservative politicians relative to liberal ones. This effect may be attributed to the reduced diversity we observe in the imagery shared by conservative accounts.

This work suggests that imagery is increasingly relevant when assessing political influence campaign accounts. Prior work has suggested that Russian disinformation accounts are ideologically diverse in the text-based news they share. In turn, our image-oriented work provides new insights into coordination axes and further suggests that accounts may not necessarily present consistent ideological positions across different modalities, e.g., a caption and an image may have different meanings in different contexts.


  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.