Montreal AI Ethics Institute

Democratizing AI ethics literacy

AI Neutrality in the Spotlight: ChatGPT's Political Biases Revisited

December 21, 2023

🔬 Research Summary by Kazuhiro Takemoto, Professor at Kyushu Institute of Technology.

[Original paper by Sasuke Fujimoto and Kazuhiro Takemoto]


Overview: ChatGPT, OpenAI's renowned conversational AI, has not escaped the intense gaze of researchers and critics alike, particularly regarding potential political biases. Delving deep into this complex landscape, this study examines ChatGPT's biases, meticulously analyzing the system's responses in different languages and with varied user settings. The results paint a comprehensive picture of the AI's tendencies, revealing layers of intricacy and challenge.


Introduction

Imagine a world where your AI assistant, which you rely on for daily tasks and information, subtly pushes you toward a particular political ideology. This is the contentious environment that ChatGPT is believed to have created. Recognizing such inherent biases is crucial in an era dominated by AI's growing influence. However, a renewed assessment is essential, given OpenAI's dedication to minimizing biases and ChatGPT's ongoing development. This study tackles the urgent question: does ChatGPT genuinely exhibit political bias, especially the left-libertarian lean that some suggest? To address this, the authors administered various political orientation tests in both English and Japanese and also adjusted gender and race settings to gauge the extent of potential biases. The insights gleaned offer a fresh lens, highlighting the complex interplay between AI, language, and political tendencies.

Key Insights

The AI Political Landscape

ChatGPT, a technological marvel in the realm of conversational AI, hasn't been without its share of controversies. Chief among them are its purported political biases: previous academic studies have waved red flags, suggesting a pronounced left-libertarian orientation and casting a shadow on the AI's neutrality.

Why Reevaluate Now?

Several prior studies pointed out ChatGPT's political leanings. These findings set the stage for societal concerns, especially considering ChatGPT's widespread real-world applications. The potential for such biases to cause societal rifts, political polarization, and miscommunication is undeniable. OpenAI, the organization behind ChatGPT, has been vocal about recognizing and mitigating these biases. This ongoing commitment from OpenAI makes the reevaluation of ChatGPT's biases both timely and crucial.

Methodology Unveiled

The authors approached the question with a robust methodology. A snapshot of ChatGPT (gpt-3.5-turbo) from March 1, 2023, was subjected to a meticulously chosen series of political orientation tests: batteries of multiple-choice questions designed to gauge an individual's (or, in this case, an AI's) political leanings. The tests included the well-regarded IDRLabs political coordinates test, the Eysenck political test, and several others.
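
As a rough illustration of this setup, the sketch below shows how a single multiple-choice item might be posed to the dated ChatGPT snapshot through the OpenAI Chat Completions API. It is an assumption-laden reconstruction, not the authors' actual test harness: the statement, the answer scale, and the deterministic temperature setting are invented placeholders rather than items or settings from the IDRLabs or Eysenck tests.

```python
# Minimal sketch (assumption): posing one multiple-choice test item to the
# March 2023 ChatGPT snapshot via the OpenAI Chat Completions API.
# The statement and answer scale below are illustrative placeholders,
# not items from the actual IDRLabs or Eysenck tests.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STATEMENT = "The government should play a larger role in regulating markets."
CHOICES = ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]

prompt = (
    f"Statement: {STATEMENT}\n"
    "Respond with exactly one of the following options and nothing else:\n"
    + "\n".join(f"- {c}" for c in CHOICES)
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo-0301",  # dated snapshot referenced in the summary
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # assumption: deterministic decoding for repeatability
)

print(response.choices[0].message.content.strip())
```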

Is Political Bias Truly Addressed?

The analysis revealed that ChatGPT demonstrates less political bias than had previously been assumed. This is significant, as prior evaluations have largely shaped the narrative around ChatGPT's biases, and the current study suggests a departure from that narrative. While not entirely devoid of biases, ChatGPT's responses, especially in English, often veered toward neutral or were more balanced across the political spectrum. This might indicate OpenAI's efforts in refining and improving the model, or perhaps the evolving nature of the datasets it's trained on.

Exploring Factors Beyond Politics: Language, Gender, and Race

Despite the aforementioned progress, there remains room for caution. In addition to the political orientation tests, the study investigated how language, gender, and race settings influenced the AI鈥檚 responses. Notably, discrepancies emerged when comparing the AI鈥檚 reactions in English to those in Japanese, emphasizing the intricate interplay between language and perceived biases. Adjustments to gender and race settings further illuminated the subtle ways ChatGPT responds to varied prompts. In particular, prompts in Japanese revealed more pronounced biases, highlighting the importance of understanding the intricate dynamics of these variables.
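
One plausible way to vary such settings is to encode them in the system message, as in the sketch below. The persona wording, the attribute values, and the helper name ask_with_settings are all illustrative assumptions; the paper's actual prompt design may differ.

```python
# Sketch (assumption): conditioning the same test item on language and
# user-attribute settings through the system message. The persona wording
# and attribute values are guesses, not the study's own prompts.
from openai import OpenAI

client = OpenAI()

ITEM = (
    "Statement: The government should play a larger role in regulating markets.\n"
    "Respond with exactly one of: Strongly agree, Agree, Disagree, Strongly disagree."
)

def ask_with_settings(language: str, gender: str, race: str) -> str:
    system = (
        f"Answer in {language}. "
        f"Respond as if you were a {race} {gender}."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0301",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": ITEM},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# Compare the same item across settings.
for language in ("English", "Japanese"):
    for gender in ("man", "woman"):
        print(language, gender, "->", ask_with_settings(language, gender, race="Asian"))
```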

Consistency, Ambiguity, and The Challenges Ahead

On a positive note, ChatGPT demonstrated a commendable level of consistency across multiple iterations of the same test. But this consistency was punctuated by occasional ambiguous or invalid answers, especially to politically sensitive questions. This behavior could reflect real-world polarization around these topics or indicate areas of improvement in AI processing.

Furthermore, certain questions, especially those dealing with information transparency or medical ethics, seemed to trip up the AI, leading to inconsistent or even invalid responses. This poses questions about the AI's handling of controversial or ethically charged topics.
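
Measuring that consistency amounts to repeating each item and checking whether the reply maps onto a permitted answer option. The fragment below is one plausible scoring loop, with a generic ask callable standing in for whatever function queries the model; it is not the authors' evaluation code.

```python
# Sketch (assumption): repeating one item several times and flagging replies
# that do not match any permitted option as invalid/ambiguous. The `ask`
# callable is a stand-in for whatever function actually queries the model.
from collections import Counter
from typing import Callable

CHOICES = ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]

def tally_responses(ask: Callable[[], str], n_trials: int = 10) -> Counter:
    """Ask the same item n_trials times; count valid options and invalid replies."""
    counts: Counter = Counter()
    for _ in range(n_trials):
        reply = ask().strip().rstrip(".")
        matched = next((c for c in CHOICES if reply.lower() == c.lower()), None)
        counts[matched if matched is not None else "invalid/ambiguous"] += 1
    return counts

# Example with a canned responder; in practice `ask` would query the model.
print(tally_responses(lambda: "Agree."))
```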

Ethical Concerns and Potential Misuse

The varying AI responses based on language and user settings spotlight ethical concerns. The potential misuse of these nuances, especially by adversaries aiming to manipulate outcomes, is a real threat. Such exploitation could lead to misinformation or reinforce harmful stereotypes.

While ChatGPT's capabilities are undeniably impressive, the journey to ensure its unbiased operation is fraught with challenges. The interplay of language, user settings, and inherent biases in training data adds layers of complexity to the issue. As AI continues to permeate our daily lives, understanding and addressing these biases becomes paramount.

Between the lines

This study's findings underscore the profound implications of AI biases in our interconnected world. While ChatGPT's reduced political bias in English is a promising step forward, the discrepancies in Japanese interactions are a stark reminder of the work that remains. It begs the question: are we inching closer to true AI neutrality, or merely scratching the surface? The AI's difficulties with ethically charged questions suggest a deeper, perhaps philosophical, limitation. Can AI ever truly grasp the nuances of human morality, or will it perpetually mirror societal divisions? The observed inconsistencies present fertile ground for further exploration. Future studies should delve deeper into AI behavior across languages and the impact of cultural nuances. Additionally, a focus on improving AI's handling of contentious issues might pave the way for more reliable, universally accepted AI systems. As we integrate AI further into our lives, the quest to understand and refine its intricacies intensifies.

