Rethinking Gaming: The Ethical Work of Optimization in Web Search Engines (Research Summary)

February 7, 2021

🔬 Research summary contributed by Dr. Iga Kozlowska (@kozlowska_iga), a sociologist working on Microsoft’s Ethics & Society team where she’s tasked with guiding responsible AI innovation.

✍️ This is part 3 of the ongoing Sociology of AI Ethics series; read previous entries here.

[Link to original paper + authors at the bottom]


Overview: Through ethnographic research, Ziewitz examines the “ethical work” of search engine optimization (SEO) consultants in the UK. Search engine operators like Google publish guidelines on good and bad optimization techniques to dissuade users from “gaming the system” and to keep their platforms fair and profitable. Ziewitz concludes that when dealing with algorithmic systems that score and rank, users often find themselves in sites of moral ambiguity, navigating the grey space between “good” and “bad” behavior. Ziewitz argues that designers, engineers, and policymakers would do well to move away from the simplistic idea of gaming the system, which assumes good and bad users, and focus instead on the ethical work that AI systems require of their users as an “integral feature” of interacting with AI-powered evaluative tools.


Remember JCPenney? In 2011, they were shamed in the pages of The New York Times for “gaming” Google’s search engine algorithm to boost their website’s ranking. They did this by having a large number of irrelevant pages link to theirs, so that the algorithm would read those links as an indicator of the page’s relevance and bump it up in the “organic” search results, right before the holiday shopping season. When Google found out, they “punished” JCPenney’s “bad” behavior by substantially lowering their ranking, thereby reducing traffic to their website.
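
To make the mechanism concrete: Google’s actual ranking system is proprietary and far more elaborate, but the basic idea the link scheme exploited, link analysis in the spirit of PageRank, is public. Below is a minimal, illustrative sketch (the `pagerank` helper, the site names, and the numbers are all hypothetical) of how manufactured inbound links can inflate a page’s score.

```python
# Toy PageRank-style ranking, to illustrate why manufactured inbound
# links inflate a page's score. This is only the link-counting
# intuition that link schemes exploit, not Google's real algorithm.

def pagerank(links, damping=0.85, iterations=50):
    """Compute PageRank scores for a {page: [outbound links]} graph."""
    pages = list(links)
    n = len(pages)
    scores = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline score (the "random jump").
        new_scores = {p: (1 - damping) / n for p in pages}
        for page, outbound in links.items():
            if not outbound:
                # Dangling page: spread its score evenly across the web.
                for p in pages:
                    new_scores[p] += damping * scores[page] / n
            else:
                # Each outbound link passes an equal share of the score.
                share = damping * scores[page] / len(outbound)
                for target in outbound:
                    new_scores[target] += share
        scores = new_scores
    return scores

# A small "honest" web in which retailer.com has few inbound links.
web = {
    "retailer.com": [],
    "news.com": ["retailer.com", "blog.com"],
    "blog.com": ["news.com"],
}
print(sorted(pagerank(web).items(), key=lambda kv: -kv[1]))

# Now inject a link farm: pages that exist only to link to
# retailer.com, mimicking the scheme described above.
for i in range(20):
    web[f"spam{i}.example"] = ["retailer.com"]
print(sorted(pagerank(web).items(), key=lambda kv: -kv[1])[:3])
```

Running the sketch, retailer.com jumps to the top of the ranking once the link farm is added, even though nothing about the page itself has changed. This signal inflation is exactly the kind of behavior search engine guidelines treat as manipulation.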

Ziewitz uses this case study to interrogate what we mean when we accuse algorithmic technology users of “gaming the system.” As machine learning models that rank, score, classify, and predict proliferate across AI applications in fields ranging from healthcare and journalism to credit scoring and criminal justice, there is widespread concern about how to design and govern AI systems to prevent abuse and manipulation. Maintaining the integrity of AI systems is of paramount importance to most stakeholders, but it is a hard nut to crack.

From “Gaming the System” to “Ethical Work”

As with anything in the social world, what counts as “ethical” behavior is hard to pin down. People often straddle the line between good and bad behavior, constantly negotiating what is acceptable and what isn’t in any given situation. It may seem like we live in a world with hard and fast rules like “lying is bad” and “speaking the truth is good.” When we take a closer look, however, we see that we operate in a morally ambiguous world where “white lies,” shading or downplaying the truth to protect others’ feelings, may be acceptable. Intentions, identities, and context matter. By remembering that AI systems are in fact sociotechnical systems, we can expect people to engage with AI systems just as they do with other people, i.e., in complex ways and from within existing (though ever-changing) cultural norms, simultaneously reproducing and resisting them.

Because people alter their behavior as they negotiate the cultural rules of interacting back and forth with algorithms, ranking and scoring algorithms don’t just measure “objective” reality. Through interaction with people and institutions, algorithms co-create reality. It is through that “ethical work,” as Ziewitz calls it, that we collectively produce more or less ethical outcomes.

What does it mean, then, to design and build “ethical AI”? It requires us to take into consideration the ethical work that will be done through the building, deployment, maintenance, and use of the AI system. Below are some questions that ML development teams can explore to move away from the binary thinking associated with “gaming the system” toward a more nuanced approach that tries to understand the ambiguities, uncertainties, and context dependencies of algorithm-human interaction.

Applying Ziewitz’s Ideas to Machine Learning Development

Extending Ziewitz’s sociological research into engineering practice, we can extract a few thought-provoking questions for ML developers to consider when building ML models and AI systems. 

  • Recognize that AI systems don’t just take in human behavior as input; they also actively elicit some behaviors over others. In other words, algorithms have the power to change and shape human behavior as users respond to the affordances or constraints of the system. 
    • How will your AI system potentially change human behavior or incentivize broader collective action? 
    • For example, could your product inadvertently create a new cottage industry, like SEO consultancies, to deal with the ambiguities of your product? 
  • Acknowledge that just as ethics is blurry, so is ethical AI. Moving away from binaries (AI is neither ethical nor unethical), we can instead think of AI as sitting on a moral spectrum: through its use, adaptation, and constant change in the deployment environment, it creates more or less ethical behaviors, practices, and outcomes depending on the time and place of its use. 
    • In other words, in addition to considering “bad” or “good” user interactions with your system, what kind of behavior could be considered to fall in a morally ambiguous or grey area? 
    • Will that depend on the type of user or the time or place of their interaction with your product? 
    • In the SEO example, is trading links acceptable while paying for them is not? Is it okay when a struggling small business does it but not when a big retailer like JCPenney does? 
  • Consider the blurry lines between intended and unintended uses of an AI system during the design phase. 
    • How likely is it that your intended uses for the product align with those of your expected users? 
    • Broadening your view of relevant stakeholders, how might unexpected users of your product or types of users that don’t yet exist (e.g. SEO consultants) engage with your product? 
    • What mechanisms will you rely on to figure out if your system is being manipulated (e.g. New York Times exposés), and how will you impose fair penalties and mechanisms for appeal?
  • Stepping back from the black-and-white thinking of “gaming the system,” consider the kind of “ethical work” that is expected of various stakeholders. 
    • How heavy is the burden of ethical work on some versus others? 
    • What kind of behaviors do you expect stakeholders to engage in to try to generate ethical clarity out of ethical ambiguity when interacting with your product? 
    • What is every stakeholder’s position of power(lessness) vis-à-vis that of the AI system, the company developing and maintaining the AI system, and the broader institutions within which the AI system will be deployed? Could some behaviors be a form of resistance to an existing unequal power relationship? If so, how can you distribute more agency to marginalized stakeholders to level the playing field? 
    • What systematic inequalities can your product potentially propagate within the ecosystem and how might your design compensate for that? 

Conclusion

More and more, ML models are used to organize online information, rank and score human characteristics or behaviors, and police online interactions. 

Even though it is carried out by largely automated AI systems, this work nonetheless requires making value judgements about people and their behavior. And this means that there will always be moral grey areas that cannot be automated away with the next best ML model.

Ziewitz encourages us to think more deeply about how users will interact with our AI systems beyond the binary of good or bad behavior. He invites us to take the extra time to dwell in ambiguity and consider the ethical work that all humans will do as they interact with automated voice assistants, self-driving cars, child abuse prediction algorithms, or search engine ranking models.

There are no easy answers or one-size-fits-all solutions, but surely considering human behavior in all of its complexity will help us build better and more human-centered AI.


Original paper by Malte Ziewitz: https://doi.org/10.1177/0306312719865607

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
