Montreal AI Ethics Institute

Democratizing AI ethics literacy


Tech Futures: AI For and Against Knowledge

April 13, 2026

ALT: A network diagram with lots of little emojis, organised in clusters.

✍️ By Ismael Kherroubi Garcia.

Ismael is Founder & Co-lead of the Responsible Artificial Intelligence Network (RAIN), and Founder & CEO of Kairoi.


📌 Editor’s Note: This article is part of our Tech Futures series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Responsible Artificial Intelligence Network (RAIN). The series challenges mainstream AI narratives, proposing that rigorous research and science are better sources of information about AI than industry leaders. This sixth installment of Tech Futures by RAIN tackles the tension between the usefulness of AI in science and the undermining of knowledge systems through the misuse of mainstream AI tools.


AI For Knowledge

How we understand the world depends on how we interpret data about it. In theory, the more data we have about a subject, the better we will understand what is going on. In the 1960s, Margaret Dayhoff, who held a doctorate in quantum chemistry, was working on protein sequencing: the study of the order of amino acids within proteins. Comparing protein sequences could be critical, as Dayhoff and her co-authors explain in the Atlas of Protein Sequence and Structure (1965):

“Conspicuous in comparative human protein sequences is information of great medical-diagnostic value. A long series of abnormalities has been found to be attributable to single amino acid replacements. One such tragically serious disease is sickle-cell anemia.”

Protein sequencing thus provides a crucial trove of data for the advancement of health sciences. But it’s a lot of data. To make things easier, Dayhoff et al.’s Atlas compiled protein sequences formatted for computational analysis:

“The information is kept in a compact, uniform format on punched cards. New information and corrections are easily inserted, while the text is kept accurate.”

How we store and analyse data with computers has evolved a bit since the 1960s, but Dayhoff’s work served as the foundation of bioinformatics, where computational techniques are applied to life sciences.
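To make the kind of comparison Dayhoff describes concrete, here is a minimal sketch in Python. It flags single amino acid replacements between two pre-aligned protein sequences, using the classic sickle-cell substitution (glutamic acid replaced by valine at position 6 of the mature beta-globin chain) as its example; the eight-residue sequences are given purely for illustration.

```python
# Sketch: detecting single amino acid replacements by comparing two
# equal-length, pre-aligned protein sequences (one-letter codes).

def point_differences(seq_a: str, seq_b: str) -> list[tuple[int, str, str]]:
    """Return (position, residue_a, residue_b) for every mismatch.

    Positions are 1-based, following biological convention.
    """
    if len(seq_a) != len(seq_b):
        raise ValueError("sketch assumes equal-length, pre-aligned sequences")
    return [(i + 1, a, b)
            for i, (a, b) in enumerate(zip(seq_a, seq_b))
            if a != b]

# First eight residues of the mature beta-globin chain:
normal_beta_globin = "VHLTPEEK"  # glutamic acid (E) at position 6
sickle_beta_globin = "VHLTPVEK"  # valine (V) at position 6

print(point_differences(normal_beta_globin, sickle_beta_globin))
# → [(6, 'E', 'V')]
```

Real sequence comparison requires alignment first (sequences differ in length, with insertions and deletions), which is precisely the kind of problem Dayhoff's computationally formatted data made tractable.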

Fast-forward a decade or so, and artificial intelligence (AI) was being applied in medical settings. What we now call good old-fashioned AI (GOFAI) was applied through computer-assisted diagnostic tools such as 1971’s Internist-I. Meanwhile, early machine learning (ML) techniques were being tested for analysing X-rays. By 2006, researchers were able to identify ML use cases across the studies of evolution, genomics, proteomics, and systems biology. Fast-forward again to 2021, and Google DeepMind used AI techniques to largely solve the long-standing problem of predicting the 3D structure of proteins from their amino acid sequences.

There is no doubt that AI technologies and practices are valuable for the advancement of our understanding of the world around us. But might it be possible that AI can also undermine human knowledge? Consider Wikipedia, the world’s largest online encyclopedia, where AI is unfortunately becoming a problem.

AI Against Knowledge

In an interview published January 12th, the founder of Wikipedia, Jimmy Wales, explained: “We don’t ban the use of AI. We do say, ‘be very careful with it,’ and ‘you’re responsible for what you put in Wikipedia.’” Just a couple of months later, on March 20th, “volunteer editors for Wikipedia’s English-language platform formally voted to ban all AI-generated text from its 7.1 million articles.”

What the volunteer editors specifically voted on was a new policy on large language models (LLMs) that states, as of April 11th, 13:06 (UTC):

“The use of LLMs to generate or rewrite article content is prohibited, save for these two exceptions:

1. Editors are permitted to use LLMs to suggest basic copyedits to their own writing, and to incorporate some of them after human review, provided the LLM does not introduce content of its own. Caution is required because LLMs can go beyond what is asked of them and can change the meaning of the text such that it is not supported by the sources cited.
2. Editors are permitted to use LLMs to translate articles from another language’s Wikipedia into the English Wikipedia, but must follow the guidance laid out at Wikipedia: LLM-assisted translation.”

What exactly is driving this almost-outright ban of LLMs in English Wikipedia? A few weeks ago, we discussed the threat AI-generated code poses to the world’s digital infrastructure, as low-quality contributions inundate maintainers’ inboxes. The proliferation of LLM-based chatbots has meant something similar for Wikipedia: volunteer editors are having to spend disproportionate amounts of time handling AI-generated content. In late 2023, the “AI Cleanup” project was launched “to combat the increasing problem of poorly written AI-generated content on Wikipedia.” Editors have had to contend with, among other things, fake citations that are difficult to detect, AI-generated images styled to pass for paintings from the periods they depict, and even entire articles about fortresses that never existed.

ALT: A tower symbolizing all modern digital infrastructure is held up by a project some random person in Nebraska has been thanklessly maintaining since 2003. A playful cat is looking closely at the project.

Caption: © 2026 Responsible Artificial Intelligence Network (RAIN) and Ismael Kherroubi Garcia, CC BY 4.0, adapted from xkcd.com (Dependency) and Ricinator on Pixabay

The way Wikipedia editors are experiencing AI seems to be in tension with the value scientists have found in it for decades. But it is important to note that commercial LLMs, as we currently know them, are a far cry from the sorts of AI and ML techniques deployed across the life sciences since the 1970s. AI is not prima facie problematic for knowledge systems. Science will benefit from improving and implementing different forms of AI for years to come. However, the wide availability of LLM-based chatbots makes it far easier for individuals to generate low-quality content at scale and, intentionally or unintentionally, apply pressure to some of the most important systems for human knowledge, including Wikipedia.

Image credit: Fabrizio Matarese / Better Images of AI / CC BY 4.0

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.