Montreal AI Ethics Institute

Democratizing AI ethics literacy


Research summary: Lexicon of Lies: Terms for Problematic Information

June 17, 2020

Summary contributed by Khaulat Ayomide Abdulhakeem (@khaulat_ayo), Machine Learning Researcher at For.ai.

*Author & link to original paper at the bottom


This article explains the terms used to describe problematic information – information that is inaccurate, misleading, or altogether fabricated. The terms we choose affect how information spreads, who spreads it, and who receives it, and those choices rest largely on the perspective of the person doing the describing. This makes the labelling of information complex, inconsistent, and imprecise.

Problematic information falls into two major divisions: misinformation and disinformation.

Misinformation is information whose inaccuracy is unintentional – the result of mistakes such as failing to independently verify a source’s claims, or rushing to pass information along, as when journalists compete to be the first to report a story.

Disinformation, on the other hand, is when information is deliberately intended to mislead. 

Social media has facilitated the spread of both forms of problematic information: computational systems that amplify “trending topics” allow those topics to reach ever more people.

Whether a given story or piece of content is labelled as misinformation or disinformation can depend as much on a speaker’s intent as on the professional standards of who is evaluating it. 

Automated systems with bugs, and sites created purely for profit with no concern for the accuracy of the content posted on them, are further sources of misinformation. The blending of news content with entertainment content adds to the complexity of interpreting information; in general, the intentions behind content on social media are rarely clear.

Information need not be accurate to be popular or profitable 

Journalists take extra caution before labelling information as misinformation or disinformation, because misrepresentations can lead to reputational damage, professional sanctions, and legal repercussions.

Publicity and propaganda are persuasive information campaigns that try to link brands, people, products, or nations with certain feelings, ideas, and attitudes. Both aim to reach a large audience. But while the former tries to draw attention to information – which may be accurate information, misinformation, disinformation, or a mix of all three – the latter is a deliberate attempt to deceive or manipulate. Because both forms circulate in the same spaces, the difference between them is not always obvious.

In practice, the lines separating advertising, public relations, and public diplomacy (terms often regarded as neutral) from the pejorative term propaganda (which usually implies deliberate intent to manipulate or deceive) can be hard to discern. 

The source of a campaign can help us decide whether a piece of information is propaganda or publicity. Some information, however, has no obvious source. Advertising, public relations, and public diplomacy come from identifiable sources, but other information may come from information operations – a term that originated in the military, where it referred to the strategic use of technological, operational, and psychological resources to disrupt an enemy’s informational capacities and protect friendly forces. Today, the term describes deliberate, systematic attempts by unidentified actors to steer public opinion using inauthentic accounts and inaccurate information.

Differences between languages also make categorising information difficult. In Spanish, for example, la propaganda can refer to political communications, advertising, and even junk mail – uses that do not follow the standard definition given above.

Propaganda can also be a deliberate effort to cultivate attitudes and provoke action; in that form it is termed agitprop, though this term is rarely used today and all such campaigns tend simply to be called propaganda. During the 20th century, propaganda was classified into three groups – white, grey, or black – depending on the information’s accuracy, its channel of distribution, and how openly its source was identified. White propaganda is accurate and comes from accurately identified sources; black propaganda is inaccurate or deceptive and misrepresents its source; grey propaganda combines accurate and inaccurate content.

A far more pressing issue than differentiating publicity from propaganda arises when the goal of an information campaign is not to promote support for an idea but to confuse people – spreading uncertainty and starting debates that serve mainly to distract.

One form of this confusion is gaslighting – deceptively and inaccurately reinterpreting and re-narrating events until the victim stops trusting their own judgments and perceptions.

Dezinformatsiya is a coordinated state effort to share false or misleading information with the media in targeted countries or regions. It involves active measures to spread disinformation, especially with the goals of widening existing rifts, stoking existing tensions, and destabilizing other states’ relationships with their publics and with one another.

At the moment, there is no obvious solution to the spread of problematic information. Media literacy is necessary, but not sufficient, for understanding today’s problematic information flows. Despite a growing number of fact-checking stories and rumour-debunking efforts, little headway has been made in restoring the authority of the press or of social institutions; the parties working to disrupt, destabilize, distract, and derail seem to have the upper hand.

One method adopted against the spread of false information is described by the Chinese term xuanchuan: a misdirection strategy on social media in which coordinated posts do not spread false information but instead flood conversational spaces with positive messages or attempts to change the subject. The practice is perceived positively even though it is still a form of propaganda – further evidence that the ambiguous boundary between publicity and propaganda is a cross-cultural phenomenon, as the Spanish example above also shows.

Emergent techniques of sowing confusion and distraction are no excuse for jumping to dystopian or simplistic conclusions about the effects of digital technologies 

As useful as cross-cultural terms like dezinformatsiya and xuanchuan are, they should be used with care because of the cultural associations they can carry; there is a risk of reinforcing assumptions about the cultures involved.

Other terms describe the use of rhetorical means – playful, humorous, or ironic, and not intended to be taken entirely seriously – to comment on society. Examples include satire, parody, culture jamming, and hoaxing.

Satire uses exaggeration, irony, and absurdity to amuse the audience while calling attention to, and critiquing perceived wrongdoing. 

Parody is a form of satire that exaggerates the notable features of a public figure, artist, or genre. Culture jamming turns the tools of parody against advertising culture, ironically repurposing the logos and conventions of advertising in order to critique corporate culture. 

A hoax is a deliberate deception that plays on people’s willingness to believe.

Hoaxes depend, at least initially, on some people taking them seriously. They can be a means of challenging authority, custom, or the status quo, and can also be motivated by self-interest. April Fools’ Day, with its flood of playful misinformation, is a familiar example – and one that does not sit well with everyone, since information in any form can attract and retain the attention of fickle audiences.

The networked nature of social media often contributes to changing the context of the original information as it is passed on. This makes it difficult to judge whether a piece of content is serious or sarcastic in nature. 

The entanglement of disinformation with misinformation, and of serious with sarcastic content, makes it easy for people to exploit and instrumentalize the ambiguity. Those whose ideas or actions fall far outside the mainstream, and who come under criticism, can simply relabel those ideas or actions as sarcastic. The defence of “it was just a joke!” mobilizes plausible deniability and frames anyone who objects as intolerant of free speech.

In today’s information environment, we may need to modify and qualify the terms we have, or find new metaphors and models that acknowledge the complexity and ambiguity of today’s problematic information. 

In conclusion, the term chosen to describe an information campaign conveys information about who is running that campaign and the goals they might have in running it. It also reveals information about the writer – namely, how she assesses the accuracy, validity, and potential consequences of the information campaign. 

Misinformation and disinformation should be discussed with care; writers must be mindful that their representations of problematic information in today’s world can bolster assumptions that may be inaccurate and can reestablish social divisions. 


Original paper by Caroline Jack: https://datasociety.net/library/lexicon-of-lies/



© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.