Montreal AI Ethics Institute

Democratizing AI ethics literacy

Research summary: Lexicon of Lies: Terms for Problematic Information

June 17, 2020

Summary contributed by Khaulat Ayomide Abdulhakeem (@khaulat_ayo), Machine Learning Researcher at For.ai.

*Author & link to original paper at the bottom


This article explains the terms used to describe problematic information: information that may be inaccurate, misleading, or altogether fabricated. The terms we use to describe information affect how it spreads, who spreads it, and who receives it, and the choice of term rests largely on the perspective of the person doing the describing. This makes the labelling of information complex, inconsistent, and imprecise.

The two major divisions of problematic information are misinformation and disinformation:

Misinformation is information that is incorrect or inaccurate not by intent but by mistake. It is caused by the failure to independently verify a source’s claims, or by the rush to pass information along, as when journalists compete to be the first to report a story.

Disinformation, on the other hand, is when information is deliberately intended to mislead. 

Social media has facilitated the spread of both forms of problematic information. Computational systems that promote “trending topics” allow those topics to reach more and more people.

Whether a given story or piece of content is labelled as misinformation or disinformation can depend as much on a speaker’s intent as on the professional standards of who is evaluating it. 

Automated systems with bugs, and sites created purely for profit with no concern for the accuracy of the content they post, are other sources of misinformation. The interplay between news content and entertainment content adds to the complexity of interpreting information. In general, the intentions behind content on social media are unclear.

Information need not be accurate to be popular or profitable 

Journalists take extra caution before labelling information as misinformation or disinformation, because misrepresentations can lead to reputational damage, professional sanctions, and legal repercussions.

Publicity and propaganda are persuasive information campaigns that try to link brands, people, products, or nations with certain feelings, ideas, and attitudes. Both aim to reach a large audience. The former spreads information that may be accurate, misinformation, disinformation, or a mix of all three, while the latter is a deliberate attempt to deceive or manipulate. Because both forms circulate in the same spaces, the difference between them is not always obvious.

In practice, the lines separating advertising, public relations, and public diplomacy (terms often regarded as neutral) from the pejorative term propaganda (which usually implies deliberate intent to manipulate or deceive) can be hard to discern. 

The source of a campaign can help us determine whether a piece of information is propaganda or publicity. Some information, however, has no obvious source. Advertising, public relations, and public diplomacy have obvious sources; other information may come from information operations, a term that originated in the military, where it referred to the strategic use of technological, operational, and psychological resources to disrupt the enemy’s informational capacities and protect friendly forces. Today, the term describes deliberate and systematic attempts by unidentified actors to steer public opinion using inauthentic accounts and inaccurate information.

Differences across languages also make categorising information difficult. In Spanish, for example, la propaganda can refer to political communications, advertising, and even junk mail, uses that do not follow the standard definition mentioned above.

Propaganda can sometimes be a deliberate effort to cultivate attitudes and/or provoke action; when it takes this form, it is termed agitprop. The term is rarely used today, however, as all forms of propaganda tend simply to be called propaganda. In the 20th century, propaganda was classified into three groups: white, grey, and black, depending on the accuracy of the information, the channel of distribution, and how openly the source was identified. White propaganda is accurate and comes from accurately identified sources; black propaganda is inaccurate or deceptive and its source is misrepresented; grey propaganda combines accurate and inaccurate content.

A far more pressing issue than differentiating publicity from propaganda arises when the goal of an information campaign is not to promote support for an idea but to confuse people by spreading uncertainty and starting debates that are likely to distract.

One form of confusion is gaslighting: a person deceptively and inaccurately reinterprets and rewrites the narrative of an event until their victim stops trusting their own judgments and perceptions.

Dezinformatsiya is a coordinated state effort to feed false or misleading information to the media in targeted countries or regions. It involves active measures to spread disinformation, especially with the goals of widening existing rifts, stoking existing tensions, and destabilizing other states’ relations with their publics and with one another.

At the moment, there is no obvious solution to the spread of problematic information. Media literacy is necessary, but not sufficient, for understanding today’s problematic information flows. Despite the increasing number of fact-checking news stories and attempts to debunk rumours, there seems to be no headway in restoring the authority of the press or of social institutions; the parties involved in creating disruption, destabilization, distraction, and derailing information seem to have the upper hand.

One method adopted to counter the spread of false information is captured by the Chinese term xuanchuan: a misdirection strategy on social media in which coordinated posts do not spread false information but instead flood conversational spaces with positive messages or attempt to change the subject. The practice is perceived positively even though it is still a form of propaganda. This further demonstrates that the ambiguous boundary between publicity and propaganda is a cross-cultural phenomenon, as seen in the Spanish example above.

Emergent techniques of sowing confusion and distraction are no excuse for jumping to dystopian or simplistic conclusions about the effects of digital technologies 

As useful as cross-cultural terms like dezinformatsiya and xuanchuan are, they should be used with care because of the cultural associations they can raise; there is a risk of reinforcing assumptions about the cultures involved.

Other terms describe rhetorical means of commenting on society. These means can be playful, humorous, or ironic, and are not intended to be taken seriously. Examples include satire, parody, culture jamming, and hoaxing.

Satire uses exaggeration, irony, and absurdity to amuse the audience while calling attention to, and critiquing perceived wrongdoing. 

Parody is a form of satire that exaggerates the notable features of a public figure, artist, or genre. Culture jamming turns the tools of parody against advertising culture, ironically repurposing the logos and conventions of advertising in order to critique corporate culture. 

A hoax is a deliberate deception that plays on people’s willingness to believe.

Hoaxes depend, at least initially, on some people taking them seriously. They can be a means of challenging authority, custom, or the status quo, or they can be motivated by self-interest. April Fools’ Day, for example, is a day with a great deal of misinformation flying around, and it does not sit well with everyone, since information in any form can attract and retain the attention of fickle audiences.

The networked nature of social media often contributes to changing the context of the original information as it is passed on. This makes it difficult to judge whether a piece of content is serious or sarcastic in nature. 

The entanglement of disinformation with misinformation, and of serious with sarcastic information, makes it easy for people to exploit and instrumentalize the ambiguity. Those whose ideas or actions fall far outside the mainstream, and who then draw criticism, can simply label those ideas or actions as sarcastic. The defence of “it was just a joke!” mobilizes plausible deniability and frames anyone who objects as intolerant of free speech.

In today’s information environment, we may need to modify and qualify the terms we have, or find new metaphors and models that acknowledge the complexity and ambiguity of today’s problematic information. 

In conclusion, the term chosen to describe an information campaign conveys information about who is running that campaign and what goals they might have in running it. It also reveals information about the writer: namely, how she assesses the accuracy, validity, and potential consequences of the campaign.

Misinformation and disinformation should be discussed with care; writers must be mindful that their representations of problematic information in today’s world can bolster inaccurate assumptions and reinforce social divisions.


Original paper by Caroline Jack: https://datasociety.net/library/lexicon-of-lies/
