Research Summary: Countering Information Influence Activities: The State of the Art

June 28, 2020

Summary contributed by Andrew Buzzell, a PhD student in Philosophy at York University.

*Author & link to original paper at the bottom.


Mini-summary: This report by the Swedish Civil Contingencies Agency provides an overview of scholarship on information influence operations from a Swedish national security perspective. It forms the basis of the more pragmatically focussed “Counter Influence Strategies for Communicators”. It offers a framework for identifying information influence operations and deploying countermeasures specially designed to be used in training and education for broader audiences, and as part of a strategy to improve social resilience, in particular by preparing public sector communicators to identify and counter information operations.

Full summary:

In the national security context, information influence operations often take place alongside concerted military, diplomatic, and economic activities – they are part of a hybrid approach to warfare. The report frames the defence challenge in three parts: understanding the nature of the threat, learning to identify it, and developing and applying countermeasures.

There are three broad characteristics that can be used to define and operationalize the concept of information influence activities:

Legitimacy: They are illegitimate attempts to change opinions in democratic states

Intention: They are conducted to benefit foreign powers (state, non-state, or proxy)

Ambiguity: They are conducted in a gray zone of hybrid threat between peace and war

Although the lines can be somewhat unclear, where legitimate and authentic processes and practices gain influence in the public sphere, information operations look to use illegitimate and inauthentic methods.

We can think about illegitimate influence in moral terms, where the aim is to deceive people, the methods exploit vulnerabilities and the presumption of good will, and the strategies break the rules of open and free debate. The report proposes a diagnostic framework, identified through four features and the acronym “DIDI”:

  • they contain Deceptive elements
  • they are not constructive but Intend to do harm
  • they are actually Disruptive in practice, harming society, individuals, institutions 
  • they Interfere with democratic processes – the actors are not legitimate participants and the methods encroach on the sovereignty of states

This diagnostic, when combined with sensitivity to known techniques and strategies, should be able to differentiate hostile information operations from ordinary communication and diplomacy, but this is a matter of judgement. 
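To make the diagnostic concrete, here is a minimal sketch of how the four DIDI criteria might be recorded as a structured checklist; the class, field names, and threshold rule are illustrative assumptions, not part of the report.

```python
from dataclasses import dataclass

@dataclass
class DIDIAssessment:
    """Illustrative record of the four DIDI criteria for a suspected
    information influence activity (field names are assumptions)."""
    deceptive: bool      # contains deceptive elements
    intends_harm: bool   # not constructive, intends to do harm
    disruptive: bool     # disruptive in practice to society, individuals, institutions
    interferes: bool     # interferes with democratic processes and state sovereignty

    def flags_raised(self) -> int:
        """Count how many of the four criteria are met."""
        return sum([self.deceptive, self.intends_harm, self.disruptive, self.interferes])

    def warrants_closer_review(self, threshold: int = 3) -> bool:
        """Purely illustrative rule of thumb: flag for closer review when most
        criteria are met. The report treats this as a matter of judgement,
        not a mechanical score."""
        return self.flags_raised() >= threshold


# Hypothetical example: a suspected coordinated leak campaign
case = DIDIAssessment(deceptive=True, intends_harm=True, disruptive=True, interferes=False)
print(case.flags_raised(), case.warrants_closer_review())
```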

Communicators are urged to treat their primary goal in counteracting influence operations as the protection of democratic values, rather than subordinate goals such as exposing or directly responding to specific actions. In part this is because operations often have complex goals that are difficult to assess.

Vulnerabilities in the epistemic chain

Information influence operations attack the “open system of opinion formation in Western democracies” at points of individual, group, and infrastructural vulnerability. We can model the process of democratic opinion formation as an epistemic chain from cognitive properties of individuals to the structure of the media environment. 

Media system vulnerability: Changes to the forms and power dynamics of the media environment create technological, regulatory and economic vulnerabilities that can be exploited to gain access and amplification.

Public opinion vulnerability: Digital technology rapidly and radically reshaped the processes that produce public opinion, creating opportunities to access epistemic power illegitimately. 

Cognitive vulnerability: Information influence activities exploit cognitive and implicit biases that interfere with rational belief formation.

Illegitimate Actors and Gray Zones

“[I]nformation influence activities are often only one element in a larger asymmetric strategy of influence that involves targeted use of corruption, investing in political parties, think tanks and academic institutions, cyberattacks, the use of organized crime, coercive economic means, and the exploitation of ethnic, linguistic, regional, religious and social tensions in society” (21).

Because of the national security orientation of this report, the illegitimacy of the actors, and thus the sense in which their actions encroach on and undermine sovereignty, is important for identifying influence operations. The actors might be nation states and their proxies; Russia, for example, has used information operations extensively to support military objectives such as the illegal annexation of Crimea. They might be militant groups such as jihadis, or sub-state criminal actors such as organized criminal enterprises. They might also be small groups or individuals motivated primarily by economics, whether through outright criminal behaviour such as hacking or through the monetized publication of information products such as fake news.

Strategies & Targeting

“The classic military distinction between offensive and defensive strategy does not transfer easily to information influence” (22). Instead, we can categorize information operation tactics as positive, negative, and oblique.

Positive: The content is compatible with, complements, or elaborates widely accepted narratives.

Negative: The content directly attacks and undermines widely accepted narratives (not necessarily in a unified or coherent way). 

Oblique: The content intends to capture and control public attention, often to distract from other issues, or to undermine the information environment generally.

Actions can be aimed directly at individuals through psychographic profiling and digital channels that facilitate microtargeting, as well as at small demographic groups and at society as a whole.

We can combine these dimensions into an analytical matrix (26).
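As an illustration of how the tactic and target dimensions might be crossed, here is a minimal sketch; the cell entries are hypothetical examples, not drawn from the report's own matrix.

```python
# Illustrative sketch of a tactics-by-targets analytical matrix.
# The cell entries are hypothetical examples, not the report's own.
tactics = ["positive", "negative", "oblique"]
targets = ["individual", "group", "society"]

matrix = {
    ("positive", "individual"): "microtargeted ads reinforcing an accepted narrative",
    ("negative", "group"): "content attacking a narrative shared by one demographic",
    ("oblique", "society"): "flooding public attention to distract from another issue",
}

# Print the full grid, leaving unfilled cells blank.
for tactic in tactics:
    for target in targets:
        example = matrix.get((tactic, target), "-")
        print(f"{tactic:>8} x {target:<10} {example}")
```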

A battlefield of facts, or narratives?

Fact-checking, and veritistic countermeasures more generally, have been influential in the scholarly debate on counter-influence and in the implementation of defensive strategies. However, narratives that weave facts into stories with emotional and political valence are powerful tools for influence, and are more complicated to counter. The report adopts the view that defensive strategy should treat facts and narratives as interrelated – narratives build representations, interpretations and perceptions with coherent messaging that relies in part on facts, but also in turn produces putative facts. Because facts, such as a picture of a burning police car, do not supply their own interpretation, it is in the development of narratives that influence is achieved.

Narratives are built in layers, from the basic facts to interpretive glosses on the facts which are made coherent by connection to meta-narratives. Where the traditional media environment developed gatekeeping practices that aimed to establish facts accurately, and tended to restrict the promulgation of competing meta-narratives, the digital media environment eliminates these and allows narratives to develop freely and rapidly, and to circulate widely. (In the social epistemology literature, this gatekeeping has been understood as a kind of remote monitoring (Goldberg 2011), protecting the integrity of testimonial chains that relay information, necessary because we cannot verify all the information we receive.) This increases our cognitive vulnerability, because processes that enable robust epistemology of testimony are disintermediated.

Techniques Used by Information Influence Operations:

Cognitive hacking

Sociocognitive hacking: attempts to activate psychosocial trigger-points, especially anger and fear. Example: clickbait.

Psychographic hacking: isolates targets with precision information that exploits known individual biases and vulnerabilities. Example: dark ads.

Social Hacking: 

Social proof and fake social proof: We are biased to accept real or simulated endorsements as a form of evidence. 

Bandwagon effect and spiral of silence: Creating the appearance of group consensus creates a psychological appeal to defer. Example: astroturfing. The opposite effect occurs because we tend to be guarded about opinions and views that are silenced, further muting their narrative salience.

Selective exposure: Filter bubbles and echo chambers. Filter bubbles are media environments where opposing views have been excluded, and echo chambers are ones where opposing views are actively undermined (see Nguyen 2019 for an excellent discussion of this, as well as Begby 2020 on evidential preemption). Exploiting the tendency of digital information environments to generate these epistemic dysfunctions can increase polarization and expose vulnerabilities to influence operations.

Para-social hacking: “The expression para-social captures the idea that humans sometimes begin to experience their objectively one-sided relationships with personalities in media subjectively as two-sided; that is to say, symmetrical and reciprocal” (39). Our relationships with radio hosts and online celebrities often exhibit these features. They can be exploited in two ways: illegitimate influencers can befriend their audience to capture influence, and they can create the illusion of influence with fake engagement, bootstrapping their position of influence.

Symbolic Action: Generating real actions in the material environment provides content that can be powerfully integrated into narratives and influence operations. Nation-states might engage in war games, or encourage and organize acts of protest, that can be immediately deployed within influence operations. They are problematic for countermeasures that depend on exposing falsehoods, since they are real occurrences.

Disinformation and “fake news”

Disinformation is false information disguised as legitimate information, and is easily created and transmitted online, where the absence of traditional gatekeepers and the speed of information flow allow disinformation to flourish. Types of disinformation include:

Fabrication: Outright falsehoods which might seem plausible by connecting to accepted narratives

Manipulation: Faked evidence such as photos

Misappropriation: Re-framing information out of context to support unconnected narratives

Propaganda: Information produced specifically to influence narratives

Satire: Even though produced to be humourous, it can undermine discourse 

Parody: Can be confused with real news

Advertising: Often cannot be identified when embedded in content, and has secondary effect of undermining trust

Disinformation tends toward either constructive aims, building new narratives, or destructive aims, undermining existing ones.

It can be granular, producing specific targeted content, or systematic, creating new outlets and sources, and is sometimes implemented by creating alternative media ecosystems.

Forging and leaking

Journalistic attention is often directed at sources of new information from leaks and whistleblowers. However, sometimes the evidence is faked, or fake information is included with true information (tainted leaks). These have the primary effect of promoting some specific message or narrative, but a secondary effect of undermining trust generally, because they “effectively cast doubt on the authority and legitimacy of individuals and institutions” (51). This in turn makes it difficult to react effectively to new information.

Potemkin villages of evidence

Facts are not just discovered; they are also produced by the institutions and procedures we use to generate knowledge. One illustration is the “Woozle effect”, where the volume of citations is taken to indicate credibility, which in turn generates further citation.

The creation of parallel media and scientific environments creates the appearance of a functioning epistemic environment, but one that is controlled by a hostile actor. This has been observed in the context of lobbying against tobacco science and climate science (see also Levy 2018 on polluted epistemic environments).

Deceptive Identities: Bots, Sockpuppets and Botnets 

The use of deceptive identities transfers legitimacy from legitimate actors and sources to illegitimate ones by shilling, impersonating or hijacking. Computational tools can generate personas that act alone or in concert to fake engagement and support. Human users can also be incentivized to engage with content, or adopt personas, that support information operations, often using multiple faked accounts called sock puppets to amplify their messages.

The report offers some diagnostic indicators of bot accounts, though this is a constantly shifting target.
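The report's own indicators are not reproduced here; the sketch below only shows how a few commonly cited heuristics (posting volume, account age, follower ratio) might be combined into a rough score. The indicators, thresholds, and weights are assumptions for illustration, and real detection remains a moving target.

```python
def bot_likelihood_score(posts_per_day: float,
                         account_age_days: int,
                         followers: int,
                         following: int) -> float:
    """Toy heuristic score in [0, 1] for how bot-like an account looks.
    The indicators, thresholds, and weights are illustrative assumptions only."""
    score = 0.0
    if posts_per_day > 50:        # unusually high posting volume
        score += 0.5
    if account_age_days < 30:     # very recently created account
        score += 0.25
    if following > 0 and followers / following < 0.1:  # follows many, few follow back
        score += 0.25
    return min(score, 1.0)


# Hypothetical account: new, high-volume, lopsided follower ratio
print(bot_likelihood_score(posts_per_day=120, account_age_days=10,
                           followers=15, following=900))  # -> 1.0
```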

Trolling and flaming

The publication and promotion of content that emotionally engages with individuals has both silencing and polarization effects. It can create a toxic environment where some views are underrepresented due to fear. 

Humour and memes

These can be consumed quickly and spread virally, and humour acts as a psychological cover for subversive ideas that might otherwise be offensive (the subversive buffer). 

Malign rhetoric

Malicious forms of rhetoric include name-calling, whataboutism (distracting from one issue to another), ad hominem attacks, gish gallop (promulgating so many false claims that debunking is impractical, a sort of epistemic denial-of-service attack), transfer (illicitly associating arguments with partisan themes), and strawman arguments.

Strategies used by Information Influence Operations:

Black propaganda: Where white propaganda clearly states its source and grey propaganda obscures its origins, black propaganda actively deceives about its origins.

Point and shriek: Exploiting existing sensitivities, such as using misleading evidence to depict a military attack as having taken place at a place of worship to influence opinion.

Laundering: Combining several techniques such as tainted leaks, fake news, and Potemkin villages to create the appearance of legitimate and significant controversy where there is none.

Flooding: Russia’s modern propaganda model has been called the “Firehose of Falsehood” and seeks to overwhelm epistemic institutions, exhausting the ability to filter information, using high-volume, multi-channel disinformation. 

Cheerleading: The use of information influence techniques to crowd out dissent. 

Raiding: The rapid and coordinated application of influence techniques on a target, such as the “Swiftboating” of the John Kerry campaign in the 2004 US Election, where influence across multiple channels amplified false and misleading information to support a false narrative. 

Polarisation: Influence techniques can be applied to discredit moderate views and force narratives to focus on extreme positions, or to re-frame a non-partisan discourse (such as the use of face coverings to control the spread of COVID-19) as a partisan subject. 

Hack, mix, release: Often anchored by the tainted leak technique, this strategy combines the hacking of IT systems with the tainting of the stolen information to undermine or falsely incriminate individuals or institutions.

Counteracting Information Influence Activities

Civil society approach: empower civil society to resist

Facts first approach: employ systemic fact checking

Collaborative approach: partnering institutions to organize resistance

Counter narrative approach: design and propagate counter narratives

Counter-propaganda approach: develop and deploy propaganda to counter influence

Raising the threshold approach: develop high-quality information channels to outcompete for influence

Ignoring approach: deny attention and engagement 

Regulatory approach: rely on laws to block 

Hard-liner approach: fighting back

The Role of Communicators: Preparation, Action, Learning

“Much of the public debate on information influence activities has emphasised the importance of source criticism among the public as a remedy to disinformation. However, we also believe that the responsibility of public sector communicators is to communicate in a legitimate manner. This strengthens the bond between society and citizens, making citizens more resilient to information influence activities.” (86)

Preparation: Minimize vulnerability to influence operations. Institutional, organisational and individual preparedness requires vigilance and awareness, a sense of shared responsibility for a society’s “total defence”, and governmental policy toward information influence actors.

Preparation can include raising awareness of actual and potential influence operations, and includes education, training, and the creation of tools such as fact-checking and hashtag analysis systems. It requires examination of the potential vulnerabilities of the target media environments. Involvement of stakeholders including academia, civil society, and business is preferred.

The societal resilience model of raising awareness fosters awareness of legitimate information sources and the protection of their reputation and legitimacy, whereas the information warfare approach involves the public in conscious and active resistance to hostile influence operations.

Debunking operations must be conducted carefully so as not to play into the intended function of the strategies deployed. Debunking too frequently, or too quickly, can create backfire effects and collateral damage. 

Risk and vulnerability analysis: Public organizations in Sweden are required to undertake risk and vulnerability analyses that are reminiscent of cybersecurity audits, and which the report proposes should include explicit measures to defend against information operations.

Target audience analysis: To help avoid situations where it is propagandists who set the agenda and defensive measures are merely reactive, and also to improve the effectiveness of countermeasures, communicators should target their responses at the threat’s audience, not the threat itself. “This means knowing who those audiences are, as well as understanding how to reach them, the narratives that resonate with them, and their patterns of behaviour, motivations, fears and expectations” (99). Organizations should build and maintain maps of their audiences, their interests, and their narrative and epistemic environment.
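What such an audience map might look like as a simple data structure is sketched below; the segments, channels, and fields are hypothetical placeholders, not taken from the report.

```python
# Hypothetical audience map: segments, the channels that reach them, and the
# narratives that resonate with them. Segment names and fields are assumptions.
audience_map = {
    "local_press": {
        "channels": ["email briefings", "press conferences"],
        "resonant_narratives": ["institutional transparency"],
        "behaviour_notes": "picks up stories early in the news cycle",
    },
    "young_social_media_users": {
        "channels": ["short-form video", "messaging apps"],
        "resonant_narratives": ["community resilience"],
        "behaviour_notes": "high sharing rate, little direct contact with officials",
    },
}

def channels_for(segment: str) -> list[str]:
    """Look up the channels that reach a given audience segment."""
    return audience_map.get(segment, {}).get("channels", [])

print(channels_for("local_press"))  # ['email briefings', 'press conferences']
```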

Strategic narratives and messaging: Over and above reactive work such as debunking, communicators can develop strategic narratives that build trust and positive relations with their audience. Expanding an organization’s tactical messaging repertoire to include messaging that articulates core values and identity through narrative can help remediate and inoculate against information operations.

Social Media: An organization should be prepared with competency in social media. 

Action: Assess, Inform, Advocate, Defend

Assess: The first level of response to a suspected information operation is to assess: determine factuality and credibility, and establish an initial position in a holding statement.

Inform: The second level is to correct misinformation, refer to sources, and brief direct stakeholders.

Advocate: The third level is to positively advocate the organization’s position through statements, dialogue, events, and the involvement of allies.

Defend: Lastly, overt defence, such as ignoring, blocking, engaging regulatory and legal mechanisms, and exposing. These are actions of last resort.
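Taken together, the four levels read as an escalation ladder. A minimal sketch of that ordering follows; the enum representation and the one-step escalation rule are illustrative assumptions, not the report's prescription.

```python
from enum import IntEnum

class ResponseLevel(IntEnum):
    """The four response levels, ordered from least to most confrontational."""
    ASSESS = 1    # determine factuality and credibility, prepare a holding statement
    INFORM = 2    # correct misinformation, refer to sources, brief stakeholders
    ADVOCATE = 3  # positively advocate the organization's position
    DEFEND = 4    # ignore, block, expose, or engage legal mechanisms (last resort)

def escalate(current: ResponseLevel) -> ResponseLevel:
    """Move one step up the ladder, never beyond DEFEND."""
    return ResponseLevel(min(current + 1, ResponseLevel.DEFEND))

print(escalate(ResponseLevel.ASSESS).name)  # INFORM
```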

Learn:

It is frequently difficult to ascertain in real time whether a given event was part of an information operation; it is therefore important to document and evaluate. Did the event fit the DIDI diagnostic? Was it handled effectively? What can be improved?

The Limits of Counter Strategy

We lack robust ethical and legal frameworks to guide defensive actions within information warfare; responses should be developed in accordance with basic democratic values and norms.

Cognitive Limits: 

Influence operations often display a sophisticated understanding of human cognition and its vulnerabilities, one that countermeasures often lack. Naive debunking efforts can actually amplify and entrench disinformation. Integrating knowledge of cognitive biases such as backfire effects into countermeasures is recommended, but these biases also suggest there are limits to the effectiveness of responses.

Legal and Ethical Limits:

Democratic values and norms demand that “…influence activities that fall within the scope of the fundamental law on freedom of expression should always be addressed on the arena of open and free debate…” (112). Influence operations are typically within the limits of law, even when illegitimate. Government response must be ethically beyond reproach, and avoid interfering with open debate.


Original paper by James Pamment, Howard Nothhaft, Henrik Agardh-Twetman, Alicia Fjällhed: https://www.msb.se/RibData/Filer/pdf/28697.pdf
