Disinformation

Terminal Veracity: How Russian Propaganda Uses Telegram to Manufacture ‘Objectivity’ on the Battlefield

Abstract:

This article investigates over 130,000 Telegram messages, 15,000 Telegram forwards, and 750 news articles from Russian-affiliated media to assess the information supply chain between Russian media and Telegram channels covering the war in Ukraine. Using machine-learning techniques, this research provides a framework for argument and network analysis that disambiguates narratives, channels, and users and maps the dissemination pathways of influence operations. The findings indicate that a central feature of Russian war reporting is the prevalence of neutral, non-argumentative language. Moreover, dissemination patterns between media sites and Telegram channels reveal a well-cited information laundering network with a distinct supply chain of covert, semi-covert, and overt channel types active at the seed, copy, and amplification levels of operation.
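The dissemination mapping described above lends itself to a simple graph formulation. The sketch below is illustrative only, not the authors' pipeline: it assumes forward records are available as (source, forwarder) channel pairs, builds a directed graph with networkx, and uses degree patterns to separate seed channels from copy and amplification layers. The channel names are hypothetical.

```python
import networkx as nx

# Hypothetical forward records: (source_channel, forwarding_channel).
# Real channel names and forward counts would come from the collected dataset.
forwards = [
    ("state_media_feed", "warblog_a"), ("state_media_feed", "warblog_b"),
    ("warblog_a", "aggregator_1"), ("warblog_b", "aggregator_1"),
    ("aggregator_1", "mirror_x"), ("aggregator_1", "mirror_y"),
]

G = nx.DiGraph()
G.add_edges_from(forwards)

# Seed channels originate content (outbound forwards only); amplifiers
# sit at the end of the chain (inbound only); everything else relays.
for node in G.nodes:
    indeg, outdeg = G.in_degree(node), G.out_degree(node)
    if indeg == 0 and outdeg > 0:
        role = "seed"
    elif outdeg == 0:
        role = "amplifier"
    else:
        role = "copy/relay"
    print(f"{node:18s} in={indeg} out={outdeg} role={role}")
```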

Combating Trust Erosion: Discerning Fake News and Propaganda on Social Media in the Era of AI

Abstract:

This paper introduces a model to combat fake news and propaganda spread on social media, derived from a systematic literature review of 28 articles. It outlines the model based on seven key themes: scepticism, AI detection, fact checking, media literacy, ethical technology use, digital manipulation, and community verification. This comprehensive model aims to bolster individuals’ and communities’ abilities to critically assess information, emphasising its application in research, policy, and education. By advocating a multi-layered strategy, the model seeks to foster a discerning global community equipped to navigate the complexities of discerning fake news and propaganda.

Beyond Deepfakes: Synthetic Moving Images and the Future of History

Abstract:

This paper investigates the role of generative Artificial Intelligence (AI) tools in the production of synthetic moving images—specifically, how these images could be used in online disinformation campaigns and could profoundly affect historical footage archives. AI-manipulated content, especially moving images, will have an impact far beyond the current information warfare (IW) environment and will bleed into the unconsidered terrain of visual historical archives with unknown consequences. The paper will also consider IW scenarios in which new types of long-term disinformation campaigns may emerge and will conclude with potential verification and containment strategies.
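One verification primitive relevant to protecting footage archives, offered here as an illustrative assumption rather than the paper's own proposal, is cryptographic fingerprinting at ingest: a hash recorded when authentic footage enters an archive lets curators later prove the file has not been altered. The file path below is hypothetical.

```python
import hashlib
from pathlib import Path

# Fingerprint a footage file at archive ingest time; re-hashing later
# and comparing digests proves the file has not been altered since.
def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Example (hypothetical file):
# print(fingerprint("archive/1944_newsreel.mp4"))
```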

Evaluating the Ambiguous Cognitive Terrain: A Framework to Clarify Disinformation

Abstract:

Defense and civilian planners have struggled to place disinformation as a discrete weapon in the cognitive domain. This is because disinformation is inadequately and ambiguously defined for military and civilian components alike. When the cognitive terrain is compared to other forms of geography, it becomes evident why it is contested and relevant to national security. This paper analyzes the reasons for the ambiguity and explains why national security professionals must develop a framework to identify disinformation. Because disinformation is an element of cognitive warfare, it can be defined using a set of three criteria. The criteria fix disinformation in the cognitive domain, enabling the warfighter and homeland defenders to counter it and use it effectively.

Combatting Privacy Information Warfare: A New Incident Response Framework

Abstract:

When nation-state actors weaponize information to harm individuals, communities, and societies, they erode civilian confidence in legitimate authorities, institutions, and defences, thereby impacting national security. This paper proposes new conceptual models and a methodology, the Privacy Incident Response Plan (PIRP). The methodology's design prepares organisations for privacy-related harms and sets out the tactics, techniques, and mitigation strategies needed to counter sophisticated threat actors. Using this methodology, contingency planners and incident responders can develop strategies to defend against the privacy harms of information warfare.

The Evolution of Information Warfare in Ukraine: 2014 to 2022

Abstract:

In January 2022, Russian forces began building up on the Ukrainian border prior to entering Ukraine in what was termed a ‘special military operation’ in support of ethnic Russians. In the ten months of conflict, a range of information warfare tactics has been deployed, most notably disinformation and cyber operations. Ukraine is a particularly useful case study due to the ongoing tensions and low-intensity conflict since the social media-led uprisings and annexation of Crimea in 2014. This article analyses the information warfare in the Russo-Ukrainian conflict and contrasts it with prior operations to illustrate the evolution, limitations, and possible future of information warfare during a kinetic conflict.

Relating Credibility to Writing Style, Emotion, and Scope of Spread of Disinformation

Abstract:

This study focuses on Taiwan, a Chinese-speaking country suffering from disinformation attacks. To fully explore the situation in Taiwan, this study adopts a victim-oriented approach and focuses on the following question: what factors shape the credibility of disinformation in Taiwan? The results of an exit-poll survey (n=892) and a series of behavioral experiments (n=86) indicate that, when countering disinformation, regulations that focus only on transparency of the source may have little impact, since the source is not the main variable in terms of credibility.

Ambiguous Self-Induced Disinformation (ASID) Attacks: Weaponizing a Cognitive Deficiency

Abstract:

Humans quickly and effortlessly impose context onto ambiguous stimuli, as demonstrated through psychological projective testing and ambiguous figures. This feature of human cognition may be weaponized as part of an information operation. Such Ambiguous Self-Induced Disinformation (ASID) attacks would employ the following elements: the introduction of a culturally consistent narrative, the presence of ambiguous stimuli, the motivation for hypervigilance, and a social network. ASID attacks represent a low-risk, low-investment tactic for adversaries with the potential for significant reward, making this an attractive option for information operations within the context of grey-zone conflicts.

Information Warfare: Leveraging the DMMI Matrix Cube for Risk Assessment

Abstract:

This paper presents the DMMI Matrix Cube and demonstrates its use in assessing risk in the context of information warfare. By delineating and ordering the concepts of disinformation, misinformation, malinformation, and information, its purpose is to gauge a communication’s intention to cause harm and its likelihood of success; together, these define the severity of weaponised information, such as that employed within sophisticated information operations. The likelihood of the (information) risk is determined by the intention to harm, the apparent veracity of the information, and the probability of its occurrence. As an exemplar, COVID-19 anti-vaccine campaigns are mapped to the DMMI Matrix Cube, and recommendations are offered based on stakeholder needs, interests, and objectives.
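A toy scoring function can make the severity calculation concrete. The 0-1 scales and the multiplicative combination below are illustrative assumptions, not the published DMMI Matrix Cube scheme.

```python
# Toy severity score following the three factors named in the abstract.
# The 0-1 scales and multiplicative rule are illustrative assumptions,
# not the published DMMI Matrix Cube scheme.
def information_risk(intent_to_harm: float,
                     apparent_veracity: float,
                     probability_of_occurrence: float) -> float:
    """All inputs in [0, 1]; returns a severity score in [0, 1]."""
    for value in (intent_to_harm, apparent_veracity, probability_of_occurrence):
        if not 0.0 <= value <= 1.0:
            raise ValueError("inputs must lie in [0, 1]")
    return intent_to_harm * apparent_veracity * probability_of_occurrence

# A deliberate, plausible-looking anti-vaccine claim that is likely to
# circulate scores far above an honest mistake with the same reach.
print(information_risk(0.9, 0.7, 0.8))  # disinformation-like: 0.504
print(information_risk(0.1, 0.7, 0.8))  # misinformation-like: 0.056
```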

False Information as a Threat to Modern Society: A Systematic Review of False Information, Its Impact on Society, and Current Remedies

Abstract:

False information, and by extension misinformation, disinformation, and fake news, is an ever-growing concern to modern democratic societies, which value the freedom of information alongside the right of individuals to express their opinions freely. This paper focuses on misinformation, with the aim of providing a collation of current research on the topic and a discussion of future research directions.

Social Cybersecurity: A Policy Framework for Addressing Computational Propaganda

Abstract:

After decades of Internet diffusion, geopolitical and information threats posed by cyberspace have never been greater. While distributed denial-of-service (DDoS) attacks, email hacks, and malware are concerns, nuanced online strategies for psychological influence, including state-sponsored disinformation campaigns and computational propaganda, pose threats that democracies struggle to respond to. Indeed, Western cybersecurity fails to address the perspective behind Russia’s ‘information security’: manipulation of the user as much as of the network. Grounded in computational social science, this paper argues for cybersecurity to adopt more proactive social and cognitive (non-kinetic) approaches to cyber and information defense. Doing so protects the cognitive, attitudinal, and behavioral capacities required for a democracy to function by countering the psychological mechanisms, such as confirmation bias and affective polarization, that trigger selective exposure, echo chambers, in-group tribalization, and out-group threat labelling.

Machine Intelligence to Detect, Characterise, and Defend against Influence Operations in the Information Environment

Abstract:

Deceptive content (misleading, falsified, and fabricated) is routinely created and spread online with the intent to create confusion and to widen political and social divides. This study presents a comprehensive overview of the content intelligence capabilities of WatchOwl (https://watchowl.pnnl.gov/) for detecting, describing, and defending against information operations on Twitter, taken as an example social platform, to explain the diffusion of misleading content and to enable those charged with defending against such manipulation, and other responding parties, to counter it. We first present deep learning models for misinformation and disinformation detection in multilingual and multimodal settings, followed by psycho-linguistic analysis across broad deception categories.
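As a rough sketch of multilingual detection in this spirit, the example below repurposes a public zero-shot model from Hugging Face; it is not the WatchOwl model, and the candidate labels and example post are placeholders.

```python
from transformers import pipeline

# Public multilingual NLI model repurposed for zero-shot labelling; it
# stands in for (and is NOT) the WatchOwl models. Labels and the example
# text are placeholders.
classifier = pipeline("zero-shot-classification",
                      model="joeddav/xlm-roberta-large-xnli")

labels = ["factual reporting", "misleading", "fabricated"]
post = "Secret border laboratories were producing banned weapons."

result = classifier(post, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label:20s} {score:.2f}")
```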

Influence Operations & International Law

Abstract: 

There is no treaty or specifically applicable customary international law that deals squarely with ‘Influence Operations’ (IO). Despite this, there are a number of discrete areas of international law that nonetheless apply indirectly to regulate this activity. These principally relate to the Use of Force (Jus ad Bellum), International Human Rights Law, and the Law of Armed Conflict. Influence Operations are presumptively lawful in each of these three areas provided that such activities do not cross relatively high thresholds of prohibition. In the event that an IO does cross a prohibition set by international law, there are a number of responses available to a targeted State.

Understanding and Assessing Information Influence and Foreign Interference

Abstract: 

The information influence framework was developed to identify and assess hostile, strategy-driven, state-sponsored information activities. This research proposes and tests an analytical approach and assessment tool, called information influence and interference, to measure changes in the level of such activities through the timeliness, specificity, and targeted nature of communications, as well as the dissemination tactics of publicly available information.
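To make the assessment idea concrete, the minimal sketch below combines the four attributes named above into a single index; the weights and 0-1 scales are assumptions for illustration, not the published tool.

```python
# Toy influence-and-interference index over the four attributes named
# above. Weights and 0-1 attribute scales are illustrative assumptions.
WEIGHTS = {"timeliness": 0.30, "specificity": 0.25,
           "targeting": 0.25, "dissemination": 0.20}

def influence_index(scores: dict[str, float]) -> float:
    """scores maps each attribute to a value in [0, 1]."""
    return sum(WEIGHTS[key] * scores[key] for key in WEIGHTS)

# Comparing two monitoring periods surfaces a rise in activity level.
q1 = {"timeliness": 0.4, "specificity": 0.3, "targeting": 0.5, "dissemination": 0.2}
q2 = {"timeliness": 0.8, "specificity": 0.6, "targeting": 0.7, "dissemination": 0.6}
print(f"Q1 index = {influence_index(q1):.2f}, Q2 index = {influence_index(q2):.2f}")
```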

Disinformation in Hybrid Warfare: The Rhizomatic Speed of Social Media in the Spamosphere

Abstract:

In this paper, two case studies are analysed, namely Finland’s Rapid Reaction Force and the arrest of a Russian citizen in Finland at the request of U.S. officials. A so-called rhizomatic focus (Deleuze and Guattari 1983) is adopted to assess social networking spam and the implications that this phenomenon has for interaction in security cases. In both case studies, the respective timeline of events and the social media impacts on the rhizomatic ‘spam’ information context are analysed.

Twitter as a Vector for Disinformation

Abstract:

Twitter is a social network that represents a powerful information channel with the potential to be a useful vector for disinformation. This paper examines the structure of the Twitter social network and how this structure has facilitated the passing of disinformation, both accidental and deliberate. Examples of the use of Twitter as an information channel are examined from recent events. The possible effects of Twitter disinformation on the information sphere are explored, as well as the defensive responses users are developing to protect against tainted information.
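The structural argument can be illustrated with a toy diffusion simulation: on a hub-heavy follower graph, even a modest per-exposure pass-on probability lets tainted information reach a large audience. The topology and rate below are assumptions, not measurements from the paper.

```python
import random
import networkx as nx

# Independent-cascade toy model of disinformation spread on a hub-heavy,
# Twitter-like follower graph. Topology and the 15% per-exposure pass-on
# probability are illustrative assumptions, not measurements.
random.seed(42)
G = nx.barabasi_albert_graph(n=500, m=3)
PASS_ON = 0.15

reached = {0}        # patient-zero account posts the false story
frontier = [0]
while frontier:
    next_frontier = []
    for account in frontier:
        for follower in G.neighbors(account):
            if follower not in reached and random.random() < PASS_ON:
                reached.add(follower)
                next_frontier.append(follower)
    frontier = next_frontier

print(f"{len(reached)} of {G.number_of_nodes()} accounts reached")
```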

Journal of Information Warfare

The definitive publication for the best and latest research and analysis on information warfare, information operations, and cyber crime. Available in traditional hard copy or online.
