Russia has historically employed deception, misinformation and disinformation, propaganda, active measures, and information operations to dissuade state actors from pursuing courses of action that challenge the Kremlin’s political and military objectives. Misinformation is non-kinetic and both informs and assists Russia’s military strategy. Communication platforms with global reach spread state-sponsored misinformation to influence, shape, and limit Western political and military responses to Russia’s war in Ukraine. The Kremlin’s stated willingness to deploy tactical and strategic nuclear weapons against Ukraine and the West follows narratives that generate doubt and uncertainty regarding the true intentions of Russian state behaviour.
Cyber deception is a burgeoning defence technique that improves detection and slows the impact of attacks. Deception could be a valuable solution for defending slow-to-patch, minimally cryptographic industrial Cyber-Physical Systems. However, to be convincing, cyber-physical decoys must appear connected to the physical process of the defended system. In this paper, the authors present a machine-learning approach that learns good-enough models of the defended system to drive realistic decoy responses. The results of studying this approach with simulated and real building systems are discussed.
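The idea of a learned "good-enough" model driving decoy responses can be sketched as follows. This is a minimal illustration, not the authors' method: it assumes logged (setpoint, sensor-reading) pairs from a building system, fits a simple linear model, and uses it to answer probes against the decoy. All names and numbers are invented for illustration.

```python
import random

def fit_linear(xs, ys):
    """Ordinary least-squares fit y ~ a*x + b (illustrative surrogate model)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Hypothetical logged (damper setpoint, temperature reading) pairs
# captured from the defended building system.
log = [(10, 18.2), (20, 19.1), (30, 20.3), (40, 21.0), (50, 22.1)]
a, b = fit_linear([x for x, _ in log], [y for _, y in log])

def decoy_response(setpoint):
    """Answer a probe against the decoy with a model-driven reading
    plus small sensor-like noise, so responses look physically plausible."""
    return a * setpoint + b + random.gauss(0, 0.05)
```

The point is that the decoy never touches the real process; it replays the learned input-output behaviour, which is what makes it appear connected to the physical system.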
This paper investigates information influence in society’s Information Environment. The Grounded Theory approach was used to collect and to analyse the data. A conceptual framework of the thematic categories and item categories was developed on the basis of empirical evidence and past studies that reflect the findings of the field. The most fundamental components in this conceptual framework were six thematic categories (information influence, information operations, cyber operations, psychological operations, kinetic operations, and deception), their item categories, the items themselves, and the interrelationships between the thematic categories.
Sophisticated attacks usually involve decision logic that observes the victim’s responses before deciding the next action. Such logic presents an opportunity for the defence, as it provides a controllable feedback channel. Manoeuvres that manipulate responses can confuse the adversary’s decision process, causing them to undertake ineffective actions.
Defensive deception shows promise in rebalancing the asymmetry of cybersecurity. It makes an attacker’s job harder because it does more than just block access; it affects the attacker’s decision making, causing them to waste time and effort and to expose their presence in the network. Pilot studies conducted by NSA research demonstrated both the plausibility of and the need for metrics of success, including the difficulty of attacking the system, behavioral changes caused, cognitive and emotional reactions aroused, and changes in attacker strategy due to deception. Designing reliable and valid measures of effectiveness is a worthy (though often overlooked) goal for industry and government alike.
The emergence of 3D-printed guns over 2013-15 is part of a more fundamental shift in the dynamics of war caused by two different forms of convergence. One is technology convergence; the second is the bundling up of various tactical and operational concepts developed over the last two decades. These have converged into a broad-based concept called five-dimensional operations, or the five-dimensional battlespace.
This paper examines perception management as practised by the governments and militaries of Western nations since 1980. It examines this topic using the framework of a simple model of information, which defines information as the product formed when data meets cognition. Contemporary conflicts and their associated information campaigns are examined, and the paper postulates the impact these practices will have on the democratic process in these nations.
Deception techniques are often employed as part of a proactive and preventative approach to security. However, their application in security has seldom been accompanied by a defining explanation of the deception itself. This paper presents a discourse on the existence of deceptions in nature in order to construct a model applicable to network deceptions. A model of deception is developed with the intention of applying the delineated actions of deceit, deception, and deceiving to a wireless honeypot. In a future experiment, a research goal will be to establish associations between the deceptions deployed and the attainment of network defense goals through implementation of the model of deception.
‘Cyberwar’ is information warfare directed at the software of information systems. It represents an increasing threat to our militaries and civilian infrastructures. Six principles of military deception are enumerated and applied to cyberwar. Two taxonomies of deception methods for cyberwar are then provided, drawing both offensive and defensive analogies from deception strategies and tactics in conventional war to this new arena. One taxonomy has been published in the military literature, and the other is based on case theory in linguistics; the application of both taxonomies to cyberwar is new. We then show how to quantify and rank proposed deceptions for planning using ‘suitability’ numbers associated with the taxonomies. The paper provides planners for cyberwar with a more comprehensive enumeration than any yet published of the tactics and strategies that they and their enemies may use. Some analogies to deception in conventional warfare hold, but many do not, and careful thought and preparation must be applied to any deception effort.
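The ranking step can be illustrated with a toy sketch. This is not the paper's scheme: the candidate deceptions, the two taxonomy score sets, and the combining rule (a simple mean) are all assumptions made here for demonstration.

```python
# Hypothetical candidate deceptions, each with an assumed 'suitability'
# number under two taxonomies (e.g. a military-literature taxonomy and
# a linguistic case-theory taxonomy). Scores are invented.
candidates = {
    "fake open port":    {"military": 0.9, "linguistic": 0.6},
    "delayed responses": {"military": 0.5, "linguistic": 0.8},
    "decoy file system": {"military": 0.7, "linguistic": 0.7},
}

def suitability(scores):
    # Combine per-taxonomy suitability into one planning score;
    # a planner could instead weight the taxonomies unequally.
    return sum(scores.values()) / len(scores)

# Rank candidates from most to least suitable for the deception plan.
ranked = sorted(candidates, key=lambda c: suitability(candidates[c]),
                reverse=True)
```

Whatever the actual numbers, the mechanism is the same: attach a comparable score to each taxonomy entry so that proposed deceptions can be ordered during planning.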
This is preliminary research into the effectiveness of deceptive defensive measures, in particular honeypots, that use deceit as a primary defensive and offensive mechanism. Initial research has been conducted using the Deception Tool Kit and its ability to fool commonly available network-scanning tools such as Nessus and Nmap. The preliminary research indicates that these deceptive tools have a place in modern network defense architecture.
Deception offers one means of hiding things from an adversary. This paper introduces a model for understanding, comparing, and developing methods of deceptive hiding. The model characterizes deceptive hiding in terms of how it defeats the underlying processes that an adversary uses to discover the hidden thing. An adversary’s process of discovery can take three forms: direct observation (sensing and recognizing), investigation (evidence collection and hypothesis formation), and learning from other people or agents. Deceptive hiding works by defeating one or more elements of these processes. The model is applied to computer security, and it is also applicable to other domains.
This paper proposes that both Information Warfare attacks and non-intentional perception errors can be categorised as causes of misperception. The causes of misperception are then analysed in terms of Boyd’s OODA loop model to determine when they cause errors to occur. The OODA loop model is then expanded to produce a theoretical model of the internal process of the Orientation step of the OODA loop. One of these errors is then explained in greater detail with the new model.
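The core idea, that misperception enters during Orientation regardless of whether its cause is an attack or an honest error, can be sketched as a minimal loop. This is an illustration constructed here, not the paper's expanded model; the distortion dictionary stands in for either cause of misperception.

```python
# Minimal OODA loop: Observe -> Orient -> Decide -> Act.
# Misperception is modelled as a distortion applied during Orientation,
# before any decision is made.

def observe(world):
    return dict(world)  # raw observations of the environment

def orient(observations, distortion=None):
    # Misperception enters here: the distortion (an IW attack or an
    # unintentional error alike) corrupts the picture built from the data.
    picture = dict(observations)
    if distortion:
        picture.update(distortion)
    return picture

def decide(picture):
    return "defend" if picture.get("threat") else "monitor"

def act(decision):
    return decision

world = {"threat": True}
honest = act(decide(orient(observe(world))))
deceived = act(decide(orient(observe(world), distortion={"threat": False})))
```

With an undistorted Orientation the loop decides to defend; with the same observations but a corrupted picture it decides merely to monitor, showing how an error injected at Orient propagates into action.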
Information deception is a core component of three-dimensional tactics (3D tactics). 3D tactics is a relatively new concept which seeks to develop spherical security, or ‘look-around’ tactical thinking in three dimensions. However, the connection between information deception and 3D tactics is not well understood. In both the 2005 London Underground attacks and the attempted 2007 Haymarket attack, factors such as information deception served as a key operational frame of reference in the development of the attack methodology.
Perception Management is a key component of Information Operations. This article presents a taxonomy of Perception Management, which is seen as comprising five principal sub-elements: Public Affairs, Public Diplomacy, Psychological Operations, Deception, and Covert Action. While these are traditional activities, the author argues that they have generally not been employed well or in a synergistic fashion by the Western Powers since the Second World War. The article suggests an approach to foreign political-military challenges in terms of ‘Shaping the Information Space’ as an organising principle of policy and the application of power in the international arena. In order to undertake such an enterprise, the Allied nations require improved understanding of the psychology of adversaries and neutrals, as well as of their own friends and allies.
New Media conventions have fluttered along unforeseen flight paths. By combining sock-puppetry with the grouping power of metadata, it is possible to demonstrate widespread influence through Twitter dispersion. In one nest there is a growing use of sock-puppetry, accentuated by the exploitation of social media platforms that do not attempt to verify identity. Created identities in their thousands can flock towards, and in support of, a single identity. They do so alongside legitimate accounts but in concert remain imperceptible within the overall group. In another nest there is the practice of homophily, captured through metadata and used to imply connectivity and alliance by means of inference through the informational transfer of ideas through social media.
Attacks by lone gunmen in public places have been experienced in schools and universities around the world. These attacks are viewed as isolated acts by individuals with little or no connection to any political or ideological agenda. Additionally, these attackers are commonly seen as having little connection to each other. However, viewed from the perspective of tactics, it is argued that certain commonalities arise. Taking examples from past infamous school shootings in the U.S., a particular method of attack and a possible range of tactical solutions to these deadly attack methods can be identified. A methodology is proposed which views these actions by lone gunmen as a tactical concept called erratic attacks. The attackers themselves in this methodology are viewed as wild predators. The wild predator attacker can only be defeated with one of two defences: a denial of space, such as boxing in, and/or a dynamic defence. These approaches are designed to overcome the significant information advantages, namely deception advantages, which an erratic attacker can have. This article proposes to discuss this methodology in terms of terrorist ‘tactics, techniques, and procedures’ (TTPs).
The definitive publication for the best and latest research and analysis on information warfare, information operations, and cyber crime. Available in traditional hard copy or online.