Lamboozling Attackers: A New Generation of Deception

Software engineering teams can exploit attackers’ human nature by building deception environments.

Kelly Shortridge and Ryan Petrich

Deception is a powerful resilience tactic that provides observability into attack operations, deflects impact from production systems, and informs resilient system design. A lucid understanding of the goals, constraints, and design trade-offs of deception systems could give leaders and engineers in software development, architecture, and operations a new tactic for building more resilient systems—and for bamboozling attackers.

Unfortunately, innovation in deception has languished for nearly a decade because of its exclusive ownership by information security specialists. Mimicry of individual system components remains the status-quo deception mechanism despite growing stale and unconvincing to attackers, who thrive on the interconnections between components and expect to encounter complete systems. Consequently, attackers remain unchallenged and undeterred.

This wasted potential motivated our design of a new generation of deception systems, called deception environments. These are isolated replica environments containing complete, active systems that exist to attract, mislead, and observe attackers. By harnessing modern infrastructure and systems design expertise, software engineering teams can use deception tactics that are largely inaccessible to security specialists. To help software engineers and architects evaluate deception systems through the lens of systems design, we developed a set of design principles summarized as a pragmatic framework. This framework, called the FIC trilemma, captures the most important dimensions of designing deception systems: fidelity, isolation, and cost.

The goal of this article is to educate software leaders, engineers, and architects on the potential of deception for systems resilience and the practical considerations for building deception environments. By examining the inadequacy and stagnancy of historical deception efforts by the information security community, the article also demonstrates why engineering teams are now poised—with support from advancements in computing—to become significantly more successful owners of deception systems.

Deception: Exploiting Attacker Brains

Attackers are humans whose objectives are met by accessing, destabilizing, stealing, or otherwise leveraging other humans’ computers without consent. Software engineers must understand and anticipate this type of negative shock to the systems they develop and operate. Doing so involves building the capability to collect relevant information about attackers and to implement anticipatory mechanisms that impede the success of their operations. Deception offers software engineering teams a strategic path to achieve both outcomes on a sustained basis.

Sustaining resilience in any complex system requires the capacity to implement feedback loops and continually learn from them. Deception can support this continuing learning capacity. The value of collecting data about the interaction between attackers and systems, which we refer to as attack observability, is generally presumed to be the concern of information security specialists alone. This is a mistake. Attacker effectiveness and systems resilience are antithetical; one inherently erodes the other. Understanding how attackers make decisions allows software engineers to exploit the attackers’ brains for improved resilience.

Attack observability. The importance of collecting information on how attackers make decisions in real operations is conceptually similar to the importance of observability and tracing in understanding how a system or application actually behaves rather than how it is believed to behave. Software engineers can attempt to predict how a system will behave in production, but its actual behavior is quite likely to deviate from expectations. Similarly, software engineers may have beliefs about attacker behavior, but observing and tracing actual attacker behavior will generate the insight necessary to improve system design against unwanted activity.
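To make attack observability concrete, the following minimal sketch shows a decoy HTTP endpoint that emits a structured event for every interaction. It is illustrative only: the port, the event fields, and the print-based export stand in for whatever tracing pipeline a team already operates.

```python
# Minimal sketch of attack observability: a decoy HTTP endpoint that records
# every interaction as a structured event. Port, fields, and the print-based
# export are illustrative stand-ins for a real tracing pipeline.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class DecoyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        event = {
            "ts": time.time(),
            "src_ip": self.client_address[0],
            "method": "GET",
            "path": self.path,
            "user_agent": self.headers.get("User-Agent", ""),
        }
        print(json.dumps(event), flush=True)  # stand-in for a trace exporter
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

    def log_message(self, fmt, *args):
        pass  # suppress default stderr logging; structured events suffice

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DecoyHandler).serve_forever()
```

Because the endpoint serves no legitimate users, every recorded event is a trace of unexpected, and likely attacker, behavior.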

Understanding attacker behavior starts with understanding how humans generally learn and make decisions. Humans learn from both immediate and repeated interactions with their reality (that is, experiences). When making decisions, humans supplement preexisting knowledge and beliefs with relevant experience accumulated from prior decisions and their consequences. Taken together, human learning and decision-making are tightly coupled systems. Given that attackers are human beings—and even automated attack programs and platforms are designed by humans—this tight coupling can be leveraged to destabilize attacker cognition.

In any interaction rife with conflict, such as attackers versus system operators, information asymmetry (one player having more or better information related to the game than the other) confers core advantages that can tip success toward a particular side. Imperfect information means players may not observe or know all moves made during the game. Incomplete information means players may be unaware of their opponents’ characteristics, such as priorities, goals, risk tolerance, and resource constraints.

Attackers choose an attack plan based on preexisting beliefs and knowledge learned through experience about operators’ current infrastructure and protection of it. Operators choose a defense plan based on preexisting and learned knowledge about attackers’ beliefs and methods.

This dynamic presents an opportunity for software engineers to use deception to amplify information asymmetries in their favor. When operators manipulate the experiences attackers receive, any knowledge attackers gain from those experiences becomes unreliable, poisoning their learning process and thereby disrupting their decision-making.

Deception systems allow software engineers to exacerbate information asymmetries in two dimensions: exposing real-world data on attackers’ thought processes (increasing the value of information for operators); and manipulating information to disrupt attackers’ abilities to learn and make decisions (reducing the value of information for attackers).

The rest of the article will discuss the challenges and potential of deception systems to achieve these goals in real-world contexts.

The History of Honeypots

The art of deception has been constrained by information security’s exclusive ownership of it. The prevailing deception mechanism is a host set up for the sole purpose of detecting, observing, or misdirecting attack behavior, such that any access or usage indicates suspicious activity. These systems are referred to as honeypots in the information security community. It is worth enumerating the existing types of honeypots to understand their deficiencies.

Levels of interactivity. Honeypots are typically characterized by whether they involve a low, medium, or high level of interactivity.

Low interaction (LI) honeypots are the equivalent of cardboard-cutout decoys; attackers cannot interact with them in any meaningful way. LI honeypots represent simple mimicry of a system’s availability and are generally used to detect the prevalence of port scanning and other basic methods attackers use to gather knowledge relevant for gaining access (somewhat like lead generation). They may imitate a specific port or vulnerability and record successful or attempted connections.
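In practice, an LI honeypot can be as small as a socket that accepts connections and records them. The following sketch makes the idea concrete; the port (2323, a common Telnet alternative) is an arbitrary choice for illustration.

```python
# Minimal sketch of a low-interaction honeypot: accept connections on a port
# and log each attempt. No service sits behind it, so any hit is suspicious.
import datetime
import socket

def li_honeypot(port: int = 2323) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, (src_ip, src_port) = srv.accept()
            print(f"{datetime.datetime.utcnow().isoformat()}Z "
                  f"connection attempt from {src_ip}:{src_port}")
            conn.close()  # no interaction beyond accepting the connection

if __name__ == "__main__":
    li_honeypot()
```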

Medium interaction (MI) honeypots imitate a specific kind of system, such as a mail server, in enough depth to encourage attackers to exploit well-known vulnerabilities, but they lack sufficient depth to imitate full system operation. Upon an exploitation attempt, MI honeypots send an alert or record the attempt and reject it. They are best for studying large-scale exploitation trends of public vulnerabilities or for operating inside a production network, where any access attempt indicates an attack in progress.
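A minimal sketch of the MI pattern, assuming a fake mail server: present a plausible SMTP banner, record every command the attacker sends, and reject all of them. The banner text and port are illustrative assumptions.

```python
# Minimal sketch of a medium-interaction honeypot imitating an SMTP server:
# enough depth to elicit attacker commands, not enough to run a real service.
import socket

def mi_smtp_honeypot(port: int = 2525) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, (src_ip, _) = srv.accept()
            with conn:
                conn.sendall(b"220 mail.example.com ESMTP Postfix\r\n")
                while True:
                    data = conn.recv(1024)
                    if not data:
                        break
                    print(f"{src_ip} sent: {data!r}")  # record the attempt
                    if data.strip().upper().startswith(b"QUIT"):
                        conn.sendall(b"221 Bye\r\n")
                        break
                    conn.sendall(b"554 5.7.1 Rejected\r\n")  # reject everything

if __name__ == "__main__":
    mi_smtp_honeypot()
```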

High interaction (HI) honeypots are vulnerable copies of services meant to tempt attackers, who can exploit the service, gain access, and interact with the base operating-system components as they normally would. It is uncommon for HI honeypots to include other components that imitate a real system; for the few that do, this is usually a side effect of being built by transplanting a copy of a real system. HI honeypots usually send an alert upon detection of an attacker’s presence, such as after successful exploitation of the vulnerable software.
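One way to stand up an HI honeypot today is to run a deliberately vulnerable copy of a service inside an isolated container network. The following hedged sketch uses the docker Python SDK; the image name legacy-mail:1.2 is hypothetical, standing in for a copy of an outdated production service.

```python
# Hedged sketch of provisioning a high-interaction honeypot with the docker
# Python SDK. The image "legacy-mail:1.2" is hypothetical; in practice it
# would be a vulnerable copy of a real service.
import docker

client = docker.from_env()

# An internal bridge network keeps the decoy from reaching production or the
# wider internet; a real deployment would layer further isolation on top.
client.networks.create("deception-net", driver="bridge", internal=True)

decoy = client.containers.run(
    "legacy-mail:1.2",      # hypothetical vulnerable service image
    detach=True,
    network="deception-net",
    name="hi-honeypot-mail",
)
print(f"decoy running: {decoy.short_id}")
```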

Limitations of honeypots. While LI and MI honeypots are generally understood to be ineffectual at deceiving attackers (and thus can be dismissed as applicable options for real-world deception), the existing corpus of HI honeypots is primitive as well. Conventional deception approaches are unconvincing to attackers with a modicum of experience. Attackers need only ask simple questions—Does the system feel real? Does it lack activity? Is it old and forgotten?—to dissipate the mirage of HI honeypots.

The limitations of HI honeypots mean that attackers often uncover their deceptive nature by accident. HI honeypots also lack the regular flow of user traffic and associated wear of production systems—a dead giveaway for cautious attackers.

Finally, a fundamental flaw of all honeypots is that they are built and operated by information security specialists, who are typically not involved in software architecture and are largely divorced from software delivery. They may know at a high level how systems are supposed to behave but are often unaware of the complex interactions between components that are pivotal to systems function. As such, this exclusive ownership by security specialists represents a significant downside to current deception efficacy.

Modern Computing Enables New Deception

A new generation of deception is not only possible, but also desirable given its strategic potential for systems resilience. The design and ownership of this new category, deception environments, reflects a significant departure from the prior generation. Deception environments are sufficiently evolved from honeypots that they represent a new, distinct category.

It is not surprising that attackers find individual honeypot instances unconvincing, given their expertise in attacking systems and their reliance on understanding how components interrelate to inform their operations. The combination of new types of computing and ownership by software engineers means that environments dedicated to distributed deception can be created that more closely resemble the types of systems attackers expect to encounter.

The goal of traditional honeypots is to determine how frequently attackers use scanning tools or exploit known vulnerabilities; tracing the finer nuances of attacker behavior or uncovering their latest methodologies is absent from deception projects to date. Deception environments instead serve as a means to observe and understand attacker behavior throughout all operational stages and as platforms for conducting experiments on attackers capable of evading variegated defensive measures. This concentrates efforts on designing more resilient systems and makes fruitful use of finite engineering attention and resources.

A few dimensions of modern infrastructure are pivotal in nurturing a new deception paradigm with lower costs and more efficacious design.

Cloud computing. The accessibility of cloud computing makes it possible to provision fully isolated infrastructure at little expense.

Deployment automation. Full systems-deployment automation and the practice of defining infrastructure declaratively, commonly referred to as IaC (infrastructure as code), decrease the operational overhead of deploying and maintaining shadow copies or variants of infrastructure (see the sketch following these dimensions).

Virtualization advancements. The widespread availability of nested virtualization and mature, hardened virtualization technologies inspires confidence that attackers are isolated from production, makes it possible to observe them in more detail, and extracts extra density from computing resources.

Software-defined network (SDN) proliferation. With the ability to define networks programmatically, isolated network topologies dedicated to attackers can be created without incurring additional hardware cost.
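As a rough illustration of how these dimensions combine, the sketch below repurposes an existing Terraform configuration to stand up a deception variant in its own workspace. The directory and the variables environment and enable_decoy_traffic are hypothetical; they depend entirely on a team’s own templates.

```python
# Hedged sketch of repurposing existing IaC for deception: apply the same
# Terraform configuration used for production variants, but in a dedicated
# workspace with deception-specific inputs. Variable names are hypothetical.
import subprocess

def apply_deception_environment(workdir: str = "./infra") -> None:
    # Create the workspace if needed (no error if it already exists), then
    # select it so the deception environment's state stays fully separate.
    subprocess.run(["terraform", "workspace", "new", "deception"],
                   cwd=workdir, check=False)
    subprocess.run(["terraform", "workspace", "select", "deception"],
                   cwd=workdir, check=True)
    subprocess.run([
        "terraform", "apply", "-auto-approve",
        "-var", "environment=deception",
        "-var", "enable_decoy_traffic=true",
    ], cwd=workdir, check=True)

if __name__ == "__main__":
    apply_deception_environment()
```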

New ownership. This is another crucial catalyst for this latest generation of deception. Ownership based on systems design expertise, rather than security expertise, creates the dynamism necessary for deception systems to succeed against similarly dynamic opponents.

Software engineering teams are already executing the necessary practices. Operators can repurpose the system-deployment templates they use to build production environments and variants (such as staging environments) into powerful deception systems. They can then derive attack data that is distinctly applicable to their environments and cannot be garnered elsewhere. As a result, software engineers are better qualified for the endeavor than security teams and can gain a highly effective observability tool by deploying deception environments.

Source: ACM Queue
