Cyber Security: A Complex Systems Science

1. Introduction

Computers and the internet have become indispensable for homes and organisations alike. Their importance keeps growing, from household use to power grid management, mission-critical space control, corporate finance systems, and medical applications. At the same time, organisations face increasing challenges in keeping service delivery reliable and uninterrupted. Cyber security is now a top concern for organisations, with most rating it higher than natural disasters or terrorism.

Despite all the attention cyber security has received, it has been a difficult journey so far. Global IT security spending is expected to reach $120 billion by 2017, and this is one area where most companies have maintained or even slightly increased their IT budgets through the recent financial crises. Yet that spending has not significantly reduced the number of vulnerabilities in software or the attacks mounted by criminal groups.

The US government has been preparing for a “Cyber Pearl Harbor” [18] style attack that could paralyse essential services and cause destruction of property and lives. Such an attack is expected to be orchestrated from the criminal underbelly of countries such as China, Russia, or North Korea.

Cybercrime has a $100B annual economic impact in the United States alone [4].

It is time to rethink how we secure our IT systems. Our current approach to security is siloed and focuses on point solutions for specific threats: anti-virus software, intrusion detection systems, and firewalls [6]. We are now at a point where Cyber systems are much more than just tin-and-wire and software; they are complex systems with political, social, and economic components. Because of the interconnectedness of systems and people, IT systems can no longer be seen in isolation from the human element. Complex Cyber systems today almost have a life of their own; they are complex adaptive systems that we have been trying to understand and tackle using traditional theories.

2. Complex Systems – An Introduction

Before we get into the reasons for treating a Cyber system as a Complex system, let us first define what a Complex system is. The term “system” can refer to any combination of people or processes that fulfils a specific purpose: your wrist watch, sub-oceanic coral reefs, or even the economy of a country can all be described as a “system”.

A Complex System is a system in which the parts and their interactions give rise to a particular behaviour, such that an analysis of the individual components cannot explain the behaviour of the whole. In such systems cause and effect cannot necessarily be established. The relationships between the parts and their interactions are non-linear, so a small change in one part can have a disproportionate impact on the whole. As Aristotle put it, “the whole is greater than the sum of its parts”. Urban traffic systems and the emergence and spread of traffic jams are a classic example: analysing individual drivers and cars cannot explain these patterns or how jams emerge and spread.
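
To make emergence concrete, here is a minimal sketch (my own illustration, not part of the original discussion) of the Nagel-Schreckenberg cellular-automaton traffic model in Python. Every car follows the same simple local rules, yet clusters of stopped cars – jams – emerge that cannot be traced back to any individual driver.

    import random

    ROAD_LENGTH = 100   # cells on a circular road
    NUM_CARS = 30       # density ~0.3 is enough for jams to emerge
    V_MAX = 5           # maximum speed in cells per step
    P_SLOW = 0.3        # probability of a random slowdown

    def step(positions, speeds):
        """Advance the Nagel-Schreckenberg model by one time step."""
        order = sorted(range(len(positions)), key=lambda i: positions[i])
        for idx, i in enumerate(order):
            ahead = order[(idx + 1) % len(order)]
            gap = (positions[ahead] - positions[i] - 1) % ROAD_LENGTH
            speeds[i] = min(speeds[i] + 1, V_MAX)           # accelerate
            speeds[i] = min(speeds[i], gap)                 # keep distance
            if speeds[i] > 0 and random.random() < P_SLOW:  # random braking
                speeds[i] -= 1
        for i in range(len(positions)):
            positions[i] = (positions[i] + speeds[i]) % ROAD_LENGTH

    positions = sorted(random.sample(range(ROAD_LENGTH), NUM_CARS))
    speeds = [0] * NUM_CARS
    for t in range(50):
        step(positions, speeds)
        road = ['.'] * ROAD_LENGTH
        for p in positions:
            road[p] = 'X'
        print(''.join(road))   # clusters of 'X' are jams no single rule creates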

A Complex Adaptive System (CAS) additionally has the characteristics of self-learning and emergence among its participants. The participants or agents in a Complex Adaptive System exhibit heterogeneous behaviour, and their behaviour and interactions with other agents continuously evolve. The key characteristics that make a system Complex Adaptive are the following:

The behaviour or output cannot be predicted simply by analysing the parts and inputs of the system.
The behaviour of the system is dynamic and evolves with time; the same inputs and environmental conditions do not guarantee the same output.
The participants or agents of the system (human agents in this case) are self-learning and adapt their behaviour based on the outcomes of previous experience.

Complex processes are often confused with “complicated” processes. A complex process is one whose outcome is unpredictable, however simple its steps may seem. A complicated process, by contrast, involves many intricate steps and difficult preconditions but leads to a predictable outcome. Making tea is complex (at least for me, I can never get the same cup twice); building a car is complicated. The Cynefin framework by David Snowden gives a more formal description of the terms [7].

Complexity is not a new field of study; its roots can be traced back to Aristotle’s work on Metaphysics [8]. Complexity theory draws much of its inspiration from biological systems and has been used in sociology, epidemiology, and natural science studies for some time, as well as in the analysis of economic systems and free markets. Although it is not yet a popular topic in Cyber security, there is growing acceptance of complexity thinking in the applied sciences.

3. Cyber Security: Motivation to Use Complexity

Today’s IT systems are designed and built by us, the human community of IT workers within organisations and their suppliers, so collectively we should have all the knowledge there is to have about these systems. Yet every day we see attacks on IT systems that we did not expect, exploiting vulnerabilities we never knew existed. One reason is that every IT system is designed by thousands of individuals across the entire technology stack, from the business application down to the hardware and underlying network components. This introduces a strong human element into the design of Cyber systems, and as a result opportunities to introduce vulnerabilities exist at every layer [9].

Although most organisations have several layers of defence around their critical systems (firewalls, IDS, hardened O/S and so on), attacks still happen. More often than not, computer break-ins are the result of a collision of circumstances rather than a single standalone vulnerability being exploited. In other words, it is the “whole” of the circumstances and actions of the attackers that causes the damage.

3.1 Reductionism vs Holism approach

Reductionism and Holism are two contradictory philosophical approaches to the design and analysis of objects and systems. Reductionists claim that any system can be broken down into its components and analysed by “reducing” it to its constituent elements. Holists, on the other hand, argue that the whole is greater than the sum of its parts, so a system cannot be understood by understanding its parts alone [10].

Reductionists believe that every system and machine can be understood by examining its constituent parts. The reductionist approach forms the basis of most modern science and analysis, and it has served us well: understand the function of each component and you can design a watch accordingly, or analyse the positions of celestial objects and you can predict the next solar eclipse. Reductionism has a strong focus on causality: there is a cause for every effect.

But that is the extent to which the reductionist perspective can explain the behaviour of a system. When it comes to emergent systems such as human behaviour, socio-economic systems, biological systems or socio-cyber systems, the reductionist approach has its limitations. Even simple examples such as the human reaction to a political stimulus, or the reaction of financial markets to news of a merger, cannot be predicted this way.

We have traditionally looked at Cyber security through a reductionist lens, building point solutions for individual problems and trying to anticipate the attacks a cyber-criminal might launch against known vulnerabilities. It is time to look at Cyber security from a different, more holistic perspective as well.

3.2 Computer break-ins are similar to pathogen infections

Computer break-ins are more like bacterial or viral infections than a car or home break-in. A burglar who breaks into a house cannot use it as a launchpad to enter the homes of the neighbours, nor can the vulnerability of one car lock be used to gain access to millions of other cars around the world. Computer break-ins, however, behave much like microbial infections of the human body: a compromised system can spread the infection, and large numbers of systems are susceptible as long as they are connected to one another. In severe cases the affected systems are ‘isolated’, just as people are put in quarantine to prevent further spread. Even the vocabulary of Cyber systems borrows biological metaphors such as viruses, worms, and infections. Yet although the parallels with epidemiology are many, the design principles of Cyber systems do not follow natural selection: Cyber systems are built on uniformity of technology and processes, rather than the diversity of genes within a species that makes the species more resilient to epidemic attacks [11].

The flu pandemic of 1918 killed some 50 million people, more than the Great War itself. Almost everyone was infected, so why was it more severe for 20-40 year olds than for others? Perhaps the difference lies in how different bodies react to an attack.

Complexity theory has gained considerable traction and has proven very useful in epidemiology, in understanding the patterns by which infections spread and the ways they can be controlled. Researchers are now turning their attention to Cyber systems, transferring what they have learned from the natural sciences.
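
As a toy illustration of the diversity argument (my own sketch with made-up numbers, not from the original text), the following model contrasts a uniform population of systems, all susceptible to the same exploit, with a diverse one in which only a quarter of the nodes share the vulnerable ‘genotype’. The diverse population consistently suffers much smaller outbreaks.

    import random

    def outbreak_size(n_nodes, contacts, diverse, p_transmit=0.4, trials=200):
        """Average final size of a simple SIR-style outbreak with random contacts."""
        total = 0
        for _ in range(trials):
            # each node gets a 'genotype'; in the uniform case all share genotype 0
            genotypes = [random.randrange(4) if diverse else 0 for _ in range(n_nodes)]
            genotypes[0] = 0                       # patient zero is always vulnerable
            vulnerable = [g == 0 for g in genotypes]
            infected, frontier = {0}, [0]
            while frontier:
                node = frontier.pop()
                for _ in range(contacts):          # random contacts per infected node
                    other = random.randrange(n_nodes)
                    if (other not in infected and vulnerable[other]
                            and random.random() < p_transmit):
                        infected.add(other)
                        frontier.append(other)
            total += len(infected)
        return total / trials

    print("uniform population :", outbreak_size(500, 4, diverse=False))
    print("diverse population :", outbreak_size(500, 4, diverse=True))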

4. Approaches to Mitigating Security Threats

Traditionally there have been two complementary approaches to countering security threats to Cyber systems, and they are still used in most systems today [11].

4.1 Formal validation and testing

This approach relies on the IT system’s testing team to find any flaws that attackers could exploit. It includes functional testing to verify that the system behaves as expected, penetration testing to confirm its resilience to specific attacks, and availability/resilience testing. The scope of this testing is generally the system itself, not the frontline defences deployed around it.

This approach works well for relatively simple, self-contained systems where the possible user journeys are straightforward. For most interconnected systems, however, formal validation alone is not enough, because it is impossible to ‘test it all’.

Test automation is a popular way to reduce the human dependency of validation processes, but as Turing’s Halting Problem of Undecidability[*] shows, it is impossible to build a machine that tests another machine in the general case. Automation only helps us gather anecdotal evidence faster.
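
For the curious, here is a minimal sketch of the diagonal argument behind the Halting Problem (my illustration, not from the original text). It assumes a hypothetical oracle halts(); feeding the paradox program to itself shows that no such oracle, and hence no universal test machine, can exist.

    # Assumes a hypothetical oracle `halts(program, argument)` that always
    # answers correctly. No such total, always-correct function can exist.

    def halts(program, argument):
        """Hypothetical oracle: True iff program(argument) eventually halts."""
        raise NotImplementedError("no general halting oracle exists")

    def paradox(program):
        # If the oracle claims program(program) halts, loop forever; else halt.
        if halts(program, program):
            while True:
                pass
        return "halted"

    # Either answer for halts(paradox, paradox) contradicts itself:
    # True  -> paradox(paradox) loops forever, so it does not halt;
    # False -> paradox(paradox) returns immediately, so it does halt.
    # Therefore halts() cannot exist, and neither can a machine that
    # exhaustively tests every other machine.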

4.2 Encapsulation of defence

For systems that cannot be fully validated through formal testing, we deploy additional layers of defence: network segregation, firewalls, or virtual machines with limited access. Anti-virus software and Intrusion Prevention Systems are other common defence mechanisms.

This is the common defence against unknown attacks in most organisations, since it is almost impossible to formally prove that a piece of software is secure and will remain so.

Approaches from the complexity sciences could be a useful complement to these traditional methods. The versatility of computer systems makes them unpredictable and capable of emergent behaviour that cannot be foreseen without ‘running’ them, and running a system in a test environment is not the same as running it in the real environment it is meant to operate in. The apparent emergent behaviour is caused by the interaction of multiple events (recall holism).

4.3 Diversity over Uniformity

Robustness through diversity is an emergent behaviour in biological systems. Imagine a species in which all organisms had exactly the same genetic structure, body configuration and immune system: a single viral infection could wipe out the entire community. That does not happen, because each of us is different and has a different resistance to infection.

Similarly, some mission-critical systems, especially in the Aerospace and Medical industries, implement diverse implementations of the same functionality, with a centralised ‘voting’ function deciding the response to the requester when the results from the different implementations do not agree.

Although redundant copies of mission-critical systems are quite common in organisations, they tend to be homogeneous rather than diverse, which makes them equally susceptible to all the same faults and vulnerabilities as the primary. A redundant system would be far more resilient to attacks if it were implemented differently from the primary: a different O/S, a different database version, or a different application container. Even a change in the order of memory stack access can change how variants respond to a buffer overflow attack [12], alerting a central ‘voting’ system that something is wrong. As long as the variants receive the same input data and provide the same business function, any deviation in their responses is an indication of a possible attack. In a true service-based architecture, each ‘service’ could have multiple, but limited, heterogeneous implementations, and the overall business function could randomly select which implementation to use for each new request. This would allow a large variety of execution paths, increasing the resilience of the system [13].
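
A minimal sketch of what such a diversified, voting service dispatcher might look like (all function names and the price-lookup example are hypothetical, chosen purely for illustration):

    import random
    from collections import Counter

    # Three independently written implementations of the same business function.
    # In practice these would run on different stacks (O/S, database, container).
    def price_lookup_v1(item_id):
        return {"item": item_id, "price": 100 * item_id}

    def price_lookup_v2(item_id):
        return {"item": item_id, "price": item_id * 100}

    def price_lookup_v3(item_id):
        return {"item": item_id, "price": sum(100 for _ in range(item_id))}

    VARIANTS = [price_lookup_v1, price_lookup_v2, price_lookup_v3]

    def handle_request(item_id):
        """Pick a variant at random per request, multiplying the execution paths."""
        return random.choice(VARIANTS)(item_id)

    def voted_response(item_id):
        """Run all variants in lockstep and vote; disagreement flags a possible attack."""
        results = [repr(v(item_id)) for v in VARIANTS]
        winner, votes = Counter(results).most_common(1)[0]
        if votes < len(VARIANTS):
            print("ALERT: variant outputs diverge for item", item_id)
        return winner

    print(handle_request(7))
    print(voted_response(7))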

Multi-variant Execution Environments (MVEEs) have been developed, in which applications with slight differences in their implementation are executed in lockstep and their responses to each request are monitored [12]. These environments have proven very useful for intrusion detection, catching attempts to modify the behaviour of the code and identifying existing flaws that cause the variants to respond differently to a request.

Similarly, using the N-version programming concept [14], an N-version anti-virus was developed at the University of Michigan, with heterogeneous detection engines examining new files for the corresponding virus signatures. The result was more resistant to attacks and gave 35% greater detection coverage across the estate [15].

4.4 Agent Based Modelling (ABM)

Agent Based Modelling, a simulation modelling technique, is one of the key areas of research in Complexity science.

Agent Based Modelling is a simulation technique used to analyse and understand the behaviour of Complex Adaptive Systems. Artificial agents are created that interact within the modelled Complex system according to a set of predefined rules, and these agents can evolve their behaviour and adapt to changing circumstances. Unlike deductive reasoning[+], simulation does not attempt to generalise the behaviour of the system and its agents.
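
Here is a minimal agent-based sketch of the idea (entirely my own toy example, with made-up parameters): each ‘admin’ agent decides whether to patch its system and adapts that decision based on what happened to it in earlier rounds, so the population’s exposure changes over time in a way no single rule dictates.

    import random

    class AdminAgent:
        """An agent that learns from experience whether to patch its system."""
        def __init__(self):
            self.patch_propensity = 0.2    # initial probability of patching
            self.patched = False
            self.compromised = False

        def act(self):
            self.patched = random.random() < self.patch_propensity

        def learn(self, attacked):
            self.compromised = attacked and not self.patched
            if self.compromised:               # being hit raises future caution
                self.patch_propensity = min(1.0, self.patch_propensity + 0.2)
            else:                              # complacency slowly creeps back
                self.patch_propensity = max(0.05, self.patch_propensity - 0.01)

    agents = [AdminAgent() for _ in range(200)]
    for round_no in range(30):
        for a in agents:
            a.act()
        for a in agents:
            a.learn(attacked=random.random() < 0.3)   # 30% of systems probed per round
        if round_no % 5 == 0:
            print(f"round {round_no}: compromised = {sum(a.compromised for a in agents)}")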

ABMs are widely used to study market behaviour, for financial risk analysis, and for crowd management in emergencies such as a fire or an epidemic. It is a bottom-up modelling technique in which each agent can have a unique behaviour that differs from the other agents. Many techniques can be used to implement the evolutionary and self-learning behaviour of agents, with Genetic Algorithms being the most popular [16].

Cyber systems can be described as interconnections of software modules and the wiring of logical circuits. These interactions and actors can be implemented in a simulation model to perform what-if analysis and predict the effects of changing parameters or interactions among the actors. Simulation models have been used for many years to analyse the performance characteristics of applications based on user behaviour and application characteristics, and some popular Capacity and Performance management tools employ the technique. Similar techniques can be applied to analyse the response of Cyber systems to threats, to design fault-tolerant architectures, and to analyse the extent of emergent robustness arising from diversity of implementation.

One area where Agent Based Modelling can focus is the self-learning process of the agents. In the real world, the behaviour of attackers evolves with experience, and this is captured in how the agents learn. Genetic Algorithms are the most well-known method for such learning: they have been used to design automobiles and aircraft, to optimise the performance of Formula One cars [17], and to simulate the learning behaviour of investors in simulated stock markets (implemented with Agent Based models).

An interesting visualisation of a Genetic Algorithm – a self-learning process in action – is the demo of a simple 2D car design process that starts from scratch with a set of simple rules and ends up with a workable car built from a blob of different parts: http://rednuht.org/genetic_cars_2/

The self-learning process of agents is based on “mutations” and “crossovers”, two operators fundamental to any Genetic Algorithm implementation. They mimic DNA mutation and crossover in the biological evolution of living things. Through crossovers and mutations, agents learn from their own experiences and mistakes. These operators can be used to model the learning behaviour of potential attackers, without having to imagine all the possible use cases and user journeys an attacker might attempt in order to break into a Cyber system.
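
A compact Genetic Algorithm sketch (again a hypothetical toy, not a real attack tool) shows the two operators at work: a population of bit-string ‘attack paths’ evolves through selection, crossover and mutation until one matches a target path it was never explicitly told about.

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # hypothetical 'successful' path
    POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 60, 0.05

    def fitness(genome):
        """How close an agent's attempted path is to the target."""
        return sum(1 for g, t in zip(genome, TARGET) if g == t)

    def crossover(parent_a, parent_b):
        """Single-point crossover: mix the 'DNA' of two parents."""
        point = random.randrange(1, len(parent_a))
        return parent_a[:point] + parent_b[point:]

    def mutate(genome):
        """Random mutation: occasionally flip a bit."""
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]                     # selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
        if fitness(population[0]) == len(TARGET):
            print(f"best agent matched the target path in generation {gen}")
            break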

5. Conclusion

Complexity in Cyber systems, particularly the use of Agent Based Modelling to assess the emergent behaviour of systems, is still a new area of research, and Agent Based Modelling is not yet an established commercial proposition for organisations. There is still a great deal to explore, but given our current focus on Cyber security, complexity science is a promising avenue for practitioners and academics alike.
