Examining The Cyber Kill Chain

Many in the security community have long advocated focusing beyond the perimeter, since a few firewall rules and an antivirus program clearly won’t hold up against advanced attacks. The new push is toward security systems with an internal focus, where events like privilege escalation, transfers of sensitive data, and other potentially anomalous behavior can be better incorporated into intrusion detection systems. The cyber ‘kill chain’ methodology is the latest in a series of forward-thinking security strategies, aimed especially at advanced persistent threats (APTs), that are premised on a more nuanced model of monitoring, analysis, and mitigation.

The formal concept of the cyber ‘kill chain’ was first developed by a group of scientists at Lockheed Martin in a paper titled “Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains”. Borrowing from the conventional warfare ‘kill chain’, it adapts to the cyber domain an analytical model that views an offensive campaign as a sequence of segmented stages. Applied to cyber attacks, the model breaks an intrusion into stages running from initial reconnaissance through exfiltration of data.

In practice, the cyber kill chain is a fairly sophisticated system in which defenders monitor data on each stage of every attack. The end goal is to analyze that data for patterns in attack methods, the behaviors of distinct hostile actors, and other indicators that can inform the development of unique responses.
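To make the per-stage monitoring concrete, here is a minimal Python sketch of what stage-tagged event data might look like. The seven stage names come from the Lockheed Martin paper; the `StageEvent` fields and example values are illustrative assumptions, not the schema of any real product.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum, auto
from typing import Optional


class KillChainStage(Enum):
    """The seven stages named in the Lockheed Martin intrusion kill chain paper."""
    RECONNAISSANCE = auto()
    WEAPONIZATION = auto()
    DELIVERY = auto()
    EXPLOITATION = auto()
    INSTALLATION = auto()
    COMMAND_AND_CONTROL = auto()
    ACTIONS_ON_OBJECTIVES = auto()


@dataclass
class StageEvent:
    """One observed event, tagged with the kill chain stage it maps to."""
    timestamp: datetime
    stage: KillChainStage
    source_ip: str
    indicator: str                  # e.g. a phishing sender, a C2 domain, a file hash
    campaign: Optional[str] = None  # attribution label, once one has been assigned


# A defender's log becomes a stream of stage-tagged events rather than raw alerts,
# which is what makes per-stage pattern analysis possible.
events = [
    StageEvent(datetime(2013, 1, 30, 9, 12), KillChainStage.DELIVERY,
               "203.0.113.7", "spear-phish attachment resume.doc"),
    StageEvent(datetime(2013, 1, 30, 9, 45), KillChainStage.COMMAND_AND_CONTROL,
               "203.0.113.7", "beacon to c2.example.net"),
]
```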

Secureworks’ conception of a cyber kill chain

For example, a kill chain system could analyze the specific stages of attack Chinese hackers took in their campaign against American media outlets like the New York Times and Wall Street Journal. Defenders would track which media outlets were targeted, how the attackers delivered payloads through spear phishing, patterns of navigation and privilege escalation through the compromised systems, and finally what information was exfiltrated. From the sum of this segmented data, defenders can better attribute attacks based on indicators like the prominence of the media outlets attacked, the information being tracked or sought after, such as the sources of Chinese dissidents, and the exfiltration of data to Chinese servers. On the response side, an organization can better understand where its internal vulnerabilities lie and what countermeasures can be taken, like deploying local honeypots and blacklisting certain IPs.
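Building on the hypothetical `StageEvent` records sketched above, one simple way this kind of attribution analysis might be structured is to group events by shared infrastructure and check how much of the chain each group covers. The clustering key and function names here are assumptions for illustration, not a description of any real system.

```python
from collections import Counter, defaultdict


def cluster_by_source(events):
    """Group stage-tagged events by source IP so reused infrastructure
    (e.g. the same delivery hosts or exfiltration servers) surfaces as a candidate campaign."""
    clusters = defaultdict(list)
    for event in events:
        clusters[event.source_ip].append(event)
    return clusters


def stages_covered(cluster):
    """Count how much of the kill chain a cluster spans; a cluster reaching
    command-and-control or exfiltration matters more than one stuck at delivery."""
    return Counter(event.stage.name for event in cluster)


for source, cluster in cluster_by_source(events).items():
    print(source, dict(stages_covered(cluster)))
```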

Segmenting the security process at each stage also forces attackers to bear more risk. Even common phishing attacks take many steps to succeed, and under a system like this, attackers must keep varying their methods to avoid matching a previously recognized pattern or standing out as anomalous. Adopting more complex approaches like the cyber ‘kill chain’ will only grow more critical with the expansion of advanced persistent threats, which are more sophisticated and play out over longer timeframes. However, while the cyber ‘kill chain’ strategy is promising, some problems need to be kept in mind.

For one, generating data not just from inbound and egress network traffic but from every stage in between adds a new level of complexity. I think the issue lies in ‘weighing’, or scoring, the data, where making decisions across a multitude of malicious acts becomes difficult. We know how to distinguish a common XSS vulnerability from a more complex kernel-level zero-day, but scaling that judgment across the many distinct stages of malicious behavior a kill chain system tracks is a big obstacle. This has implications for responses too, where a defender tasked with making changes to a system may have to sort through the noise between a high volume of malicious behavior in one domain and less activity but more significant impact in another. Taking the Chinese media hacking example, this could be the difference between sorting through a high volume of successful intrusions and a smaller, but more significant, number of privilege escalations.
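One deliberately simplified way to frame that weighting problem, reusing the `KillChainStage` enum from the earlier sketch, is to give each stage a weight so that a few deep-stage events outrank a flood of shallow ones. The numbers below are arbitrary tuning knobs an organization would have to choose for itself, not values the kill chain model prescribes.

```python
# Hypothetical per-stage weights: deeper stages carry disproportionately more weight.
STAGE_WEIGHTS = {
    KillChainStage.RECONNAISSANCE: 1,
    KillChainStage.WEAPONIZATION: 2,
    KillChainStage.DELIVERY: 3,
    KillChainStage.EXPLOITATION: 8,
    KillChainStage.INSTALLATION: 10,
    KillChainStage.COMMAND_AND_CONTROL: 13,
    KillChainStage.ACTIONS_ON_OBJECTIVES: 21,
}


def weighted_score(events):
    """Score a window of events so that, for example, two successful privilege
    escalations leading to installation outrank ten blocked delivery attempts."""
    return sum(STAGE_WEIGHTS[event.stage] for event in events)
```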

Another critical problem when building any system on large amounts of data from disparate domains, in this case the different stages of attack, is deciding what queries or analytics to run. Too many companies today are flocking to #bigdata, the buzzword of 2012, but running the same reports and analyses they always have. Whether it’s big data for marketing or for the cyber ‘kill chain’, organizations need to realize that, especially with a much larger dataset, extracting unique relationships requires unique queries. As with perimeter defenses, running the same tired analytics just won’t cut it if we want real insight into data relationships. I think part of the solution is taking the ‘human’ side into account, incorporating data from social science and social engineering. Projects correlating the style and culture of attacks from distinct regions like Eastern Europe vs. China, or online groups like Latin American Anonymous vs. the Izz ad-Din al-Qassam Cyber Fighters, would inform the kinds of queries we need to run to sort out meaningful attributions and relationships. Running the same general analytics, like a checklist of the OWASP Top 10, fails to capture the real significance of the large, nuanced dataset that a cyber kill chain aims to construct.
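As one illustration of what a less generic analytic could look like, again reusing the earlier `StageEvent` sketch, the fragment below profiles a campaign by the working hours and reused indicators of its delivery-stage activity, the sort of ‘human side’ signal described above, rather than rerunning a fixed vulnerability checklist. It is a sketch of the idea, not a proven attribution technique.

```python
from collections import Counter


def style_fingerprint(events):
    """Profile a campaign's 'style': when its delivery-stage activity happens
    (a rough proxy for the attackers' working hours and time zone) and which
    indicators it reuses. Both signals are illustrative, not definitive."""
    delivery = [e for e in events if e.stage is KillChainStage.DELIVERY]
    return {
        "active_hours": Counter(e.timestamp.hour for e in delivery),
        "reused_indicators": Counter(e.indicator for e in delivery).most_common(5),
    }
```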

The cyber ‘kill chain’ embodies many of the forward-thinking principles and problems that security professionals are working on: shifting from perimeter defenses to an internal focus on malicious activity, using machine learning and large-scale data analytics, and forming unique responses to distinct actors and attacks, to name a few. Incorporating this new methodology is the right step, but we need to recognize the obstacles in sorting and analysis that must be overcome for a kill chain to be effective.
