One of the most common and enduring illusions in information security is that the adversary always has the advantage.
Although expressed in many ways, the idea generally assumes: the adversary has the initiative by virtue of initiating an attack; the adversary needs to succeed only once while the defender must succeed every time; and the adversary possesses technical advantages over the defender by virtue of vulnerabilities, “zero days,” and exploits. While all of these are legitimate concerns, this view overlooks the inherent advantages the defense holds in cyber intrusions, and how the defender can shape and ultimately determine the landscape on which the attacker must operate.
Rather than seeing the attacker as privileged, defenders must embrace the empowering idea of the attacker being profoundly disadvantaged. Flipping the script, the attacker must evade every layer of defense and detection en route to their goal, while the defender needs only one of those layers to fire in order to initiate a blocking or response action.
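To make this asymmetry concrete, consider a back-of-the-envelope calculation. The per-layer detection rates below are illustrative assumptions, not measurements, and the layers are treated as independent for simplicity; even so, modest odds at each layer compound quickly against the attacker:

```python
# Illustrative arithmetic: the chance an attacker slips past every
# defensive layer, assuming independent layers. The per-layer detection
# rates are hypothetical, chosen only to show how the odds compound.

detection_rates = {
    "email filtering": 0.5,
    "endpoint monitoring": 0.4,
    "choke-point network monitoring": 0.3,
    "credential-anomaly detection": 0.3,
}

p_evade_all = 1.0
for layer, p_detect in detection_rates.items():
    p_evade_all *= 1.0 - p_detect

print(f"Chance of evading all {len(detection_rates)} layers: {p_evade_all:.1%}")
# ~14.7%: the attacker must win everywhere; the defender needs one hit.
```

Real detection layers are of course not independent, so the specific number is illustrative; the direction of the asymmetry is the point.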
Certainly, the earlier events are detected, the easier response becomes. But so long as adversary activity is detected before the final stage of operations (data theft, disruption, or destruction), the defender holds the advantage of setting the playing field and dictating what the adversary must overcome. Furthermore, in sufficiently forward-thinking organizations, the defender can shape the battlefield to their advantage so that the attacker must fight on the defender’s terms.
Defenders hold a “home field” advantage: unless the adversary has preemptively harvested information on the target environment, defenders know the network, whereas the adversary must feel their way forward to achieve their goals in a compromise event. Extending this concept, defenders can also shape their network, albeit to an extent that varies by organization, based on defender priorities and requirements to disadvantage the adversary. As described in other resources, defenders can position strategic nodes and “choke points” that provide enhanced visibility and monitoring based on known adversary requirements for achieving offensive goals. Overall, defense retains first-mover advantage in network intrusion events by setting the playing field on which an adversary must attempt an attack.
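As a minimal sketch of the choke-point idea, consider forcing all inter-enclave traffic through a monitored node, so that crossing a trust boundary is itself an observable event. The enclave names, address ranges, and flow-record format below are hypothetical:

```python
# Minimal sketch of choke-point monitoring: when inter-enclave traffic
# must pass a monitored node, any flow crossing a trust boundary can be
# flagged. Enclave ranges and flow records here are hypothetical.
import ipaddress

ENCLAVES = {
    "user_lan": ipaddress.ip_network("10.1.0.0/16"),
    "servers": ipaddress.ip_network("10.2.0.0/16"),
    "ot_network": ipaddress.ip_network("10.3.0.0/16"),
}

def enclave_of(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    for name, net in ENCLAVES.items():
        if ip in net:
            return name
    return "untrusted"  # anything unrecognized is treated as outside

def crosses_boundary(flow: dict) -> bool:
    return enclave_of(flow["src"]) != enclave_of(flow["dst"])

flows = [
    {"src": "10.1.4.22", "dst": "10.1.9.3", "dport": 445},   # stays inside one enclave
    {"src": "10.1.4.22", "dst": "10.3.0.12", "dport": 445},  # crosses into OT space
]

for flow in flows:
    if crosses_boundary(flow):
        print(f"ALERT: cross-enclave flow {flow['src']} -> "
              f"{flow['dst']}:{flow['dport']}")
```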
Vulnerabilities and the dreaded “zero day” attract much attention and concern, and seem to advantage the attacker. But analyzed more deeply, adversaries typically rely on far more mundane techniques to achieve initial intrusion or lateral movement: endpoint compromise through user-focused attacks (phishing, watering holes), or credential capture and reuse. Exploits occasionally come into play, but when they do, success owes less to the efficacy of the exploit than to the tardiness of network defenders in patching or mitigating the vulnerability in question, whether after a vulnerability announcement or simply as a matter of good network design and implementation.
The two most successful, and perhaps most infamous, vulnerabilities of the past ten (or more) years highlight this perspective. MS08-067 (exploited by Conficker) and MS17-010 (the focus of the EternalBlue exploit, then WannaCry and subsequent worms) easily rank among the most significant and damaging vulnerabilities enabling system compromise. Yet in both cases, not only were the vulnerabilities patched before they were effectively weaponized (at least in widespread, publicly known form), but the targeted services should never have been remotely accessible to untrusted parties in the first place. RPC and SMB, targeted by Conficker and EternalBlue respectively, are protocols vital to Windows network operations, but they should not cross trust boundaries into suspect or unmanaged spaces such as the open Internet. Spread within networks is certainly more likely, but even here, indiscriminate trust relationships inside the network are not merely suboptimal: they are nearly suicidal when facing self-propagating attacks targeting commonly used or vital services.
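A simple policy audit makes the point concrete. The sketch below flags any firewall rule that exposes the standard Windows service ports (135 for the RPC endpoint mapper, 139 and 445 for NetBIOS session and SMB) to an untrusted zone; the rule format and zone names are hypothetical simplifications of a real policy:

```python
# Sketch of a trust-boundary audit: flag any rule that allows Windows
# RPC or SMB in from an untrusted zone. The rule table is a hypothetical
# simplification of a real firewall policy.

# Standard Windows service ports that should never cross a trust boundary.
NEVER_EXPOSE = {135: "MS-RPC endpoint mapper", 139: "NetBIOS session", 445: "SMB"}

UNTRUSTED_ZONES = {"internet", "guest_wifi"}

rules = [
    {"src_zone": "internet", "dst_zone": "dmz", "dport": 443, "action": "allow"},
    {"src_zone": "internet", "dst_zone": "servers", "dport": 445, "action": "allow"},  # violation
    {"src_zone": "user_lan", "dst_zone": "servers", "dport": 445, "action": "allow"},
]

for rule in rules:
    if (rule["action"] == "allow"
            and rule["src_zone"] in UNTRUSTED_ZONES
            and rule["dport"] in NEVER_EXPOSE):
        print(f"VIOLATION: {NEVER_EXPOSE[rule['dport']]} "
              f"(tcp/{rule['dport']}) allowed from {rule['src_zone']}")
```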
Essentially: defenders and IT stakeholders can effectively deny these types of attacks, even those leveraging potentially virulent exploits, through sound design and instrumentation. Firewalling off network segments, controlling which traffic may flow between enclaves and in which direction, and monitoring for signs of compromise across enclave boundaries are all effective means of detecting and responding to such attacks. Even when such an attack succeeds because patches or other mitigations were not applied, proper design and implementation will limit damage and contain the event, facilitating response and recovery.
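One way to picture “directionality” is as an explicit allow-list of which enclave may initiate connections toward which. The sketch below is hypothetical, not a product configuration, but it shows how a worm landing in one enclave is denied the ability to initiate traffic into another:

```python
# Sketch of directional enclave policy: connections are permitted only in
# explicitly allowed directions, so a self-propagating attack in one
# enclave cannot freely initiate into another. Policy is hypothetical.

# (initiating enclave, destination enclave) pairs that may open connections
ALLOWED_DIRECTIONS = {
    ("user_lan", "servers"),  # clients may call servers, not vice versa
    ("servers", "dmz"),
}

def may_initiate(src_enclave: str, dst_enclave: str) -> bool:
    if src_enclave == dst_enclave:
        return True  # intra-enclave traffic left to host-level controls
    return (src_enclave, dst_enclave) in ALLOWED_DIRECTIONS

# A compromised server enclave cannot initiate back into the user LAN:
print(may_initiate("user_lan", "servers"))  # True
print(may_initiate("servers", "user_lan"))  # False: contained by design
```

In practice this logic lives in firewall policy and routing rather than application code, but the allow-list-by-direction structure is the same.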
Too much attention is paid to vulnerability assessment and patch management, and not enough to developing fundamental answers to potential attack scenarios: how the network should contain and expose an attacker once a compromise occurs. Ensuring patches are applied certainly represents good “network hygiene,” but against a true “zero day” it is essentially meaningless.
Instead, patching should take second place to more fundamental questions of architecture and design, where even novel attacks exploiting undiscovered vulnerabilities can at least be contained and constrained. This way, the organization is not only protected against known threats, but also positioned to defend against, and respond effectively to, new and previously unseen attacks.
Applying a tiered defensive construct focused on good architecture, effective segmentation, and maximal visibility ensures efficacy against a wide range of potential attacks, rather than sinking significant resources into addressing individual, application- or software-specific attack vectors one at a time. In this fashion, the defender seizes the initiative in a network attack scenario, forcing the adversary to respond to the defender’s pre-planned operations and defensive measures.