Managing Security Patch Management

Patch management is an important part of cyber security, but it is hard. How can it be managed better?

Patch management is an important part of cyber security, but it is hard. You have to know what you’ve got, monitor the relevant vendor feeds for patches, figure out whether each patch is security relevant, and check that applying it isn’t going to break something important, all while balancing the risk of applying the patch against the risk of not applying it.

That’s a lot of effort, and there’s a lot that can go wrong. Experts are continually saying that everyone should do better, but you don’t often hear the old maxim “work smarter, not harder” applied. Yet it does apply to patch management.

Minimise the Attack Surface, Reduce the Rush

The smart way to improve patch management is to reduce the need to apply critical patches urgently, by building systems in a way that minimises their attack surface – the part of the system an attacker can reach, and so the part that needs to be free of vulnerabilities. This is definitely not to say you can avoid patching, only that the need to patch fast is limited to the exposed parts of the system. Software with vulnerabilities that cannot be exploited because they are out of reach should still be patched once the patches have been tested, but there’s no need to risk damage by hastily applying an unproven patch.

The aim is to reduce the attack surface, not eliminate it. A system with no attack surface is one that isn’t connected to anything, so isn’t that useful. If you’re offering a public interface, like a website or portal, you inevitably have an attack surface. If you give your workers or supply chain remote access to your system, you have an attack surface. If you receive email or allow web browsing from your system’s desktops, you have an attack surface. But careful design of the system architecture can keep this to a minimum. It’s true that this is not easy when systems are aggregated through acquisition or legacy components need to be integrated, but it’s a worthy goal that will pay dividends in the long run.

Defence in Depth, Do it Right and Do it Twice

One way of reducing the need to rush patches through is to employ defence in depth. This is a tricky concept that is often misconstrued. It means using multiple independent mechanisms to defend against the same thing – if one mechanism fails, the other prevents the damage. The independence of the mechanisms is important: otherwise a flaw in a shared software component will cause both to fail. The need for the mechanisms to achieve the same effect is crucial too, and usually overlooked. Without this, multiple mechanisms actually increase the attack surface, as the attacker has more opportunity to find and exploit flaws.

An example is to allow remote access via RDP only over a VPN. Both the RDP and the VPN restrict access, so both must fail for an attacker to gain unauthorised access. However, if the VPN terminator and the firewall restricting access to RDP are one and the same device, this becomes a single point of failure and the lack of independence means there’s no defence in depth.
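To make the independence point concrete, here is a minimal Python sketch of the access decision, with the two layers as hypothetical, separate components. All names here are invented for illustration; the point is simply that both checks must pass and neither shares an implementation with the other.

```python
# A minimal sketch of the defence-in-depth access decision, with two
# hypothetical, independent layers. If both checks ran on the same
# device or shared code, one flaw would defeat both; kept independent,
# an attacker must find two unrelated flaws to get through.

class VpnGateway:
    """First layer: only VPN-authenticated peers get any further."""
    def admits(self, session: dict) -> bool:
        return session.get("vpn_authenticated", False)

class RdpHost:
    """Second, independent layer: RDP still demands its own credentials."""
    def admits(self, session: dict) -> bool:
        return session.get("rdp_credentials_valid", False)

def remote_access_granted(session: dict) -> bool:
    # Both mechanisms must hold for access to be granted.
    return VpnGateway().admits(session) and RdpHost().admits(session)
```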

When it comes to handling content carried by email and web browsing, there’s little prospect of finding defence in depth, because the applications handling the content are all part of the attack surface and are a single point of failure. If an application is vulnerable to attack via carefully crafted data, and the attacker knows this, it is game over. There’s no way having two applications to handle the data makes things better.

Content Checking, Keeping up with the Attackers

One common strategy to improve matters here is to check content before it arrives at the application. The checks need to anticipate what an application might get badly wrong, and they need to do this without knowing in detail how the application works, because applications are incredibly complex and inevitably proprietary. In practice this means the checks only look for crafted data relating to known vulnerabilities – any new exploit remains unchecked until it has been made public, the checker has been extended and the new version deployed. In effect, the defenders are continually having to keep up with the attackers, and the patch management problem has merely moved from the application to the defence – it has not been made easier.
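The limitation is easy to see in miniature. The sketch below is a toy signature checker in Python, with made-up byte patterns standing in for published exploit signatures: it can only ever reject what is already known to be bad, so every new exploit passes until the signature list is updated and redeployed.

```python
# A toy signature-based content checker. The byte patterns below are
# invented for illustration; real checkers maintain large databases of
# signatures tied to *published* vulnerabilities.

KNOWN_EXPLOIT_SIGNATURES = [
    b"\xde\xad\xbe\xef",   # hypothetical marker for a published exploit
    b"OVERFLOW_TRIGGER",   # hypothetical crafted-data pattern
]

def content_is_acceptable(data: bytes) -> bool:
    """Reject only what is already known to be bad.

    Data crafted for a vulnerability that has not yet been published
    matches nothing here and is waved straight through.
    """
    return not any(sig in data for sig in KNOWN_EXPLOIT_SIGNATURES)
```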

Also, the checks need to be independent of the application, because otherwise an attacker could use a single exploit to disable the check and take control of the application. But this can be extremely difficult to achieve – in several cases there is only one software library available to handle certain essential data components, such as compressed data, so a vulnerability there is shared by the checker and the application. As a result, checking is not a good strategy for reducing the attack surface of applications.

Content Transformation, Always Ahead of the Game

A strategy that does work has been pioneered by Deep Secure. It’s called Transformation through Information Extraction and involves intercepting data before it reaches applications, but unlike the checking strategy it does not look for unsafe crafted data. 

Transformation means the data that is received is not the same as that which is delivered. The original data is always discarded, and new data is delivered in its place. The new data is built to carry the same useful information that was carried by the original data, hence the way data is transformed is referred to as Information Extraction. With this approach, users get the information they need, but attackers can't deliver the data they want so they can't target vulnerabilities.
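As a rough illustration of the principle (not Deep Secure’s implementation), here is what transformation might look like for a bitmap image in Python, using the Pillow library: the received file is decoded down to bare pixel values, and a brand-new file is built from them, so nothing of the original file’s structure is ever delivered.

```python
# A minimal sketch of transformation by information extraction for an
# image, using the Pillow library. This illustrates the principle only;
# it is not Deep Secure's implementation.

from io import BytesIO
from PIL import Image

def transform_image(received: bytes) -> bytes:
    # Extraction: decode the untrusted file down to its useful
    # information - the pixel values and dimensions, nothing else.
    pixels = Image.open(BytesIO(received)).convert("RGB")

    # Build: write a completely fresh file from the extracted pixels.
    # Crafted chunks, odd metadata or exploit payloads in the original
    # are never copied across, because the original is discarded.
    out = BytesIO()
    pixels.save(out, format="PNG")
    return out.getvalue()
```

Note that the decoder in the extraction step is still exposed to the untrusted data – which is exactly the issue the separation of stages, described below, addresses.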

It is important to note that the new data is not the original data with unsafe elements removed. To do that would mean being able to accurately judge what is unsafe, making the process nothing more than a checker that is looking for crafted data relating to known vulnerabilities. Information Extraction is about understanding the original data enough to be able to identify its useful information content – the stuff the recipient is actually interested in, ignoring any irrelevancies such as the order in which paragraphs of text were typed into a document (yes, some file formats encode this kind of thing). There's no need to understand all the strange things an attacker might do in order to exploit a vulnerability in an application, it's only necessary to understand how to represent information in the normal way that works.

Normalisation, Use Only What Works

The new data that is to carry the information to its destination is built very carefully. It always uses encoding methods that are normally used to represent the information – most formats have many ways of representing the same information, and the obscure forms may not be properly tested and so could expose vulnerabilities. This normalisation of the delivered data is an important characteristic of the process that clearly distinguishes it from checking.

Checks block or remove known unsafe data, but still deliver strange constructs that could target unknown vulnerabilities. Normalisation means applications only have to handle the normal everyday case that is continuously tested through usage and so known to be safe.
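A tiny Python example shows the idea using JSON as a stand-in carrier format (an illustration, not the product’s mechanism). JSON allows many encodings of the same information – key order, whitespace, escape choices – and normalisation means always emitting one fixed, everyday form:

```python
import json

def normalise(document: str) -> str:
    # Extract the information, then rebuild it in one canonical form:
    # sorted keys, minimal whitespace, ASCII-only escapes.
    info = json.loads(document)
    return json.dumps(info, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=True)

# Two superficially different inputs are delivered in an identical form,
# so downstream applications only ever see the well-trodden encoding.
assert normalise('{"b": 1, "a": 2}') == normalise('{ "a" : 2 ,\n"b" : 1 }')
```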

Separation, Keeping the Risky and the Critical Apart

Transforming data through information extraction hides known and unknown vulnerabilities in applications, but the process still becomes part of the attack surface as it must handle potentially unsafe data from the attacker. Unlike a checker, though, it’s possible to implement the transformation in a way that eliminates this.

The key here is that the process works in two stages – the extraction stage and the build stage. The exposed, risky part is the extraction stage, because it must handle data received from the attacker. But this can be kept separate from the build stage, which is critical because it is responsible for only building normalised safe data. With the stages separated, any vulnerability in the extraction software cannot cause unsafe data to be built. This means the application-level attack surface is eliminated, other than from a denial-of-service point of view.

Of course, the attack surface is not completely eliminated, because something must keep the stages separate and the extraction stage must pass the information to the build stage. So, there is an interface that could expose a vulnerability, but by comparison the attack surface here is minuscule and does not involve the complex data structures handled by applications.
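In Python terms, the split might look like the sketch below, with an invented, deliberately trivial intermediate format – width, height and raw RGB bytes – standing in for the constrained interface between the stages. In a real deployment the stages run in separate processes or on separate machines.

```python
# A sketch of the two-stage split with an invented intermediate format:
# (width, height, raw RGB bytes). The interface is so simple that it can
# be validated exhaustively, so a compromised extraction stage cannot
# smuggle crafted structures through to the build stage.

from io import BytesIO
from PIL import Image

def validate_intermediate(width: int, height: int, rgb: bytes) -> None:
    if not (0 < width <= 10_000 and 0 < height <= 10_000):
        raise ValueError("implausible image dimensions")
    if len(rgb) != width * height * 3:
        raise ValueError("pixel buffer does not match dimensions")

def extraction_stage(received: bytes) -> tuple[int, int, bytes]:
    # Risky: this code parses untrusted data and must be assumed
    # compromisable, so it runs isolated from the build stage.
    img = Image.open(BytesIO(received)).convert("RGB")
    return img.width, img.height, img.tobytes()

def build_stage(width: int, height: int, rgb: bytes) -> bytes:
    # Critical: only ever sees the validated intermediate form, and
    # only ever emits a freshly built, normalised file.
    validate_intermediate(width, height, rgb)
    out = BytesIO()
    Image.frombytes("RGB", (width, height), rgb).save(out, format="PNG")
    return out.getvalue()
```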

Hardware Verification, the Ultimate in Zero Trust

Keeping the extraction and build stages separate gives a really small attack surface, but even so, in some applications this is still too much. The extreme cases are those that just can’t trust software to ever work properly, so any attack surface is bad, however small it is.

Typically, this kind of paranoia is limited to defence and intelligence systems and parts of the critical national infrastructure (actually it’s not paranoia, because in this space they really are out to get you). A common way of addressing these extreme risks is to isolate the critical systems, but then they cannot effectively communicate or share information. With transformation there is a way to allow this important communication without introducing a software attack surface. Again, the answer lies in the two-stage Information Extraction process, and it comes down to how the interface between the two stages is implemented.

The solution is to use physically separate machines for the two stages and join them together with Deep Secure’s High Speed Verifier – a hardware logic device that contains no software. The logic provides an independent means of verifying that the information passing between the stages is correctly formed. It also allows the information to be passed using protocols that are implemented in hardware logic, avoiding any vulnerabilities in a software protocol stack.
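The verifier itself is hardware logic, so there is no software to show, but the kind of format it can verify is illustrated below in Python. The frame layout is entirely hypothetical (Deep Secure’s wire format is not described here): the point is that every field sits at a fixed position and can be checked by simple logic, with no parsing of complex structures.

```python
import struct

# A hypothetical fixed-layout frame: 4-byte magic, 16-bit width and
# height, 32-bit payload length, then the payload. Every check below
# could be wired up as simple hardware logic - no software needed.
FRAME_HEADER = struct.Struct(">4sHHI")

def frame_is_well_formed(frame: bytes) -> bool:
    if len(frame) < FRAME_HEADER.size:
        return False
    magic, width, height, length = FRAME_HEADER.unpack_from(frame)
    return (magic == b"IMG0"
            and 0 < width <= 10_000
            and 0 < height <= 10_000
            and length == width * height * 3
            and len(frame) == FRAME_HEADER.size + length)
```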

Transform Content, Reduce the Patch Management Burden

Transformation by Information Extraction ensures that applications always receive clean, safe data that carries the information users need. It hides vulnerabilities in applications and so removes them from the attack surface, rather than just moving the attack surface somewhere else. As such, it helps reduce the patch management burden – security patches for applications can be deployed at a measured rate. For normal commercial use it can be deployed as a cloud service or as on-premises virtual or physical appliances, but with the help of a hardware logic device it can also be used in the most risk-averse situations.

Defence in depth can be used to reduce the need for rushed patching of externally facing infrastructure, and the Information Extraction process can eliminate the attack surface presented by applications. Together they are the way to work smarter, not harder.
