A step-by-step guide for patching software vulnerabilities


Coalition’s recent Cyber Threat Index 2023 predicts that the average rate of new Common Vulnerabilities and Exposures (CVEs) will rise 13% over 2022 to more than 1,900 per month in 2023. With thousands of patches and updates released each month, organizations struggle to keep up with their patch management process.

Streamline your patch management process

First, a quick disclaimer. Proper patch management depends on factors such as the size of the organization, the complexity of its IT environment, the criticality of its systems, and the resources allocated to manage it all, so plan accordingly. This advice also assumes you already have some sort of endpoint management solution or function in place for deploying patches. If not, that’s step one.

Assuming you have a solution in place, the next step is to evaluate and prioritize patches.

Not all vulnerabilities are created equal, which means not all patches are either. But as attacks like WannaCry demonstrated, delayed patching can have catastrophic consequences. It’s therefore important to prioritize updates that address the highest-severity, non-superseded vulnerabilities and/or the greatest exposure in your environment. For example, if one critical update affects only a few devices out of a thousand and another affects 80% of devices, focus first on the one with the bigger potential impact, then address the other.
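To make that concrete, here is a minimal sketch of ranking patches by severity weighted by exposure. The Patch fields, names, and scores are illustrative, not taken from any specific tool.

```python
# Minimal sketch: rank pending patches by severity and exposure.
# Field names and scores are illustrative, not from any specific product.
from dataclasses import dataclass


@dataclass
class Patch:
    name: str
    severity: float        # e.g., CVSS base score, 0.0-10.0
    affected_devices: int   # devices in your estate that need this patch


def priority(patch: Patch, total_devices: int) -> float:
    """Weight severity by the fraction of the estate that is exposed."""
    exposure = patch.affected_devices / total_devices
    return patch.severity * exposure


patches = [
    Patch("KB-A", severity=9.8, affected_devices=12),
    Patch("KB-B", severity=9.1, affected_devices=800),
]

for p in sorted(patches, key=lambda p: priority(p, total_devices=1000), reverse=True):
    print(f"{p.name}: priority {priority(p, 1000):.2f}")
```

In this example the broadly exposed patch (KB-B) ranks first even though its raw severity is slightly lower, which mirrors the scenario described above.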

Once the critical updates are addressed, plan to move on to the non-critical patches, which are often driver updates or new software that enhances the user experience, and prioritize those based on their importance to business operations.

Many use the Common Vulnerability Scoring System (CVSS) to help prioritize updates, which is a good starting point. Just remember that vulnerabilities rated at a medium severity level are often ignored, only to be found later as the source of a breach.
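As a small illustration of that blind spot, the sketch below shows how a hard CVSS cut-off silently drops every medium-severity CVE, and how keeping any CVE flagged as actively exploited (a flag you would populate from your own threat intelligence) avoids that. The CVE IDs and fields are made up for the example.

```python
# Minimal sketch: why a hard CVSS threshold can be risky.
# CVE IDs, scores, and the "known_exploited" flag are illustrative.
vulns = [
    {"cve": "CVE-2023-0001", "cvss": 9.8, "known_exploited": False},
    {"cve": "CVE-2023-0002", "cvss": 5.4, "known_exploited": True},
    {"cve": "CVE-2023-0003", "cvss": 4.9, "known_exploited": False},
]

threshold_only = [v for v in vulns if v["cvss"] >= 7.0]
with_guard = [v for v in vulns if v["cvss"] >= 7.0 or v["known_exploited"]]

print([v["cve"] for v in threshold_only])  # the medium-severity CVE-2023-0002 is dropped
print([v["cve"] for v in with_guard])      # ...but it is kept here
```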

Once you’ve prioritized the types of updates, the next step is to create guidelines for testing them before they go into production.

The last thing you want to do is break the system. Start by researching each update’s criteria and identifying which components require testing. Next, install each update on at least five off-network devices and test it against proven success criteria. Record the evidence and have another person review it. Be sure to find out whether the update has an uninstaller and use it to confirm that outdated programs can be removed completely and safely.
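One lightweight way to capture that evidence is a simple per-device test record. The structure below is only a sketch, assuming you track results for each off-network test device; adapt the fields to whatever your change process already requires.

```python
# Minimal sketch of a per-device test-evidence record. Field names are illustrative.
from dataclasses import dataclass
from datetime import date


@dataclass
class PatchTestRecord:
    patch_name: str
    device_id: str
    success_criteria: list[str]
    passed: bool
    has_uninstaller: bool       # verified that a clean rollback path exists
    tested_on: date
    reviewed_by: str = ""       # second-person sign-off


record = PatchTestRecord(
    patch_name="KB-A",
    device_id="lab-05",
    success_criteria=["installs silently", "app launches", "no reboot loop"],
    passed=True,
    has_uninstaller=True,
    tested_on=date.today(),
    reviewed_by="j.doe",
)
print(record)
```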

If you’re like most organizations, you’ll likely have a steady stream of updates and patches in flight at all times. But the more updates you install at any given time, the greater the risk of end-user disruption (a larger volume of data to download, longer installation times, system reboots, and so on).

Therefore, the next step is to assess your available bandwidth: calculate the total number and size of the updates against the total number and types of devices. This helps prevent system overloads. When in doubt, start with five updates and then reassess bandwidth.
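The arithmetic is simple, and even a back-of-the-envelope estimate can flag an overload before it happens. The sizes and bandwidth figure below are purely illustrative.

```python
# Back-of-the-envelope sketch: estimate download volume and transfer time for a patch wave.
# Update sizes, device count, and bandwidth are illustrative numbers.
updates_mb = [350, 120, 45, 800, 60]   # size of each update in MB
devices = 1000
bandwidth_mbps = 500                   # available network bandwidth in megabits/s

total_mb = sum(updates_mb) * devices
total_megabits = total_mb * 8
hours = total_megabits / bandwidth_mbps / 3600

print(f"Total transfer: {total_mb / 1024:.1f} GB, roughly {hours:.1f} hours at {bandwidth_mbps} Mbps")
```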

Additionally, if you follow any change management best practices (such as ITIL or PRINCE2, or a platform like ServiceNow), it’s important that you adhere to those processes for proper reporting and auditability. They usually require documentation of which updates are needed, the impact on users, evidence of testing, and go-live schedules. Capturing this data properly through the steps above is often required for official approvals, as it serves as a single source of truth.
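In practice that documentation can be as simple as a structured record attached to each change request. The fields below are a sketch of what is typically asked for, not a template mandated by any particular framework.

```python
# Minimal sketch of the information a change request typically carries.
# Structure and field names are illustrative.
change_request = {
    "updates": ["KB-A", "KB-B"],
    "user_impact": "one reboot, roughly 10 minutes of downtime per device",
    "test_evidence": ["lab-01 pass", "lab-02 pass", "lab-03 pass", "lab-04 pass", "lab-05 pass"],
    "go_live": "2023-07-15 22:00 local maintenance window",
    "approver": "change advisory board",
}
print(change_request["go_live"])
```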

We’ve now reached the point of deployment, and the next step is to make sure it happens safely. I recommend using a patch management calendar when making change requests and when scheduling or reviewing new patch updates. This is where you define the baselines for how many updates will be deployed and in what order, drawing on the information gathered in the previous steps. Once that baseline is set, you can schedule the deployment and automate where appropriate.
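A common way to turn that baseline into a calendar is a phased rollout: a small pilot ring first, then progressively larger waves with a soak period in between. The ring names, sizes, and dates below are illustrative assumptions, not a prescribed schedule.

```python
# Minimal sketch of a phased deployment plan: pilot ring first, then larger waves.
# Ring names, shares, dates, and soak period are illustrative.
from datetime import date, timedelta

rings = ["pilot", "early-adopters", "broad"]
ring_share = [0.05, 0.25, 0.70]          # fraction of the estate in each ring
start = date(2023, 7, 17)
soak_days = 3                            # wait between rings to catch problems early

schedule = []
for i, (ring, share) in enumerate(zip(rings, ring_share)):
    schedule.append({
        "ring": ring,
        "devices": round(1000 * share),
        "deploy_on": start + timedelta(days=i * soak_days),
    })

for wave in schedule:
    print(wave)
```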

At last, we’ve made it to the final step: measuring success. This can be handled in a variety of ways: the number of registered help desk incidents, the ease with which the process can be followed or repeated, or the number of positive reports produced by your toolsets. Ultimately, what matters is swift deployment, streamlined and repeatable processes, a reduction in manual work, and, in the end, an organization that is less vulnerable to exploitation.
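If you want a simple, repeatable check after each wave, two ratios go a long way: patch compliance and help-desk impact. The counts and thresholds below are illustrative assumptions.

```python
# Minimal sketch of a post-deployment success check. Counts and thresholds are illustrative.
devices_targeted = 1000
devices_patched = 968
helpdesk_incidents = 7

compliance = devices_patched / devices_targeted
incident_rate = helpdesk_incidents / devices_targeted

print(f"Compliance: {compliance:.1%}")
print(f"Help-desk incidents per device: {incident_rate:.2%}")
if compliance >= 0.95 and incident_rate <= 0.01:
    print("Wave meets the success criteria; proceed to the next ring.")
else:
    print("Investigate failures before scheduling the next ring.")
```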

A quick note on where patching often goes awry

Believe it or not, some organizations still give users local admin rights so they can patch their own machines. This significantly expands the attack surface, and the reality is that no IT team should rely on end users for patching (blanket admin rights are simply too risky).

Some also rely on free tools, but these often do not deliver all the security capabilities patching requires, and they generally lack the reporting needed to validate that systems are 100% patched. Finally, there is an over-reliance on auto-updates, which can provide a false sense of security and can hurt productivity if they are triggered during work hours.
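One simple safeguard against work-hours disruption is gating update triggers behind a maintenance window. The sketch below assumes a window of 22:00 to 05:00; the boundaries and the surrounding logic are illustrative, not part of any specific update tool.

```python
# Minimal sketch of a maintenance-window guard. Window boundaries are illustrative.
from datetime import datetime, time


def in_maintenance_window(now: datetime, start=time(22, 0), end=time(5, 0)) -> bool:
    """Return True between 22:00 and 05:00 (a window that crosses midnight)."""
    t = now.time()
    return t >= start or t <= end


if in_maintenance_window(datetime.now()):
    print("OK to trigger updates")
else:
    print("Defer updates until the maintenance window")
```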

Conclusion

Whether large or small, organizations continue to struggle with patching. I hope this quick step-by-step guide to the key considerations of patch management helps your organization create a new framework or optimize an existing one.


