How to Improve your Security Posture and ROI


By Mark Evans, VP Marketing and Packet Capture Evangelist at Endace

Let’s assume your security team has completed all the “low-hanging fruit” security essentials. They’ve made sure all the basic hygiene factors have been addressed – efficient patch management is in place, network and endpoint security and log collection solutions have been deployed, and alerting and security incident triage processes are working too. Not to mention, security awareness training is regularly conducted across the organization.

What, then, are the next steps to building a truly robust security posture? What do you need to put in place to help your teams combat the increasing flood of cyberattacks, eradicate alert fatigue, and prevent burnout? What will enable you to better manage security in an increasingly complex on-prem, cloud, and hybrid-cloud environment with multiple different vendor solutions in play?

How can you accomplish all this in today’s climate of budget cuts, increasing workloads, and an extreme shortage of skilled cybersecurity professionals?

Let’s look at four of the top cybersecurity challenges organizations are grappling with, and what leading organizations are doing to address them.

Issue One: Stopping security teams from being overwhelmed.

Just about every article about cyber defense mentions alert fatigue – because it is a major issue in almost every organization. Security analysts are overwhelmed by the volume of alerts they receive and unable to do anything to reduce the load. The outcome is stress and burnout – and then, inevitably, a missed threat, resulting in exactly the serious breach scenario that organizations were working so hard to avoid.

There’s no single silver bullet for solving alert overload. But many organizations are embracing automation – leveraging SOAR tools – to eradicate some of the slow, tedious, manual work that analysts currently need to perform.

Simple mitigation tasks – such as isolating a suspect host and disabling compromised credentials – can be automated to reduce the risk of an initial attack escalating and give analysts more time to investigate and mitigate threats.
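To make this concrete, here is a minimal Python sketch of what such a containment step might look like. The EDR and identity-provider endpoints shown are hypothetical placeholders, not any specific vendor’s API – in practice you would call the real APIs of the tools you have deployed:

    # Hypothetical SOAR containment actions. The endpoint URLs and payload
    # shapes are placeholders, not any specific vendor's API.
    import requests

    EDR_API = "https://edr.example.com/api/v1"   # placeholder EDR endpoint
    IDP_API = "https://idp.example.com/api/v1"   # placeholder identity provider
    HEADERS = {"Authorization": "Bearer <api-token>"}

    def isolate_host(hostname: str) -> bool:
        """Ask the EDR to network-isolate a suspect host."""
        resp = requests.post(f"{EDR_API}/hosts/{hostname}/isolate",
                             headers=HEADERS, timeout=10)
        return resp.ok

    def disable_account(username: str) -> bool:
        """Disable potentially compromised credentials at the identity provider."""
        resp = requests.post(f"{IDP_API}/users/{username}/disable",
                             headers=HEADERS, timeout=10)
        return resp.ok

    # Triggered automatically by a high-confidence alert, buying the
    # analyst time to investigate before the attack can escalate.
    if isolate_host("workstation-042") and disable_account("jsmith"):
        print("Containment complete; escalating to an analyst for investigation.")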

Additionally, the manual component of more complex investigation workflows – such as collecting and collating evidence – can be automated so that when an analyst starts an investigation, they have everything they need at their fingertips, rather than having to gather evidence manually and/or request data from other teams, both of which can add unnecessary delays to investigations.

Effective automation depends upon accurately identifying the type of threat that has been found and having proven playbooks in place to automate investigation workflows and streamline the human component of the process.

The best way to find automation opportunities is to identify the most prevalent incidents that consume the most analyst time. A common example is phishing attacks, where the investigation and remediation process is relatively well-defined. Automating or streamlining common workflows like this can free up considerable analyst time, while also ensuring a consistent response.
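As an illustration, the evidence-gathering step of a phishing playbook can be sketched in a few lines of Python using only the standard library. This parses a user-reported message and extracts the artifacts – sender details, embedded URLs, attachment hashes – that an analyst or a threat-intel lookup needs for triage (the file name is illustrative):

    # Parse a reported phishing email (.eml) and pull out the indicators
    # an analyst needs: sender, reply-to, URLs, attachment hashes.
    import email
    import hashlib
    import re
    from email import policy

    URL_RE = re.compile(r"https?://[^\s\"'<>]+")

    def triage_report(eml_path: str) -> dict:
        with open(eml_path, "rb") as f:
            msg = email.message_from_binary_file(f, policy=policy.default)

        urls, attachment_hashes = set(), {}
        for part in msg.walk():
            if part.get_content_type() in ("text/plain", "text/html"):
                urls.update(URL_RE.findall(part.get_content()))
            elif part.get_filename():   # hash attachments for intel lookups
                payload = part.get_payload(decode=True) or b""
                attachment_hashes[part.get_filename()] = (
                    hashlib.sha256(payload).hexdigest())

        return {"from": msg["From"], "reply_to": msg["Reply-To"],
                "subject": msg["Subject"], "urls": sorted(urls),
                "attachments": attachment_hashes}

    print(triage_report("reported_message.eml"))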

Key to automating investigation and response workflows is ensuring that all the evidence needed for a successful investigation is being captured and can be accessed by your SOAR solution. Investigations often fall short because critical evidence was simply never collected in the first place. Logs, flow data (NetFlow), and packet capture data must be available in addition to endpoint data and monitoring tool alerts.

Packet capture data can be particularly crucial in determining exactly what happened on the network and is an often-overlooked source of evidence. One of the first questions asked in any investigation is typically “What was this device talking to?” The ability to quickly access and analyze a packet-level record of historical traffic that shows exactly what devices were talking to each other, and what was transmitted, can be an absolute game-changer for security analysts.
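For example, given a pcap extract exported from a capture platform, a few lines of Python with the open-source scapy library can answer that first question. (rdpcap loads the whole file into memory, so this is a sketch for modest extracts rather than multi-terabyte captures; the file name is illustrative.)

    # Summarize "who was this device talking to?" from a pcap extract.
    from collections import Counter
    from scapy.all import IP, rdpcap

    conversations = Counter()   # packet counts per (src, dst) pair
    bytes_moved = Counter()     # bytes per (src, dst) pair

    for pkt in rdpcap("suspect-host-extract.pcap"):
        if IP in pkt:
            pair = (pkt[IP].src, pkt[IP].dst)
            conversations[pair] += 1
            bytes_moved[pair] += len(pkt)

    for (src, dst), count in conversations.most_common(10):
        print(f"{src} -> {dst}: {count} packets, {bytes_moved[(src, dst)]} bytes")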

Issue Two: Protecting the crown jewels.

Obviously, it’s important to protect the organization’s most valuable assets above all else. But often the focus is on improving overall security posture and dealing with issues like alert fatigue, and it’s easy to overlook how best to protect the crown jewels.

Many organizations are implementing Zero Trust to help them restrict access to valuable data, systems and IP to only those individuals or systems that should have access.

This is a laudable initiative because it forces three things. First, it forces organizations to identify what is most valuable. Second, it forces them to clarify exactly who (or what systems) needs access to those resources. And last, it forces them to examine both their network architecture and their authentication mechanisms.
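The resulting policy can be surprisingly compact. As a toy illustration – the resource names and roles are invented for the example – the deny-by-default logic at the heart of Zero Trust looks something like this:

    # Deny-by-default access check: enumerate the crown jewels, then state
    # explicitly who (or what) may reach them. All names are illustrative.
    ACCESS_POLICY = {
        "customer-db": {"roles": {"dba", "billing-service"}, "require_mfa": True},
        "source-repo": {"roles": {"developer", "ci-runner"}, "require_mfa": True},
        "public-site": {"roles": {"anyone"}, "require_mfa": False},
    }

    def is_allowed(resource: str, role: str, mfa_verified: bool) -> bool:
        policy = ACCESS_POLICY.get(resource)
        if policy is None:                        # unknown resource: deny
            return False
        if policy["require_mfa"] and not mfa_verified:
            return False
        return "anyone" in policy["roles"] or role in policy["roles"]

    print(is_allowed("customer-db", "developer", True))   # False - not entitled
    print(is_allowed("customer-db", "dba", True))         # True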

Undertaking this analysis can help improve security posture across the board, as teams need to know and define what “good” looks like. Importantly, it also helps teams identify where additional monitoring and visibility is required to protect the crown jewels, and highlights which threats warrant the highest priority. If an attack is detected that may provide access to highly valuable assets, then it must be prioritized over attacks on assets of lesser value. Teams can also better target proactive security activity – such as threat hunting and deeper vulnerability analysis – in areas where it matters most.

Successful Zero Trust implementation depends on careful analysis of the environment and a methodical design and implementation process. But it’s also crucial to ensure, as you re-architect the environment, that you don’t create monitoring blind spots. Indeed, you may need to increase visibility in certain areas of the network to help better detect and defend against attacks on high-value, crown jewel assets.

You need the ability to test and validate your infrastructure as well as monitor it. So collecting the evidence you need – including network and endpoint data – is crucial. Organizations frequently deploy additional evidence collection – such as continuous full packet capture – in segments of the network where high-value assets are located to ensure they have maximum visibility into all activity and can thoroughly test their defenses.

Issue Three: Gaining greater visibility into threats, as early as possible.

Detection tools must be as accurate as possible. That’s not an insignificant issue, given the difficulty of detecting attacks such as zero-day threats, threats hidden inside encrypted traffic, and supply chain compromises – like the SolarWinds “Solorigate” (SUNBURST) attack – that originate from trusted systems.

As network speeds increase, it’s critical to ensure that NDR, IDS, and AI-based monitoring tools can keep pace. A monitoring tool that maxes out at 10 Gbps is going to flounder when network speeds increase to 40 Gbps or beyond. It will also have difficulty detecting threats that are hidden inside encrypted streams, or that leverage common protocols such as DNS to disguise malicious activity like beaconing or data exfiltration.
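DNS beaconing is a good example of why this matters: the individual queries look legitimate, and only the timing gives the malware away. A toy check – with timestamps that would, in practice, come from your DNS logs or packet capture – might flag hosts whose queries arrive at suspiciously regular intervals:

    # Flag hosts whose DNS queries arrive at near-constant intervals,
    # a common signature of automated beaconing. Values are illustrative.
    import statistics

    def looks_like_beacon(timestamps, min_events=6, max_jitter=1.5):
        if len(timestamps) < min_events:
            return False
        intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
        # Low standard deviation means machine-like regularity.
        return statistics.stdev(intervals) < max_jitter

    query_times = [0.0, 60.2, 120.1, 180.3, 240.0, 300.2]  # ~one per minute
    print(looks_like_beacon(query_times))                  # True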

AI-based detection tools can help supplement other monitoring tools by identifying anomalous behavior that might not have triggered alerts. But you also need the ability to quickly investigate these anomalies to determine whether they pose a real threat or are simply anomalous but not malicious. Again, it’s crucial for analysts to have the right evidence to quickly investigate and prioritize events and flag false positives back to detection tools to improve accuracy.

Accurately prioritizing alerts lets teams focus on the most important threats first. For this reason, streamlining the triage process is key. With the right evidence at their fingertips, analysts can prioritize and process alerts faster and more accurately. Automation can help with this too – for example, prioritizing those alerts that potentially threaten important resources, or target known vulnerabilities.
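A simple version of such prioritization can be expressed as a scoring function. In this sketch, the asset values and open-vulnerability lists are invented placeholders – in practice they would come from your asset inventory and vulnerability scanner:

    # Rank alerts by tool severity, the value of the asset involved, and
    # whether the alert matches a known, unpatched vulnerability.
    ASSET_VALUE = {"payments-db": 10, "dev-laptop-17": 3, "guest-wifi-ap": 1}
    OPEN_VULNS = {"payments-db": {"CVE-2023-12345"}}

    def score(alert: dict) -> int:
        s = alert["severity"]                        # base severity from the tool
        s += ASSET_VALUE.get(alert["asset"], 1) * 2  # weight by asset value
        if alert.get("cve") in OPEN_VULNS.get(alert["asset"], set()):
            s += 10                                  # targets a known open vuln
        return s

    alerts = [
        {"asset": "guest-wifi-ap", "severity": 7, "cve": None},
        {"asset": "payments-db", "severity": 4, "cve": "CVE-2023-12345"},
    ]
    for alert in sorted(alerts, key=score, reverse=True):
        print(score(alert), alert["asset"])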

While improving detection is important, it’s also critical to ensure that when detection fails (and inevitably it will) your security teams can go back in time to investigate historical activity – quickly and accurately.

Understandably, the focus of security teams is often on preventing and detecting real-time attacks. But the unfortunate outcome of this “real-time focus” is that putting in place the prerequisites for accurately reconstructing historical attacks becomes an afterthought. By the time more serious attacks – often not detected immediately – become apparent, the evidence of what happened in the initial stages of the attack often can no longer be found. Either the evidence was deleted by the attacker, or it was never collected in the first place.

The only solution to this issue is to make sure that reliable evidence is continuously collected and carefully protected. Network flow data and packet capture data are extremely valuable sources of reliable evidence because they are difficult for attackers to manipulate, delete, or evade. Indeed, the fact that this data is being collected is typically invisible to the attacker.

By recording a complete history of what happens on the network – including all the rich, forensic evidence that full packet capture provides – analysts have the evidence they need to accurately reconstruct attacks, even when the initial phase of the attack happened a week or a month ago. Let’s face it: almost all incident investigation is looking at historical events that have already happened – so it’s important to have the evidence you need to be able to understand exactly what took place.

Access to reliable evidence lets analysts rapidly join the dots between what different monitoring tools may be showing them and quickly identify the root cause and scope of threats. The quicker you can connect the phases of an attack, the better your chance of stopping it earlier in the kill chain and reducing its impact.

Issue Four: Getting better ROI from your existing investment in security tools and preparing for an uncertain future.

The security vendor landscape is daunting. There are hundreds, if not thousands, of solutions vying for your budget. All promising to remedy the shortcomings of the tools you’ve already spent money on.

Tempting as it might be to simplify things by looking for all-in-one solutions from a single vendor, this is often not a feasible or sensible option – there is truth to the saying “jack of all trades, master of none”. For one thing, you have existing investments in tools you have already deployed. For another, it’s impossible for all-in-one solutions to provide the best option across all the areas of security that you need to cover. And even if you managed to find a miracle solution, the cybersecurity landscape changes so quickly that what might be fit-for-purpose now will likely be obsolete in a year or two.

So, what’s the alternative?

It’s essential to build a flexible and scalable infrastructure so that as your needs evolve, you can adapt your security stack to match – without having to scrap everything and redesign from the ground up.

Flexibility comes from the ability to deploy best-of-breed solutions for your organization’s specific security requirements. The potential downside of this approach – and the reason all-in-one solutions initially seem attractive – is that data can become “siloed” within specific tools or teams. The solution is to make integration capability a key attribute of any security tools you are looking to deploy – hopefully, you did this with the tools you already have in place!

Customers’ need to integrate solutions from different vendors is, thankfully, forcing vendors to focus on building this capability into their products. Integrating security tools dramatically improves visibility and flexibility – allowing you to collect and collate data to see related events in context. Integration is also essential to enable automated or streamlined workflows.
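As a sketch of what that integration looks like in practice, the snippet below pulls an alert from a detection tool’s REST API, attaches a link to the corresponding packet history, and files the result with a case-management system. All three endpoints and field names are hypothetical placeholders – the point is simply that each tool exposes an API the others can call:

    # Enrich a detection alert with a packet-evidence link, then open a case.
    # Endpoint URLs and field names are placeholders, not real vendor APIs.
    import requests

    HEADERS = {"Authorization": "Bearer <api-token>"}

    def enrich_and_file(alert_id: str) -> dict:
        alert = requests.get(
            f"https://ndr.example.com/api/alerts/{alert_id}",
            headers=HEADERS, timeout=10).json()

        # Link to the recorded traffic for the host over the alert window,
        # so the analyst opens the case with the evidence already attached.
        alert["pcap_url"] = (
            "https://capture.example.com/api/search"
            f"?host={alert['src_ip']}&from={alert['start']}&to={alert['end']}")

        requests.post("https://cases.example.com/api/cases",
                      json=alert, headers=HEADERS, timeout=10)
        return alert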

Again, it is imperative to understand which key sources of evidence your teams and tools need access to if you want to ensure better ROI on your investments. The world’s best detection tools can’t be effective if they can’t see all the data. The same goes for your teams.

As workloads move to cloud and hybrid-cloud environments, security teams are realizing they’ve lost visibility into network activity. As a result, many organizations are investing in solutions that give them greater control over, and visibility into, network traffic across the entire network. Building flexible and scalable traffic monitoring and evidence-collection into the infrastructure at the design level ensures your security teams always have visibility into what’s currently happening on the network — and can look back to see precisely what happened yesterday, last week or last month when needed.

Organizations are also realizing that the flexibility and scalability that cloud technology has delivered in the datacenter can be a feature of their security tool suites as well. Where traditionally security solutions were hardware-based – firewalls, IDS and IPS appliances, and appliances for email or malware scanning, DDoS protection, etc. – most security vendors now offer virtualized versions of their solutions for public, private, or hybrid cloud environments.

Virtualizing security functions can help eradicate “appliance sprawl” and allows organizations to design far more scalable, flexible environments where different security functions – often from multiple vendors – can be consolidated on common hardware to reduce both CAPEX and OPEX. Once these functions have been virtualized, the process of upgrading part of the security stack or rolling out new functionality is simpler, faster, and cheaper. No longer do rollouts take months: they can now be done in hours or days. Moreover, deploying a new function is typically far less expensive because it is a software subscription rather than a costly hardware purchase. In short, virtualizing security functions can help organizations evolve to meet new threats quickly and affordably when gaps are identified.

Conclusion

Security practitioners often say effective security boils down to three things: People, Process and Technology. By focusing on making people more productive, processes more efficient, and infrastructure more flexible and scalable, you can derive the greatest ROI from your efforts and investments. With a flexible infrastructure in place, you are better prepared to adapt and evolve to meet the challenges of an uncertain cybersecurity future as they emerge.

About the Author

Mark Evans is a Packet Capture Evangelist and has been involved in the technology industry for more than 30 years. He started in IT operations, systems and application programming and held roles as IT Manager, CIO, and CTO at technology media giant IDG Communications, before moving into technology marketing and co-founding a tech marketing consultancy. Mark now heads up global marketing for Endace, a world leader in packet capture and network recording solutions. www.endace.com


