Financial institutions have always been a prime target for cyberattacks. That’s partly why banks and other financial firms are more heavily regulated, and face more compliance requirements, than those in most other industries.
A slew of new rules has been introduced in recent years, designed to place more emphasis on continually measuring and managing cyber risk. For financial institutions, the way to meet these demands is not necessarily to invest in new security tools; it’s to get more value from existing technology through automated monitoring and optimization.
In the crosshairs
Like their peers across nearly all industry verticals, financial services firms are moving to the cloud in large numbers to drive cost efficiencies, business agility, and innovation. Cloud is also the foundation for compelling new AI-based services, such as ChatGPT-based customer service bots, fraud detection algorithms, and tools designed to streamline and update compliance workflows.
Yet at the same time, these investments invite new risks. They expand the so-called “cyberattack surface” by creating new assets that attackers can target en route to sensitive data stores and critical operational systems: anything from a cloud server to a home worker’s smartphone. Many of these assets contain vulnerabilities that are routinely exploited; others are protected only by simple passwords that are easily phished, or are misconfigured by staff, leaving them open to malicious activity.
The financial and reputational cost to breached banks can be significant. IBM puts it at $5.9 million per data breach, second only to the healthcare sector globally and well above the $4.45 million average across all verticals. But beyond the direct hit to affected businesses, there’s a more acute risk that makes governments and regulators nervous: a serious attack on the banking system could have a debilitating impact on national and economic security.
Regulations add complexity
Part of this nervousness has translated into more regulatory action. The Securities and Exchange Commission (SEC) recently adopted new rules on cyber-risk disclosure designed to improve transparency for investors in public companies. Listed firms must disclose serious cybersecurity incidents within four business days and inform the market which board members have cybersecurity expertise.
Elsewhere, the New York Department of Financial Services announced updated amendments to its cybersecurity regulations in June. These include more rigorous and expansive requirements around multi-factor authentication (MFA), monitoring and filtering of email and internet traffic, user education, incident response plans, penetration testing, application security, and annual risk assessments.
Organizations operating in the EU will need to pay attention to the new Digital Operational Resilience Act (DORA). It places responsibility for IT risk firmly with the board and mandates that all in-scope financial organizations set, evolve, and provide evidence of risk-based policies to ensure continued cyber-resilience.
Meanwhile, the Sarbanes-Oxley Act obligates all publicly traded companies in the US, and their wholly owned subsidiaries, to adhere to best practices in areas such as authentication and data security.
There’s also more to come: the US National Cybersecurity Strategy promises new requirements and regulations in the months ahead, aimed at improving the security of organizations operating in critical national infrastructure sectors.
Automation streamlines compliance
While people and process are vital to meeting such compliance requirements, the third pillar, technology, is perhaps the most critical. Security controls such as endpoint detection and response (EDR) and data loss prevention (DLP) are what let organizations enforce their carefully devised security policies, and so better manage and minimize cyber risk.
Given their relatively healthy cybersecurity budgets, it may be tempting for financial institutions to react to growing compliance mandates by investing in yet more controls. That would be unwise. Recent Panaseer research shows that large enterprises now run an average of 76 discrete security tools, up 19% since 2019. This sprawl can lead to duplicated functionality in some areas and dangerous coverage gaps in others.
Worse, the more tools there are to manage, the harder it might be to prove compliance with an evolving patchwork of global cybersecurity rules and regulations. That’s especially true of legislation like DORA, which focuses less on prescriptive technology controls and more on providing evidence of why policies were put in place, how they’re evolving, and how organizations can prove they’re delivering the intended outcomes.
In fact, DORA explicitly states that security and IT tools must be continuously monitored and controlled to minimize risk. That is a challenge when organizations rely on manual evidence gathering: Panaseer research reveals that while 82% are confident they can meet compliance deadlines, 49% mostly or solely rely on manual, point-in-time audits.
This simply isn’t sustainable for IT teams, given the number of security controls they must manage, the volume of data they generate, and continuous, risk-based compliance requirements. They need a more automated way to continuously measure and evidence KPIs and metrics across all security controls. That way they can better identify control gaps, actively improve security posture, and provide evidence to regulators of adherence to policies.
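To make that concrete, here is a minimal, hypothetical sketch in Python of what continuous, automated coverage measurement can look like: an asset inventory is cross-referenced against what each security control reports as protected, producing a per-control KPI and an explicit list of gaps. All asset names, control names, and the 95% policy target are illustrative assumptions; in a real deployment, the inventory would come from a CMDB and the per-control coverage from each tool’s API.

```python
# Minimal sketch of automated control-coverage measurement.
# All data below is hypothetical; real feeds would come from a CMDB
# and from each security tool's reporting API.

# Hypothetical asset inventory.
INVENTORY = {"srv-001", "srv-002", "srv-003", "lap-101", "lap-102"}

# Hypothetical per-control coverage: the assets each tool reports as protected.
CONTROL_COVERAGE = {
    "EDR agent deployed": {"srv-001", "srv-002", "lap-101"},
    "MFA enforced": {"srv-001", "srv-002", "srv-003", "lap-101", "lap-102"},
    "DLP policy applied": {"srv-001", "lap-101", "lap-102"},
}

POLICY_TARGET = 0.95  # assumed policy target: 95% coverage per control


def coverage_report(inventory, coverage, target):
    """Compute a coverage KPI per control and flag gaps against policy."""
    for control, covered in coverage.items():
        gaps = sorted(inventory - covered)   # assets missing this control
        kpi = len(covered & inventory) / len(inventory)
        status = "OK" if kpi >= target else "GAP"
        line = f"{control}: {kpi:.0%} coverage [{status}]"
        if gaps:
            line += f" missing: {gaps}"
        print(line)


if __name__ == "__main__":
    coverage_report(INVENTORY, CONTROL_COVERAGE, POLICY_TARGET)
```

Run on a schedule and logged, even a simple report like this replaces point-in-time audits with a continuous, timestamped evidence trail: one that surfaces control gaps as they appear and can be handed to auditors and regulators on demand.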