Don’t Security Engineer Asymmetric Workloads


This core idea can unfortunately be applied to the relationship between Security Engineers and their organizational partners!

The primary result of this sort of asymmetric workload is unfairness, and intent doesn’t matter here: the folks on the other end feel that inequity. The boss and the engineer are both saving themselves effort, getting ahead at the expense of others. Like many social animals, humans are hardwired to react immediately and negatively to this unfairness. They will rage against people who have broken the social contract and, worse, they can come to believe that the only way to make things even again is to be just as selfish.

Asymmetric Workloads

Asymmetric workloads are the double-edged sword of force multiplier roles.

For leaders, this can be a request that grows in size as it snowballs down the management chain, or that incurs outsized cost when distributed across their organization.

The security-to-developer ratio can easily be 1:100. This means success relies on security impact via influence, collaboration, levers, and ratchets. It also means that security needs careful prioritization; we must be efficient in managing our own time and commitments. The result of these structural conditions is that our security organizations carry a dual risk of asymmetry: they can impose asymmetric costs on our organizations, just as our organizations can impose outsized costs on us.

The good news is that awareness is an easy way to build immunity to this issue! Once you develop an instinct for instances of asymmetry, you can address them in the moment.

Here are a few concrete examples of how security can accidentally cause asymmetric work:

Security maximalism might be the classic example: more is more with regard to security controls, without regard to risk. It is a pervasive way security teams fail their organizations. Applying blanket prescriptions and staging security theater abuse the trust an organization places in its security team.

Worse, security teams eye product delivery teams warily, as if their apparent ignorance of security best practices were a mortal sin leaving them beyond salvation. On the other side, product delivery teams view security teams as a bunch of highly paid cowboys who cook up implausible risk scenarios. In reality, product teams crave clarity about which high-priority risks to address; after all, security is exciting! It’s not uncommon to see security teams fail to capitalize on this excitement, though, turning these interactions into something product teams dread.

  1. Throwing vulnerabilities over the fence

Dumping raw scanner or assessment findings on a team, without triage, context, or remediation guidance, shifts the entire cost of analysis onto developers.

  2. Instituting blanket requirements, without investing in paved roads

Security teams may demand exacting standards without empowering and equipping developers to meet them efficiently. For example, “simple” statements like “IAM least privilege” can bog teams down unless security has set them up for success. This is the lesson that led Kinnaird to policy_sentry, or in the realm of service delivery spawned Zuul at Netflix.

  3. Making it effortful to engage security

One failure mode I’ve seen security teams fall into is indulging a desire to over-standardize their intake of work and cheaply triage out spurious requests. However, security engagement is already inherently full of friction for developers, so security should make that cost as low as possible to ensure that issues get Amplified.

Security is often structured as a detached Complicated Subsystem team. If you skip integrating into developer processes and developer tools, you make the work easier for security but harder for developers. Centralized security tools and “Single Panes of Glass” are helpful for security and leadership personas, but they don’t meet developers where they are. Focus on building security in, and invest in the automation necessary (e.g., two-way syncs between centralized security tools and distributed developer tools) to make it sustainable.
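To make the idea of a two-way sync concrete, here is a minimal sketch of a job that mirrors findings into each team’s own tracker and pulls status changes back. Every class, function, and field name here is a hypothetical in-memory stand-in, not any specific tool’s API:

```python
# Sketch: two-way sync between a central security findings store and
# per-team issue trackers. All names are hypothetical stand-ins.

class CentralFindingStore:
    """Central security tool: source of truth for finding details."""
    def __init__(self):
        self.findings = {}  # finding_id -> {"team", "title", "status"}

class TeamTracker:
    """A team's own issue tracker, where developers actually work."""
    def __init__(self):
        self.issues = {}  # finding_id -> {"title", "status"}

def sync(store, trackers):
    """Push new findings out to team trackers; pull status changes back."""
    for fid, finding in store.findings.items():
        tracker = trackers[finding["team"]]
        if fid not in tracker.issues:
            # Outbound: file the finding where the team already works.
            tracker.issues[fid] = {"title": finding["title"], "status": "open"}
        else:
            # Inbound: let the team's status drive the central view.
            finding["status"] = tracker.issues[fid]["status"]
```

Run periodically, a job like this lets developers resolve work in their own tools while security keeps an accurate central picture; neither side does double entry.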

Developers Impose Asymmetric Work on Security

One core principle behind the introduction of asymmetric work to security teams is a form of Brandolini’s Law:

The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it

Framed through a lens of positive intent, it could be restated as:

The amount of energy needed to refute under-specified ideas is an order of magnitude bigger than that needed to produce them

Generally, however, this is a symptom, and security practitioners need to take ownership of it. It happens when developers fail to consider a component of delivery, and also when security insufficiently establishes, documents, and standardizes security baselines and invariants.

| Asymmetric Developer Request | Cost | Security failure |
| --- | --- | --- |
| “Hi, I saw a cool tool on Product Hunt, can I grant it access to our GitHub?” | Vendor security review, even when done optimally, can be time-consuming, and each vendor adds risk. Every request to add a new tool can cause security to: research the tool to understand the likely access; review or skim its security documentation as a small test; understand how the tool fits into the company’s ecosystem; and try to find an optimal solution that doesn’t involve just saying No! | A lack of transparent, standardized guidance on security for third-party vendors. |
| “Hi, we’ve decided it would be easiest to use new Cloud Service Provider’s feature X, as we don’t like the incumbent CSP’s offering.” | While picking best-in-breed cloud services is a valid approach, onboarding an entire CSP (e.g., Azure, for an AWS shop) can be a massive endeavor, even if you’re only going to use one service. | A lack of transparent, standardized guidance on cloud strategy. |
| “Hi, security control Y is blocking launch, so we had to turn it off, thanks!” | Security is generally put in a tough place when forgiveness is asked rather than permission. This can result in expensive development of compensating controls, or extra compliance work. | A lack of a standardized exception process with appropriate risk sign-off. |
| “Hi, why do we need to implement authentication for an internal service that’s on our VPN?” | Explaining elements of security philosophy coherently and with nuance, off the cuff, is difficult. This is exacerbated when the question comes with a bias (e.g., “we don’t want to do this, so you need to convince us”). | A failure to standardize, document, and evangelize the company’s approach to network trust. |
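The standardized exception process called for above doesn’t need heavy tooling; even a simple time-boxed record with explicit risk sign-off changes the dynamic from “we turned it off” to an accountable decision. A minimal sketch, with all names hypothetical:

```python
# Sketch: a time-boxed security exception with explicit risk sign-off.
# The schema and defaults here are illustrative, not a real standard.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class SecurityException:
    control: str    # e.g. the control being bypassed
    reason: str     # why the team needs the exception
    approver: str   # who explicitly accepted the risk
    expires: date   # exceptions must be revisited, not forgotten

    def is_active(self, today: date) -> bool:
        return today <= self.expires

def request_exception(control, reason, approver, days=90):
    """Grant a standard 90-day exception; longer terms need a new request."""
    return SecurityException(control, reason, approver,
                             expires=date.today() + timedelta(days=days))
```

The expiry date is the important design choice: it forces the conversation to recur, so a launch-week workaround can’t silently become permanent.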

The Cost of Doing Asymmetric Work

Overall, when security introduces asymmetric work, it risks being seen as a “department of no.” That can increase overall risk as the company learns to route around security instead of collaborating with it.

When developers introduce asymmetric work, they worsen the gap between security engineer staffing and development workforce. Pushed to the extreme, security is a capability that fails open if overwhelmed, with the security team no longer able to sufficiently reduce risk to the organization.

Security programs are components of organizations and can expend energy or absorb it, but energy is neither created nor destroyed. Beseeching employees to be vigilant to phishing threats requires them to expend energy, which the security team absorbs (as these user efforts allow the security team to expend energy elsewhere). Requiring software engineers to triage bugs discovered by vulnerability scanners is another example; developers expend energy combing through findings and fixing them, and the security team absorbs that energy. Thinking of where energy is expended and absorbed, and by whom, can help excavate the opportunity cost of a security decision.

Combating Asymmetric Work

Awareness can mitigate the problem of asymmetric work, but it’s not a panacea. Here are some other tactics you can deploy to prevent or address this issue:

  1. Start with transparency, and always set expectations on timeline and level of effort

Security teams should explain the reasoning behind their requests in the context of the development team and the risk. Guidance should focus on responsibilities, not tasks. Empower teams to solve risks, so they can apply the controls best suited to their specific circumstances.

  2. Help teams own their risks

Empowering teams effectively requires a system in which they own the risks in their systems. It can be easy for risk to be seen as the exclusive domain of the security (and compliance, etc.) team. Instead, security bugs need to be integrated into teams’ standard processes for ownership and accountability. This sets up the right tracking and incentives, so teams are able to make thoughtful decisions about priority and investment, including accepting debt or risk at times.

  3. Understand that occasional asymmetry is part of working together

Asymmetry is most problematic when it is systemic, unnecessary, or not identified. That is not to say every interaction must be perfectly symmetric. Often, security teams need to request asymmetric work to respond to a vulnerability. Similarly, sometimes the business really does need security to do outsized work to unblock capabilities or enable organizational goals. This is okay! But to counter potential feelings of unfairness or difficulty in tackling the work, it is important to make requests for asymmetric work explicit. Just putting it up front that “hey, I know this is going to cause some unexpected work to address” can do a lot to keep everyone on the same team.
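The “help teams own their risks” tactic above starts with data: every finding carries an owning team, and that team’s priority decision is recorded against it. A minimal sketch, with all names hypothetical:

```python
# Sketch: security findings live in the owning team's backlog with a
# recorded decision, so accepting risk is an explicit, accountable
# choice made by the team. All names are hypothetical.

def triage(backlogs, team, finding, decision, rationale):
    """Record a team's decision ("fix" or "accept") for a finding it owns."""
    if decision not in ("fix", "accept"):
        raise ValueError("decision must be 'fix' or 'accept'")
    backlogs.setdefault(team, []).append({
        "finding": finding,
        "decision": decision,    # the team, not security, makes the call
        "rationale": rationale,  # accountability: why this priority
    })

def accepted_risks(backlogs, team):
    """Which risks has this team explicitly chosen to carry?"""
    return [b["finding"] for b in backlogs.get(team, [])
            if b["decision"] == "accept"]
```

Even a record this simple shifts the incentive structure: risk acceptance is visible and attributable, rather than an unstated default when findings go untouched.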

When asymmetry happens, you should root cause it:

Focus on identifying the structural reasons for asymmetry and invest in addressing them at their source! Look at the examples from Developers Impose Asymmetric Work on Security above, and you’ll see how each presents an opportunity for the security team to make systemic investments. Killing bug classes isn’t just for vulnerability management!


