Why an “all gas, no brakes” approach for AI use won’t work


Machine learning and generative AI are changing the way knowledge workers do their jobs. Every company is eager to be “an AI company,” but AI can often seem like a black box, and the fear of security, regulatory and privacy risks can stymie innovation. Executives are under enormous pressure to invest and prove ROI, yet they often lack the guardrails and tools to move forward without triggering legal or compliance concerns.

In my hundreds of meetings with C-suite executives, board directors and security teams, it’s become overwhelmingly clear to me that there are two conflicting mindsets about adopting AI. In one camp, you have what I call the gas: business and tech leaders eager to invest. In the other, you have the brakes: security, legal, compliance and other governance teams. Being the brakes may immediately seem like a negative role, but both sides have important points. After all, you wouldn’t get behind the wheel of a new Lamborghini if you knew it had no brakes, right? It turns out, paradoxically, that the very thing that slows us down is what gives us the confidence to go fast.

So, how can we balance these two mindsets to make impactful, thoughtful decisions that will allow us to innovate without conflict? Here are the key pieces of advice I have for enterprise security leaders as they navigate these tricky waters.

Understand the (real) risks

Security, governance and legal teams have a lot on their plates. The growing list of vulnerabilities to remediate, attacks to defend against, attack surface to protect, and regulations and standards to comply with can be overwhelming. But I’ve often observed that the “brakes” side is worrying about risks that may not even apply to them.

First, companies need to identify the risks that are relevant to their AI use case, instead of getting bogged down analyzing all the possible worst-case scenarios.

Realistically, the most likely issues stem from insufficient access controls, poor data quality and weak data lineage. But before any project can get off the ground, the gas and brakes camps must come together and align on the risks most relevant to them.
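As a purely illustrative sketch (the class, risk labels and example use case below are my own hypothetical names, not anything from this article), the two camps could keep a shared record of which of these common risk sources actually applies to a given use case:

```python
from dataclasses import dataclass, field

# Hypothetical checklist of the three failure modes called out above.
COMMON_RISKS = ("insufficient_access_controls", "poor_data_quality", "weak_data_lineage")

@dataclass
class AIUseCaseRiskReview:
    name: str
    relevant_risks: dict = field(default_factory=dict)  # risk -> short rationale

    def flag(self, risk: str, rationale: str) -> None:
        # Only the risks that genuinely apply to this use case get recorded.
        if risk not in COMMON_RISKS:
            raise ValueError(f"Unknown risk category: {risk}")
        self.relevant_risks[risk] = rationale

# Example: a hypothetical customer-support chatbot that retrieves past tickets.
review = AIUseCaseRiskReview(name="support_chatbot")
review.flag("insufficient_access_controls", "Bot can retrieve tickets across customer accounts")
review.flag("weak_data_lineage", "No record of which tickets feed the retrieval index")
print(review.relevant_risks)
```

The point of a list this short is that worst-case scenarios that don’t apply simply never make it onto it.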

How can “gas” people get the “brakes” camp on their side?

Except for security companies, an enterprise does not exist solely to have a strong security program. Rather, security is a function that supports the mission and strategy of the company. But security professionals are trained to think that high risks are around every corner. No one wants to appear in the news for violating compliance regulations.

The truth is that if security and legal teams stubbornly insist on preserving the status quo, the business will end up leaving them behind. That warning is the most effective advice I share when I meet with customers to get this group to help drive innovation.

“Gas” people should collaborate with “brakes” people to establish a customer-centric process. That process should include service-level objectives (SLOs) and proactive communication, aligned with business priorities, for intaking AI use cases, assessing them and providing prescriptive guidance on how the business can seek approval. Decision-making should not be based on measuring deltas against a static security standard purpose-built for a deterministic, pre-AI world. Instead, it should be aligned with the organization’s priorities and risk tolerance.
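To make that concrete, here is a minimal hypothetical sketch of such an intake process; the 10-day SLO, field names and example values are assumptions for illustration, not the author’s prescription:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Hypothetical SLO: every intake gets prescriptive guidance within 10 calendar days.
REVIEW_SLO = timedelta(days=10)

@dataclass
class AIUseCaseIntake:
    requester: str
    business_priority: str          # e.g., "cut median ticket resolution time"
    risk_tolerance: str             # e.g., "low", "medium", "high" per org policy
    submitted: date
    decision: Optional[str] = None  # "approved", "approved_with_conditions", "declined"

    def slo_breached(self, today: date) -> bool:
        """True if the governance team still owes the business an answer."""
        return self.decision is None and (today - self.submitted) > REVIEW_SLO

intake = AIUseCaseIntake(
    requester="support_ops",
    business_priority="cut median ticket resolution time",
    risk_tolerance="medium",
    submitted=date(2024, 5, 1),
)
print(intake.slo_breached(today=date(2024, 5, 20)))  # True: guidance is overdue under the SLO
```

Tracking requests against an SLO, rather than against a fixed checklist, keeps the “brakes” accountable for speed as well as safety.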

How can “brakes” people get “gas” people on their side?

I’ve found that if you meet people where they are and approach the conversation by aligning yourself with their mission and vision, you will find them to be much more receptive.

As a CISO, if I were to approach physician leaders at a hospital imploring them to deploy certain technical controls to prevent a data breach, their response would likely be one of bemusement. The hospital has no strategic business objective that says it needs to prevent data breaches. Instead, the CISO can frame the conversation in a way that appeals to their interests: “In order to reliably deliver patient care, there are some controls we need to implement to reduce unwanted disruptions.” The physician leaders would be far more likely to engage and see this through, since the “WIIFM” (what’s in it for me) is clear.

Next, lean on third parties to assess the environment if the requisite skills aren’t yet available in-house. Compliance assessments, regulatory reviews, customers’ third-party assessments and red team tests are all ways to prove the value of the safeguards you have in place.

I also recommend attending industry events and keeping your ear to the ground on the latest innovations in security. Find the companies that are ahead of the curve and network with the decision makers who have put those tools and processes in place to see what you can apply to your own organization.

Conversely, it’s helpful to take note of other companies’ mistakes. It can be difficult to prove the value of a security framework, since the ideal outcome in security is the absence of an event, and that can make board members forget the value of the tools in place. When a well-known company suffers a public security incident, it’s an opportunity to shore up your own measures and present an analysis of why you’re not exposed to the same failure, and why the investment has paid off.

Start small, with a clear map—then get on the road

Remember: AI is not the noun in your strategy, but rather the adjective. Start small, with specific use cases: for example, customer support, search, collaboration and software development. When tech or data teams define a use case merely to demonstrate value from AI, the business is often unsure which worthwhile problem is actually being solved. Engaging the business at the outset and having it define the problem will keep the tech and data teams aligned.

If it’s not clear already, a security framework to assess and categorize risks across different AI system components, from raw data to machine learning models, is paramount.
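One possible shape for such a framework, sketched here purely as an assumption (the component names and questions are illustrative, not a published standard), is a simple map from each AI system component to the risk questions it must answer:

```python
# Hypothetical mapping of AI system components to the risk questions a framework might ask.
AI_RISK_FRAMEWORK = {
    "raw_data": [
        "Who can read or modify the source datasets?",
        "Is sensitive or personal data present and handled per policy?",
    ],
    "training_pipeline": [
        "Is data lineage recorded from source to training set?",
        "Are data-quality checks run before each training job?",
    ],
    "model": [
        "Is each model version tied to the exact data and code that produced it?",
        "Has it been red-teamed for prompt injection and data leakage?",
    ],
    "application": [
        "Do access controls on model outputs match those on the underlying data?",
        "Is there an audit log of who asked what, and when?",
    ],
}

def open_questions(component: str) -> list:
    """Return the risk questions to answer for one component of the system."""
    return AI_RISK_FRAMEWORK.get(component, [])

for component in AI_RISK_FRAMEWORK:
    print(f"{component}: {len(open_questions(component))} questions to answer before launch")
```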

Even highly regulated industries such as healthcare and financial services are finding value in AI. In fact, we’ve found they’re the fastest-growing adopters. The companies I’ve seen achieve the greatest success with the boldest experiments did so by bringing the “gas” and “brakes” camps together. It’s in our best interest to make these two groups work in harmony.


