Treat AI like a human: Redefining cybersecurity


In this Help Net Security interview, Doug Kersten, CISO of Appfire, explains how treating AI like a human can change the way cybersecurity professionals use AI tools. He discusses how this shift encourages a more collaborative approach while acknowledging AI’s limitations.

Kersten also discusses the need for strong oversight and accountability to ensure AI aligns with business goals and remains secure.

Treating AI like a human can accelerate its development. Could you elaborate on how this approach changes the way cybersecurity professionals should interact with AI tools?

Treating AI like a human is a perspective shift that will fundamentally change how cybersecurity leaders operate. This shift encourages security teams to think of AI as a collaborative partner with human failings. For example, as AI becomes increasingly autonomous, organizations will need to focus on aligning its use with the business’ goals while maintaining reasonable control over its autonomy. However, organizations will also need to account, in policy and control design, for AI’s potential to manipulate the truth and produce inadequate results, much as humans do.

While AI has highly innovative capabilities and is an undeniably transformative value-add, it can also be tricked and can trick its users. This human-like trait means AI security controls should be evaluated much as human-focused controls are. AI prompt creation training is a practical example: its purpose is to get an accurate response from AI by ensuring the language used is interpreted the same way by both parties. This is a very human concern, and few, if any, technological advances have had this kind of impact.
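
To make that point concrete, here is a purely illustrative sketch of what prompt-creation training aims at: the vague prompt leaves room for misinterpretation, while the structured one pins down role, task, rules, and evidence so both parties read the request the same way. The prompt wording and log line are hypothetical examples, not from any specific tool.

```python
# Illustrative only: how prompt-creation training changes what analysts send
# to an AI assistant. The second prompt constrains scope, output format, and
# evidence so the request is interpreted the same way by human and model.

vague_prompt = "Is this log suspicious?"

structured_prompt = (
    "Role: SOC analyst assistant.\n"
    "Task: Classify the log line below as 'suspicious' or 'benign'.\n"
    "Rules: Cite the specific fields that drove your decision; if evidence "
    "is insufficient, answer 'insufficient evidence' rather than guessing.\n"
    "Log: 2024-05-01T03:12:09Z sshd[412]: Failed password for root "
    "from 198.51.100.23\n"
)
```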

Meanwhile, the rapid pace of AI advancements, from evolving capabilities to emerging vendors, has created a more dynamic environment than we’ve ever experienced before. Cybersecurity leaders will need to adapt quickly and coordinate closely with legal, privacy, operations, and procurement teams to ensure strategic alignment and comprehensive oversight when working with AI. Traditional best practices, such as controlling access and minimizing data loss, still apply, but they must evolve to accommodate AI’s flexibility and its user-driven yet human-like nature.

“Trust, but Verify” is central to AI interaction. What are some common mistakes cybersecurity teams might encounter if they blindly trust AI-generated outputs?

The ‘trust, but verify’ principle is central to cybersecurity best practice, and it applies equally to AI: leverage AI’s speed and efficiency while applying human expertise to ensure its outputs are accurate, reliable, and aligned with organizational priorities. Blindly trusting AI outputs can compromise both security and decision-making. Like humans, AI is powerful but not infallible: it can make mistakes, propagate biases, or produce outputs that don’t align with organizational goals.

One common mistake is over-relying on AI’s accuracy without questioning the data it was trained on. AI models are only as good as the data they consume, and if that data is incomplete, biased, or outdated, the outputs may be flawed. Cybersecurity teams must verify AI-generated recommendations against established knowledge and real-world conditions; in security terms, this helps reduce false positives.
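
As a hedged illustration of that verification step, the sketch below gates an AI triage verdict against existing threat intelligence and an internal allowlist before anyone acts on it. Every name in it (AIVerdict, known_bad_iocs, allowlisted_hosts) is a hypothetical stand-in, not part of any specific product.

```python
# Minimal "trust, but verify" gate for AI triage output (illustrative names).

from dataclasses import dataclass

@dataclass
class AIVerdict:
    alert_id: str
    indicator: str          # e.g. an IP, hash, or hostname from the alert
    classification: str     # "malicious" or "benign", as labeled by the AI
    confidence: float       # model-reported confidence, 0.0 - 1.0

def verify_verdict(verdict: AIVerdict,
                   known_bad_iocs: set[str],
                   allowlisted_hosts: set[str]) -> str:
    """Cross-check the AI's call against established knowledge before acting."""
    if verdict.indicator in known_bad_iocs and verdict.classification == "benign":
        return "escalate"             # AI contradicts threat intel: human review
    if verdict.indicator in allowlisted_hosts and verdict.classification == "malicious":
        return "suppress-and-review"  # likely false positive: verify before blocking
    if verdict.confidence < 0.8:
        return "human-review"         # low confidence: do not auto-action
    return "accept"

# Example: the AI calls a known-bad IP benign; the gate forces escalation.
v = AIVerdict("A-1042", "203.0.113.7", "benign", 0.93)
print(verify_verdict(v, known_bad_iocs={"203.0.113.7"}, allowlisted_hosts=set()))
```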

Another risk is failing to monitor for adversarial manipulation. Attackers can target AI systems, exploiting their algorithms to produce false information or to mask genuine threats. Without proper oversight, teams may unknowingly rely on compromised outputs, leaving systems vulnerable.
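
One simple, assumption-laden way to watch for this kind of manipulation is to monitor the model’s output mix for drift, as in the sketch below. A sudden surge in ‘benign’ verdicts is only one possible signal, and the window and threshold values shown are illustrative.

```python
# Hedged sketch: a drift check on an AI classifier's output mix. A sudden
# jump in the share of "benign" verdicts can be one signal that the model is
# being manipulated to mask genuine threats. Thresholds are assumptions.

from collections import deque

class VerdictDriftMonitor:
    def __init__(self, window: int = 500, baseline_benign_rate: float = 0.70,
                 tolerance: float = 0.15):
        self.recent = deque(maxlen=window)
        self.baseline = baseline_benign_rate
        self.tolerance = tolerance

    def record(self, classification: str) -> bool:
        """Record one verdict; return True if the output mix has drifted."""
        self.recent.append(classification)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data yet
        benign_rate = sum(1 for c in self.recent if c == "benign") / len(self.recent)
        return abs(benign_rate - self.baseline) > self.tolerance

monitor = VerdictDriftMonitor()
# In practice every AI verdict would flow through record(); a True result
# should trigger human review of the model and its recent inputs.
```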

In the context of cybersecurity, what does effective human oversight of AI look like? What frameworks or processes must be in place to ensure ethical and accurate AI decision-making?

Effective human oversight should include policies and processes for mapping, managing, and measuring AI risk. It should also include accountability structures, so that teams and individuals are empowered, responsible, and trained.

Organizations should also establish the context in which to frame risks related to an AI system. AI actors in charge of one part of the process rarely have full visibility or control over other parts, and the interdependencies among the relevant AI stakeholders can make it difficult to anticipate the impacts of AI systems, including ongoing changes to those systems.

Measuring performance involves analyzing, assessing, benchmarking, and ultimately monitoring AI risk and its related effects. Measuring AI risks includes tracking metrics for trustworthy characteristics, social impact, and human-AI dependencies. Sometimes trade-offs are required, and this is a key point for human interaction. Any metrics and measurement methodologies should adhere to scientific, legal, and ethical norms and be carried out through a transparent process to ensure trustworthiness.
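
A minimal sketch of one such measurable indicator appears below: logging AI decisions alongside human overrides and tracking the override rate over time. The metric and field names are assumptions made for illustration, not terms from any framework.

```python
# Illustrative sketch of one oversight metric: how often humans override the
# AI. A rising override rate is a concrete, reviewable signal that the system
# and the business are drifting apart. Field names are hypothetical.

import datetime

class OversightMetrics:
    def __init__(self):
        self.events: list[dict] = []

    def log_decision(self, decision_id: str, ai_action: str,
                     human_action: str | None) -> None:
        """Record an AI decision and whether a human overrode it."""
        self.events.append({
            "id": decision_id,
            "timestamp": datetime.datetime.now(datetime.timezone.utc),
            "ai_action": ai_action,
            "overridden": human_action is not None and human_action != ai_action,
        })

    def override_rate(self) -> float:
        if not self.events:
            return 0.0
        return sum(e["overridden"] for e in self.events) / len(self.events)

metrics = OversightMetrics()
metrics.log_decision("D-1", ai_action="block", human_action=None)
metrics.log_decision("D-2", ai_action="allow", human_action="block")
print(f"Human override rate: {metrics.override_rate():.0%}")  # 50%
```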

To effectively manage AI, plans for prioritizing risk and regular monitoring and improvement should be in place. This includes ongoing risk assessment and treatment to ensure organizations can keep up with the fast pace of change in AI.

A really interesting framework that was recently released is the National Institute of Standards and Technology (NIST) AI Risk Management Framework. It was designed to better manage risks to individuals, organizations, and society associated with AI, and to measure trustworthiness.

With AI making decisions that affect security outcomes, where does accountability lie when an AI system makes a mistake or a misjudgment?

Accountability starts with the AI creators—the teams responsible for training and integrating AI systems. These teams must ensure that AI tools are built on robust, diverse, and ethical data sets, with well-defined parameters for how decisions should be made. If an AI system falters, it’s crucial to understand where the failure occurred—whether it was due to biased data, faulty algorithms, or an unforeseen vulnerability in the system’s design.

However, accountability doesn’t end there. Security leaders, legal teams, and compliance officers must collaborate to create governance structures that ensure proper accountability for AI-driven decisions, especially in sensitive areas like cybersecurity. These structures should include clear escalation processes when an AI system makes a misjudgment, enabling quick intervention to mitigate any negative impact.

Human oversight will always remain a critical factor in ensuring AI accountability. AI tools should never operate in a vacuum. Decision-makers must remain actively engaged with the AI’s outputs, continually assessing the system’s effectiveness and ensuring that it aligns with ethical standards. This oversight allows organizations to hold both the technology and the people who manage it accountable for any mistakes or misjudgments.
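
The sketch below shows one way that engagement can be enforced in practice, assuming a hypothetical set of high-impact action names: AI-proposed actions above an impact threshold are queued for explicit human approval instead of executing automatically.

```python
# Minimal human-in-the-loop sketch: AI-proposed actions above an impact
# threshold require a named human approver. Action names are hypothetical.

HIGH_IMPACT_ACTIONS = {"isolate_host", "disable_account", "block_subnet"}

def route_action(action: str, approved_by: str | None = None) -> str:
    """Decide whether an AI-proposed action may run or must be escalated."""
    if action in HIGH_IMPACT_ACTIONS and approved_by is None:
        return "queued-for-approval"   # escalate: a human must sign off
    return "execute"                   # low-impact or already approved

print(route_action("block_subnet"))                        # queued-for-approval
print(route_action("block_subnet", approved_by="analyst"))  # execute
```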

While AI can provide valuable insights and automate critical functions, humans—across technical, security, legal, and leadership teams—must ensure that accountability is upheld when mistakes occur.

Do you foresee a point where AI will require less human oversight, or will human interaction always be a critical part of the process?

Today’s AI is designed to assist, not replace, human judgment. From a security perspective, it’s highly unlikely that AI will operate independently, without human collaboration. Allowing AI to operate autonomously, without human oversight, could create unintentional gaps in an organization’s security posture. With this in mind, cybersecurity teams must stay engaged and provide oversight. This ongoing cooperation ensures that AI serves as a trusted partner rather than a potential liability. That said, no one can predict the future, and today’s AI could evolve into a more autonomous, human-like technology capable of operating independently.
