Why CISOs need to understand the AI tech stack
As AI spreads, so do the risks. Security leaders are being asked to protect systems they don’t fully understand yet, and that’s a problem.
A new report from the Paladin Global Institute, The AI Tech Stack: A Primer for Tech and Cyber Policy, breaks down how AI systems are built and where the biggest security risks live. For CISOs, it offers a practical way to start thinking about how to secure AI in real-world environments.
A simple but powerful model
The report defines the AI tech stack as five distinct but interdependent layers:
- Data layer: The raw material that powers AI systems.
- Model layer: The algorithms and machine learning models that process data.
- Infrastructure layer: The compute and hardware environment that runs the models.
- Application layer: The interface that connects AI to users and other systems.
- Governance layer: The legal, ethical, and security framework that wraps around the entire stack.
Each layer has its own risks. More importantly, risks in one layer can cascade into others. For example, a poisoned dataset (data layer) can corrupt a model, lead to bad outputs (model layer), and eventually cause a public incident (governance layer).
“Risks within one layer can cascade through the system—a poisoned training dataset can corrupt a model, leading to unpredictable behavior in deployed applications and, ultimately, governance failures with public consequences,” the report warns.
Securing AI means embedding protections throughout the stack, and understanding how those layers work together.
Prioritize the data and model layers
If you have limited resources, start where the damage can be most severe: the data and model layers. These are the foundation of any AI system, and they’re highly vulnerable. The report says: “The data and model layers are of chief importance to getting AI security right.”
Attackers know this. Data poisoning, model theft, adversarial inputs, and inversion attacks can all manipulate AI behavior or extract sensitive information. A single poisoned input can skew model outputs or even allow unauthorized access. And because AI models are often built with large amounts of unstructured data, some of it sensitive, breaches can be hard to detect and even harder to fix.
The report recommends several basic defenses:
- Encrypt and restrict access to training data.
- Use data masking and input sanitization.
- Protect model endpoints with strong authentication and firewalls.
“Securing the Data Layer through encryption, access controls, and data masking is critical for preventing data breaches, securing intellectual property, maintaining user trust, and ensuring compliance with regulations,” it states.
Infrastructure and applications: Extend existing protections
While the model and data layers may require new strategies, the infrastructure and application layers offer more familiar ground.
Many CISOs already manage cloud workloads, secure APIs, and enforce least privilege across enterprise systems. These same practices apply here, though with new stakes. For instance, GPU clusters and specialized AI chips in the infrastructure layer could become high-value targets for supply chain attacks or resource hijacking.
The application layer, meanwhile, brings threats like prompt injection and API abuse. “Prompt injection, API exploitation” and “unsafe content generation, data leakage” are among the risks called out in the report.
To defend these layers, the report suggests building on existing cybersecurity frameworks with AI-specific tuning:
- Leverage TLS at the interface.
- Implement strict role-based access controls.
- Continuously monitor for misuse of prompts or APIs.
Governance layer: A work in progress
While most of the stack is technical, the governance layer focuses on policy, ethics, and oversight, and it’s the least mature. But that doesn’t mean CISOs can ignore it.
“The Governance layer is the least mature yet essential for AI trust,” the authors write. “It demands something different: moving beyond rigid regulation and toward dynamic protocol development.”
This includes defining acceptable use, ensuring human oversight, and preparing for edge cases, like AI making unauthorized decisions or generating harmful content.
The report calls for industry-led, flexible standards (not one-size-fits-all regulation), citing examples like TLS and DNSSEC that helped secure the early internet. CISOs can play a role here by building governance protocols inside their own organizations, especially around use of open-source models and third-party AI tools.
Think in systems, not silos
One of the report’s strongest messages is the need for a system-level view. Most cybersecurity programs are built around endpoints, networks, or data centers. But AI doesn’t respect those boundaries.
“Organizations must develop sophisticated, multi-layered security strategies that resolve the unique vulnerabilities at each level of the AI stack,” the report says.
This calls for cross-functional coordination. Your data team may own training pipelines. Developers control applications. IT handles infrastructure. But if each team works in isolation, AI risk slips through the cracks. CISOs will need to lead or at least coordinate a stack-wide security strategy.
Source link