The European digital landscape is currently navigating one of the most profound shifts in its regulatory history. For years, cybersecurity was largely a matter of voluntary frameworks and reactive patching. However, the rapid escalation of supply chain attacks and the geopolitical volatility observed over the last decade have rendered that approach obsolete. This urgency was starkly illustrated by the Viasat cyberattack in February 2022, which occurred just one hour before the invasion of Ukraine. This targeted strike not only disrupted military communications but cascaded across Europe, causing the loss of remote monitoring for approximately 5,800 wind turbines in Germany [1]. This incident demonstrated how a vulnerability in a single digital component could threaten critical infrastructure across borders, serving as a primary catalyst for new regulations. Against this backdrop, the European Union has moved from soft law to hard deadlines. The Cyber Resilience Act (CRA), published in late 2024, marks the end of the “release first, patch later” era [2]. But as the legal text hardens into operational reality, product providers are discovering a daunting truth: the volume of work required to prove compliance creates a massive burden of evidence that may be impossible to meet with human effort alone.
While the legal requirements are clear, the path to demonstrating them is challenging. For about 90% of products (the “default” category), product providers must perform a self-assessment [3]. While providers are permitted to engage third-party assessors to assist with this process, the ultimate responsibility for compliance remains with the provider. Whether working internally or hiring an external auditor, the provider must still generate a continuous stream of evidence (technical documentation, risk assessments, and vulnerability reports) that proves they have fulfilled the essential and other requirements imposed by the CRA. As part of the compliance process, the manufacturer must ensure that the product enters the market without any known exploitable vulnerability. This is no small task, and it is not enough for the manufacturer merely to claim it: it must be proven through traceable audit trails that link identified risks to implemented controls, showing that each risk was not just found but adequately mitigated. Without automation, this demand for total traceability becomes a bottleneck that threatens to stall innovation.
Before considering the implementation and coverage of the CRA requirements, one first needs to become aware of the CRA and have a concrete, substantiated answer to the question: Does the Cyber Resilience Act apply to my product(s)?
Before evaluating security features, product providers must perform an applicability assessment. They must determine whether their product falls under a regulatory exclusion (e.g., medical devices or national security purposes). This initial screening is where AI tools can assist by parsing regulatory texts to generate the necessary exclusion justification documents, ensuring that the legal scope is defined correctly before technical work begins. Once applicability is confirmed, the product must be classified to determine the type of conformity assessment process it should undergo. The CRA categorizes products based on their potential impact [4]:
- Important Products with digital elements (Article 7), which are divided into i) Class I: products with key cybersecurity functions but lower systemic risk (e.g., standalone and embedded browsers, password managers, and software that searches for, removes, or quarantines malicious software) and ii) Class II: higher-risk products (e.g., firewalls, intrusion detection and prevention systems, hypervisors, and container runtime systems).
- Critical Products with digital elements (Article 8): Highly sensitive items requiring full third-party assessment (e.g., smart meter gateways, Hardware Security Modules).
Navigating this classification is the foundation for scoping CRA compliance. Automated decision-support tools act as a first line of defense, guiding operators through these lists to instantly determine the classification of their products with digital elements and to present their obligations in a simplified way, covering not only organizational and technical requirements but also the type of conformity assessment their products must undergo.
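Such a decision aid can be sketched as a simple lookup over product categories. The category lists below are illustrative excerpts, not the full Annex III/IV enumerations, and the exact-match lookup stands in for the NLP-based parsing a real tool would use:

```python
# Minimal sketch of a CRA classification decision aid.
# Category sets are illustrative excerpts of Annex III / Annex IV.

ANNEX_IV_CRITICAL = {"smart meter gateway", "hardware security module"}
ANNEX_III_CLASS_II = {"firewall", "intrusion prevention system", "hypervisor"}
ANNEX_III_CLASS_I = {"password manager", "standalone browser"}

def classify(product_type: str) -> str:
    """Return the CRA category that drives the conformity-assessment route."""
    p = product_type.lower().strip()
    if p in ANNEX_IV_CRITICAL:
        return "Critical (Annex IV): third-party assessment required"
    if p in ANNEX_III_CLASS_II:
        return "Important, Class II (Annex III)"
    if p in ANNEX_III_CLASS_I:
        return "Important, Class I (Annex III)"
    return "Default: self-assessment permitted"

print(classify("firewall"))          # Important, Class II (Annex III)
print(classify("password manager"))  # Important, Class I (Annex III)
print(classify("smart thermostat"))  # Default: self-assessment permitted
```

In practice the decision would be driven by the regulation's full product lists and the product's intended purpose, not a keyword table; the sketch only shows the branching structure.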
Once classified, the manufacturer needs to fulfil the applicable requirements of the CRA. This phase is unique to each product and each manufacturer: it starts from a product cybersecurity and resilience risk assessment and ends with the implementation of the necessary controls at all levels. It is recommended to follow the well-known steps of the PDCA (Plan-Do-Check-Act) cycle, whose Check phase verifies whether the cybersecurity objectives have been correctly and effectively implemented [5]. This means that for every requirement, suitable evidence has to be collected and evaluated, first in terms of completeness and then in terms of effectiveness.
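The Check step described above can be sketched as a completeness-then-effectiveness screen over collected evidence. The requirement identifiers and the "verified" field are hypothetical placeholders, not the CRA's actual evidence schema:

```python
# Sketch of the "Check" phase: for each requirement, evidence is screened
# first for completeness (anything collected at all?) and then for
# effectiveness (has it been verified?).

def check_requirement(evidence_items: list[dict]) -> str:
    if not evidence_items:
        return "gap: no evidence collected"       # completeness fails
    if not all(e.get("verified") for e in evidence_items):
        return "gap: evidence not yet verified"   # effectiveness fails
    return "conformant"

# Hypothetical evidence store keyed by requirement identifier.
evidence = {
    "ANNEX-I-1.3": [{"doc": "risk_assessment.pdf", "verified": True}],
    "ANNEX-I-2.1": [],
}

report = {req: check_requirement(items) for req, items in evidence.items()}
print(report)
# {'ANNEX-I-1.3': 'conformant', 'ANNEX-I-2.1': 'gap: no evidence collected'}
```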
The adoption of Integrated AI Frameworks is critical to solving the “Completeness” challenge. Recent research proposes a Composite AI Model that integrates Ensemble Learning with Generative AI (GPT-3.5) [6]. This approach leverages Natural Language Processing (NLP) to bridge the gap between unstructured data and structured compliance:
• Automated Audit Evidence Mapping (NLP): Compliance involves reading thousands of pages of technical documentation and regulatory text. NLP models can automatically parse these unstructured documents, identify relevant clauses, and map a product’s technical features directly to the CRA’s Security Functional Requirements (SFRs). This ensures that no legal obligation is overlooked due to human error.
• Prioritizing via Exploitability Prediction: Not all vulnerabilities are equally dangerous. The integrated model predicts the Exploit Prediction Scoring System (EPSS) score of each flaw. This allows providers to filter out noise and focus on the Control Vulnerability Risk: the specific vulnerabilities that are actually likely to be exploited in the wild.
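The exploitability-based triage can be sketched as a simple threshold filter. The EPSS scores below are made up for illustration; in practice they would come from the public EPSS feed or the model's own prediction:

```python
# Sketch of EPSS-based triage: keep only findings whose predicted
# exploitation probability exceeds a chosen threshold, highest first.

findings = [
    {"cve": "CVE-2023-0001", "epss": 0.92},
    {"cve": "CVE-2023-0002", "epss": 0.03},
    {"cve": "CVE-2023-0003", "epss": 0.41},
]

THRESHOLD = 0.10  # tuned to the provider's risk appetite

priority = sorted(
    (f for f in findings if f["epss"] >= THRESHOLD),
    key=lambda f: f["epss"],
    reverse=True,
)
print([f["cve"] for f in priority])  # ['CVE-2023-0001', 'CVE-2023-0003']
```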
The above description underlines the important role that suitable AI tools can play in the CRA compliance process. But the introduction of an AI tool raises questions regarding its trustworthiness. Trustworthy AI refers to the design, development, and deployment of AI systems that are lawful, ethical, and robust. It ensures that AI operates reliably and transparently, fostering confidence among users and regulators that the system’s decisions are accurate, fair, and secure. However, relying on AI to generate audit evidence for the conformity assessment may introduce new risks, such as algorithmic bias, inaccurate risk level identification, inappropriate security control selection, and more. Moreover, a lack of explanation of the AI system’s outcome for critical security risk mitigation may pose significant challenges for auditors who need to verify compliance. In this context, Trustworthy AI (T-AI) aims to ensure that AI systems are developed and used in a way that is secure, robust, and explainable to all user groups. T-AI significantly strengthens the certification process with traceable and explainable audit evidence and continuous conformity assurance. For instance, automated evidence generation, vulnerability exploitability prediction, and consistency of risk assessment and control selection are critical capabilities that rest on three distinct characteristics: Privacy, Security, and Explainability.
- Privacy: one of the key trustworthy AI characteristics, ensuring that the use of AI tools does not itself become a data leak. Before any training occurs, techniques like Differential Privacy and Metric-Based Auditing are applied to sanitize the dataset. This ensures that sensitive technical data, such as unpatched source code or proprietary architecture details, cannot be extracted or reverse-engineered by competitors or malicious actors, preserving the integrity of the provider’s intellectual property.
- Security: ensures that the AI system itself is robust against manipulation. Adversarial attacks can attempt to fool AI into classifying a risky product as safe. To prevent this, the T-AI pipeline employs Adversarial Hardening, training the model against specific attack vectors such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). This ensures the conformity assessment tool cannot be tricked by malicious inputs designed to hide vulnerabilities, maintaining the validity of the risk assessment.
- Explainability: ensures that decisions are understandable to humans. It uses tools like SHAP and LIME to generate clear, human-readable rationales for every risk decision. Crucially, explainability extends beyond risk identification to Control Adequacy Validation via Marginal Analysis. To explain why a control is effective, the system employs Marginal Analysis (a Counterfactual Sensitivity Test) [7]. It simulates a “what-if” scenario to calculate the Gap Closure: the precise percentage reduction in risk achieved by a specific control. Instead of a vague claim like “Secure,” the AI generates a specific proof: “The implemented Access Enforcement control reduces the risk of CVE-2023-XYZ by 39%, satisfying the adequacy threshold.” This combination of qualitative explanation and quantitative Marginal Analysis supports the Traceability of Audit Evidence, a fundamental requirement of the CRA. It allows auditors to trace the entire chain of evidence, and by visualizing this chain, T-AI transforms the AI from a “black box” into a transparent verification partner, providing the Proof of Completeness that auditors demand.
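The Gap Closure calculation can be sketched as a counterfactual comparison of the risk score with and without the control. The multiplicative risk model and the illustrative likelihood reduction below are assumptions for the sketch, not the framework's actual scoring model:

```python
# Counterfactual "gap closure" sketch: score the risk with and without a
# control and report the percentage reduction.

def risk_score(likelihood: float, impact: float) -> float:
    """Toy multiplicative risk model (placeholder for the real scorer)."""
    return likelihood * impact

def gap_closure(baseline: float, with_control: float) -> float:
    """Percentage of the baseline risk removed by the control."""
    return round(100 * (baseline - with_control) / baseline, 1)

# Hypothetical figures: the control cuts exploitation likelihood by 39%.
baseline = risk_score(likelihood=0.6, impact=9.0)            # no control
mitigated = risk_score(likelihood=0.6 * (1 - 0.39), impact=9.0)

print(f"Gap closure: {gap_closure(baseline, mitigated)}%")  # Gap closure: 39.0%
```

The point of the "what-if" structure is that both scores come from the same model on the same product state, differing only in the presence of the control, so the difference is attributable to that control alone.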
Concretely, each link in this chain of evidence records four elements:
- Security Objective: This refers to the high-level protective goal that the product must achieve (e.g., “Prevent Unauthorized Access” or “Ensure Data Integrity”). It establishes the operational context for why security is needed.
- Security Requirement: This identifies the specific legal mandate from the CRA that necessitates the security measure. For example, the NLP maps the objective directly to Annex I, Section 1.3 of the regulation.
- Risks: This identifies the threat that could be realized in a product by the exploitation of a known vulnerability (e.g., CVE-2023-XYZ). The AI uses this to extract risk levels, highlighting exactly what could go wrong.
- Implemented Control: This describes the specific technical countermeasure deployed to fix the issue, such as Multifactor Authentication or an Input Validation Filter. Within the Integrated AI Framework, this control is validated by Marginal Analysis to prove it is adequate.
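One way to picture a single link in this chain is as a structured record holding all four elements plus the Marginal Analysis result. The field names and values below are illustrative, not the actual CURIUM data model:

```python
# Sketch of one traceable audit-evidence record linking the four elements
# above. Values are the worked example from the text.

from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class EvidenceRecord:
    security_objective: str   # why protection is needed
    cra_requirement: str      # legal anchor in the regulation
    risk: str                 # vulnerability realizing the threat
    implemented_control: str  # countermeasure deployed
    gap_closure_pct: float    # result of the marginal analysis

record = EvidenceRecord(
    security_objective="Prevent Unauthorized Access",
    cra_requirement="Annex I, Section 1.3",
    risk="CVE-2023-XYZ",
    implemented_control="Multifactor Authentication",
    gap_closure_pct=39.0,
)
print(asdict(record))
```

Because the record is immutable (`frozen=True`) and serializable, a chain of such records can be exported as-is for an auditor to walk from objective to control.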
The transition to the Cyber Resilience Act represents a watershed moment for digital product security, shifting the industry from a culture of reactive patching to one of proactive, verifiable resilience. While the burden of evidence, completeness, and continuous monitoring presents a significant hurdle for product providers, the adoption of AI offers a viable path forward. By leveraging Integrated AI frameworks to automate evidence generation and Trustworthy AI principles to ensure the validity of those results, organizations can transform compliance from a static administrative cost into a dynamic security advantage. Ultimately, this symbiotic relationship between regulation and advanced technology is what will enable the European Union to build a truly secure digital ecosystem, where trust is not just claimed, but rigorously engineered and proven.
This is the path that the CURIUM project has been walking since the beginning of 2025. CURIUM is funded by the European Cybersecurity Industrial, Technology and Research Competence Centre (the granting authority), under powers delegated by the European Commission, under Grant Agreement No. 101190372 [8]. The project has been implementing tools covering all the steps mentioned above by leveraging Trustworthy AI. Specifically, the project has created:
- The CyReA tool, a questionnaire that allows the user to identify whether the product under evaluation falls within the scope of the CRA and provides an indication of the product’s classification.
- The DPRA tool, an AI-enhanced risk assessment tool, allowing users to perform an evidence-based risk assessment. The risk assessment starts with the identification of vulnerabilities in the product’s components, based on information from available vulnerability databases, and the classification of risks based on the exploitation potential of the identified vulnerabilities.
- The DPMA tool, which consolidates information from multiple standards to provide a unified database of possible controls that can be implemented for each threat.
- The CAC tool features an automated self-assessment process with full visualization, allowing users to clearly understand the requirements and identify potential gaps in compliance with the CRA. The tool includes comprehensive technical documentation management capabilities, assisting users in the creation of technical documentation in line with the requirements of the Cyber Resilience Act (CRA). The CAC tool also enables the import of Software Bill of Materials (SBOM) files, allowing users to define the key components of their software products.
- The PSTVA tool is a customizable vulnerability assessment toolkit designed for efficiency and simplicity. It integrates multiple open-source tools to perform security assessments of web services and Active Directory environments, practically assisting organizations in fulfilling the relevant requirements of the CRA.
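As an illustration of the SBOM import step mentioned above, a minimal CycloneDX-style JSON document can be reduced to a component list as follows. The document below is a hand-made example, not CURIUM's actual input format:

```python
# Sketch of an SBOM import: extract component names and versions from a
# CycloneDX-style JSON document.

import json

sbom_json = """{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "openssl", "version": "3.0.13"},
    {"type": "library", "name": "zlib", "version": "1.3.1"}
  ]
}"""

sbom = json.loads(sbom_json)
components = [(c["name"], c["version"]) for c in sbom.get("components", [])]
print(components)  # [('openssl', '3.0.13'), ('zlib', '1.3.1')]
```

The resulting (name, version) pairs are exactly what a vulnerability lookup against public databases, as performed by the DPRA tool, would start from.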
For more information on the CURIUM project and to learn how you can gain access to these free tools and services, contact us at https://curium-project.eu/contact/
References
1. Reuters: “Enercon says 5,800 German wind turbines affected by Viasat outage,” Reuters, February 28, 2022.
2. European Union: Regulation (EU) 2024/2847 of the European Parliament and of the Council of 23 October 2024 on horizontal cybersecurity requirements for products with digital elements.
3. European Commission / IAPP: “Impact Assessment Report accompanying the Cyber Resilience Act,” SWD(2022) 282 final.
4. CRA Annexes: Regulation (EU) 2024/2847, Annex III (Important Products) and Annex IV (Critical Products).
6. Islam, S. et al.: “Hybrid AI-Based Dynamic Risk Assessment Framework with Explainable AI Practices for Composite Product Cybersecurity Certification,” Eur. Phys. J. C, 2025.
7. Fraunhofer IBMT: “A Holistic Trustworthy AI Pipeline for Building Trusted AI-Enabled Applications,” Proceedings of the Fraunhofer IBMT Industrial Pilot, 2025.

