NIS2 and chain liability's impact on Secure Software Development


If you are a software supplier and your customer is covered by the EU NIS2 directive, you might very well be affected as well.

By Michael Rask Christensen, CISSP, CSSLP at NNIT A/S in Copenhagen

Being based in Copenhagen, Denmark, I can report that the NIS2 directive, which was recently adopted by the European Council, is the topic everyone here is talking about. In fact, NIS2 is at the top of the agenda among IT security specialists throughout the European Union. Companies and organizations are discussing what exactly the impact on their businesses will be.

What is NIS2?
NIS2 is a directive as opposed to GDPR, which is a regulation. The difference is that a directive does not have direct legal effect but must be transposed into local legislation by the member states. The deadline for implementing the directive is mid-October 2024. The NIS2 directive affects a long list of critical and important industries, which will be audited on their cybersecurity, covering both procedures and actual technical implementations.

The purpose of NIS2 is summed up in one of the recitals of the directive. Recital (77) says (my emphasis): “Responsibility for ensuring the security of network and information systems lies, to a great extent, with essential and important entities. A culture of risk management, involving risk assessments and the implementation of cybersecurity risk management measures appropriate to the risks faced, should be promoted and developed.” But in the end, the NIS2 directive is aiming for nothing less than a change of culture!

The NIS2 directive covers a lot of content – 73 pages if you download it as a PDF from the EUR-Lex website. An HTML version can be accessed at https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32022L2555. Even legal advisors have probably had to read it more than once to get all the details. This article will focus on one of these details – the chain liability that will affect software suppliers even outside the EU.

Chain liability in NIS2
The types of businesses covered directly by NIS2 are quite well defined in Annexes I and II of the directive. The sectors of high criticality include, for example, energy, transport, banking, health, digital infrastructure, and public administration. There is more to the list, but if your organization is in the EU and operates in one of these areas, it is directly affected by NIS2 and will be held directly accountable for its security measures.

Of course, the directive does not specify exactly how organizations should implement their security measures. That would be outside the scope of the legislators. But Article 21, “Cybersecurity risk-management measures”, lists the areas that the essential and important entities must cover when protecting their IT systems. The directive uses the term “an all-hazards approach”.

In Article 21, paragraph 3, it is further stated that “Member States shall ensure that, when considering which measures referred to in paragraph 2, point (d), of this Article are appropriate, entities take into account the vulnerabilities specific to each direct supplier and service provider and the overall quality of products and cybersecurity practices of their suppliers and service providers, including their secure development procedures.” Paragraph 2, point (d) lists “supply chain security, including security-related aspects concerning the relationships between each entity and its direct suppliers or service providers” as one of the measures that must be included in the all-hazards approach.

When it comes to supply chain security, the directive does not distinguish between suppliers residing within the EU and those outside the union. The companies covered by NIS2 all have an obligation to ensure the security of all their suppliers.

The CIO of Copenhagen Airports recently gave a speech at a NIS2 conference. He mentioned that they had asked one of their SaaS providers to write an account of how they backed up customer data and how they performed restore tests. The SaaS provider’s reply was, “We are in the cloud, so we don’t have to do backups and restores”. I am afraid that this is a widespread misconception among SaaS providers!

Public tenders have for a while included specific requirements where we, as a custom application developer, must document our secure development procedures in detail as part of the offer. With NIS2, this will not only be a requirement from the public sector in the EU member states but will also be required by companies that are audited against the NIS2 requirements. If you are in the business of developing, say, water-treatment plant design software, your customers in the EU will be asking you these things whether your company is based in the USA or in the EU.

Secure development procedures
Secure development procedures are not about adding security features to the software. Those should already be functional requirements and be tested accordingly. However, security features may become requirements as a result of the secure development process.

A secure development procedure focuses on how to ensure that the application becomes secure. It starts with a threat model:

  • Model the software solution.
  • Identify the attack surfaces, that is, everywhere someone can interact with the application, either through a user interface or an API.
  • Identify dependencies on external components including open-source libraries.
  • Analyze the data model and identify data that needs to be protected. That could be personally identifiable data or intellectual property.
  • List all relevant threats.

Based on the identified threats, define requirements that mitigate them. Implement these requirements and test whether they actually protect the application.
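To make the idea concrete, here is a minimal sketch (in Python, purely for illustration) of how the output of such a threat-modeling exercise could be captured. The solution, attack surfaces and threats in the example are hypothetical, not taken from any real project.

```python
from dataclasses import dataclass, field


@dataclass
class Threat:
    """One identified threat and the requirement that mitigates it."""
    description: str
    affected_surface: str        # e.g. an API endpoint or a login form
    mitigation_requirement: str  # becomes a testable security requirement


@dataclass
class ThreatModel:
    """Minimal record of a threat-modeling session (illustrative only)."""
    solution: str
    attack_surfaces: list = field(default_factory=list)
    external_dependencies: list = field(default_factory=list)
    protected_data: list = field(default_factory=list)
    threats: list = field(default_factory=list)


# Hypothetical example for a document-upload service:
model = ThreatModel(
    solution="Document upload service",
    attack_surfaces=["Web UI login form", "REST API /upload"],
    external_dependencies=["open-source PDF parsing library"],
    protected_data=["customer names and e-mail addresses"],
    threats=[
        Threat(
            description="Malicious PDF triggers code execution in the parser",
            affected_surface="REST API /upload",
            mitigation_requirement="Parse uploads in a sandboxed process",
        ),
    ],
)

# Every mitigation requirement should end up as at least one test case.
for threat in model.threats:
    print("Requirement to test:", threat.mitigation_requirement)
```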

A recommended framework for secure software development is the Microsoft Security Development Lifecycle (SDL). Some of us remember that 20 years ago, Windows was referred to as a hacker’s paradise; fancy new functions, not security, seemed to be at the top of the agenda in Redmond. In 2004 the SDL became an integral part of Microsoft’s development process. Microsoft has made its SDL practices publicly available at https://www.microsoft.com/en-us/securityengineering/sdl. The SDL consists of twelve main practices:

Practice 1 – Training: Security is not only the responsibility of a handful of security experts. Everyone involved in the development process should be trained in the SDL practices and standards.

Practice 2 – Security Requirements: The result of the risk assessment is a set of security requirements.

Practice 3 – Metrics and Compliance Reporting: What cannot be measured is usually forgotten!

Practice 4 – Threat Modeling: Microsoft has some free tools available for threat modeling. The threat modeling will identify vulnerabilities and risks, which form input to the security requirements.

Practice 5 – Design Requirements: When security features are added to the design, it is important to know exactly why and what vulnerabilities the features are supposed to handle.

Practice 6 – Cryptography Standards: Define which cryptography standards are used where and why. At this point it is also worth stressing the importance of using libraries and methods that are generally accepted by the industry. And remember that today’s secure cryptography standard is tomorrow’s insecure standard, so the design must be flexible enough to handle new standards. I personally remember when export of 128-bit encryption software from the USA was restricted because the government thought it was too hard to break (I downloaded a 128-bit Netscape browser via my then-employer’s VPN, but that’s another story).
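As an illustration of that kind of flexibility, here is a minimal sketch in Python using the widely available cryptography package. The registry idea, the algorithm labels and the policy choice are my own illustration, not something prescribed by the SDL.

```python
# pip install cryptography
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

# Central registry: the rest of the code base refers to an algorithm *label*,
# so a deprecated standard can be replaced in one place.
AEAD_REGISTRY = {
    "aes-256-gcm": AESGCM,
    "chacha20-poly1305": ChaCha20Poly1305,
}

CURRENT_STANDARD = "aes-256-gcm"  # illustrative policy choice


def encrypt(plaintext: bytes, key: bytes, algorithm: str = CURRENT_STANDARD) -> bytes:
    cipher = AEAD_REGISTRY[algorithm](key)
    nonce = os.urandom(12)  # 96-bit nonce, valid for both AEADs above
    # Prepend the nonce so the ciphertext is self-contained.
    return nonce + cipher.encrypt(nonce, plaintext, None)


def decrypt(ciphertext: bytes, key: bytes, algorithm: str = CURRENT_STANDARD) -> bytes:
    cipher = AEAD_REGISTRY[algorithm](key)
    nonce, body = ciphertext[:12], ciphertext[12:]
    return cipher.decrypt(nonce, body, None)


key = AESGCM.generate_key(bit_length=256)
secret = encrypt(b"design document", key)
assert decrypt(secret, key) == b"design document"
```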

Practice 7 – Manage the Security Risk of Using Third-Party Components: Hardly anyone codes everything from the bottom up. All professional software is built on frameworks and includes third-party libraries and services. When you include these components in your software solution, their security directly influences the security of your application.
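One way to keep track of this risk is to check your pinned dependencies against a public vulnerability database. The sketch below queries the OSV database (https://osv.dev); the package name and version are purely illustrative, and in practice you would iterate over your full dependency manifest or SBOM.

```python
# Minimal sketch: ask the public OSV database whether a pinned third-party
# dependency has known vulnerabilities. Package and version are illustrative.
import json
import urllib.request

query = {
    "package": {"name": "jinja2", "ecosystem": "PyPI"},
    "version": "2.4.1",
}

request = urllib.request.Request(
    "https://api.osv.dev/v1/query",
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    result = json.load(response)

for vuln in result.get("vulns", []):
    print(vuln["id"], "-", vuln.get("summary", "no summary available"))
```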

Practice 8 – Use Approved Tools: What compilers and compiler settings should be used? The code should always be compiled with up-to-date compilers with the latest approved patches.

Practice 9 – Static Analysis Security Testing: There is a long list of tools for performing static analysis security testing. At my own company, we use one that performs the analysis on the binaries, so that all third-party libraries are analyzed as well.
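As a simple illustration of wiring a static analysis tool into a build step, the sketch below runs Bandit, a freely available SAST tool for Python code, and fails the build when it reports findings. Bandit is just an example (it is not the binary-level analyzer mentioned above), and the source path and report file name are assumptions.

```python
# Sketch: gate a build on static analysis findings (Bandit used as an example).
import subprocess
import sys

result = subprocess.run(
    ["bandit", "-r", "src/", "-f", "json", "-o", "bandit-report.json"],
    check=False,
)

# Bandit exits with a non-zero status when it finds issues.
if result.returncode != 0:
    print("Static analysis reported findings; see bandit-report.json")
    sys.exit(1)
```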

Practice 10 – Dynamic Analysis Security Testing: Dynamic analysis security testing exercises the application at run time. This should include web scanning but could also include fuzzing, where the application’s UIs and APIs are fed extreme data to see whether conditions such as buffer overflows are handled correctly.
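To show the fuzzing idea in its simplest form, here is a toy harness in Python that feeds random and oversized payloads into an input-handling function and checks that it only ever fails in a controlled way. The parse_request function is a hypothetical stand-in for a real API entry point.

```python
# Toy fuzzing harness (illustrative only).
import random


def parse_request(payload: bytes) -> dict:
    """Hypothetical stand-in for real input handling; rejects oversized input."""
    if len(payload) > 1024:
        raise ValueError("payload too large")
    return {"length": len(payload)}


random.seed(0)  # reproducible runs make failures easy to replay

for round_number in range(1000):
    size = random.choice([0, 1, 512, 1024, 1025, 1_000_000])
    payload = random.randbytes(size)
    try:
        parse_request(payload)
    except ValueError:
        pass  # controlled rejection is the expected behaviour
    except Exception as unexpected:
        print(f"Round {round_number}: unexpected {unexpected!r} for size {size}")
        raise
```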

Practice 11 – Penetration Testing: A skilled penetration tester is in fact a hacker who is working for you and not for the “Dark Side”. The penetration test is often performed in conjunction with the other tests mentioned above, but it may reveal vulnerabilities not found otherwise.

Practice 12 – Standard Incident Response Process: Even a relatively simple application like Notepad++ currently has five known vulnerabilities (https://cve.report/vendor/notepad-plus-plus), two of which are rated with high severity. All software vendors need a standard incident response process. You need to plan, in advance, how to handle security incidents in your software product, how to inform your customers, and how to fix the problem.

ISO standards

The NIS2 directive points to various ISO standards as relevant references. It refers to ISO 30111 and ISO 29147, which cover the processes around vulnerability handling and vulnerability disclosure. These are a good source of inspiration for covering the above-mentioned Practice 12, “Standard Incident Response Process”.

ISO 27001 and the related standards (see https://www.iso.org/isoiec-27001-information-security.html) are generally referred to as a good framework for building a security organization that meets the NIS2 requirements. If your company holds an ISO 27001 certification, it is usually a good starting point for building trust between you and a customer that is defined as a critical entity by NIS2. But, of course, having all the right policies in place is just a good start. The policies themselves do not prevent anyone from hacking your systems! It is generally recommended to combine the ISO standards with the 18 CIS Controls: https://www.cisecurity.org/controls/cis-controls-list.

Conclusion
If your company delivers software or software services to any company or organization that is deemed critical or important by the NIS2 directive, you will indirectly come under the same regime. You may see this as a threat or as an opportunity to improve your already great product or service. If you are located in the USA, I bet that similar actions to protect critical infrastructure will be adopted sooner or later!


