Maintaining human oversight in AI-enhanced software development


In this Help Net Security interview, Martin Reynolds, Field CTO at Harness, discusses how AI can enhance the security of software development and deployment. However, increased reliance on AI-generated code introduces new risks, requiring human oversight and integrated security practices to ensure safe software delivery.

How can AI be further leveraged to improve the security of software development and deployment?

AI can be used to automatically analyze code changes, test for flaws and vulnerabilities, and assess the potential impact of any change. AI can also be used to automatically roll back problematic deployments.
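
To make the rollback idea concrete, here is a minimal Python sketch of a post-deployment verification loop that reverts a release when its error rate spikes; the metrics endpoint, service name, and threshold are all hypothetical, and a mature CD platform would provide this kind of verification natively.

```python
import subprocess
import time

import requests

# All names below are hypothetical placeholders for illustration.
METRICS_URL = "https://metrics.example.com/api/error-rate"
ERROR_RATE_THRESHOLD = 0.05  # roll back if more than 5% of requests fail
VERIFICATION_WINDOW = 300    # watch the new release for five minutes
POLL_INTERVAL = 30           # seconds between metric checks


def current_error_rate() -> float:
    """Fetch the error rate of the newly deployed version."""
    resp = requests.get(METRICS_URL, params={"service": "checkout"})
    resp.raise_for_status()
    return resp.json()["error_rate"]


def rollback() -> None:
    """Revert to the previous known-good release."""
    subprocess.run(["kubectl", "rollout", "undo", "deployment/checkout"], check=True)


def verify_deployment() -> None:
    deadline = time.time() + VERIFICATION_WINDOW
    while time.time() < deadline:
        if current_error_rate() > ERROR_RATE_THRESHOLD:
            rollback()
            print("error rate exceeded threshold; rolled back")
            return
        time.sleep(POLL_INTERVAL)
    print("deployment verified: error rate stayed within threshold")


if __name__ == "__main__":
    verify_deployment()
```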

What’s more, Generative AI can go one step further – acting as a live assistant for developers. Large Language Models (LLMs) can help developers not only create new code faster, but also triage and analyze vulnerabilities immediately. Security backlogs and critical issues can be addressed quickly, with significantly reduced toil.
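
As a rough illustration of the “live assistant” idea, the sketch below sends a static analysis finding to an LLM for triage, assuming the OpenAI Python SDK; the model name and the finding itself are placeholders, and any suggested fix still requires human review.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical finding from a static analysis scan.
finding = """
Rule: SQL built via string concatenation
File: orders.py, line 42
Snippet: cursor.execute("SELECT * FROM orders WHERE id = " + order_id)
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model your organization has approved
    messages=[
        {
            "role": "system",
            "content": "You are a security triage assistant. Classify the severity "
                       "of the finding, explain the risk, and propose a minimal fix.",
        },
        {"role": "user", "content": finding},
    ],
)

# Print the triage suggestion; a human reviews it before any change is merged.
print(response.choices[0].message.content)
```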

What are the potential security risks associated with AI-generated code?

As more developers lean on Generative AI to help them write code, the sheer volume of code shipped is increasing by an order of magnitude. We expect the manual toil developers undertake to test and remediate security issues to increase in line with that growth. In other words, as more code is generated, it becomes harder for developers to keep up with the work needed to test, secure, and remediate issues in every line of code they deliver.

If developers can’t effectively check code for security issues, it’s more likely that flaws and vulnerabilities will creep into production, with businesses facing increased downtime and breaches as a result. It’s not that AI-generated code introduces new security gaps; it means that even more code will make its way through existing gaps. This increases the risk of bugs and vulnerabilities escaping into production, which could create a major headache for developers. For example, when the Log4j vulnerability was first disclosed, it took enterprises months to ascertain its full impact on their organization and fix it. With Generative AI creating yet more code to sift through, developers would have to find the same needle in a much larger and ever-growing haystack.

How can organizations mitigate these risks using AI code completion tools like GitHub Copilot or Amazon CodeWhisperer?

Code generation tools such as these can help mitigate some of the risk, but they don’t form the whole solution. The problem is that most of the additional work comes in the downstream stages, such as testing and deployment. Whilst AI-enabled copilots can help speed up code creation, they aren’t perfect, and can still add to developers’ workload in the later stages of software delivery. Research shows that AI copilots introduce software bugs around 40% of the time. As a result, any productivity gained by using these code generation tools can be quickly offset by the increase in cycles developers must spend on testing and security.

Instead, the likes of GitHub Copilot and Amazon CodeWhisperer should be used alongside an Internal Developer Platform (IDP), underpinned by well-governed Continuous Delivery (CD). An IDP helps by providing a single unified view of every process – from build right through to security and deployment. This helps developers retain control and oversight over every aspect of software delivery, so they can act quickly when needed. IDPs are also best supported by modern DevOps practices, underpinning the need for reliable, automated pipelines. In this way, organizations can empower developers by giving them access to AI in a way that is well-governed and safe for the entire business.
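
As a simplified sketch of what “well-governed” delivery means in practice, the Python below models a pipeline in which every stage is a hard gate before promotion; the stage names and no-op implementations are illustrative stand-ins for what an IDP or CD platform would supply as configuration.

```python
from typing import Callable

# Placeholder stage implementations; in a real IDP these are pipeline steps,
# not Python functions. Each returns True on success.
def build() -> bool:
    return True

def run_tests() -> bool:
    return True

def security_scan() -> bool:
    return True

def deploy() -> bool:
    return True

PIPELINE: list[tuple[str, Callable[[], bool]]] = [
    ("build", build),
    ("test", run_tests),
    ("security-scan", security_scan),
    ("deploy", deploy),
]

def run_pipeline() -> None:
    for name, stage in PIPELINE:
        # Every stage is a hard gate: AI-generated code gets no shortcut to
        # production, and every result is visible in one unified view.
        if not stage():
            raise RuntimeError(f"pipeline halted at gated stage: {name}")
        print(f"stage passed: {name}")

if __name__ == "__main__":
    run_pipeline()
```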

How important is human oversight when working with AI-generated code?

Whilst AI and automation will be vital tools for mitigating security risks, it’s imperative that humans retain control. If the technology is left to govern itself, there’s a real risk of bugs and vulnerabilities making their way into production. To that end, it’s critical that developers still have visibility and control over everything happening within the software development lifecycle (SDLC).

This involves retaining control of the policies used to govern AI-code production, and having visibility of all pipelines to ensure security flaws don’t go unnoticed. IDPs go a long way towards giving developers the visibility and control they need to ensure AI is aiding and not harming efforts to deliver software securely.

What best practices should organizations implement to ensure the security and accuracy of AI-generated code?

There are a few steps companies can take to reduce the risk of AI-generated code. Firstly, organizations should ensure security is integrated into every phase of the SDLC. This involves having secure, governed pipelines that can automate every test, verification, and check. Automated testing not only drives efficiency, giving developers more time to spot issues, but also ensures no code can slip through the cracks by automatically flagging flaws. Businesses can also adopt a policy-as-code approach to the entire software delivery process, so that any code failing to meet strict standards for availability, performance, and security is blocked from reaching production.
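
To illustrate the policy-as-code idea, here is a minimal sketch that evaluates a release candidate against versioned policy rules; the check results and thresholds are hypothetical, and production setups usually express such rules in a dedicated policy engine such as Open Policy Agent rather than in application code.

```python
# Hypothetical results gathered from earlier pipeline stages.
release_candidate = {
    "test_pass_rate": 1.0,          # fraction of tests passing
    "critical_vulnerabilities": 0,  # from the security scan
    "p99_latency_ms": 180,          # from performance tests
}

# Policies are data, versioned alongside the code they govern.
POLICIES = [
    ("all tests must pass", lambda rc: rc["test_pass_rate"] == 1.0),
    ("no critical vulnerabilities", lambda rc: rc["critical_vulnerabilities"] == 0),
    ("p99 latency under 250 ms", lambda rc: rc["p99_latency_ms"] < 250),
]

violations = [name for name, check in POLICIES if not check(release_candidate)]
if violations:
    raise SystemExit(f"release blocked by policy: {violations}")
print("all policies passed; release may proceed")
```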

Another crucial step enterprises should take is to extend secure software delivery practices beyond their own four walls. As the SolarWinds and MOVEit incidents showed, it’s not enough for businesses to simply secure their own SDLC. Developer and security teams must have a way of automating the monitoring and control of any open source software components and third-party artifacts in use within the organization. This includes the ability to generate a Software Bill of Materials (SBOM), which acts as an inventory of the external components in use. It should also involve rigorous code attestation using the SLSA (Supply-chain Levels for Software Artifacts) framework.
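
As a small illustration, the sketch below reads an SBOM in the CycloneDX JSON format (for example, one generated by a tool such as Syft) and flags components that appear on an advisory list; the file path and the advisory data are hypothetical.

```python
import json

# Hypothetical advisories mapping a package name to affected versions.
ADVISORIES = {
    "log4j-core": {"2.14.0", "2.14.1"},
}

# Load a CycloneDX SBOM produced earlier in the pipeline.
with open("sbom.cyclonedx.json") as f:
    sbom = json.load(f)

# CycloneDX lists third-party components under the "components" key.
for component in sbom.get("components", []):
    name, version = component.get("name"), component.get("version")
    if version in ADVISORIES.get(name, set()):
        print(f"vulnerable component in use: {name} {version}")
```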

Lastly, organizations can embrace shift-left in their approach to software development and security. This means integrating security and testing earlier in the SDLC. By giving developers the information they need much sooner, with the aid of automated security scanners and IDPs, security issues can be rectified quickly before they hit production. Moreover, shift-left security promotes greater collaboration between development, operations, and security teams. Involving security experts from the beginning fosters better communication and understanding of security requirements.
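
As one concrete way to shift security left, the sketch below runs Bandit, an open source static analyzer for Python, against a set of changed files and fails the check when findings are reported; the file paths and severity policy are illustrative.

```python
import subprocess
import sys

# Files changed in the commit under review; in CI this would come from the diff.
changed_files = ["app/orders.py", "app/payments.py"]  # hypothetical paths

# Bandit exits non-zero when it reports issues at or above the given severity.
result = subprocess.run(
    ["bandit", "--severity-level", "medium", *changed_files],
    capture_output=True,
    text=True,
)

print(result.stdout)
if result.returncode != 0:
    # Fail fast: the finding is surfaced to the developer before merge,
    # not discovered in production.
    sys.exit("security scan failed: fix findings before this change can merge")
```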


