AI in Software Development: Balancing Innovation and Security in an Era of Lowered Barriers


AI is reshaping software development. The advent of sophisticated AI models such as DeepSeek and Ghost GPT has democratized access to powerful AI-assisted coding tools, pushing the boundaries of innovation (a staggering 76% of developers are either already leveraging AI coding tools or planning to do so in the near future) while lowering the barrier to entry for new developers. This growth is a double-edged sword, however: the same technology that promises unprecedented efficiency and accuracy for developers also lowers the barrier to entry for less skilled hackers, amplifying the urgency for robust security measures as vulnerabilities in AI-generated code emerge.

This widespread adoption of AI underscores a critical need for continuous education and for stringent governance and policies within organizations to ensure secure AI practices in software development. Recognizing these challenges, initiatives such as the OWASP AI project are quickly stepping up to address the burgeoning concerns surrounding AI security, providing developers with precise, practical advice on how to design, develop, and test systems that prioritize security and privacy, and ultimately paving the way toward a safer digital future.

Lowering the bar, expanding the risk

AI-assisted coding has made software development accessible to a wider audience by speeding up initial training and providing open access to those just starting out. Even the most proficient developers report that AI tools are enhancing the quality of their code, reducing production-level incidents, and boosting overall code output, all of which are key metrics for assessing developer performance. AI coding tools are also proving popular for collaborative tasks such as code reviews and pair programming, with the aim of fostering a more efficient development environment.

However, this accessibility comes with its own set of security challenges. If AI is increasingly being used to write code, there is an even greater impetus for security education and training within development teams, because the developer's role shifts from writing code to reviewing code that is AI-generated. The principle of 'trust no one, verify everything' is paramount and must be applied to the use of AI in software development. The integrity of AI-assisted code relies on developers treating the outputs of LLMs as untrusted data; however, many developers lack the knowledge, education, visibility, and context needed to identify risks in AI-generated source code or to effectively triage AppSec findings. In a similar way, AI models often lack the contextual awareness and intent of human developers, which affects code quality and compromises the overall security of a system when they are over-relied upon in the software development process. As such, developers must establish a thorough understanding of AI-generated code in order to proactively interrogate vulnerabilities and validate source code before deployment.
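
To make that shift from writing code to reviewing it concrete, here is a minimal sketch, assuming a Python codebase; the users table, function names, and query are hypothetical and not drawn from any real assistant's output. The first function mirrors a pattern an AI model can plausibly produce when it lacks security context, while the second is what a reviewer treating the output as untrusted data should insist on.

    import sqlite3

    # Hypothetical, illustrative example: a user lookup of the kind an AI
    # assistant might plausibly generate. Building the query with string
    # formatting lets untrusted input flow straight into SQL (injection risk).
    def lookup_user_insecure(conn: sqlite3.Connection, username: str):
        query = f"SELECT id, email FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchall()

    # What a security-aware review should turn it into: the input is treated
    # as untrusted data and passed as a bound parameter, so the database
    # driver handles escaping rather than the query string itself.
    def lookup_user_secure(conn: sqlite3.Connection, username: str):
        return conn.execute(
            "SELECT id, email FROM users WHERE username = ?", (username,)
        ).fetchall()

The specific vulnerability matters less than the habit: every AI-generated change deserves the same scrutiny as a pull request from an unknown contributor.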

Similarly, while AI opens the door to a long line of budding developers by reducing the skill required to produce code, it also extends an inadvertent invitation to attackers to exploit vulnerabilities when security is put on the back burner. AI models such as Ghost GPT empower less experienced hackers to launch attacks and allow adept malicious actors to operate at scale and enhance the sophistication of their exploits. Paired with the lack of education and experience among novice developers using AI models in development, this leaves cybercriminals rubbing their hands.

Empowering human intervention 

Organizations must be reminded that AI is not an entity, but rather a supplementary tool deployed to support the people who use it. As such, we cannot underestimate the importance of people in building security in from the start of any development process, and secure coding education for developers is vital to ensuring a base level of security. Just one in five organizations is confident in its ability to detect a vulnerability before an application is released, meaning that the security knowledge in most development lifecycles is insufficient. Yet the period when code is being produced is a critical time to detect and remediate vulnerabilities, and developers need to be trained both to create secure software and to sniff out insecure code in the rest of the code base, responding to it and fixing it quickly. Without this, AppSec and security teams are left carrying an unnecessary security burden, which ultimately costs more time and spend and increases business risk.

Better training is therefore required so that teams relying on generative AI are more capable of spotting mistakes early and so that architectures are hardened against attack. Done well, training also arms developers with the knowledge needed to use AI models more effectively. Tools like GitHub Copilot and ChatGPT show much promise in making developers more efficient and may in time reduce insecure code; however, as with the promise of self-driving cars, that reality is probably still decades away.

A multi-tiered approach to protecting your organization from attacks is still best practice today. Minimizing exposure means combining:

  • A well-educated staff
  • A culture that values security
  • The use of automated tools (static and dynamic analysis tools, for example; see the sketch after this list)
  • A security-first software development lifecycle
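
As a rough illustration of the automated-tools point above, and assuming a Python codebase with the open-source static analyzer Bandit installed, a build can be gated on the analyzer's findings. The source path and severity threshold below are assumptions chosen for the example, not a prescribed toolchain.

    import json
    import subprocess
    import sys

    # Sketch: run Bandit over a source tree and fail the build if it reports
    # any medium- or high-severity findings. Assumes Bandit is installed and
    # that the code under review lives in "src".
    def gate_on_static_analysis(path: str = "src") -> int:
        result = subprocess.run(
            ["bandit", "-r", path, "-f", "json"],
            capture_output=True,
            text=True,
        )
        report = json.loads(result.stdout or "{}")
        findings = [
            issue for issue in report.get("results", [])
            if issue.get("issue_severity") in ("MEDIUM", "HIGH")
        ]
        for issue in findings:
            print(f"{issue['filename']}:{issue['line_number']}: {issue['issue_text']}")
        return 1 if findings else 0

    if __name__ == "__main__":
        sys.exit(gate_on_static_analysis())

Automation of this kind does not replace the educated staff or the security-first culture in the list above; it simply makes their judgment repeatable on every build.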

Our understanding of AI models is bound to mature, but the threats facing AI are unlikely to change drastically, and the ways we secure AI are fundamentally the same as the ways we secure everything else. As such, we will benefit from practicing and instilling comprehensive security fundamentals across organizations. By going back to basics, most of the emerging threats introduced by AI can be mitigated by practices already understood and implemented across the board, decreasing the likelihood of vulnerabilities in code and slamming the door shut on attackers.

When looking to utilize AI in software development, organizations should educate first, then innovate, realizing the importance of secure code training as the AI threat landscape evolves. This approach helps establish a comprehensive understanding of any vulnerabilities that may arise in code or elsewhere, whether through inadvertent oversight or opportunistic malicious intent. By taking the necessary steps to foster and maintain fundamental security principles through continuous security training, development teams can balance risk and reward, ensuring the secure deployment of AI in development and beyond.

About the Author

Michael Birch is an ex-Army Green Beret turned application security engineer. He currently serves as Director of Application Security at Security Journey, where he is responsible for creating vulnerable code examples and educating developers on the importance of establishing solid security principles across the entire SDLC.

Before joining Security Journey, Michael worked as a Senior Cyber Security Specialist for the North Carolina National Guard. In this role, he led the Cyber Network Defender program and mentored a team of soldiers in communication engineering and security. Michael also served as a Cyber Security Specialist, responding to incidents, conducting network assessments, and maintaining security systems.

Michael can be reached online at the Security Journey website and LinkedIn.


