Pieter Danhieux, Co-Founder and CEO, Secure Code Warrior
The possibilities of generative AI (GAI) technology have had both developers and non-developers wide-eyed with excitement, particularly around automation, productivity and business development. What makes it so engaging is that it’s clearly more than just hype: Developers are finding real use cases for GAI, signaling the likelihood that it will become an everyday tool in most roles before long.
However, the free rein some developers have been given to experiment with GAI tools has meant that many security processes are overlooked, leading to poor, insecure coding patterns and opening the door to new threats. This is on top of the other potential security issues GAI is capable of creating.
ChatGPT, the most prominent GAI application, has had a rough first year when it comes to security. Only a few months after its launch, OpenAI disclosed that it took ChatGPT offline due to a bug in an open-source library, potentially exposing payment-related information. Purdue University recently found that ChatGPT gave wrong answers to programming questions 50% of the time. And a team at Université du Québec discovered that nearly a quarter of programs generated by ChatGPT were insecure.
The reality is that GAI tools, while fast, make the same mistakes we do, but because the output is generated by a computer we tend to trust it too much, just as a high-school student might rely too heavily on Wikipedia to write their history essay. The consequences, however, could be far greater than receiving an F grade.
GAI will inevitably become an everyday part of the way developers work, but even as the technology improves, developers must continue to hone their security skills and maintain code quality.
Embed security within your team from the beginning
The lack of secure code created by GAI has solidified the long-term importance of cyber-skilled workers. Developers who work with GAI and have expertise in secure coding best practices will be able to develop code at scale while keeping quality to the highest standard. Our own research found that, at present, just 14% of developers are focused on security, but as GAI takes on the role of generating code it will fall to security-skilled developers to ensure both quality and security.
Ultimately, it will take a security-aware team to enable safer coding patterns and practices. At the Université du Québec, researchers studying insecure code generated by ChatGPT found that the chatbot was able to deliver secure code, but only after an expert stepped in to identify the vulnerabilities and prompt it to make the right corrections.
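To make that concrete, here is a minimal, hypothetical sketch (not taken from the study) of the kind of flaw an assistant can introduce and the fix a security-aware reviewer would prompt for: building a SQL query by pasting user input straight into the statement, versus using a parameterised query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_insecure(name: str):
    # Pattern code assistants often produce: user input is concatenated
    # straight into the SQL statement (SQL injection, CWE-89).
    query = f"SELECT email FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_secure(name: str):
    # The correction a security-aware reviewer would prompt for: a
    # parameterised query, keeping data separate from the SQL executed.
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)
    ).fetchall()

# The classic injection string dumps every row through the insecure
# version but returns nothing through the parameterised one.
print(find_user_insecure("' OR '1'='1"))
print(find_user_secure("' OR '1'='1"))
```

This is exactly the class of issue an expert reviewer is there to catch before it ships.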
Organisations need to get on the front foot and provide clear guidance and training on how developers should use the technology. This security-first approach has the additional benefit of fostering a more secure culture throughout the organisation, beyond the use of GAI to accelerate the development process.
Safe research and testing are important
A blanket ban on GAI may be tempting, but it is not the right solution. A survey from BlackBerry found that 66% of IT decision-makers were considering banning the technology from employee devices altogether; however, this approach may just drive the use of AI underground—aka “shadow AI”—and limit the ability to mitigate any issues caused by GAI.
These tools will be part of the way modern developers work, so it’s imperative that we get familiar with them. Just as human error is an issue that developer teams deal with nearly every day, AI error will need to be mitigated too (after all, these LLMs are trained using human-created information).
Businesses that learn how to securely manage GAI and identify which tools are most useful will gain far more value than those that refuse to engage with the technology. By setting up secure learning environments, developers will be able to identify where GAI adds value, while also gaining a better understanding of how to protect the organisation from emerging threats and vulnerabilities.
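As a rough sketch of what mitigating AI error can look like in practice, the snippet below gates AI-generated code behind an automated security scan before it is merged. Bandit is used here purely as an example of a Python security linter, and the `generated/` directory is a hypothetical location for assistant-written code.

```python
import subprocess
import sys

def scan_generated_code(path: str) -> bool:
    """Run Bandit, a Python security linter, over a directory of
    AI-generated code. Bandit exits with a non-zero status when it
    flags an issue, so the return code works as a simple merge gate."""
    result = subprocess.run(
        ["bandit", "-r", path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Surface the findings so a security-aware reviewer can triage them.
        print(result.stdout)
        return False
    return True

if __name__ == "__main__":
    # 'generated/' is a hypothetical directory holding assistant-written code.
    target = sys.argv[1] if len(sys.argv) > 1 else "generated/"
    sys.exit(0 if scan_generated_code(target) else 1)
```

A gate like this does not replace a security-skilled reviewer, but it gives teams a consistent first check on anything a GAI tool produces.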
Create a code of conduct for best practices
Gartner recently named GAI as one of the top risks worrying enterprise IT leaders in 2023. With more developers using the technology, IT leaders need to be proactive and establish best practices and guidelines for integrating AI into their work.
The Cabinet Office recently issued guidance to all civil servants on the use of GAI, which included never inputting classified information and recognising that output from GAI tools is susceptible to bias and misinformation, so it should be checked and cited appropriately. While we can look to the Government as an example to follow, it’s up to each organisation to manage its own risk appropriately.
GAI is maturing and is expected to become part of our standard suite of IT tools, just like email and cloud storage. But before we get there, we need to be strict about the way developers use GAI by creating healthy and safe environments in which to test and use it.
Research has shown that, without oversight from security experts, inappropriate use of GAI can have serious consequences. However, there is a real opportunity for developers to stand out by building their skills and demonstrating that they can use these new tools while ensuring quality and security.