GenAI is rapidly changing software development by automating tasks that once took developers hours, if not days, to complete, boosting efficiency and productivity, according to Legit Security.
“As GenAI transforms software development and becomes increasingly embedded in the development lifecycle, there are some real security concerns among developers and security teams,” said Liav Caspi, CTO at Legit. “Our research found that teams are challenged with balancing the innovations of GenAI and the risks it introduces by exposing their applications and their software supply chain to new vulnerabilities. While GenAI is undoubtedly the future of software development, organizations must be mindful of its new risks and ensure they have the appropriate visibility into and control over its use.”
Increased use of GenAI in software development
88% of developers report using GenAI within their development organizations, reflecting a broad shift in how development teams augment their capabilities with AI to meet tight deadlines and complex project demands. Despite the high rate of adoption, security remains a critical concern: previous research by Legit revealed that LLMs and AI models contain bugs and vulnerabilities that can lead to AI supply chain attacks.
96% of security and software development professionals report that their companies use GenAI-based solutions for building or delivering applications. Among these respondents, 79% report that all or most of their development teams regularly use GenAI.
84% of security professionals are concerned about using code assistants and cite unknown and/or malicious code as their primary concern.
98% believe that security teams need a better handle on how GenAI-based solutions are used in development. 94% report they need more effective ways to manage GenAI use in their company’s research and development efforts. 85% of developers and 75% of security professionals have security concerns about relying too heavily on GenAI to develop software.
Developers worry about AI’s impact on critical thinking
More developers than security professionals report concern over loss of critical thinking due to AI use in development (8% vs. 3%). 95% of respondents predict that software developers will be more reliant on GenAI in the next five years, with none foreseeing reduced reliance.
Regarding how GenAI will work alongside developers, 56% of respondents see a future where GenAI handles most tasks with human oversight, 29% believe in collaborative human-AI work, and 15% predict GenAI will eventually replace developers. Only 1% think that GenAI will fade as a tool.
There is a notable difference in perspectives between security professionals and developers: 61% of security professionals believe GenAI will do most of the work with human oversight, compared to 51% of developers.
On the other hand, 36% of developers believe that GenAI and developers will work in close collaboration, compared to 23% of security professionals.