AI is becoming the weapon of choice for cybercriminals

This article highlights key findings from 2024 reports on AI and GenAI technologies, focusing on their potential and major challenges.

Overreliance on GenAI to develop software compromises security

96% of security and software development professionals report that their companies use GenAI-based solutions for building or delivering applications. Among these respondents, 79% report that all or most of their development teams regularly use GenAI. More developers than security professionals report concern over a loss of critical thinking due to AI use in development (8% vs. 3%).

AI learning mechanisms may lead to increase in codebase leaks

43% of respondents concerned about the potential for increased leaks in codebases highlighted the risk of AI learning and reproducing patterns that include sensitive information. Additionally, 32% identified the use of hardcoded secrets as a key risk point within their software supply chain.
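Hardcoded secrets are exactly the kind of pattern a model trained on a codebase can memorize and resurface. A minimal sketch of the standard mitigation, resolving credentials from the environment at runtime (the variable name API_KEY is illustrative):

```python
import os

# Anti-pattern: a literal credential committed to the repository can be
# learned and reproduced by a code assistant trained on that code.
# API_KEY = "sk-live-..."  # never commit real values

# Mitigation: resolve the secret from the environment at runtime, so the
# value never appears in the codebase or its history.
API_KEY = os.environ.get("API_KEY")
if API_KEY is None:
    raise RuntimeError("API_KEY is not set; configure it in the deployment environment")
```

Secret scanners in CI catch the committed-literal case; the environment lookup keeps the value out of anything a model might later train on.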

Strong privacy laws boost confidence in sharing information with AI

The survey reveals that 63% believe AI can be useful in improving their lives. GenAI use has nearly doubled, with 23% of respondents using it regularly, up from 12% last year. On the privacy front, 30% of GenAI users say they enter personal or confidential information, including financial and health details, into GenAI tools, even though 84% are concerned about that data becoming public.

Hackers are finding new ways to leverage AI

While only 21% of hackers believed AI technologies enhanced the value of hacking in 2023, 71% reported that they do in 2024. Hackers are also increasingly adopting GenAI solutions, with 77% now reporting the use of such tools, a 13-point increase from 2023.

15% of office workers use unsanctioned GenAI tools

Ivanti’s research shows that 81% of office workers report they have not been trained on GenAI, and 15% are using unsanctioned tools. 32% of security and IT professionals have no documented strategy in place to address GenAI risks. Unapproved GenAI tools, like any other shadow IT, expand the organization’s attack surface without any oversight from security and can introduce unknown vulnerabilities that weaken its security posture; one way to surface that shadow usage is sketched below.
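A hypothetical starting point is simply flagging traffic to GenAI services that were never approved. This sketch assumes a plain-text proxy log with the destination hostname as the last field on each line; the domain lists are illustrative and deliberately incomplete:

```python
from collections import Counter

# Illustrative, incomplete list of GenAI service domains; a real deployment
# would maintain this from a curated category or threat-intel feed.
GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
SANCTIONED = {"chat.openai.com"}  # tools the organization has approved

def shadow_genai_hits(log_lines):
    """Count requests to GenAI services that were never approved."""
    hits = Counter()
    for line in log_lines:
        fields = line.split()
        if not fields:
            continue
        host = fields[-1]  # assumes the hostname is the last field per line
        if host in GENAI_DOMAINS and host not in SANCTIONED:
            hits[host] += 1
    return hits

# Example usage: hits = shadow_genai_hits(open("proxy.log"))
```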

Security leaders consider banning AI coding due to security risks

92% of security leaders have concerns about the use of AI-generated code within their organization, and 66% of survey respondents report that security teams can’t keep up with AI-powered developers. As a result, security leaders feel they are losing control and that businesses are being put at risk, with 78% believing AI-developed code will lead to a security reckoning and 59% losing sleep over the security implications of AI.
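The worry is not abstract: assistants trained on public code readily reproduce its classic flaws. A minimal illustration (not taken from the survey) of string-built SQL, a pattern generated code still commonly contains, next to the parameterized fix:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Flawed pattern often seen in generated code: interpolating user input
    # into SQL enables injection (try name = "' OR '1'='1").
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Parameterized query: the driver handles quoting, so input stays data.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```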

GenAI buzz fading among senior executives

GenAI adoption has reached a critical phase, with 67% of respondents reporting their organization is increasing its investment in GenAI due to strong value to date. While selecting and quickly scaling the GenAI projects with the most potential to create value is the goal, many GenAI efforts are still at the pilot or proof-of-concept stage, with 68% saying their organization has moved 30% or fewer of their GenAI experiments fully into production.

GenAI models are easily compromised

95% of cybersecurity experts express low confidence in GenAI security measures, while red team data shows that anyone can easily hack GenAI models. 35% of respondents worry about LLM reliability and accuracy, while 34% are concerned with data privacy and security. A lack of skilled personnel accounts for 28% of the concerns.

The most urgent security risks for GenAI users are all data-related

Using global data sets, the researchers found that 96% of businesses are now using GenAI, a number that has tripled over the past 12 months. On average, enterprises now use nearly 10 GenAI apps, up from three last year, with the top 1% adopters now using an average of 80 apps, up significantly from 14.

Pressure mounts for C-Suite executives to implement GenAI solutions

87% of C-Suite executives feel under pressure to implement GenAI solutions at speed and scale. While 43% of executives believe that GenAI is critical to retaining their competitive advantage, 68% acknowledge that they find it difficult to identify genuine innovators in today’s noisy AI market.

Organizations weigh the risks and rewards of using AI

Two-thirds of organizations prioritize AI risk assessment using existing internal processes (65%) and/or guidance and best practices from professional organizations (63%). Another 55% say they use current and pending laws/regulations to prioritize risk. Nearly half of respondents describe their risk tolerance towards AI as very high (17%) or high (29%), while only 12% report a low (9%) or very low (3%) AI risk tolerance.

GenAI keeps cybersecurity pros on high alert

When asked how much of a threat GenAI technology poses to the overall cybersecurity landscape, a remarkable 96% of respondents agreed it is a threat, with more than a third (36%) stating that its use to manipulate or create deceptive content (deepfakes) is a significant threat.

Cybersecurity pros change strategies to combat AI-powered threats

75% of security professionals had to change their cybersecurity strategy in the last year due to the rise in AI-powered cyber threats, with 73% expressing a greater focus on prevention capabilities. The rise of adversarial AI is also taking a toll on cybersecurity professionals, with 66% admitting their stress levels are worse than last year and 66% saying AI is the direct cause of burnout and stress.

Organizations go ahead with AI despite security risks

AI adoption remains sky high, with 54% of data experts saying that their organization already leverages at least four AI systems or applications, according to Immuta. 79% also report that their budget for AI systems, applications, and development has increased in the last 12 months.

AI is creating a new generation of cyberattacks

Most businesses see offensive AI fast becoming a standard tool for cybercriminals, with 93% of security leaders expecting to face daily AI-driven attacks. AI will be used not only to enhance cyberattacks but in cyber defense, too. While the UK’s Office for National Statistics found that 83% of businesses had no plans to adopt AI, this is not borne out in cybersecurity.

Security pros are cautiously optimistic about AI

AI integration into cybersecurity is not just a concept but a practical reality for many, with 67% of respondents stating that they have tested AI specifically for security purposes. 48% of professionals expressed confidence in their organization’s ability to execute a strategy for leveraging AI in security: 28% feel reasonably confident and 20% very confident.

22% of employees admit to breaching company rules with GenAI

92% of security pros have security concerns around generative AI, with specific apprehensions including employees entering sensitive company data into an AI tool (48%), using AI systems trained with incorrect or malicious data (44%), and falling for AI-enhanced phishing attempts (42%).

AI tools put companies at risk of data exfiltration

As today’s risks are increasingly driven by AI and GenAI, the way employees work, and the proliferation of cloud applications, respondents state they need more visibility into source code sent to repositories (88%), files sent to personal cloud accounts (87%), and customer relationship management (CRM) system data downloads (90%).

Businesses banning or limiting use of GenAI over privacy risks

Most organizations are putting controls in place to limit exposure: 63% have established limitations on what data can be entered, 61% limit which employees can use GenAI tools, and 27% said their organization had banned GenAI applications altogether for the time being. Consumers are concerned about AI use involving their data today, yet 91% of organizations recognize they need to do more to reassure customers that their data is being used only for intended and legitimate purposes in AI.
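Limits on what can be entered are often enforced with a redaction layer in front of the model. A minimal sketch, assuming two illustrative regex patterns (production filters rely on far broader DLP rule sets):

```python
import re

# Illustrative patterns only; real deployments use much wider DLP coverage.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious identifiers before the prompt leaves the organization."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789, about the renewal."))
# -> Contact [EMAIL REDACTED], SSN [SSN REDACTED], about the renewal.
```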


