Enterprises must secure the transformation driven by generative AI (GenAI) in two directions: by adopting GenAI tools securely with zero trust, and by leveraging AI to defend against the new AI-driven threat landscape, according to Zscaler.
AI has already become a part of business as usual, as enterprises integrate new features and tools into their day-to-day workflows, multiplying the volume of transactions and data generated. That growth is reflected in the nearly 600% increase in transactions and the 569 terabytes of enterprise data sent to AI tools that researchers analyzed between September 2023 and January 2024.
“Data is the lifeblood of every enterprise and the gold of this new era in the AI revolution,” said Deepen Desai, Chief Security Officer, Zscaler.
AI transactions continue to accelerate
From April 2023 to January 2024, ThreatLabz saw AI/ML transactions grow by nearly 600%, rising to more than 3 billion monthly across the Zero Trust Exchange platform in January. Despite the mounting security risk and increasing number of data protection incidents, enterprises are adopting AI tools in large numbers.
Manufacturing was found to be the industry leader in AI transactions across the platform, driving nearly 20% of the total volume. From analyzing vast amounts of data from machinery and sensors to preemptively detect equipment failures to optimizing supply chain management, inventory, and logistics operations, AI is proving instrumental to manufacturers.
The other notable verticals that round out the top five are finance and insurance (17%), technology (14%), services (13%), and retail/wholesale (5%).
Research shows that ChatGPT accounted for more than half of all enterprise AI transactions (52%), while the OpenAI application itself ranked third (8%). Drift, the popular AI-powered chatbot, generated nearly 20% of enterprise traffic, while LivePerson and BoldChat also made the list. Writer was the favorite GenAI tool for creating written enterprise content.
Even as enterprise AI adoption continues to surge, organizations are increasingly blocking AI and ML transactions because of data and security concerns. Today, enterprises block 18.5% of all AI transactions, a 577% increase from April 2023 to January 2024, for a total of more than 2.6 billion blocked transactions.
Some of the most popular AI tools are also the most blocked. Indeed, ChatGPT holds the distinction of being both the most-used and most-blocked AI application. This indicates that despite, or even because of, the popularity of these tools, enterprises are actively working to secure their use against data loss and privacy risks.
Countries generating the most enterprise AI transactions
AI adoption trends differ globally as regulations, requirements, technology infrastructure, cultural considerations, and other factors play key roles. At 40%, the US produces the highest percentage of enterprise AI transactions globally. India was second at 16%, propelled by the country’s accelerated commitment to driving innovation.
Although the UK’s share of global enterprise AI transactions is only 5.5%, it leads enterprise AI traffic in EMEA with over 20%. France (13%) and Germany (12%), as expected, follow closely behind as the second and third largest enterprise AI traffic generators in EMEA. Meanwhile, the United Arab Emirates, a rapidly growing technological innovator in the region, has also emerged as a prominent AI adopter.
In the APAC region, researchers observed nearly 1.3 billion more enterprise AI transactions than in EMEA, a staggering 135% difference. This surge can likely be attributed to India’s extensive usage and adoption of AI tools for conducting business across the tech sector, and it may suggest a higher concentration of tech jobs, a stronger willingness to adopt new innovations, and fewer barriers to usage.
AI-empowered threat actors amplify enterprise risk and security challenges
As the power of AI has advanced, it has become a double-edged sword for enterprises. While AI offers immense potential for innovation and efficiency, it also brings forth a new set of risks that organizations must grapple with: namely, risks associated with leveraging GenAI tools within the enterprise and an evolving landscape of AI-assisted threats.
The utilization of GenAI tools within enterprises introduces significant risks that can be categorized into three main areas:
- Protection of intellectual property and non-public information: the risk of data leakage
- AI application data privacy and security risks: including an expanded attack surface, new threat delivery vectors, and increased supply chain risk
- Data quality concerns: the concept of “garbage in, garbage out” and the potential for data poisoning
Simultaneously, enterprises are constantly exposed to a barrage of cyberthreats, some of which are now AI-driven. The possibilities of AI-assisted threats are virtually limitless, as attackers can leverage AI to orchestrate sophisticated phishing and social engineering campaigns, develop highly evasive malware and ransomware, exploit vulnerabilities in enterprise attack surfaces, and amplify attacks’ speed, scale, and diversity.