AI: Armageddon or auspicious new beginning? 

The transformative capabilities of Generative AI have been compared to electricity and the internet but also the atom bomb by industry pundits, illustrating its power to elevate and emancipate or to destroy unless the technology is harnessed for the benefit of humanity. 

For most of us, the immediate question is whether it will make our working lives easier or take over our jobs, and the answer, according to the International Monetary Fund (IMF), is both. It anticipates that 60% of jobs in advanced economies like the UK will be impacted by GenAI, with 30% seeing productivity gains and 30% experiencing task takeover that could lead to lower labour demand, wages and hiring; only in the most extreme cases, however, will AI see jobs disappear altogether.

The productivity gains mean organisations are forging ahead with implementing large language models (LLMs) such as OpenAI’s ChatGPT, Google’s PaLM and Gemini, and Meta’s LLaMA. In fact, 91% of executives claim their business is now using AI or expects to do so within the next 18 months, according to the Thomson Reuters Future of Professionals C-Suite Survey. The greatest gains are initially expected to come from augmenting the day-to-day activities of IT, sales and marketing, and R&D staff, according to McKinsey, but it will inevitably impact cybersecurity professionals too.

Where AI will help

The AI Cyber 2024: Is the Cybersecurity Profession Ready? report by ISC2 found AI will help with much of the mundane work in cybersecurity, with 81% foreseeing its use in analysing user behaviour patterns, 75% the automation of repetitive tasks, and 71% monitoring network traffic and malware, while 62% thought it would be put to use predicting areas of weakness and detecting and blocking threats. We’ve also heard of use cases where the technology is being applied to governance, risk and compliance (GRC) to create and summarise reports and documentation, by DevSecOps to check code, and in the context of incident response, to make recommendations to security analysts.

With GenAI lightening the load, it’s no wonder that 56% of those surveyed by ISC2 believe parts of their job will become obsolete, but at the same time 82% agreed this will improve their efficiency. What this means from a role-based perspective is that we could see much more democratisation of security roles. Most job openings at present are for professionals with between two and six years’ experience, as evidenced in the Cyber security skills in the UK labour market 2023 study, with these outnumbering entry-level roles by two to one, according to The State of Cybersecurity 2023 report from ISACA. AI could make a dramatic difference here by acting as a mentor to new recruits. Indeed, the Security Predictions 2024 report determined that 86% of CISOs expect GenAI to help alleviate skills gaps and talent shortages.

Should we surrender our trust?

Yet the danger here is that these inexperienced personnel will place implicit trust in the guidance offered by GenAI and, as we know, these systems are far from infallible. They’ve been shown to exhibit bias and to succumb to hallucinations and data poisoning. The Generative AI Snapshot series from Salesforce claims that risks include threats to data integrity, personnel inexperienced in the use of AI, a failure to properly configure or integrate GenAI with the existing tech stack, and a lack of AI data strategies. Similarly, the ISC2 survey revealed concerns over the current lack of regulation, ethical use, and data privacy.

To overcome these issues, it’s crucial that organisations put in place the necessary guardrails by adopting an AI framework and developing policies governing use. However, only 27% of those questioned by ISC2 had a formal policy in place outlining acceptable use, and only 15% had a policy on securing and deploying the technology, despite there now being a number of standards and frameworks available (notable examples include ISO 42001 and the NIST AI Risk Management Framework).

Don’t get left behind

The market is moving swiftly, which means there’s a real risk that threat actors will themselves use the democratising effects of GenAI to lower the barrier to entry and significantly increase the scale and sophistication of attacks while the enterprise is still laying the groundwork. Already 54% of the ISC2 survey cohort said they’d seen an increase in cyber attacks, with 13% able to confirm they had seen some that were AI-generated. Worryingly, 41% said they have minimal or no experience in AI, giving the attackers the upper hand, and there’s still a certain apathy towards the technology, with 17% ignoring it, 12% saying their organisation had banned it, and 10% admitting they didn’t know what their organisation was doing about it.

Realistically, AI will become much more widely adopted now that commercial offerings have come to market and the EU has passed regulation in the form of the AI Act. Cybersecurity professionals therefore have an important role to play in helping drive adoption by making everyone aware of the risks as well as the opportunities. They’ll have a steep learning curve themselves, with 41% in the ISC2 survey admitting they have minimal or no experience in securing GenAI, but once they have adapted to the technology it has the potential to streamline security processes and generate output to drive decision making. Ultimately, it promises to make everyone’s jobs more productive and more interesting, provided we don’t fall behind the curve and get outpaced by the threat actors.