Gartner: IT leaders need to prepare for GenAI legal issues


IT leaders are being advised to prepare for the rise in legal disputes and regulatory compliance mishaps arising from the use of generative AI (GenAI) in their organisations.

A Gartner survey of 360 IT leaders involved in rolling out GenAI tools found that more than 70% ranked regulatory compliance among their top three challenges for the widespread deployment of GenAI productivity assistants in their organisation.

GenAI is fast becoming a core component of enterprise software. Gartner recently stated that in less than 36 months, GenAI capability will become a baseline requirement for software products. “Every software market has already surpassed first-mover advantage, and by 2026, more money will be spent on software with GenAI than without,” Gartner said.

But the increase in the use of GenAI is having a profound effect on the ability of organisations to remain secure and compliant with regulations. The analyst firm reported that only 23% of respondents are very confident in their organisation’s ability to manage security and governance when rolling out GenAI tools in their enterprise applications.

“Global AI regulations vary widely, reflecting each country’s assessment of its appropriate alignment of AI leadership, innovation and agility with risk mitigation priorities,” said Lydia Clougherty Jones, senior director analyst at Gartner. “This leads to inconsistent and often incoherent compliance obligations, complicating alignment of AI investment with demonstrable and repeatable enterprise value, and possibly opening enterprises up to other liabilities.”

The survey also showed that the impact of the geopolitical climate is steadily growing. Over half (57%) of non-US IT leaders indicated that the geopolitical climate at least moderately affected their GenAI strategy and deployment, with 19% saying the impact was significant. However, nearly 60% of those respondents said they were unable or unwilling to adopt non-US GenAI tool alternatives.

When deploying GenAI, Gartner recommended that IT leaders strengthen the moderation of AI-generated outputs by engineering self-correction into models and by preventing GenAI tools from responding immediately, in real time, when asked a question they cannot answer.
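The recommendation above could be implemented as an output gate that withholds an answer instead of replying in real time when the model is not confident. The sketch below is a minimal illustration, assuming a hypothetical `generate_answer` model call and an arbitrary `CONFIDENCE_THRESHOLD`; it is not Gartner's prescribed design.

```python
# Hypothetical output gate: defer low-confidence answers for review
# rather than responding immediately.

CONFIDENCE_THRESHOLD = 0.75  # assumed risk tolerance, tuned per organisation

def generate_answer(question: str) -> tuple[str, float]:
    """Stand-in for a real model call returning (answer, confidence)."""
    # A production system would call the GenAI model's API here.
    if "revenue forecast" in question.lower():
        return ("I do not have reliable data for that.", 0.30)
    return ("Output moderation reduces compliance risk.", 0.92)

def moderated_reply(question: str) -> str:
    """Answer only when confidence clears the threshold; otherwise defer."""
    answer, confidence = generate_answer(question)
    if confidence < CONFIDENCE_THRESHOLD:
        # Do not answer in real time: route the question for human review.
        return "This question has been routed for human review."
    return answer
```

In practice, the confidence signal might come from model log-probabilities, a separate classifier, or a second-pass self-check; the gating logic stays the same.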

Gartner’s advice on moderation also covers use-case review procedures that evaluate the risk of “chatbot output to undesired human action”, from legal, ethical, safety and user impact perspectives. It urged IT leaders to use control testing around AI-generated speech, measuring performance against the organisation’s established risk tolerance.

Thorough testing is another part of GenAI deployment. Here, Gartner believes IT leaders should increase model testing and sandboxing by building a cross-disciplinary fusion team of decision engineers, data scientists and legal counsel to design pre-testing protocols, and to test and validate model output, screening it for unwanted conversational content. It urged IT leaders to document the team’s efforts to mitigate unwanted terms in model training data and unwanted themes in model output.

One of the techniques that is being used in certain regulated industries like banking and finance is to use multiple AI agents, each based on different GenAI tools and AI language models, to answer a user’s query. The responses are then assessed by an AI system that acts as a judge, making the ultimate decision over which answer is most plausible. Lloyds Banking Group’s chief data and analytics officer, Ranil Boteju, describes this approach as an “agent as a judge”.
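The "agent as a judge" pattern described above can be sketched in a few lines: several candidate answers are generated, then a judge scores them and the highest-scoring one is returned. The agents and scoring function below are stand-ins invented for illustration; a real system would call different GenAI models and use another LLM as the judge.

```python
# Minimal sketch of the "agent as a judge" pattern: multiple agents answer,
# a judge picks the most plausible response. All functions are illustrative.

def agent_a(query: str) -> str:
    """Stand-in for a candidate answer from one GenAI model."""
    return "Short answer about " + query

def agent_b(query: str) -> str:
    """Stand-in for a candidate answer from a different GenAI model."""
    return "Detailed, sourced answer about " + query

def judge_score(query: str, answer: str) -> float:
    """Stand-in judge: a real system would prompt another LLM to score
    each candidate for plausibility, grounding and policy compliance."""
    return float(len(answer))  # crude proxy: prefer the more detailed answer

def answer_with_judge(query: str) -> str:
    """Collect candidate answers and return the one the judge rates highest."""
    candidates = [agent(query) for agent in (agent_a, agent_b)]
    return max(candidates, key=lambda answer: judge_score(query, answer))
```

The appeal of the pattern in regulated settings is that no single model's output reaches the user unreviewed: the judge provides an auditable second check.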

Gartner also recommends that GenAI tools should include content moderation techniques such as “report abuse buttons” and “AI warning labels”.
