IT leaders share tips for AI success


Artificial intelligence (AI) was the recurring theme of the Domino Data RevX 2024 London event, held on 13 June, where IT leaders singled out training with real users and separating AI from IT infrastructure as best practices.

IT leaders have plenty of choices when implementing AI to improve business processes. They can build or buy AI capabilities, use application programming interfaces (APIs) that provide access to AI functionality, run AI on-premise, or use complete software-as-a-service (SaaS) applications. Whatever approach is taken, training based on the organisation’s internal data is key.

Training and feedback

In his presentation, Raj Mukherjee, head of data science and AI at Direct Line Group, explained how the company uses generative AI (GenAI) to power its Green Flag car breakdown recovery service. The talk showcased the benefits of getting users involved in building more accurate AI models but, as is the case with AI in general, success is built on a solid data foundation.

Mukherjee said: “We have been working on our data strategy for the best part of four years. We are now executing the strategy, with 75% of our business now working on a new data strategy stack.”

This means the company’s data management, data governance and data engineering capabilities are mature enough to support Direct Line’s AI strategy as it evolves.

Exploring one of the use cases at Green Flag, Mukherjee said: “In the customer contact centre space, we are trying to see how AI can augment our contact centre agents by taking some of the cognitive load away from them.”

The general idea is to provide in-depth information to enable contact centre staff to resolve customer problems more quickly. By using AI to support their work, Mukherjee said contact centre staff are also able to focus more on empathising with customers, especially given that customers are generally not in a happy place when they need to call the breakdown recovery service.

In terms of training data, he said the company used contact centre web chat and transcripts with personally identifiable information removed. “We had just the conversation and clickstream data from our website. These data sources were used to run our analytics.” 

Mukherjee added that the call centre agents were part of the testing team. “As we evaluated the answer quality of the AI model, we were able to get the engineering done very quickly,” he said. 

For Mukherjee, this meant the accuracy of the answers the AI model surfaced improved from 68% to 88%. He said the improvement was achieved “just by getting the intent of the call centre agent into the answer prompts – we did not even do any fine-tuning”.
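To make the approach concrete, the sketch below shows what folding an agent’s intent into an answer prompt might look like. The function names, intent labels and retrieval step are hypothetical illustrations, not Direct Line’s actual implementation:

```python
# Minimal sketch of injecting a contact centre agent's intent into the
# answer prompt instead of fine-tuning the model. All names here are
# illustrative; retrieve_context() stands in for retrieval over the
# anonymised web-chat and clickstream data sources Mukherjee describes.

def retrieve_context(topic: str) -> str:
    # Placeholder for a retrieval step over internal documents.
    return "Green Flag cover level documentation excerpt…"

def build_prompt(question: str, agent_intent: str, context: str) -> str:
    """Compose a grounded prompt that carries the agent's stated intent."""
    return (
        "You are assisting a breakdown-recovery contact centre agent.\n"
        f"Agent intent: {agent_intent}\n"  # e.g. "confirm cover level"
        f"Relevant context:\n{context}\n"
        f"Customer question: {question}\n"
        "Answer concisely, citing the context where possible."
    )

prompt = build_prompt(
    question="Is my caravan covered if the tow car breaks down?",
    agent_intent="confirm cover level",
    context=retrieve_context("caravan cover"),
)
print(prompt)
```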

Decouple AI from IT

While Direct Line involved call centre staff in training, Sebastien Conort, chief data scientist at BNP Paribas Cardif, used his presentation to explore the benefits IT leaders can achieve if the AI systems they build and deploy are robust enough to support changes in AI models and AI infrastructure. He recommended that IT leaders decouple AI from IT so that, as the AI part of a project evolves, the IT does not have to change.

“IT is responsible for decoupling components, securely exposing services, storage, the user interface and orchestration,” he said. “AI is responsible for pre- and post-processing, orchestration of the AI steps and model evolution.”
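A minimal sketch of that division of responsibilities, using illustrative names rather than BNP Paribas Cardif’s actual code: the IT-owned orchestration depends only on a stable contract, so the AI team can change what sits behind it without touching the IT side:

```python
# Sketch of decoupling AI from IT: IT code depends on a stable interface,
# while the AI side owns pre-/post-processing and model evolution behind it.
# Class and method names are hypothetical, for illustration only.

from typing import Protocol

class InferenceBackend(Protocol):
    """Contract the AI team implements; IT code depends only on this."""
    def predict(self, payload: dict) -> dict: ...

class SentimentModelV1:
    """AI-owned: pre-processing, a model, post-processing."""
    def predict(self, payload: dict) -> dict:
        text = payload["text"].strip().lower()    # pre-processing
        score = 1.0 if "thanks" in text else 0.0  # stand-in for a real model
        return {"label": "positive" if score > 0.5 else "negative"}

def handle_request(backend: InferenceBackend, payload: dict) -> dict:
    # IT-owned orchestration: security, storage and the user interface
    # would sit here. Swapping SentimentModelV1 for a newer model needs
    # no change to this function.
    return backend.predict(payload)

print(handle_request(SentimentModelV1(), {"text": "Thanks for the quick fix"}))
```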

On the AI side, Conort also recommended that IT decision-makers use pipelines, such as those in scikit-learn, and a platform-agnostic framework for model inference such as CUDA, Hugging Face, TensorFlow, MMLabs and PaddlePaddle.
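For illustration, a short scikit-learn pipeline of the kind Conort refers to, in which pre-processing and the model travel as one deployable object; the training data here is a toy example:

```python
# A scikit-learn pipeline bundles pre-processing and the model into one
# object, so the AI team can change its internal steps without altering
# the serving code around it.

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),    # pre-processing step
    ("clf", LogisticRegression()),   # model step, swappable later
])

# Toy training data, purely illustrative
X = ["engine will not start", "renew my policy", "flat tyre on motorway"]
y = ["breakdown", "policy", "breakdown"]

pipeline.fit(X, y)
print(pipeline.predict(["car broke down on the A40"]))
```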

Finally, to keep costs “acceptable”, Conort suggested IT leaders consider using open source software when developing their AI-powered products.
