In this Help Net Security video, Dinesh Nagarajan, Global Partner, Cyber Security Services at IBM Consulting, walks through a scenario in which an employee shared production source code with a public AI tool. The tool trained on the code, including proprietary formulas used in a fintech application, creating the risk that similar answers could later be surfaced to other users. The video shows how this kind of action can weaken a company's competitive position and even expose information belonging to partners or clients.
Dinesh outlines why this happens, pointing to limited understanding of how generative AI works, weak governance, and inconsistent policies around tool use. He then offers guidance on what organizations should put in place, including an AI use policy, stronger data protections, approved tool lists, and training that helps employees understand how these systems handle data.
