Securing Cybersecurity in the Generative AI Era: Expert Insights


In the evolving world of financial technology, two remarkable leaders have been spearheading groundbreaking transformations at Broadridge India.

Meet Santhanam Govindaraj (Santh) and Prasad Vemuri, two dynamic personalities whose techno-functional expertise and visionary leadership have been instrumental in propelling Broadridge’s growth and innovation.

As the Chief Information Officer (CIO) of Broadridge India, Santh brings an impressive 24-year track record of delivering enterprise solutions across diverse domains, including Capital Markets and Wealth Management.

He leads the Product, Technology, Client Onboarding, and Service Delivery teams, tirelessly focused on aligning the Global Technology and Operations (GTO) organization in India to foster operational excellence, innovation, and evolution.

Age of Generative AI

Complementing Santh’s innovative drive is Prasad Vemuri, the Chief Technology Officer at Broadridge India and co-head of the Investor Communications business.

Armed with an impressive 25-year tenure in the financial services space, Prasad is a seasoned technology leader with an extensive background in delivering large-scale enterprise solutions in investment banking and digital communications.

Santh and Prasad share a passion for building an associate-centric culture, fostering growth, and championing diversity, equity, and inclusion (DEI) initiatives.

Join The Cyber Express as we discuss their journey, exploring their insights into the future of financial technology, generative AI, and more.

  1. Generative AI technologies, while bringing several benefits, can also inadvertently introduce new vectors for cyber threats. From your perspective as a cybersecurity leader, what are the most significant risks these technologies pose to cybersecurity, and how do you suggest organizations anticipate and mitigate these risks?

Prasad Vemuri: As cybersecurity leaders, we recognize that generative AI technologies present both opportunities and risks. The most significant risks they pose include the generation of highly sophisticated and realistic phishing attacks, deepfake content used for social engineering, and the creation of tailored malware. To anticipate and mitigate these risks, organizations must adopt a multi-layered approach.

This includes investing in advanced threat detection systems capable of identifying AI-generated threats, implementing robust user authentication methods, conducting regular security awareness training to educate employees about emerging threats, and fostering collaborations with AI experts to understand potential vulnerabilities and develop appropriate defensive mechanisms.

Additionally, promoting responsible AI usage and adhering to ethical guidelines during AI development will be crucial in ensuring that the technology’s benefits outweigh its risks.

  2. Cybersecurity jailbreaks and workarounds can expose organizations to threats they hadn’t previously considered. How should companies approach the problem of unexpected loopholes introduced by generative AI technologies, and how should these technologies be tested before being deployed to ensure they are free of such vulnerabilities?

Santhanam Govindaraj: To mitigate the risks associated with unexpected loopholes introduced by generative AI technologies, companies must adopt a comprehensive approach to cybersecurity and testing. Firstly, organizations should implement rigorous vulnerability assessments and penetration testing throughout the development process to identify potential weaknesses.

This involves subjecting the AI system to various attack scenarios, simulating real-world threats, and attempting to exploit possible vulnerabilities. Moreover, companies should establish clear security protocols and best practices for AI deployment, emphasizing continuous monitoring and updates.

Collaborating with cybersecurity experts and encouraging responsible disclosure of potential vulnerabilities can also help enhance the technology’s robustness, ensuring that it is as resistant as possible to cyber threats before being deployed.

  3. During and after the AI model training phase, there are potential points of vulnerability where a cybercriminal could inject malicious data or compromise the system. How should organizations safeguard their AI model training processes to prevent such breaches?

Prasad: Organizations can implement several essential measures to safeguard their AI model training processes and prevent potential breaches by cybercriminals.

Firstly, employing robust data security protocols, including encryption and access controls, is critical to protect the data used for training from unauthorized access. Secondly, adopting a secure development lifecycle for AI models ensures that security is embedded throughout the process, from design to deployment.
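To make the encryption point concrete, here is a minimal sketch of protecting a training dataset at rest using Python's widely used cryptography package. The file names and key handling are simplified assumptions; a production setup would source keys from a secrets manager rather than generating them inline.

```python
# Minimal sketch: encrypting a training dataset at rest.
# Assumes the `cryptography` package; file names are illustrative.
from cryptography.fernet import Fernet

# In practice the key would live in a secrets manager, not in code.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt the raw dataset before it leaves the trusted environment.
with open("training_data.csv", "rb") as f:   # hypothetical dataset file
    ciphertext = cipher.encrypt(f.read())

with open("training_data.enc", "wb") as f:
    f.write(ciphertext)

# Decrypt only inside the trusted training environment.
plaintext = cipher.decrypt(ciphertext)
```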

Regular security audits and vulnerability assessments can also help identify and address potential weaknesses in the system. Additionally, monitoring the training process for unexpected inputs or outputs can help detect and mitigate any attempts at injecting malicious data.
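As a toy illustration of that monitoring idea, the statistical outlier check below flags training records that deviate sharply from the batch distribution — a crude stand-in for real poisoning defenses, with an assumed threshold and synthetic data.

```python
import numpy as np

def flag_outliers(batch: np.ndarray, z_threshold: float = 4.0) -> np.ndarray:
    """Return a boolean mask of rows whose features deviate strongly from
    the batch mean -- a crude proxy for injected/poisoned samples."""
    mu = batch.mean(axis=0)
    sigma = batch.std(axis=0) + 1e-8   # avoid division by zero
    z = np.abs((batch - mu) / sigma)
    return (z > z_threshold).any(axis=1)

batch = np.random.randn(1000, 16)
batch[0] += 50.0                       # simulate a poisoned record
suspicious = flag_outliers(batch)
print(f"{suspicious.sum()} suspicious rows flagged for review")
```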

Lastly, fostering a culture of security awareness among employees and stakeholders helps ensure that everyone involved understands their role in maintaining a secure training environment.

  4. Generative AI often needs vast amounts of data to train, which may include personal user data. How can companies balance the need for comprehensive AI training with the protection of personal data privacy?

Santh: Companies can balance the need for comprehensive AI training with the protection of personal data privacy by adopting several key strategies. Firstly, they can implement data anonymization techniques to strip away personally identifiable information, ensuring the data used for training is no longer traceable to individuals.
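A minimal sketch of that anonymization step follows, assuming a simple record schema: direct identifiers are dropped and the user key is replaced with a salted hash. Strictly speaking this is pseudonymization; full anonymization would also address quasi-identifiers.

```python
import hashlib

PII_FIELDS = {"name", "email", "phone"}   # assumed schema, illustrative

def anonymize(record: dict, salt: str = "per-deployment-secret") -> dict:
    """Drop direct identifiers; replace the user key with a salted hash
    so records stay linkable for training without being traceable."""
    clean = {k: v for k, v in record.items() if k not in PII_FIELDS}
    clean["user_id"] = hashlib.sha256(
        (salt + record["email"]).encode()
    ).hexdigest()[:16]
    return clean

print(anonymize({"name": "A. User", "email": "a@example.com",
                 "phone": "555-0100", "balance": 1042.50}))
```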

Secondly, they should establish strict data access controls, limiting access only to essential personnel. Thirdly, companies can explore privacy-preserving machine learning methods, like federated learning, where the model is trained locally on user devices without sharing raw data centrally.
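To make the federated learning idea concrete, here is a toy federated-averaging loop on a least-squares objective: each simulated device computes a local update, and only the updated weights — never the raw data — are averaged centrally. The model and data are illustrative assumptions.

```python
import numpy as np

def local_update(weights: np.ndarray, data: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One gradient step on device-local data (toy least-squares objective);
    raw data never leaves the device, only the updated weights do."""
    X, y = data[:, :-1], data[:, -1]
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights: np.ndarray, device_datasets: list) -> np.ndarray:
    """Server averages the locally updated weights (FedAvg-style)."""
    updates = [local_update(weights, d) for d in device_datasets]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
devices = [rng.normal(size=(32, 4)) for _ in range(5)]  # 3 features + target
w = np.zeros(3)
for _ in range(20):
    w = federated_round(w, devices)
```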

Lastly, transparent communication with users about data usage and obtaining informed consent can build trust and ensure compliance with privacy regulations, fostering a responsible and privacy-conscious approach to AI development.

  5. With the use of generative AI, there’s an increased risk of exposure of intellectual property. How should organizations protect their intellectual property in the context of AI, especially when collaborating with third-party vendors or other external entities?

Santh: To safeguard intellectual property in the realm of AI and mitigate risks associated with collaborations, organizations must implement robust protective measures. Firstly, they should establish clear contractual agreements with third-party vendors, stipulating the ownership and usage rights of generated AI models and data.

Non-disclosure agreements (NDAs) are essential to maintain confidentiality. Additionally, implementing technological safeguards, such as watermarking data, code obfuscation, and access controls, can help prevent unauthorized access and usage.
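One simple form of the data-watermarking technique mentioned is seeding a dataset shared with a vendor with unique, fictitious "canary" records, as in this hypothetical sketch: if a canary value later surfaces outside the agreed channel, the leaked copy can be traced.

```python
import uuid

def add_canaries(records: list, n: int = 3):
    """Insert unique, fictitious 'canary' rows into a dataset before
    sharing it; a canary seen elsewhere identifies the leaked copy."""
    canary_ids = {f"CANARY-{uuid.uuid4().hex[:12]}" for _ in range(n)}
    canaries = [{"account": cid, "balance": 0.0} for cid in canary_ids]
    return records + canaries, canary_ids

dataset = [{"account": "ACC-001", "balance": 250.0}]
watermarked, canaries = add_canaries(dataset)

leaked = {"CANARY-deadbeef0000"}   # hypothetical sighting in the wild
print("Leak detected!" if leaked & canaries else "No canaries observed.")
```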

Regular audits of AI development processes and monitoring for any potential IP breaches are also vital. Lastly, fostering a culture of awareness and education within the organization about IP protection ensures that all stakeholders are actively contributing to the preservation of valuable intellectual property.

  6. Companies engage with numerous vendors in the generative AI space. How should they ensure that these vendors are adhering to high cybersecurity standards? What criteria would you recommend for assessing the security policies of potential AI vendors?

Prasad: Ensuring that generative AI vendors adhere to high cybersecurity standards is critical for safeguarding sensitive data and maintaining the overall security posture of a company. To assess the security policies of potential AI vendors, several key criteria should be considered.

First, the vendor should have comprehensive data protection measures in place, including encryption and access controls.

Second, they should undergo regular security audits and assessments to identify and address vulnerabilities. Third, the vendor must demonstrate compliance with industry-leading security certifications and standards.

Fourth, a robust incident response and recovery plan should be in place to handle any security breaches promptly.

Lastly, clear contractual agreements regarding data ownership, sharing, and usage must be established. By evaluating vendors against these criteria, companies can confidently select AI partners with strong cybersecurity practices.

  7. Generative AI models are sometimes used to process sensitive data. What strategies and protocols would you recommend organizations implement to ensure that sensitive data is not inadvertently exposed when using these models?

Prasad: When utilizing generative AI models to process sensitive data, organizations must prioritize data privacy, confidentiality, and security. Several crucial strategies and protocols should be implemented to mitigate the risk of inadvertent exposure.

Firstly, ensure data anonymization, masking, and aggregation before training the models so that individuals cannot be identified. Employ robust access controls and encryption mechanisms to safeguard data during storage and transmission.

Regularly audit and monitor model behavior to detect potential privacy breaches. Conduct thorough assessments of the model’s outputs and establish a review process to identify and address any sensitive information leakage.
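An output-review process like the one described might start with a pattern-based scan of generated text for PII-like strings, as in the sketch below. The regular expressions are illustrative assumptions; a production filter would rely on a vetted PII-detection library.

```python
import re

# Illustrative patterns; a production filter would use a vetted PII library.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def review_output(text: str) -> dict:
    """Flag generated text containing PII-like strings before release."""
    return {name: p.findall(text) for name, p in PATTERNS.items() if p.search(text)}

sample = "Contact jane.doe@example.com; SSN 123-45-6789."
print(review_output(sample))
```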

Additionally, organizations should keep their staff well-informed about data handling best practices and compliance with relevant regulations to maintain a secure and responsible AI environment.




