In this Help Net Security interview, Joe Baguley, CTO EMEA at Broadcom, shares insights on private AI and its significance in data security. He explains how it helps organizations maintain control over sensitive information while addressing the complexities of compliance and data privacy. Baguley also discusses the sectors leading the way in private AI adoption and the risks that come with it.
What are the key technological components that make AI “private”? Which industries or sectors are leading in adopting private AI, and why?
The concept of private AI emphasizes absolute control over data and model storage and access, which is crucial in today’s data-centric business environment. This approach is becoming increasingly important across all industries – not just traditionally regulated sectors like finance or healthcare – as businesses face greater risks when handling data containing personally identifiable information or private intellectual property. Large Language Models (LLMs), which are pre-filled with vast amounts of unknown data and logic, present additional challenges when companies augment them with proprietary information.
The growing awareness of public tools’ inherent openness drives companies to bring data back on-premises, behind existing security infrastructure. This shift requires strict access controls, with explicit permissions granted only when necessary and subject to rigorous auditing. The ultimate goal is to maintain complete control over sensitive data, corporate IP, and financial information across various systems, particularly when generating AI content.
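The access-control discipline described here — deny by default, explicit grants only, and rigorous auditing of every decision — can be sketched in a few lines. This is a minimal illustration, not any vendor’s implementation; the class and names are hypothetical:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai-data-audit")


class PrivateDataGateway:
    """Deny-by-default gate in front of sensitive data used by AI workloads.

    Access requires an explicit (user, resource, action) grant, and every
    decision -- allowed or denied -- is written to an audit trail.
    """

    def __init__(self):
        self._grants = set()  # explicit permissions only; nothing is implied

    def grant(self, user, resource, action):
        self._grants.add((user, resource, action))

    def revoke(self, user, resource, action):
        self._grants.discard((user, resource, action))

    def access(self, user, resource, action):
        allowed = (user, resource, action) in self._grants
        # Audit both allows and denies, so reviews can reconstruct access history.
        audit_log.info(
            "%s | user=%s action=%s resource=%s decision=%s",
            datetime.now(timezone.utc).isoformat(), user, action, resource,
            "ALLOW" if allowed else "DENY",
        )
        return allowed


gateway = PrivateDataGateway()
gateway.grant("analyst-1", "customer-pii", "read")

assert gateway.access("analyst-1", "customer-pii", "read") is True
assert gateway.access("analyst-1", "customer-pii", "export") is False  # never granted
```

The essential point is the default: an action not explicitly granted is refused and logged, rather than permitted and forgotten.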
While private AI adoption is growing across all sectors, it is particularly crucial for organizations that require guaranteed isolation and control, such as government entities and businesses in highly regulated industries like healthcare and finance. These sectors are at the forefront of private AI implementation due to their stringent security and compliance requirements.
What risks are associated with private AI, and how can they be mitigated?
The landscape of AI and data security presents several ongoing challenges and risks for companies. Firstly, the threat of bad actors remains a constant concern, prompting organizations to implement complex security controls and auditing mechanisms as protective measures.
Secondly, the proliferation of unsanctioned public AI tools poses a significant risk of data leakage. As new tools emerge daily, IT teams face an increasingly difficult task in preventing data loss, particularly when employees use these tools without proper vetting or approval.
Thirdly, there’s a growing awareness of the potential security risks associated with untested and unsecured Large Language Models (LLMs). This is likely to become a key focus area for security teams and policymakers in the near future as they work to mitigate these risks.
Lastly, the exponential data growth linked to generative AI in private environments presents a mounting financial challenge. Unlike public cloud-based AI infrastructure, which offers seemingly limitless capacity but can lead to unexpected costs, private setups often face immediate and visible cost constraints. This situation is further complicated by the potential delay between recognizing the need for expanded capacity and actually implementing it, creating a balancing act between security, functionality, and cost management.
How does private AI handle data storage, processing, and real-time analytics compared to cloud-based AI models?
The key word here is choice. Private AI offers customers a more controllable and predictable level of performance, allowing for customized solutions tailored to specific requirements, budget permitting. This contrasts with public cloud providers, whose offerings are often less configurable and may not fully meet specialized needs.
The key advantage of private AI lies in its ability to deliver exactly what is required, whether owned and operated by the company itself or by a dedicated service provider. Full-stack tooling simplifies management, monitoring, and consumption of AI services, unlike cloud-based AI where components are often disparate and require complex integration.
Private AI also allows for optimal data placement, keeping data as close to processing as possible. This proximity enables rapid data access for real-time processing, as the organization maintains total control over every layer of the stack.
By tapping into the right tools, customers can leverage a rich ecosystem of partners contributing unique value and functionality to AI workloads. This can allow customers to maximize their investment in AI hardware, optimizing resource use and sharing.
This approach provides greater choice and flexibility, allowing organizations to build AI solutions that precisely fit their needs while maintaining control over performance, security, and cost.
What are the different deployment models for private AI, and how should organizations choose the right one for their needs?
A private AI platform should be designed with the same considerations as any other IT infrastructure, taking into account various internal and external requirements and constraints. Key factors influencing the design include cost, control, regulatory compliance, and data governance rules such as GDPR or DORA.
One practical approach for organizations is to begin their AI journey using public cloud-based AI tools. This strategy allows companies to leverage readily available infrastructure and resources while simultaneously designing and implementing a private AI solution. This dual approach is particularly beneficial when transitioning to using confidential data or meeting specific regulatory requirements.
The ideal private AI solution should strike a balance between flexibility and security to meet the demands of data scientists and ML Ops teams. Flexibility is crucial, as there isn’t a one-size-fits-all deployment model for private AI. Instead, the platform should be adaptable, allowing customers and partners to deploy a solution that precisely meets their unique needs.
This approach enables organizations to start quickly with AI using public resources, while gradually building a more tailored, secure, and compliant private AI infrastructure. It provides a pathway for companies to evolve their AI capabilities in line with their growing needs and regulatory requirements, without sacrificing initial momentum or innovation.
Are there specific regulatory hurdles or compliance challenges that organizations should be aware of when adopting private AI?
Organizations must navigate a complex global landscape of AI regulations, including the EU AI Act and various U.S. federal frameworks. Before implementing AI technologies, it’s crucial to understand both local and global rules that may impact operations. Resources like the International Association of Privacy Professionals (IAPP) can help track these evolving regulations worldwide. This proactive approach to compliance is essential for responsible AI development and use, helping to build trust and avoid potential legal issues.