Google Secure AI Framework Leaves Privacy Point Unaddressed


With the Google Secure AI Framework (SAIF), a conceptual framework aimed at establishing robust security standards for the development and deployment of AI technology, the search engine giant has officially joined the AI security bandwagon.

The Google Secure AI Framework launch comes hot on the heels of the debates kicked up by the European Commission’s draft Artificial Intelligence Act – the first law on AI by a major regulator anywhere.

Google’s competitor Microsoft is already ahead in the thought leadership on AI security, actively disseminating advisories and guidelines on the hot topic.

A closer look at the specific clauses from the Google Secure AI Framework demonstrate how the framework directly addresses security risks, integrates with Google’s AI platforms, incentivizes research, and emphasizes the delivery of secure AI offerings.

However, these clauses also align with Google’s business interests by reinforcing their reputation, differentiating their offerings, and ensuring customer retention in their AI products.

Google Secure AI Framework: At a glance

“The potential of AI, especially generative AI, is immense. However, in the pursuit of progress within these new frontiers of innovation, there needs to be clear industry security standards for building and deploying this technology in a responsible manner,” said the Google blog post.

The post was written by Royal Hansen, Google Vice President of Engineering for Privacy, Safety, and Security, and Phil Venables, Chief Information Security Officer (CISO), Google Cloud.

According to the blog post, Google Secure AI Framework draws inspiration from established security best practices and incorporates an understanding of the unique risks and trends associated with AI systems.

“A framework across the public and private sectors is essential for making sure that responsible actors safeguard the technology that supports AI advancements, so that when AI models are implemented, they’re secure-by-default,” said the blog post.

According to the blog post, Google Secure AI Framework comprises six core elements designed to bolster AI system security.

The first element focuses on expanding strong security foundations to the AI ecosystem. Leveraging the secure-by-default infrastructure and expertise built over the last two decades, organizations can adapt these foundations to meet the unique security challenges posed by AI.

This includes developing organizational expertise to keep pace with AI advances and scaling infrastructure protections accordingly.

To effectively address AI-related cyber incidents, organizations must extend detection and response capabilities to encompass AI threats. Timeliness is key in identifying anomalies and anticipating attacks.

By monitoring inputs and outputs of generative AI systems and leveraging AI-specific threat intelligence, organizations can proactively detect and respond to potential security breaches.

Automating defenses is the third crucial element of the Google Secure AI Framework. With adversaries likely to employ AI to scale their impact, organizations must utilize AI technologies themselves to stay ahead.

By leveraging the latest AI innovations, organizations can enhance their response efforts and effectively protect against evolving threats.

Ensuring consistent security across the organization is another key aspect of the Google Secure AI Framework.

By harmonizing platform-level controls, organizations can scale protections and mitigate AI risks efficiently. This includes extending secure-by-default protections to AI platforms and integrating controls and protections into the software development lifecycle.

Adapting controls to respond dynamically to threats is a fundamental principle of SAIF. The dynamic nature of AI security risks necessitates continuous learning and adjustment.

Organizations can achieve this through techniques such as reinforcement learning based on incidents and user feedback. Regular red team exercises can further enhance safety assurance for AI-powered products.

Finally, contextualizing AI system risks within surrounding business processes is crucial. End-to-end risk assessments provide insights into the potential impact of AI deployment, including factors like data lineage, validation, and operational behavior monitoring.

To support and advance the Google Secure AI Framework, the company has already taken several steps.

These include fostering industry support, collaborating with organizations, sharing threat intelligence insights, expanding bug hunter programs, and delivering secure AI offerings with partners.

However, a sceptic reading reveals that the Google Secure AI Framework conveniently aligns with their AI business interests, presenting itself as a solution to the security concerns surrounding AI technology.

Google Secure AI Framework: Reading between the lines

While the Google Secure AI Framework claims to establish industry-wide security standards for responsible AI development, it also serves Google’s agenda of maintaining dominance in the AI market.

By integrating the Google Secure AI Framework principles into their AI platforms, Google can position themselves as a trusted provider of AI solutions.

This conveniently helps solidify their market dominance and maintain a competitive edge over other players in the AI industry.

After all, on multiple occasions, Google has been found guilty of demonstrating favoritism towards its own products and services as a means of safeguarding its dominant position in the search market.

SAIF supposedly addresses critical security considerations specific to AI, such as model theft and data poisoning. While these are indeed valid concerns, it is important to scrutinize whether SAIF goes beyond mere lip service and actually provides practical and effective solutions to these challenges.

Furthermore, the promotion of the Google Secure AI Framework enables Google to showcase their own AI offerings as secure and reliable. This brings us to the unavoidable scrutiny of Google’s very on generative AI tool, Bard.

Bard is in many ways susceptible to the issues that plague the much popular ChatGPT.

Bard relies on Google for its training data, which ranges from global Google search results to individual emails under the Gmail service.

Cybersecurity practitioners were quick to raise privacy concerns regarding the use of users’ Gmail accounts and the potential use of personal conversations and information for training Bard.

If Google’s Bard is accessing and utilizing personal data without explicit user consent or in a manner that violates privacy regulations, it could be seen as a violation of the privacy norms, both under GDPR and the proposed European law on artificial intelligence.

However, the Google Secure AI Framework has conveniently omitted privacy from its core elements. Addressing privacy concerns is not there in the Google Secure AI Framework summary or the step-by-step guide on how practitioners can implement SAIF.

AI security framework: Policymaking

While Google, Microsoft and several other companies are on the race to establish their thought leadership credentials in AI and neutering possible legal issues on their AI businesses, The European Parliament crossed a significant milestone towards regulating the development and use of AI systems

Members of the European Parliament (MEPs) in May gave their endorsement to new rules aimed at ensuring a human-centric and ethical approach to AI in Europe. These rules, if approved, will be the world’s first comprehensive set of regulations for AI.

The draft negotiating mandate on the rules for AI was adopted by the Internal Market Committee and the Civil Liberties Committee with an overwhelming majority, receiving 84 votes in favor, 7 against, and 12 abstentions.

MEPs made amendments to the initial proposal from the Commission, emphasizing the need for AI systems to be overseen by humans, safe, transparent, traceable, non-discriminatory, and environmentally friendly.

They also sought to establish a technology-neutral definition of AI to ensure its applicability to current and future AI systems.

The proposed draft is the latest in the stream of participative initiatives in the Europe on addressing various threats posed by AI.

The Artificial Intelligence Threat Landscape Report by the European Union Agency for Cybersecurity (ENISA) published earlier listed guidelines on cybersecurity threats, supporting policy development, enabling customized risk assessments, and aiding the establishment of AI security standards.

Google Secure AI Framework
Image: Artificial Intelligence Threat Landscape Report, ENISA

The deployment of AI systems in Europe had to be on the bases of trustworthy solutions through comprehensive cybersecurity practices, the report specified.

Unlike the- business-focused approach of the Google Secure AI Framework, the European rules take a risk-based approach. It seeks to categorize and oversee artificial intelligence applications based on their potential to cause harm.

These categories primarily encompass banned practices, high-risk systems, and other AI systems.

“The rules follow a risk-based approach and establish obligations for providers and users depending on the level of risk the AI can generate,” the European Parliament announcement said.

“AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring.”

Interestingly, Google has an established method and system for social credit scoring based on online activity of a user.

“Social credit scores may be a source of information for advertisers, employers, bankers, and others. Information on social credit scores may be analyzed to determine users with high scores to distinguish top score users and mark them with special prizes, titles, or symbols,” said the Google briefing.

“Additionally, a list of top scorers may be published via a social networking engine or otherwise as an additional factor of stimulation for other users.”

The European rules also aim to support innovation while protecting citizens’ rights. Exemptions have been included for research activities and AI components provided under open-source licenses.

The legislation encourages the establishment of regulatory sandboxes to facilitate controlled testing of AI before deployment.

The next step involves the endorsement of the draft negotiating mandate by the entire Parliament, with the vote expected during the session scheduled for June 12-15.

Meanwhile, NIST in January introduced the AI Risk Management Framework (RMF) along with supporting resources such as the NIST AI RMF Playbook, AI RMF Explainer Video, and AI RMF Roadmap.

The AI RMF provides two lenses through which to consider such questions, wrote Cameron Kerry, former general counsel and acting secretary of the U.S. Department of Commerce.

Firstly, it presents a conceptual roadmap that defines the various types and origins of risk associated with AI. It also outlines seven essential attributes of trustworthy AI, including safety, security, interpretability, privacy, fairness, accountability, and reliability.

Secondly, the framework provides a series of organizational processes and practices to evaluate and mitigate risk.

It establishes a connection between the socio-technical aspects of AI and the different stages of an AI system’s lifecycle, while considering the roles and responsibilities of the involved stakeholders.

The framework aimed at addressing risks associated with artificial intelligence (AI) and promoting trustworthiness in AI products and systems. It was developed through an inclusive and collaborative process, incorporating input from various stakeholders.

NIST also launched the Trustworthy and Responsible AI Resource Center on March 30 to support the implementation and global alignment of the AI RMF.

Does the Google Secure AI Framework violate the European norms?

The Google Secure AI Framework is about collaboratively securing AI technology, while the European Commission’s draft Artificial Intelligence Act aims to classify and regulate artificial intelligence applications based on their risk to cause harm.

Upon comparing the proposed AI law with the six core elements of Google Secure AI Framework, we found some potential areas where the two frameworks may intersect or potentially conflict.

The proposed AI law does not directly address the automation of defenses. However, it emphasizes the need for AI systems to be safe, non-discriminatory, and transparent. Automated defenses could potentially support these goals if implemented responsibly and in line with ethical guidelines.

Another core element, harmonizing platform-level controls to ensure consistent security across the organization, is a bone of contention.

This core element is relevant to the proposed AI law’s objective of having a uniform definition of AI and applying consistent regulations to AI systems. With Google at the helm, establishing standardized rules and controls on AI technologies can be expected to be biased, given the company’s history.

“With support from customers, partners, industry and governments, we will continue to advance the core elements of the framework and offer practical and actionable resources to help organizations achieve better security outcomes at scale,” said the SAIF guide.

Meanwhile, we step away to check with Google Bard on the definitions of self-service and greater good.





Source link