Promoting responsible AI: Balancing innovation and regulation


As AI technology advances, it is essential to remain mindful of familiar and emerging risks. Education is critical to fostering responsible AI innovation, as understanding the technology and its limitations raises standards and benefits everyone.

In this Help Net Security interview, Nadir Izrael, co-founder & CTO of Armis, discusses the global efforts and variations in promoting responsible AI, as well as the necessary measures to ensure responsible AI innovation in the United States.

What are your initial impressions of the Biden-Harris Administration’s efforts to advance responsible AI? Are they on the right track in managing the risks associated with AI?

The effort to address responsible AI is a proactive step in the right direction. Whether the Administration is on the right track in managing the risks associated with AI, however, is still uncertain and will depend on various factors, including finding the right balance between innovation and regulation.

In a free market, there may not be sufficient incentives to prioritize responsible AI research and development, so it is commendable that the administration is taking the initiative here. It is crucial, though, that these initiatives strike a balance that leaves room for innovation and financial reward. Without such flexibility, AI developers may seek ways to bypass them, undermining their intended purpose.

While the blueprint put forward is a great start, it is merely that: a start. The AI Bill of Rights lays the foundation for developing the right policies and regulations in this space.

The Administration seeks public input through a Request for Information (RFI) on critical AI issues. What significance does public input hold in shaping the government’s strategy to manage AI risks and harness opportunities? How might this engagement with the general public impact policy decisions?

The administration’s request for public input demonstrates an inclusive approach that recognizes the importance of diverse perspectives. Seeking public input is a vital component as it allows the government to gather insights from a wide range of stakeholders, including industry, academia, civil society, and the general public. This input is instrumental in identifying unique opportunities and risks associated with AI, shaping policies, and establishing regulations that align with the values and concerns of the public.

Engaging with the general public also promotes awareness and understanding of AI-related issues while fostering trust between the government and the public. By involving the public in the development of AI policies and regulations, the government can promote responsible deployment of AI that serves the best interests of society.

The US Department of Education’s report on AI in education highlights both the opportunities and risks associated with AI in teaching and learning. Could you elaborate on some of the risks mentioned in the report, particularly algorithmic bias? How can trust, safety and appropriate guardrails be ensured in implementing AI in educational settings?

There are a variety of risks associated with AI that we should address. Some of these include:

1. Ethical concerns: AI systems are often marketed to consumers as objective tools. They are, however, tools designed by humans, whose subjective experiences and biases are introduced into the technologies. These ethical challenges must be identified and made clear to teachers and students.

2. Algorithmic bias and discrimination: AI systems can perpetuate and amplify existing biases and discrimination. First, an AI system might be trained on biased data. Second, the system might learn to discriminate based on features that are correlated with protected characteristics, such as race or gender, even when those characteristics are not explicitly present in the training data (see the sketch after this list). This can lead to unfair treatment of certain groups of people, including students.

3. Cybersecurity risks: The use of AI in education can increase cybersecurity risks, particularly if sensitive student data is collected and stored insecurely. Because an AI system learns from the data it has been fed, it may also inadvertently share confidential information it has collected with other users.

4. Privacy concerns: The use of AI in education can raise privacy concerns, particularly if sensitive student data is collected and used without proper consent or protection.

5. Inaccuracy: AI systems are only as accurate as the data they are trained on, and errors in that data can lead to inaccurate results and decisions. AI can also produce results, known as “hallucinations,” that are not backed by any data or evidence.

6. Overgeneralization: AI systems can overgeneralize from limited data, again leading to inaccurate or unfair decisions.
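To make the proxy-discrimination mechanism in item 2 concrete, here is a minimal, hypothetical sketch in Python. Everything in it is synthetic and invented for illustration (the feature names, the group variable, and the size of the bias penalty are assumptions, not real data): a classifier that never sees the protected attribute still produces different outcomes for the two groups, because a correlated proxy feature carries the historical bias forward.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Protected attribute (e.g., group A vs. group B); never shown to the model.
    group = rng.integers(0, 2, n)
    # A proxy feature that correlates strongly with group membership.
    zip_code_index = group + rng.normal(0, 0.3, n)
    # Historical labels encode bias: group 1 was approved less often at equal merit.
    merit = rng.normal(0, 1, n)
    approved = (merit - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

    # Train only on merit and the proxy; the protected attribute is excluded.
    X = np.column_stack([merit, zip_code_index])
    model = LogisticRegression().fit(X, approved)

    # The model still treats the groups differently, via the proxy.
    for g in (0, 1):
        rate = model.predict(X[group == g]).mean()
        print(f"group {g}: predicted approval rate = {rate:.2f}")

Auditing outcomes by group, as the final loop does, is one simple way institutions can check deployed models for this kind of disparity.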

From a cybersecurity perspective, we must address privacy and security concerns. Bad actors are successfully using confidentiality attacks to draw out sensitive information from AI systems. Without proper security measures, institutions and individuals are at risk. To protect students, for example, institutions may put in place policies curbing the use of AI tools in specific instances or provide educational content cautioning students against sharing confidential information with AI platforms.
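One lightweight technical complement to such policies is screening text for obviously sensitive patterns before it ever reaches an external AI platform. The sketch below is a minimal, hypothetical Python example (the check_prompt function and its patterns are invented here, not a real product API); a production deployment would rely on far more robust PII and data-loss-prevention tooling.

    import re

    # Illustrative patterns only; real detection needs dedicated PII/DLP tools.
    SENSITIVE_PATTERNS = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def check_prompt(text: str) -> list[str]:
        """Return the kinds of sensitive data that appear in the text."""
        return [label for label, rx in SENSITIVE_PATTERNS.items() if rx.search(text)]

    prompt = "Summarize this record: Jane Doe, SSN 123-45-6789, jane@example.edu"
    findings = check_prompt(prompt)
    if findings:
        print("Blocked: prompt appears to contain", ", ".join(findings))
    else:
        print("Prompt passed the screen.")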

Algorithmic biases, inaccuracies, and overgeneralizations represent intrinsic limitations of the technology, since the models are a reflection of the data they are trained on. Even if care is taken to ensure input data is fact-checked and accurate, hallucinations may still occur. A human element therefore remains important in the use of AI: fact checks and discerning eyes can help weed out inaccuracies, and councils guided by community-oriented ethical guidelines can help reduce biases.

Regarding national security, what concerns about AI systems need to be addressed? Could you provide some examples of cyber threats to AI systems?

From a geopolitical perspective, we cannot ignore the adversarial nature of the AI innovation race. AI-powered cyberwarfare is an incredibly impactful, cost-effective tool for adversaries to disrupt world order. AI can be weaponized against both networks and people and used to gain competitive advantage.

For example, criminal groups could use AI-powered hacking tools to disrupt critical infrastructure. In light of recent warnings that Chinese state-sponsored hackers had compromised industries including transportation and maritime, this concern is paramount.

Additionally, the proliferation of deepfakes and of voice and image creation and manipulation is a particularly concerning and growing threat. These convincing fabrications could be used to extract sensitive national security information.

Policymakers are aware of these risks and are working to foster a robustly competitive environment that allows US tech companies to innovate while protecting national security interests and the individual rights of our citizens.

How do the actions taken by the Biden-Harris Administration compare to global efforts in promoting responsible AI? Are there any notable differences or areas where the US could learn from other countries’ approaches?

The UK, EU and Canada have released ethical guidelines encouraging responsible AI development in their countries.

The US is joining the club with the Biden administration’s latest announcement. Though still in the early stages, the administration is giving the issue serious consideration by relying on expert insights and requesting public input.

What additional steps or measures are necessary to ensure responsible AI innovation in the United States? Are there any areas that still need to be addressed or potential risks that require further attention?

Designing and developing the right model calls for an increased emphasis on public-private partnerships. We are off to a good start, but the legacy collaborative models of the past won’t suffice. Other countries are making rapid advances in AI research, so we need to move at a speed never before seen to address this topic and create an environment that is both productive and protective.

AI technology is very much a revolution in progress. On top of the familiar risks, we should be mindful of, and continuously evaluate, new risks the technology may present. A key component of ensuring responsible AI innovation is education. When people understand what is behind the technology they consume and its current limitations, the standard rises, and we all benefit.


