Executive Order On Artificial Intelligence Signed By The US President


The President of the United States, Joe Biden, signed a landmark Executive Order addressing how to work with Artificial Intelligence in an effective, responsible, and ethical way. The Executive Order on Artificial Intelligence addresses developing the technology in ways that prevent threats and enhance cybersecurity.

The Executive Order on Artificial Intelligence signed by the Biden Administration comprises 13 comprehensive sections.

This Executive Order, as clarified in Section 13, is to be implemented in accordance with existing laws. It begins by defining its purpose, followed by an outline of policy and principles. The order also includes a dedicated section defining the terms it uses. Key areas such as ensuring the safety and security of AI technology, promoting innovation and competition, and supporting workers are all addressed.

The order further emphasizes advancing equity and civil rights, alongside protecting consumers, patients, passengers, and students. Privacy protection is a crucial aspect, as is the advancement of the federal government’s use of AI.

Moreover, the order focuses on strengthening American leadership abroad and lays out clear guidelines for implementation. Each section of the Executive Order on Artificial Intelligence is meticulously detailed, extending into various subsections to ensure a comprehensive approach to AI governance and development.

Breakdown of the Executive Order on Artificial Intelligence

“Artificial intelligence (AI) holds extraordinary potential for both promise and peril,” reads Section 1 of the EO on Artificial Intelligence. The EO stressed the need for a society-wide effort, spanning government, the private sector, academia, and civil society, to utilize AI to the fullest.

Since the use of AI and the technology around it have been progressing non-stop, the Biden Administration outlined in the EO the urgency of governing its development and use. Harnessing AI for justice, security, and opportunity was also noted in the EO on Artificial Intelligence.

The EO calls on US executive departments and agencies to adhere cohesively to its principles as feasible. In response, the Department of Homeland Security published a post affirming that it will manage AI in critical infrastructure and cyberspace and promote the adoption of AI safety standards globally.

The DHS also assured that it will work to reduce the risk of AI being used to create weapons of mass destruction.

Other concerns voiced in the EO on Artificial Intelligence included standardized evaluations of AI systems and the policies governing them. The security risks AI poses to biotechnology, one of the biggest issues, were also addressed in the latest Executive Order on AI.

The US government will take on the task of developing effective labeling and content provenance mechanisms to help citizens distinguish AI-generated content from content that is not.

The Biden-Harris Administration looks to solve some of the most difficult challenges in unleashing the power of AI while ensuring its ethical use. For this, the US government will invest in AI education for the American people while also tackling novel intellectual property questions.

Addressing how the US looks to prevent threats and lead the way in the evolution of AI, the EO stated, “The Federal Government should lead the way to global societal, economic, and technological progress, as the United States has in previous eras of disruptive innovation and change.”

Addressing the way America has handled changing trends in the use of AI, the EO read, “I firmly believe that the power of our ideals; the foundations of our society; and the creativity, diversity, and decency of our people are the reasons that America thrived in past eras of rapid change.”
