ChatGPT Chief Testifies on AI Risks to US Congress


To mitigate the threats posed by increasingly potent AI systems, government action will be essential, according to the CEO of the artificial intelligence company that produces ChatGPT.

The success of OpenAI’s chatbot, ChatGPT, has set off an AI arms race and stirred worries among legislators, concerns that were aired during a Senate hearing.

“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” OpenAI CEO Sam Altman said at a Senate hearing.

For the most potent AI systems, Altman proposed a U.S. or global agency that would license them and could “take that license away and ensure compliance with safety standards.”

Concerns Raised About the Next Generation of AI

What began as educators’ panic over students using ChatGPT to cheat on homework has grown into broader concerns about the next generation of “generative AI” tools and their potential to deceive people, spread false information, violate copyright protections, and displace some jobs.

The societal concerns that brought Altman and other tech CEOs to the White House earlier this month have prompted U.S. agencies to promise to crack down on harmful AI products that violate current civil rights and consumer protection laws.

Even so, there is no immediate indication that Congress will draft comprehensive new AI rules, as European lawmakers are doing.

Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology, and the law, opened the hearing with a recorded speech that sounded like him but was in fact a voice clone, trained on his floor speeches, reciting opening remarks written by ChatGPT.

The result was impressive, he said, before asking: “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin’s leadership?”

Altman largely avoided specifics, beyond acknowledging that the industry could “significantly harm the world” and that “if this technology goes wrong, it can go quite wrong.”

Gary Marcus, a former NYU professor who has criticized AI hype, and Christina Montgomery, vice president and director of privacy at IBM, also testified.

Montgomery underlined the importance of striking a balance between innovation and ethical behavior and cautioned against rushing AI development. Both Altman and Montgomery acknowledged that AI could create jobs as well as eliminate them.

Altman recently demonstrated ChatGPT’s capabilities to lawmakers, and attendees broadly acknowledged the need for AI regulation. Altman has stated his commitment to developing AI responsibly while acknowledging its risks.

Elon Musk and others, however, have called for a temporary halt to the development of the most powerful AI systems, citing the grave societal risks involved.

Government Involvement Is Crucial to Regulating AI

A separate committee hearing on the use of AI in government, held at the same time as the Senate session, underscores how important AI is becoming to legislators.

The government’s emphasis on ethical AI development is evident in Altman’s meetings with senior officials, including Vice President Kamala Harris and President Joe Biden. Altman favors caution and stronger safety precautions, but he doubts that the open letter calling for a suspension of training is the best course of action.

Altman’s testimony underscored the urgent need for government engagement in regulating AI, recognizing the technology’s transformative potential while stressing the importance of responsible development. The discussions highlight the many difficulties associated with AI and the ongoing effort to balance innovation with risk reduction.

“We think that AI should be regulated at the point of risk, essentially,” Montgomery said, by creating rules that govern specific uses of AI rather than the technology itself.
