Dawn Project calls out Big Tech for selling AI snake oil


Safety advocacy group The Dawn Project has ramped up its campaign to illustrate the failings of artificial intelligence (AI) systems and why they should not be deployed in safety-critical use cases.

The Dawn Project is campaigning to raise awareness surrounding the danger of what it calls “cataclysmic cyber attack” resulting from the use of easily hacked, bug-ridden commercial-grade software in safety-critical applications, such as water systems and the power grid.

On its website, the group stated that despite obvious defects in AI systems, such as chatbots failing to answer basic questions or rewriting facts based on misreadings of their training data, AI “pioneers” continue to overstate the progress these systems have made. According to The Dawn Project, this claimed progress is being used to argue that AI systems are now ready to be deployed into major infrastructure across the world.

“When AI systems are utilised in major technologies with potentially devastating impact, it is paramount that they never fail in any safety critical incident,” the group said. “The dangers of releasing these technologies on a wider scale to be used for weapons or heavy machinery, including cars, cannot be ignored or underestimated.”

As part of its campaign, which has included advertising in The Wall Street Journal and The New York Times to raise awareness of the inherent risks of AI, Dan O’Dowd, software entrepreneur and founder of The Dawn Project, has denounced Tesla’s AI-powered Full Self-Driving software. He said that even after 10 years of development, Tesla’s AI technology still illegally overtakes a stopped school bus and runs down children crossing the road.

The US National Highway Traffic Safety Administration (NHTSA) has previously investigated whether Tesla’s Autopilot contained a defect that created an unreasonable risk to motor vehicle safety. Its assessment involved extensive crash analysis, human factors analysis, vehicle evaluations, and assessment of vehicle control authority and driver engagement technologies.

The NHTSA’s Office of Defects Investigation identified at least 13 crashes involving one or more fatalities, and many more involving serious injuries, in which foreseeable driver misuse of the system played an apparent role.

On its website, The Dawn Project drew an analogy between a gambler who insists his “system” is sound and AI proponents who are asking for vast sums of money and power plants to build even bigger AI systems.

“They claim to be the masters of the technology that will transport humanity into a paradise where everyone gets everything they want and no one has to work, or maybe it will exterminate everyone,” the group said. “They are just modern-day itinerant preachers selling patent medicine.”

The advertising depicts Microsoft, Google and OpenAI as organisations selling AI snake oil. The advert claims Microsoft poured $13bn into OpenAI’s ChatGPT, yet when the chatbot was asked to list US states that end with the letter ‘Y’, three of the five answers it gave were wrong.
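As an illustrative aside (not drawn from the advert or The Dawn Project’s own material), the query cited has a fixed answer that a few lines of Python can produce deterministically, which is what makes the model’s mistake so easy to check:

    # Illustrative sketch: deterministically list US states whose names end in "y"
    US_STATES = [
        "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
        "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
        "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
        "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
        "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
        "New Hampshire", "New Jersey", "New Mexico", "New York",
        "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
        "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
        "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
        "West Virginia", "Wisconsin", "Wyoming",
    ]

    # Keep only the names ending in the letter "y" (case-insensitive)
    ends_in_y = [state for state in US_STATES if state.lower().endswith("y")]
    print(ends_in_y)  # ['Kentucky', 'New Jersey']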

“AI researchers accept that ‘hallucinations’ are a fundamental and unsolvable weakness of large language models and admit they cannot explain why AIs make certain bad decisions,” said O’Dowd. “We must demand consistency and reliability in our safety critical systems. We must reject hallucinating AIs and apply the same rigorous standards of software security we demand for nuclear security to the critical infrastructure that society and millions of lives depend on.”

The Dawn Project warned that commercial-grade software should not be used for any application where there is potential to harm or kill a person. “These systems have already been proven to not be fit for purpose,” it said. “So how can tech masters claim that they are ready to be used in potentially deadly infrastructure solutions?”


