Artificial intelligence companies and governments should allocate at least one-third of their AI research and development funding to ensuring the safety and ethical use of these systems, top AI researchers said in a paper.
The paper, issued a week before the international AI Safety Summit in London, lists measures that governments and companies should take to address AI risks.
“Governments should also mandate that companies are legally liable for harms from their frontier AI systems that can be reasonably foreseen and prevented,” according to the paper written by three Turing Award winners, a Nobel laureate, and more than a dozen top AI academics.
There are currently no broad-based regulations focused on AI safety, and the European Union's first set of AI legislation has yet to become law, as lawmakers have still to agree on several issues.
“Recent state of the art AI models are too powerful, and too significant, to let them develop without democratic oversight,” said Yoshua Bengio, one of the three people known as the godfathers of AI.
“It (investment in AI safety) needs to happen fast, because AI is progressing much faster than the precautions taken,” he said.
Authors include Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song and Yuval Noah Harari.
Since the launch of OpenAI’s generative AI models, top academics and prominent CEOs such as Elon Musk have warned about the risks of AI, including calling for a six-month pause in developing powerful AI systems.
Some companies have pushed back against such measures, saying they would face high compliance costs and disproportionate liability risks.
“Companies will complain that it’s too hard to satisfy regulations – that ‘regulation stifles innovation’ – that’s ridiculous,” said British computer scientist Stuart Russell.
“There are more regulations on sandwich shops than there are on AI companies.”