Poison pill a risk to AI


A leading cyber security researcher is warning of the risk of artificial intelligence data sets being compromised by “data poisoning”.



Cyber Security Cooperative Research Centre CEO Rachael Falk explained to the ABC’s AM current affairs program this morning that data poisoning happens when “you can attack an AI data set, and either inject it with false information or misinformation” so that it’s “not correct any more”.

The CRC said AI needs “oversight, transparency and governance measures” to protect users against data poisoning attacks.

It delivered the warning in a report released today [pdf].

The report notes that while some risks are well-known – labour market displacement and privacy threats, for example – data poisoning and attacks on human data labelling are not.

Both attacks involve interference with training data, the report said.

Data poisoning involves “malicious, biased or incorrect data” being incorporated into the training set. The report said the resulting incorrect outputs “could enable an attacker to bias decision-making towards a particular outcome, which could result in real-life harms”.

Poisoning types identified in the report include:

- availability poisoning: corrupting the entire machine learning (ML) model, rendering the AI unusable;
- targeted poisoning: attacking only a handful of samples, making the poisoning difficult to detect;
- backdoor poisoning: crafting training samples that give the attacker a backdoor into the model; and
- model poisoning: attacking the trained model itself to inject malicious code.
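As a rough illustration of why targeted poisoning is hard to spot, the sketch below flips the labels on a small fraction of training samples before fitting a classifier. It is a hypothetical toy example – the scikit-learn setup, synthetic dataset and one percent poisoning rate are assumptions for illustration, not details from the report.

```python
# Minimal sketch of a targeted label-flip poisoning attack.
# The dataset, model and poisoning rate are illustrative assumptions,
# not drawn from the CRC report.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Targeted poisoning: flip the labels of a small, hard-to-notice fraction
# of training samples to bias decisions towards the attacker's outcome.
poison_rate = 0.01  # one percent of training samples (illustrative)
idx = rng.choice(len(y_train), size=int(poison_rate * len(y_train)),
                 replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]  # invert the chosen labels

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
dirty_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", dirty_model.score(X_test, y_test))
```

Because so few samples are altered, the poisoned model’s headline accuracy barely moves – which is precisely what makes this class of attack difficult to detect.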

The other risk identified in the report is that the humans employed to label data for AI training could be subverted into deliberately mislabelling samples, feeding malicious data into the training set.

The report noted that data labelling work is commonly outsourced to countries with cheap labour (including Kenya, Uganda, the Philippines and Venezuela), raising the spectre that workers could be corrupted into poisoning the training data.

“If this was to occur at scale across a range of different damaging scenarios – and research indicates only 0.01 percent of training data needs to be poisoned to be effective – the impacts on LLMs [large language models] could be serious and the implications for society damaging,” the report said.
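To put that figure in perspective, the arithmetic below shows what 0.01 percent represents at various corpus sizes; the sizes are hypothetical examples, not figures from the report.

```python
# Illustrative arithmetic only: the corpus sizes are hypothetical.
# Even at 0.01 percent, a large corpus leaves an attacker a modest
# but workable number of samples to plant.
for corpus_size in (1_000_000, 100_000_000, 1_000_000_000):
    poisoned = int(corpus_size * 0.0001)  # 0.01 percent as a fraction
    print(f"{corpus_size:>13,} samples -> {poisoned:>9,} poisoned")
```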

“Most importantly, such an attack would have a deleterious impact on perceptions of generative AI at a social and cultural level, impacting the positive economic and societal effects these technologies can affect.”

Corrupted training data also opens the door to foreign interference, the report said.

“At a global level, this is an issue that needs to be considered urgently as discussions regarding AI regulation and global norms for AI systems continue”, the report stated.

The report also said Australia could apply its modern slavery regulations to AI companies, and should invest in research into AI cyber threats.


