Cyber Attackers Can Disable AI Systems By 'Data Poisoning': Google AI Expert

The researcher claims that attackers can seriously damage artificial intelligence models by "poisoning" their data sets with only minor modifications.

Attackers can critically harm artificial intelligence models.

Google Brain research scientist Nicholas Carlini has said that cyber attackers could disable AI systems by "poisoning" their data sets.

According to a report by the South China Morning Post, Mr Carlini said that by manipulating just a tiny fraction of an AI system's training data, attackers could critically compromise its functionality.

"Some security threats, once solely utilised for academic experimentation, have evolved into tangible threats in real-world contexts," Carlini said during the Artificial Intelligence Risk and Security Sub-forum at the World Artificial Intelligence Conference, according to financial news outlet Caixin.

In one prevalent attack method known as "data poisoning", an attacker introduces a small number of biased samples into the AI model's training data set. This deceptive practice "poisons" the model during the training process, undermining its usefulness and integrity.

"By contaminating just 0.1 percent of the data set, the entire algorithm can be compromised. We used to perceive these attacks as academic games, but it's time for the community to acknowledge these security threats and understand the potential for real-world implications," Carlini said.

What is Data Poisoning?

According to the International Security Journal, data poisoning involves tampering with machine learning training data to produce undesirable outcomes. An attacker will infiltrate a machine-learning database and insert incorrect or misleading information. As the algorithm learns from this corrupted data, it will draw unintended and even harmful conclusions.
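To make the mechanism concrete, below is a minimal, hypothetical sketch of one common form of data poisoning, label flipping, written in Python with scikit-learn on synthetic data. The dataset, the `poison_labels` helper and the poisoning fractions are illustrative choices, not details from Carlini's talk, and the toy model here is not expected to reproduce the 0.1 percent result he describes.

```python
# Illustrative label-flipping data poisoning on synthetic data.
# This is a toy sketch of the mechanism described above, not a
# reproduction of the attack or results discussed by Carlini.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification data standing in for a real training set.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

def poison_labels(y, fraction, rng):
    """Flip the labels of a small fraction of training samples."""
    y_poisoned = y.copy()
    n_poison = int(len(y) * fraction)
    idx = rng.choice(len(y), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip class 0 <-> 1
    return y_poisoned

# Train on clean data, then on increasingly poisoned data,
# and compare accuracy on an untouched test set.
for fraction in (0.0, 0.01, 0.1):
    y_poisoned = poison_labels(y_train, fraction, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = model.score(X_test, y_test)
    print(f"poisoned fraction: {fraction:>4.0%}  test accuracy: {acc:.3f}")
```

The point of the sketch is only that the attacker never touches the model itself; corrupting a slice of the training labels is enough to degrade what the algorithm learns.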
