
GPT-4 breaks AI-guardian defense with natural language prompts

Nicholas Carlini, a Google scientist, has demonstrated how OpenAI’s GPT-4 large language model can be used to circumvent AI-Guardian, a defense against adversarial attacks on machine learning models.

Carlini used GPT-4 to develop code capable of identifying the mask AI-Guardian relies on to detect adversarial samples. This allowed him to craft adversarial examples that bypass the defense.

By directing GPT-4 to devise an attack method and explain how it works, Carlini showed how the chatbot could defeat AI-Guardian’s detection capabilities. Specifically, GPT-4 produced Python code that subtly manipulates images without triggering AI-Guardian’s detection. The resulting misclassifications cut AI-Guardian’s robustness from 98 percent to a mere 8 percent.
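As a rough illustration only, and not Carlini’s code or the code GPT-4 actually generated, the sketch below shows the general shape of such an image-perturbation loop. The classifier is a placeholder, and every name and parameter is an assumption made for the example:

```python
# Hypothetical sketch of an image-perturbation loop; NOT Carlini's code
# or the code GPT-4 generated. The classifier below is a stand-in that
# returns fake class scores.
import numpy as np

def perturb_image(image, score_fn, target_class, step=0.01, max_change=0.05, iters=200):
    """Nudge `image` so that `score_fn` favours `target_class`, while keeping
    every pixel within `max_change` of its original value (so the change is
    hard to notice)."""
    adv = image.copy()
    for _ in range(iters):
        # Probe a random direction and keep it only if it raises the
        # target-class score (a crude black-box search, no gradients needed).
        direction = np.random.randn(*adv.shape)
        direction /= np.linalg.norm(direction) + 1e-12
        candidate = np.clip(adv + step * direction, image - max_change, image + max_change)
        candidate = np.clip(candidate, 0.0, 1.0)
        if score_fn(candidate)[target_class] > score_fn(adv)[target_class]:
            adv = candidate
    return adv

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.random((28, 28))          # fake 28x28 grayscale image

    def fake_scores(x):                      # placeholder classifier
        return np.array([abs(np.sin(x.sum() * (i + 1))) for i in range(10)])

    adv = perturb_image(original, fake_scores, target_class=3)
    print("max pixel change:", float(np.abs(adv - original).max()))
```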

The study underscores that machine learning models, such as image recognition systems, are vulnerable to adversarial examples: inputs crafted to mislead the model’s predictions. By exposing the mask used to identify adversarial samples, Carlini undermined AI-Guardian’s core technique of embedding a backdoor to reject adversarial input, making effective adversarial attacks possible.
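To see why exposing the mask matters, consider a simplified, hypothetical picture of a mask-based defense: pixels covered by the secret mask are overwritten before classification, so they can never influence the model’s output, and an attacker who can query the protected model can use that property to flag candidate mask locations. The sketch below illustrates this general idea only; it is not AI-Guardian’s actual construction or Carlini’s recovery procedure:

```python
# Illustrative only: a crude way to flag pixels that a (simplified) mask-based
# defense appears to overwrite before classification. Pixels covered by the
# secret mask can never change the model's scores; pixels outside it usually do.
import numpy as np

def estimate_mask(score_fn, base_image, flip_value=1.0):
    """Return a boolean array marking pixels whose changes leave the model's
    scores exactly unchanged (candidate mask locations)."""
    base_scores = score_fn(base_image)
    mask_estimate = np.zeros(base_image.shape, dtype=bool)
    for idx in np.ndindex(base_image.shape):
        probe = base_image.copy()
        probe[idx] = flip_value - probe[idx]            # flip this pixel
        if np.array_equal(score_fn(probe), base_scores):
            mask_estimate[idx] = True                   # pixel seems ignored
    return mask_estimate
```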

“This work shows that GPT-4 can be used as a powerful tool for attacking machine learning models,” said Carlini. “It also raises concerns about the security of AI-Guardian and other similar defenses.”

The sources for this piece include an article in The Register.
