In the digital age, a new kind of Trojan horse has emerged in the form of AI models laced with malicious code. The AI community got a jolt from Protect AI’s revelation that a staggering 3,354 models on Hugging Face, a go-to AI model depot, contained potential malware or compromised code.
Worse, Hugging Face's own security scans appeared to have missed the threats in a third of these compromised models.
This discovery led Protect AI to develop a scanner tailored to detecting malware and compromised code in open source AI models.
Open source AI models are gaining in popularity given the costs associated with building and training a proprietary model.
This has made platforms like Hugging Face incredibly popular, but if Protect AI's numbers are correct, it has also made them a potential source of compromised AI code.
Protect AI’s scanning software is one potential tool for detecting these issues and helping to ensure the safety of open source AI models.
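The attack surface here is concrete: many models on Hugging Face are distributed as PyTorch checkpoints, which historically rely on Python's pickle format, and unpickling a file can execute arbitrary code. The sketch below is a minimal illustration of how this class of scanner can work, statically walking a pickle stream's opcodes and flagging imports of risky modules without ever deserializing the file. It is not Protect AI's actual implementation; the module blocklist and the string-tracking heuristic for STACK_GLOBAL are illustrative assumptions.

```python
import pickletools

# Assumed blocklist: modules whose appearance in a pickle stream is a red
# flag, since importing them during deserialization can enable code execution.
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "sys",
                      "builtins", "runpy", "socket", "shutil"}

def scan_pickle(path: str) -> list[str]:
    """Statically list suspicious imports in a pickle file without loading it."""
    findings = []
    recent_strings = []  # tracks pushed strings so STACK_GLOBAL args can be guessed
    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f):
            if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE",
                               "BINUNICODE8", "UNICODE"):
                recent_strings.append(arg)
            elif opcode.name == "GLOBAL":
                # Older protocols: arg is "module qualname" as one string
                module = arg.split()[0] if arg else ""
                if module.split(".")[0] in SUSPICIOUS_MODULES:
                    findings.append(f"GLOBAL import: {arg}")
            elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
                # Protocol 4+: module and name are the last two pushed strings
                module, name = recent_strings[-2], recent_strings[-1]
                if module.split(".")[0] in SUSPICIOUS_MODULES:
                    findings.append(f"STACK_GLOBAL import: {module}.{name}")
    return findings

if __name__ == "__main__":
    import sys
    for finding in scan_pickle(sys.argv[1]):
        print("SUSPICIOUS:", finding)
```

The key design choice is static inspection: the file is never loaded, so even a booby-trapped model cannot run code during the scan itself.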
How will Protect AI keep up to date on threats? The company has acquired Huntr, a bug bounty program aimed at AI models, which it hopes will provide continuing insight into new threats as they evolve.
Sources include: Axios