OpenAI has announced the creation of a new team called Preparedness. The team will be led by Aleksander Madry, the director of MIT’s Center for Deployable Machine Learning. The team’s mission will be to assess, evaluate, and probe AI models to protect against what OpenAI describes as “catastrophic risks.”
These risks include individualized persuasion, where models tailor messages to specific people to convince them to act; the use of models to develop new and more sophisticated cyberattacks; autonomous replication and adaptation, where models copy and modify themselves without human intervention; and the use of models to develop and deploy chemical, biological, radiological, and nuclear (CBRN) threats.
OpenAI acknowledges that some of these risks may seem far-fetched, but the company says it is important to be prepared for all possibilities. In addition to assessing AI risks, the Preparedness team will be responsible for developing a “risk-informed development policy.”
The policy will detail OpenAI’s approach to building AI model evaluations and monitoring tooling, the company’s risk-mitigating actions, and its governance structure for oversight across the model development process.
OpenAI CEO Sam Altman has been a vocal advocate for AI safety, and he has warned of the potential for AI to lead to human extinction. To help expand its research on AI risks, OpenAI has also launched the AI Preparedness Challenge.
The challenge is open to anyone and offers up to $25,000 in API credits to the top 10 submissions that describe probable, yet potentially catastrophic, ways OpenAI’s models could be misused. It is designed to help OpenAI identify new and emerging AI risks and develop strategies for mitigating them. OpenAI encourages everyone to participate in the challenge and help the company make AI safer for everyone.
The sources for this piece include articles in TechCrunch and ZDNet.