The Group of Seven industrial countries, comprising Canada, France, Germany, Italy, Japan, Britain, and the United States, alongside the European Union, is set to establish a code of conduct for advanced AI development. The move comes as governments around the world seek to mitigate the risks and potential misuse of the technology.
The voluntary, 11-point code of conduct, aimed at promoting the safe, secure, and ethical use of AI on a global scale, will provide guidance for organizations developing the most advanced AI systems, including foundation models and generative AI systems.
The code urges companies to take appropriate measures to identify, evaluate, and mitigate risks across the AI lifecycle. It also calls for companies to publish reports on the capabilities, limitations, and use and misuse of their AI systems, and to invest in robust security controls.
The G7 leaders launched the process of developing the code of conduct in May at a ministerial forum dubbed the “Hiroshima AI process.” The code is seen as a landmark step in how major economies govern AI amid growing privacy concerns and security risks. The EU has been at the forefront of regulating AI with its hard-hitting AI Act, whereas Japan, the United States, and countries in Southeast Asia have taken a more hands-off approach in order to boost economic growth.
Speaking at a forum on internet governance in Kyoto, Japan earlier this month, European Commission digital chief Vera Jourova said the code of conduct provides a strong basis for ensuring safety and will act as a bridge until formal regulation is in place.
The sources for this piece include an article in Reuters.