Another industry group has emerged promising to promote best practices for artificial intelligence applications at a time when governments around the world are looking at regulating AI.
Google, Microsoft, OpenAI and Anthropic said today they have formed an association to ensure safe and responsible development of emerging AI models.
The Frontier Model Forum will draw voluntarily on the technical and operational expertise of its member companies to benefit the entire AI ecosystem, such as by advancing technical evaluations and benchmarks and by developing a public library of solutions to support industry best practices and standards, the partners said in the announcement.
Frontier models are defined as large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models and can perform a wide variety of tasks.
“Governments and industry agree that, while AI offers tremendous promise to benefit the world, appropriate guardrails are required to mitigate risks,” Google said in a blog post. “Important contributions to these efforts have already been made by the U.S. and U.K. governments, the European Union, the OECD, the G7 (via the Hiroshima AI process), and others.
“To build on these efforts, further work is needed on safety standards and evaluations to ensure frontier AI models are developed and deployed responsibly. The Forum will be one vehicle for cross-organizational discussions and actions on AI safety and responsibility.”
Creation of the group comes after seven major American artificial intelligence companies, including Google and Microsoft, promised the White House last week that their new AI systems will go through outside testing before they are publicly released.
Over the coming months, the Frontier Model Forum will establish an advisory board to help guide its strategy and priorities. The founding companies will also establish key institutional arrangements including a charter, governance and funding with a working group and executive board. “We plan to consult with civil society and governments in the coming weeks on the design of the Forum and on meaningful ways to collaborate,” Google said.
Other industry AI associations include the Partnership on AI, whose members include Apple, Amazon, Google, Meta, Microsoft and IBM, along with academic and nonprofit organizations.
In addition to lawmakers in Europe, the United States and Canada weighing AI regulation, the G7 nations this year created the Hiroshima AI process to harmonize AI oversight, and the Organization for Economic Co-operation and Development (OECD) has a working group on AI governance.
The Association for the Advancement of Artificial Intelligence (AAAI) works on the scientific side of AI. It will hold its next annual conference on AI and ethics in Montreal starting August 8th.