Tech heavyweights are joining forces on artificial intelligence. Yesterday, Anthropic, Google, Microsoft and OpenAI announced the creation of the Frontier Model Forum, a body intended to ensure the “safe and responsible development of AI models.”
The group will focus on coordinating safety research and articulating best practices for so-called “frontier AI models” — those that exceed the capabilities of existing state-of-the-art models. Specifically, the Frontier Model Forum aims to promote safety research and provide a channel of communication between industry and policymakers.
As Axios recalls, the four are among seven companies that committed to an agreement with the White House to minimize AI risks, conduct further AI safety research, and share safety best practices, among other pledges.
At the same time, the Financial Times points out that similar groups already exist. The Partnership on AI, of which Google and Microsoft were also founding members, was formed in 2016 with members from across civil society, academia and industry and with a mission to promote the responsible use of artificial intelligence.
What will change with the Frontier Model Forum? Some critics are skeptical, pointing to the absence of concrete goals or measurable outcomes in the four companies’ commitments.
What the Frontier Model Forum will do
The Frontier Model Forum will also work with policymakers and academics, facilitating information sharing between companies and governments.
“The body,” the announcement explains, “will draw on the technical and operational expertise of member companies to benefit the entire artificial intelligence ecosystem,” for example by advancing technical evaluations and developing a public library of solutions to support best practices and industry standards.
The Frontier Model Forum’s goals
Among the Forum’s stated goals:
- Advance AI safety research.
- Identify best practices for responsible development.
- Collaborate with policymakers, academics, civil society and companies to share knowledge about safety risks.
- Support efforts to develop applications that help address society’s greatest challenges.