Anthropic, Google, Microsoft and OpenAI team up in the Frontier Model Forum for safe AI development. A bluff?

Elizabeth Smith

Tech heavyweights are joining forces on artificial intelligence. Yesterday, Anthropic, Google, Microsoft and OpenAI announced the establishment of the Frontier Model Forum, a body created to ensure the “safe and responsible development of AI models.”

The group will focus on coordinating safety research and defining best practices for so-called “frontier AI models,” those that exceed the capabilities of today’s most advanced systems. The Frontier Model Forum will aim to promote safety research and provide a channel of communication between industry and policymakers.

As Axios notes, the four companies are among seven that committed to an agreement with the White House to minimize AI risks, conduct further AI safety research, and share safety best practices, among other pledges.

At the same time, the Financial Times points out that similar groups already exist. The Partnership on AI, of which Google and Microsoft were also founding members, was formed in 2016 with members drawn from civil society, academia and industry, and with a mission to promote the responsible use of artificial intelligence.

So what will the Frontier Model Forum actually change? Some critics wonder, pointing to the absence of concrete goals or measurable outcomes in the four companies’ commitments.

Read also: Will there really be common rules between the EU and the US on artificial intelligence?

What the Frontier Model Forum will do

The Frontier Model Forum will also work with policymakers and academics, facilitating information sharing between companies and governments.

“The body,” a note explains, “will draw on the technical and operational expertise of member companies to benefit the entire artificial intelligence ecosystem,” for example by advancing technical evaluations and developing a public library of solutions to support best practices and industry standards.

The Frontier Model Forum’s goals

The Forum has four stated goals: promoting AI safety research; identifying best practices for the responsible development of frontier models; collaborating with policymakers, academics, civil society and business to share knowledge about safety risks and opportunities; and supporting efforts to develop applications that can help address society’s greatest challenges.

Read also: ChatGPT worries Biden: CEOs of Microsoft, Google and OpenAI summoned to the White House
