Google, Microsoft, and others launch new forum to promote responsible AI development

Google, Microsoft, OpenAI, and Anthropic have come together to address potential risks and challenges of “frontier AI models”.


Key Takeaways

  • Google, Microsoft, OpenAI, and Anthropic have come together to address the challenges and potential risks posed by “frontier AI models” through the creation of the Frontier Model Forum.
  • Frontier models are more advanced than existing AI models and can perform a wide range of tasks, raising concerns about their alignment with human values and the public interest.
  • The Forum’s core objectives include developing best practices, sharing information, advancing AI safety research, and collaborating with stakeholders to build consensus on AI development issues. Membership is open to organizations committed to responsible AI development.

History tells us that when companies, even competitors, share an interest in addressing a challenge, they work together to address it. For Google, Microsoft, OpenAI, and Anthropic, the common concern bringing them together is the set of challenges and potential risks posed by what these companies call “frontier AI models”. In a joint statement, the four announced the “Frontier Model Forum,” a new industry body to promote the safe and responsible development of frontier AI models.

Chatbots and AI services like ChatGPT, Bing Chat, and Bard specialize in performing specific tasks and have not reached human-level intelligence. But there is a growing fear that such systems could soon gain sentience and become powerful enough to act against the larger public interest. As explained in the joint statement, frontier models are large-scale machine-learning models whose capabilities far exceed those of existing models and that can therefore perform a wide variety of tasks. Given the risks involved, the Forum wants to put robust mechanisms in place to ensure these models are aligned with human values and serve the best interests of people.

The Forum’s core objectives are to identify best practices for developing and deploying AI models, to facilitate information sharing among companies and governments, and to advance AI safety research. That safety research will include testing and comparing different AI models and publishing the benchmarks in a public library, so that every key industry player can understand the potential risks and act to mitigate them.

The Frontier Model Forum will establish an Advisory Board made up of members from diverse backgrounds and perspectives to set its strategy and priorities. In addition, the founding companies will create institutional arrangements that allow all stakeholders to collaborate and build consensus on issues concerning AI development. The Forum will also draw on the knowledge and resources already contributed by civil society groups, researchers, and others, and it will learn from and support existing AI initiatives, such as the Partnership on AI and MLCommons, to help put their guidelines into practice.

The Frontier Model Forum also offers membership to organizations firmly committed to developing and deploying frontier AI models responsibly.