According to Cointelegraph, artificial intelligence firms OpenAI and Anthropic have agreed to grant the US AI Safety Institute early access to any significant new AI models they develop. While the agreement is driven by mutual safety concerns, the government's exact role in the event of a significant technological breakthrough remains ambiguous.
OpenAI CEO and cofounder Sam Altman emphasized the importance of this agreement in a recent post on the X social media platform, stating, “We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models. For many reasons, we think it's important that this happens at the national level. US needs to continue to lead!”
Both OpenAI and Anthropic are focused on developing artificial general intelligence (AGI), a form of AI capable of performing any task a human can, given the necessary resources. Each company has its own charter and mission aimed at creating AGI safely with humanity's interests at the core. Should either company succeed, it would become a key gatekeeper of this advanced technology. By agreeing to share their models with the US government before any product launches, both companies have effectively transferred some of this responsibility to federal authorities.
As Cointelegraph recently reported, OpenAI may be nearing a breakthrough with its “Strawberry” and “Orion” projects, which are said to possess advanced reasoning capabilities and new methods for addressing AI's hallucination problem. The US government has reportedly already reviewed these tools and early iterations of ChatGPT incorporating them.
A blog post from the US National Institute of Standards and Technology indicates that the safety guidelines associated with this agreement are voluntary for the participating companies. Proponents of this light-touch regulatory approach argue that it encourages growth and allows the sector to self-regulate. Supporters, including Altman, view this agreement as a model for how cooperation between the government and the corporate sector can be mutually beneficial.
However, there are concerns about the potential lack of transparency that light-touch regulation allows. If OpenAI or Anthropic achieves its AGI goals and the government decides the public does not need to be informed, there appears to be no legal requirement for disclosure.