Global Coalition Elevating Cybersecurity
A coalition of nations, comprising the United States, United Kingdom, Australia, and 15 additional countries, has issued extensive guidelines to strengthen AI models against tampering.
Emphasising a "secure by design" approach, the coalition's 20-page document, unveiled on 26 November, urges the rapidly evolving AI industry to treat cybersecurity as a first-order concern.
This joint effort reaffirms the nations' mission to protect critical infrastructure and reinforces the importance of international partnership in securing a shared digital future.
Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly said:
"The release of the Guidelines for Secure AI System Development marks a key milestone in our collective commitment—by governments across the world—to ensure the development and deployment of artificial intelligence capabilities that are secure by design. As nations and organizations embrace the transformative power of AI, this international collaboration, led by CISA and NCSC, underscores the global dedication to fostering transparency, accountability, and secure practices. The domestic and international unity in advancing secure by design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology revolution."
Guidelines for Practical Security
The guidelines provide practical recommendations, urging strict control over AI model infrastructure, continuous monitoring for potential tampering, and enhanced cybersecurity training for personnel.
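The guidelines themselves stay at the level of principles rather than code. Purely as an illustrative sketch of what "continuous monitoring for potential tampering" can look like in practice, the Python snippet below compares deployed model files against known-good SHA-256 digests; the file paths and hash values are hypothetical placeholders, not taken from the coalition's document.

```python
import hashlib
from pathlib import Path

# Known-good digests recorded when the model artefacts were approved.
# Paths and hashes are placeholders for illustration only.
EXPECTED_SHA256 = {
    "models/classifier-v3.bin": "e3b0c44298fc1c149afbf4c8996fb924"
                                "27ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large model files never sit fully in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_artifacts(expected: dict[str, str]) -> list[str]:
    """Return the artefacts whose current hash no longer matches the recorded one."""
    tampered = []
    for rel_path, known_hash in expected.items():
        path = Path(rel_path)
        if not path.exists() or sha256_of(path) != known_hash:
            tampered.append(rel_path)
    return tampered

if __name__ == "__main__":
    flagged = check_artifacts(EXPECTED_SHA256)
    if flagged:
        print("Possible tampering detected:", ", ".join(flagged))
    else:
        print("All model artefacts match their recorded hashes.")
```

A periodic check of this kind is only one narrow example; the guidelines frame tamper monitoring more broadly, alongside infrastructure access controls and staff training.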
Notably absent from the guidelines are discussions of contentious AI issues, such as controls on image-generating models, deepfakes, and the methods of data collection used in model training.
The coalition recognises the multifaceted challenges posed by AI and the importance of ensuring security without stifling innovation.
Global AI Safety Alignment
U.S. Secretary of Homeland Security Alejandro Mayorkas emphasised the role of cybersecurity in building safe and trustworthy AI systems:
"We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy."
These guidelines align with recent governmental efforts to address AI safety, including the AI Safety Summit in London and the European Union's ongoing development of the AI Act.
In October, U.S. President Joe Biden issued an executive order setting standards for AI safety and security, sparking industry debates about potential impacts on innovation.
Beyond the United States, United Kingdom, and Australia, the coalition's members include Canada, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, South Korea, and Singapore, among others.
Leading AI firms, including OpenAI, Microsoft, Google, Anthropic, and Scale AI, have actively contributed to shaping these guidelines.
The guidelines emphasise collaboration between governments and industry in ensuring the responsible development and deployment of AI technologies.