In a significant step toward international cooperation, the United States, Britain, and more than a dozen other countries jointly introduced what a senior U.S. official hailed as the first comprehensive international agreement outlining measures to ensure the safety of artificial intelligence (AI).
The focal point of the agreement is that AI systems should be “secure by design,” meaning companies must build security measures into AI technologies from the outset.
Outlined in a 20-page document revealed on Sunday, the 18 participating countries acknowledge the responsibility of companies involved in designing and deploying AI to prioritize the safety of customers and the broader public, safeguarding against potential misuse.
While the agreement is non-binding, it offers general recommendations, including monitoring AI systems for misuse, protecting data from tampering, and carefully vetting software suppliers.
Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, underscored the significance of the collective commitment, saying it marks a departure from a sole focus on market competitiveness and rapid deployment.
Easterly noted that the guidelines reflect a consensus that security must be paramount during the design phase of AI systems.
The agreement, albeit lacking enforcement mechanisms, aligns with a series of global initiatives to shape the trajectory of AI development.
Beyond the United States and Britain, signatories include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore.
The framework primarily addresses concerns related to preventing the hijacking of AI technology by hackers, advocating practices like releasing models only after thorough security testing.
Notably absent are discussions on the ethical uses of AI or the gathering of data that fuels these AI models.
The rise of AI has raised numerous apprehensions, ranging from its potential misuse in disrupting democratic processes to concerns about fraud and substantial job displacement.
While European lawmakers are taking the lead in drafting AI rules, the United States, despite efforts by the Biden administration, faces challenges in passing comprehensive AI regulation amid political polarization.
In October, the White House moved to mitigate AI risks through an executive order targeting consumer protection, worker rights, and national security.