(VOVWORLD) - The United States, Britain, and more than a dozen other countries on Sunday unveiled the first detailed international agreement to keep artificial intelligence safe from rogue actors, urging companies to create AI systems that are "secure by design."
In a 20-page document, 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.
The agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering, and vetting software suppliers.
The agreement is seen as giving concrete form to the call made by British Prime Minister Rishi Sunak at the AI Safety Summit 2023, held for the first time earlier this month in the UK.
“Until now, the only people testing the safety of new AI models have been the very companies developing them. That must change. We must work together on testing the safety of new AI models before they are released, because safely harnessing this technology could eclipse anything we have ever known,” said Sunak.
The framework addresses how to keep AI technology from being hijacked by hackers, and includes recommendations such as releasing models only after appropriate security testing. It does not tackle thornier questions about the appropriate uses of AI, or how the data that feeds these models is gathered.
In addition to the US and Britain, the 18 countries that signed on to the new guidelines include Germany, Italy, Nigeria, and Singapore.
The rise of AI has fed a host of concerns, including the fear that it could be used to disrupt the democratic process, turbocharge fraud, or lead to dramatic job loss, among other harms.