18 Nations Sign Non-Binding Agreement to Make AI 'Secure By Design'
NBC reports that the United States, along with more than a dozen other countries, has unveiled the first detailed international agreement aimed at regulating artificial intelligence.
One senior U.S. official said the agreement aims to push companies to develop AI systems that are "secure by design" from rogue actors.
The 18 countries agreed in a 20-page document that companies need to develop and deploy AI products in a way that keeps customers and the public safe from misuse.
"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, said via NBC.
Easterly added that "the most important thing that needs to be done at the design phase is security."
Signatories of the non-binding agreement include the U.S., Britain, Germany, Italy, Estonia, Poland, Australia, Chile, Israel, Nigeria, Singapore and the Czech Republic.
NBC reports that the guidelines include protecting AI models from being hijacked by hackers and subjecting them to thorough security testing.
While the Biden administration has pushed for lawmakers to regulate AI, polarization in Congress has resulted in little progress toward effective regulation.
In October, Biden signed an executive order seeking to reduce the risks AI systems pose to consumers, workers and minority groups. France, Germany and Italy recently signed a separate agreement centered on "mandatory self-regulation through codes of conduct" for AI models.