Team Blitz India
In what is being considered the first detailed international agreement on guarding Artificial Intelligence (AI) from rogue actors, 18 countries have agreed that companies designing and using such technologies need to develop and deploy them in a way that keeps people safe from misuse.
The United Kingdom, the United States, Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore were among the countries that inked a 20-page document, unveiled on November 26. The effort is pushing for companies to create AI systems that are “secure by design”, said reports.
Series of initiatives
This is another step in a series of initiatives undertaken by governments across the globe to check the misuse of such technology – deepfakes, for example.
Several experts, including award-winning computer scientist Geoffrey Hinton – known as the ‘Godfather of AI’ – have been sounding the alarm over the threat such applications can pose to mankind.
Though the agreement, like some earlier initiatives, is neither binding nor enforceable against rogue actors, it is a welcome step towards bringing the world together in fighting the menace. Towards that end, the UK published the first global guidelines to ensure the secure development of AI technology, the country’s National Cyber Security Centre announced on November 27.
Cyber security levels
Agencies from 17 other countries have confirmed they will endorse and co-seal the new guidelines, it stated, adding that the guidelines aim to raise the cyber security levels of Artificial Intelligence and help ensure that it is designed, developed, and deployed securely.
In early November, countries and companies developing frontier AI agreed to a plan on safety testing at the UK’s Bletchley Park. They recognised that both parties have a crucial role to play in testing next-generation models to ensure safety – both before and after the AI models are deployed.