Team Blitz India
While advanced artificial intelligence (AI) can be used to boost wellbeing, prosperity, and scientific breakthroughs, like all powerful technologies its current and future developments could result in harm, according to recent research supported by over 30 nations, as well as representatives from the EU and the UN.
The first iteration of the International Scientific Report on the Safety of Advanced AI, published on May 17, sets out the impact the technology could have if governments and wider society fail to deepen their collaboration on its safety.
The report's development was one of the key commitments to emerge from the Bletchley Park discussions at the AI Safety Summit, forming part of the landmark Bletchley Declaration. It points out how malicious actors can use AI to spark large-scale disinformation campaigns, fraud and scams. Future advances in the technology could also pose wider risks such as labour market disruption, economic power imbalances and inequalities.
According to the report, there is no universal agreement among AI experts on a range of topics, including the state of current AI capabilities and how these could evolve over time. It also explores differing opinions on the likelihood of extreme risks to society, such as large-scale unemployment, AI-enabled terrorism, and a loss of control over the technology. Experts broadly agree, however, that improving our understanding of these risks must be a priority, and that the future decisions of societies and governments will ultimately have an enormous impact.
Initially launched as the State of Science report last November, the report unites a diverse global team of AI experts, including an Expert Advisory Panel drawn from 30 leading AI nations, as well as representatives of the UN and the EU, to bring together the best existing scientific research on AI capabilities and risks.
The report aims to give policymakers across the globe a single source of information to inform their approaches to AI safety.