Team Blitz India
The UK Artificial Intelligence (AI) Safety Institute's evaluations platform was made available to the global AI community on May 10, paving the way for the safe innovation of AI models.
It is the first time that an AI safety testing platform spearheaded by a state-backed body has been released for wider use, according to a media release from the Department for Science, Innovation and Technology, and the AI Safety Institute.
By making ‘Inspect’ available to the global community, the Institute is helping accelerate the work on AI safety evaluations being carried out across the globe, leading to better safety testing and the development of more secure models. This will allow for a consistent approach to AI safety evaluations around the world, the statement added.
Inspect is a software library which enables testers – from start-ups, academia and AI developers to international governments – to assess specific capabilities of individual models and then produce a score based on their results.
It can be used to evaluate models in a range of areas, including their core knowledge, ability to reason, and autonomous capabilities. Released under an open-source licence, Inspect is now freely available for the AI community to use.
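The workflow described above can be sketched in miniature: an evaluation harness runs a model over a set of task prompts, grades each answer against a target, and aggregates the results into a score. The sketch below is purely illustrative and is not Inspect's actual API; the names `run_model` and `EVAL_SET` are hypothetical stand-ins.

```python
# Illustrative sketch of a capability evaluation: run a model on task
# prompts, grade each answer, and report an aggregate score.
# Names here (run_model, EVAL_SET) are hypothetical, not Inspect's API.

EVAL_SET = [
    {"prompt": "What is 2 + 2?", "target": "4"},
    {"prompt": "Name the chemical symbol for gold.", "target": "Au"},
]

def run_model(prompt: str) -> str:
    """Stand-in for a call to the model under test."""
    canned = {"What is 2 + 2?": "4",
              "Name the chemical symbol for gold.": "Ag"}
    return canned.get(prompt, "")

def evaluate(samples) -> float:
    """Grade each answer against its target; return accuracy in [0, 1]."""
    correct = sum(run_model(s["prompt"]) == s["target"] for s in samples)
    return correct / len(samples)

print(f"score: {evaluate(EVAL_SET):.2f}")
```

In a real framework the scorer would typically be pluggable (exact match, model-graded, and so on) rather than a hard-coded comparison, which is what lets one harness cover areas as different as core knowledge and autonomous capabilities.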
Alongside the launch of Inspect, the AI Safety Institute, Incubator for AI (i.AI) and Number 10 will bring together leading talent from a range of areas to rapidly test and develop new open-source AI safety tools.
Open-source tools are easier for developers to integrate into their models, giving them a better understanding of how they work and how they can be made as safe as possible, said the official release.