AI Safety Institute will make UK a ‘global hub’, Rishi Sunak says

Rishi Sunak has launched the UK’s AI Safety Institute, hailing it as an indicator of the UK’s position as a “global hub” for AI safety.

The organisation, first announced by the Prime Minister last week, will test new artificial intelligence models developed by leading firms before they are launched to the public as part of a new deal struck at the AI Safety Summit at Bletchley Park.

Closing the two-day summit in Milton Keynes, Mr Sunak said the “historic agreement” meant the public would no longer have to rely on tech firms “marking their own homework” when it came to safety testing.

The agreement is currently voluntary. The Prime Minister said "binding requirements" are likely to be needed to regulate the technology eventually, but that for now it was right to move quickly without legislation.

The organisation will also test AI models after they are released to address any potentially harmful capabilities, the Government said, exploring risks ranging from social harms such as bias and misinformation to more extreme dangers such as loss of human control.

The new institute, which is backed by AI firms including Google DeepMind and ChatGPT developer OpenAI, will work closely with the Alan Turing Institute, the UK's national hub for data science and AI, and will also partner with similar international bodies, including a newly announced safety body in the US and the government of Singapore.

The expanded powers of the institute come after the 28 countries in attendance at the AI Safety Summit – including the US and China – signed the Bletchley Declaration, pledging to develop safe and responsible AI models, while acknowledging the risks around the technology.

The Prime Minister confirmed that Turing Award-winning AI academic Yoshua Bengio would lead the creation of the first frontier AI state-of-the-science report to provide a scientific assessment of existing research into artificial intelligence and set out priority areas for more focus.

The Prime Minister said the plan had been inspired by a similar approach to agreeing global action on climate change, and confirmed that every country at the summit had agreed to nominate an expert to join the report’s global panel.

The findings of the report will be presented at future AI safety summits, with South Korea set to host a virtual mini-summit in the next six months, before the next in-person event takes place in France a year from now.
