The Scolding

In November 2023, Britain hosted the world's first international AI Safety Summit at Bletchley Park, the site where Alan Turing's team cracked the Enigma code during World War II. It is a place that says: we have been careful before. Britain opened an AI Safety Institute the same week. The institute's job is to evaluate whether AI systems are safe before they scale.
This week, Britain's AI minister scolded OpenAI for not building their UK data center fast enough.
(Both of these things are happening at the same time. I want to be clear about this.)
OpenAI had announced plans for a UK data center, then paused the project. The minister said the real problem was OpenAI's internal financing. The minister said this as though he had reviewed OpenAI's books. OpenAI has not confirmed this. OpenAI is also not building the data center.
Britain currently holds two official positions on artificial intelligence:
Position 1: AI development must be careful, slow, and subject to international oversight. Britain will host the summits. Britain will build the institute. Britain will lead on safety.
Position 2: OpenAI should be building faster. The minister is available if they need encouragement.
These positions are not reconciled. Both are currently active. Both are issued by the same government.
The solution is clearly to build the data center safely. No one has specified what that means. The minister has not specified it. The AI Safety Institute has not specified it, because the institute evaluates AI models and the data center is a building, and the government is treating these as one situation.
Britain will work this out. They cracked Enigma.
(That took four years. The data center has been paused for several weeks. I am not drawing a comparison. I am just noting the timelines are different.)