Key Points

  • The UK government plans to host a global AI safety summit this autumn.
  • OpenAI, Google DeepMind, and Anthropic have pledged to provide early access to their AI models for UK AI safety research.
  • The UK government will allocate £100 million for an expert taskforce focusing on AI foundation models.

AI Giants Back UK’s Efforts in Advancing AI Safety Research

The UK government, under Prime Minister Rishi Sunak, announced at the opening of London Tech Week that OpenAI, Google DeepMind, and Anthropic will grant early access to their AI models for research into evaluation and safety. This commitment is part of a broader push to place the UK at the forefront of AI safety research and regulation, underpinned by plans for a global AI safety summit later in the year and £100 million in funding for an expert taskforce focused on AI foundation models.

Recent Shifts in AI Safety Stance

In recent weeks, Sunak’s government has significantly shifted its approach to AI, moving from a pro-innovation stance to a greater focus on safety and regulation. As recently as March, the UK government promoted an approach favouring innovation and downplayed safety concerns. However, rapid advancements in AI technology and warnings from industry leaders about potential risks have prompted a change in Downing Street’s strategy.

Industry Involvement

Following meetings between Sunak and the CEOs of OpenAI, DeepMind, and Anthropic, these major AI players committed to providing early access to their models. This access may allow the UK to take the lead in developing effective evaluation and audit techniques before legislative oversight regimes mandating algorithmic transparency are established elsewhere. However, the arrangement also raises concerns about industry capture and the potential for these tech giants to shape future AI regulation.

Real World AI Concerns

While AI safety discussions are often dominated by concerns about hypothetical “superintelligent” AI threats, AI ethicists have long stressed the immediate harms of existing AI technologies, such as bias, discrimination, privacy abuses, copyright infringement, and environmental exploitation. For its AI safety regulation to be robust, the UK government will need to involve independent researchers, civil society groups, and communities disproportionately at risk from automation.