
“Approaching the summit” – what to expect from the UK’s upcoming AI Safety Summit

The conversation around artificial intelligence (AI) regulation is set to enter a “supercharged” phase globally. At least, that’s the conclusion one can’t help drawing from the hype around the UK’s AI Safety Summit, which takes place in early November, and from comments this week by Senate Majority Leader Chuck Schumer about upping the pace of AI regulation in the US.

So what can we expect from the next few months, and in particular from the upcoming UK AI Safety Summit?

The UK regulatory approach – getting one’s own house in order

So far, the UK has espoused a “light touch” approach to AI regulation, with a decentralised model that delegates oversight to sectoral regulators and which the UK government believes will promote innovation. In the coming months, UK regulators are expected to start issuing practical guidance setting out how organisations should implement the AI principles outlined in the government’s white paper on AI earlier this year.

However, an interim report from the House of Commons Science, Innovation and Technology Committee (SITC) on 31 August took aim at this “light touch” approach, raising concerns that the UK will be “left behind” in AI regulation if it does not act soon.

The report noted that if new legislation is not proposed in the King’s Speech on 7 November (which currently seems unlikely), the earliest any legislation could become law is 2025. In the intervening period, the EU’s AI Act might already have become the de facto regulatory standard, with UK law potentially forced to follow the EU’s lead. The SITC therefore advocates a “tightly focused AI Bill in the new session of Parliament”.

This puts the UK government in a difficult position. It seems unlikely it will deviate so early from its stated approach of avoiding explicit regulation and relying instead on existing regulators, such as the ICO, FCA and CMA, to create context-specific rules. Indeed, it seems especially unlikely that the UK will change tack so shortly before it hosts the AI Summit. 

So what next? 

The government is set to respond to the SITC’s interim report, and it is still due to publish its response to the AI white paper consultation (which closed on 21 June). If industry commentary is anything to go by, feedback from that process will include specific demands for more detail on how any proposed regime will ensure clear and consistent cross-sector principles, with tech providers presumably hoping for assurance that they will not need to take a significantly different approach with each regulator.

For now, though, the government’s immediate focus will be on the upcoming summit, where Prime Minister Rishi Sunak will be keen to promote both the UK’s AI credentials and its approach to AI regulation. It seems unlikely there will be much public movement on AI regulation in the UK until after the summit.

The perfect host?

In a statement on 4 September, the UK Department for Science, Innovation and Technology (DSIT) detailed the government’s ambitions for the summit. It refers to “frontier AI”, until now a largely academic term encompassing “highly capable foundation models that could exhibit dangerous capabilities”. We can infer that in using the term, DSIT means the generative AI foundation models, including large language models, that have been causing a stir since the launch of ChatGPT and that were a key focus of the UK government’s white paper.

The five objectives set out by the DSIT, which will frame discussion at the summit, are:

  • developing a shared understanding of the risks posed by frontier AI and the need for action;
  • promoting international collaboration on frontier AI safety, including how best to support national and international frameworks;
  • proposing measures that organisations should take to increase frontier AI safety;
  • identifying areas for potential collaboration on AI safety research; and
  • showcasing how the safe development of AI will enable AI to be used for good globally.

The DSIT notes the summit will agree “practical next steps” to address risks from frontier AI, writing that AI technology “poses… risks in ways that do not respect national boundaries. The need to address these risks, including at an international level, is increasingly urgent”. 

Expect a flurry of commentary from UK politicians in the run-up to the summit as they seek to promote the UK as an influential player in the AI space and the perfect host for such an international discussion on AI regulation.

Across the pond

Meanwhile, in the United States, Senate Majority Leader Chuck Schumer told Senate Democrats in a letter on 1 September of his plans for a series of bipartisan “AI Insight Forums”, expected to feature AI luminaries such as Sam Altman as well as big tech leaders like Elon Musk and Mark Zuckerberg.

Schumer notes that these forums will “supercharg[e] the Senate’s typical process so we can stay ahead of AI’s rapid development”. Underlining the gravity of the task, he continues: “we must treat AI with the same level of seriousness as national security, job creation, and our civil liberties”.

Another member of the Senate’s AI working group, Todd Young, noted separately that the AI Insight Forums will be a “comprehensive way for Congress to explore key policy issues… related to artificial intelligence as we develop potential legislative solutions”.

Highlighting a topic that has been the bane of the EU’s nascent AI Act, Republican Senator Ted Cruz raised concerns that “Democrats want to impose such stringent regulations on the development of AI that it stifles innovation” and, underlining the difficulty of international cooperation, referred to the risk that China may “take the lead” through its investment in AI.

A difficult balance

The difficulty everywhere, and something the UK’s AI Safety Summit will probably look to address, is finding a balance between overly stringent regulation, which may be too cumbersome to adapt as the technology evolves and may suppress innovation, and an overly laissez-faire approach, which may fail to properly account for, or guard against, the risks posed by AI.

The EU’s AI Act has come in for criticism from some quarters in this respect: an open letter from 150 executives at some of Europe’s largest companies expressed concern about the Act’s potential to “jeopardise Europe’s competitiveness and technological sovereignty” due to its overly restrictive nature. Sam Altman of OpenAI also initially noted that “the current draft of the AI Act would be over-regulating”, before later recanting and declaring that “most of the regulation…makes total sense”.

Elsewhere, the chair of the Japanese government’s AI strategy council, the University of Tokyo’s Professor Yutaka Matsuo, called the EU’s AI Act “a little too strict”, particularly in respect of its copyright provisions. Japan is expected to lean towards a lighter-touch approach to AI regulation, perhaps tacking closer to the US and UK than to the stringent regime in the EU’s draft AI Act.

On the other hand, certain leading voices have tipped the EU’s AI Act to become the global standard, particularly as compliance may be a prerequisite for AI firms to continue accessing the large and lucrative EU market. Paul Barrett of the NYU Stern School of Business told Tech Monitor that he believes the concept of “pro innovation” promoted in the UK, and apparently being considered in Japan and the US, “really means unregulated”, and argued that “regulation is not necessarily an obstacle to innovation”. Monish Darda, CTO of AI company Icertis, also noted that the draft EU AI Act “has potential” and that the world is “watching with hope, and expectations that the law will do well”.

What would a successful summit look like?

The UK may have its hands full in attempting to promote a global consensus on the specifics of AI regulation at the summit. Countries look set to continue pursuing their own agendas on AI regulation, and we may expect further doubling down on the competing structures already in evidence – such as the self-regulatory, decentralised approaches of the US, UK and Japan, and the prescriptive approach being put forward by the EU. There is also the issue of China, which is promoting a state-led, information-control model in which AI is required to promote “socialist values”. What that looks like, and how it contrasts with the approaches put forward elsewhere in the world, remains an open question.

Against this backdrop, any progress the summit can make towards promoting international collaboration and an international framework, or towards greater clarity on how to navigate between overly stringent and overly loose regulation, would be considered a success. All eyes will be on the UK in early November to see what consensus the summit might be able to achieve.  
