
The UK Approach to Generative AI - taking an ‘LLM’ in AI Regulation

#AI

Two months ago, we published an article speculating on how the UK government might regulate generative AI such as OpenAI’s ChatGPT and Google’s Bard, as part of its broader approach to AI regulation in the UK.

On 29 March 2023, the government unveiled its White Paper entitled ‘A pro-innovation approach to AI regulation’. We’ve taken a closer look at the approach outlined in the White Paper to examine how the regime will deal with so-called “large language models”, which are the basis of most current generative AI platforms.

Generative large language foundation AI models? – Distilling some terms

Unlike the government’s July 2022 Policy Paper, the White Paper directly references generative AI, and specifically how the regulatory framework will apply to ‘foundation models’. Indeed, a ‘Foundation Model Taskforce’ will support the government in assessing foundation models to ensure the UK ‘harnesses the benefits’ as well as tackles their risks. Foundation models are AIs trained on huge quantities of data, and are often used as the base for building generative AI – models that can, with some degree of autonomy, create new content such as text, images and music. Citing the fast-paced development of foundation models as “bring[ing] novel challenges for governments seeking to regulate AI”, the government notes that “given the…transformative potential of foundation models, we must give careful attention to how they might interact with our proposed regulatory framework”. The Paper pays special attention to large language models (LLMs), a type of foundation model trained on text data. Being trained on huge quantities of text is what allows LLMs like ChatGPT or Bard to function as generative AI.

Avoiding ‘rigid’ definitions

In our previous article, we discussed how, rather than attempting to define AI, the initial Policy Paper set out two characteristics by which regulators could assess AI risks: (i) adaptiveness, i.e. an AI’s ability to learn, and (ii) autonomy, i.e. an AI’s ability to operate and react in situations in a way humans might struggle to control. The White Paper retains these characteristics and argues that by avoiding “rigid legal definitions”, the government is “future-proof[ing] [its] framework against unanticipated new technologies that are autonomous and adaptive”. It is hard not to see this as a reference to the LLMs and generative AIs that have garnered such a surge in interest since the publication of the Policy Paper in 2022. Indeed, the White Paper explicitly states that LLMs fall within the scope of the regulatory framework as they are autonomous and adaptable, and notes that the government is “mindful of the rapid technological change in the development of foundation models such as LLMs”.

Recognising the possible risks posed by LLMs, such as their potential to disseminate misinformation, amplify biases, and create security threats, the Paper notes that foundation models (including LLMs) require “a particular focus”, especially regarding the potential challenges they pose around accountability. It is particularly concerned about the accountability of open-source foundation models, such as Meta’s LLaMA, an LLM whose code Meta is sharing ostensibly to allow researchers to “test new approaches to limiting or eliminating…problems” such as the risk of bias[1]. The Paper notes that open-source models “can cause harm without adequate guardrails”, as they limit regulators’ ability to monitor AI development. One proposal it offers is for LLMs to be regulated according to the quantity of data on which they are trained, so that any LLM trained on data above a certain threshold would be subject to review by regulators. Open-source AI would likely circumvent this measure given its public availability and open-door approach to development.

The proposed regulatory framework

So how is the government proposing to regulate LLMs? The White Paper highlights that the variety of approaches towards developing foundation models complicates their regulation. Perhaps as a result, the Paper proposes what appears to be a relatively fluid regulatory ‘framework’, intended to be ‘proportionate and pro-innovation’, to identify and address risks around AI. Regulators are given leeway to issue specific risk guidance and requirements for developers and deployers of LLMs, including around transparency measures. For instance, regulators may choose to mandate specific privacy requirements. This might be analogous to the precedent set when Italy’s Data Protection Authority temporarily banned ChatGPT until OpenAI could demonstrate compliance with stipulated requirements[2]. Such an approach also aligns with calls from academics to focus regulation on AI’s “high-risk applications rather than the pre-trained model itself”[3], including obligations regarding transparency and risk management. 

The creation of ‘central functions’ is another of the White Paper’s proposals. Intended to support regulators in delivering the framework, these will bring together a wide range of interested parties at a central level, including industry and academia. In particular, the central risk function will propose mechanisms to coordinate and adapt the framework to deal with AI risks in close cooperation with regulators. Crucially, in respect of LLMs and generative AI, the central risk function will involve ‘horizon scanning’ to monitor emerging trends in AI development and ensure that the framework can respond effectively. While the Paper acknowledges that the wide-ranging application of LLMs means they are unlikely to fall directly within the remit of a single regulator, the central risk function is designed to mitigate this challenge, notably by supporting smaller regulators who lack in-house AI expertise. This suggests the government may take a more active role in central monitoring and evaluation of LLMs than of other AI platforms, particularly regarding LLMs’ accountability and governance.

The White Paper proposes statutory reporting requirements for LLMs over a certain size, and calls out ‘life cycle accountability’ as a priority area for research and development. On this, the Paper recognises the difficulty in allocating ownership and accountability, particularly in light of complex supply chains, but suggests that regulators should allocate legal responsibility on the basis of the varied roles of actors in an AI’s life cycle. How the relevant UK regulators choose to reconcile these often complex lines of responsibility with a clear allocation of accountability remains to be seen.

A work in progress

Although the White Paper does not set out a definitive regulatory regime for generative AI, the flexibility of its proposed framework aims to ensure the regulatory environment adapts over time with the technology. The government seems at pains to emphasise that regulation should remain “proportionate”, a fundamental principle of its stance on AI regulation since the Policy Paper. For foundation models, the government acknowledges that risks can vary hugely depending on how the AI is deployed. Clearly, the risks will be significantly higher where a generative AI is providing medical advice than where a chatbot is summarising an article. However, while the latter scenario may not present the same immediate and obvious risks, it may hint at wider concerns around how generative AI will deal with intellectual property. On this, the Paper defers to the Intellectual Property Office, which has been tasked with providing clearer guidance.

Looking ahead

A consultation on the White Paper, through which the government plans to engage with stakeholders on its proposals, is open until 21 June. In the meantime, implementation of the framework will proceed in parallel with the consultation. After this, the framework may evolve further, with the intention of designing an AI Regulation Roadmap and analysing research findings on potential barriers and best practices. The Paper states a longer-term aim to deliver all central functions, publish a risk register and an evaluation report, and update the AI Regulation Roadmap to assess the most effective oversight mechanisms.

Against the backdrop of recent developments such as the Future of Life Institute’s call for a halt to advanced AI development pending implementation of adequate safety protocols[4], Italy’s temporary ban of ChatGPT, and recent US Congress hearings at which OpenAI actively invited greater regulation, governments across the globe are under increasing pressure to adequately regulate LLMs and ensure interoperability of regulation on an international scale (something we discussed in our article Transatlantic discord?). Although the White Paper talks of “a clear opportunity for the UK to lead the global conversation and set global norms for the future-proof regulation of foundation models” and of the intention to “establish the UK as an AI superpower”, the UK’s approach towards regulating LLMs is still very much a framework, and much remains to be clarified.

Our previous article predicted that the drafters of the White Paper would have their work cut out keeping up with developments around generative AI, and noted that regulators would need to remain flexible and collaborative (including on an international basis) to ensure adequate regulatory certainty for providers and protection for end users. It is encouraging therefore to see the UK government making a strong case for such a collaborative approach, acknowledging that LLMs and generative AI don’t recognise borders and present risks that will continue to develop, and that ongoing feedback from industry will be invaluable. The White Paper recognises, too, that the government may need to take a central role. It remains conceivable, however, that the sheer complexities of regulating AI might result in that role becoming more extensive and prescriptive than the government currently anticipates. 

 
