Artificial intelligence in your business: the legal landscape

Although there isn’t a single definition of AI, it may generally be thought of as a machine’s ability to perform the cognitive functions we usually associate with human minds, such as visual perception, speech recognition, decision-making and language translation.

We can think of AI as falling into three main categories. Traditional AI has been in development since the 1950s and sits behind many banking and finance applications, as well as targeted advertising. Generative AI can generate text, images, speech, code and even product designs. Artificial general intelligence (AGI) doesn’t yet exist; the aim is to create intelligence equal to, and beyond, that of humans.

AI is a simulation of human intelligence. It isn’t actually intelligent in the way a human is, but it behaves as if it were. We should treat it as if it were intelligent, while recognising that it is fallible.

Regulating AI

Governments and other lawmakers are taking one of two main approaches to regulating AI, although there is overlap between them.

The first says that AI is so significant that dedicated laws are needed to govern the technology. The second says that the economic potential of AI is vast and that premature, heavy-handed regulation will stifle innovation.

The UK takes the second approach. The Government is keen to provide a light-touch framework with regulatory muscle behind it, giving innovation a chance to flourish and passing legislation only where it must.

The UK has a national AI strategy and a series of deliverables that flow from it. The Government’s white paper on AI, ‘A pro-innovation approach to AI regulation’, sets out five cross-sector principles:

  • Safety, security and robustness
  • Transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

Instead of creating a standalone AI regulator, the Government proposes to expand the remit and capacity of existing regulators to develop a principles-based, sector-specific approach.

The EU is an example of the first approach. At the time of writing, the EU AI Act has been agreed in principle and is nearing the finishing line. Assuming it comes into force, there will then be transition periods ranging from six months to three years before its various parts become applicable. The thrust of the legislation is to treat AI as a product: if it poses a high risk to humans, it must be certified as safe before it is placed on the market, and then continually reassessed to confirm it remains safe while on the market. The legislation will prohibit some uses of AI considered to pose an unacceptable level of risk, and will impose specific transparency requirements on general-purpose AI (including generative AI).

The EU AI Act is also expected to have extra-territorial effect, applying not only to providers and users in the EU but also to those outside it where the AI system or its output is used, or intended to be used, in the EU. Therefore, if your AI system or its output is available in the EU, you may be caught by the legislation when it becomes applicable.

AI and your organisation

If you use AI, or are considering introducing it, it’s important to understand the regulatory framework in which you operate. You should weigh up your organisation’s risk appetite, establish what your AI requirements are and decide where the use of AI is necessary and proportionate.

If you would like advice on your use of AI, please contact judy.baker@wardhadaway.com or another member of our team. The next article in this series will focus on AI and data protection.
