
Regulating AI: Where should the line be drawn?

The promise of artificial intelligence (AI) is undeniable. From medical science to fraud detection, automation to risk management, AI has the potential to reshape how we live and work for the better. But like any powerful technology, it carries risk. Used well, it can unlock extraordinary efficiency and insight. Used irresponsibly, it could do significant harm.

The question isn’t whether AI should be regulated; it’s how. The challenge lies in finding the right balance between encouraging innovation and ensuring accountability. Regulating development too tightly could slow progress, but leaving use entirely ungoverned would be equally risky.

The real opportunity is to define clear, proportionate rules for how AI is applied, so progress and protection advance together.

By focusing on how technologies such as AI are applied, as industries like gambling, banking and financial services already do, we can protect people without slowing progress.


Finding the right balance between innovation and oversight

At its core, AI is software: algorithms and machine-learning models trained on vast collections of data. Its ‘intelligence’ lies in its ability to interpret and connect information, not in any innate consciousness or autonomy – at least not yet. In that sense, it’s another technology platform. One that will, in time, become as familiar and indispensable as the cloud.

When cloud computing first appeared, it felt revolutionary. Now, it’s business as usual. AI will likely follow a similar path, evolving from something extraordinary into something everyday – an essential tool we draw on when we need it.

To reach that point, the UK must find the right balance between encouraging innovation and ensuring accountability. Overregulating AI’s development could risk slowing progress, especially when global competitors like the US and China view technological leadership as a strategic imperative. But equally, some degree of oversight and coordination is necessary to build public trust and ensure ethical standards keep pace with innovation.

The emphasis, therefore, should be on enabling the ecosystem to thrive through investment in research, compute capacity and digital skills, while shaping proportionate safeguards that prevent harm without constraining discovery.


The challenge of getting regulation right

The UK Government’s latest proposal, the AI Regulation Bill, suggests creating a centralised AI Authority to oversee standards and enforcement across all sectors. This offers clear advantages, including consistency, transparency and public trust in how AI is governed. A single authority could help close regulatory gaps, align ethical standards and establish a clear national position on responsible AI.

But centralisation comes with challenges of its own. Achieving the depth of technical understanding required across such a wide range of domains, from financial fraud detection to healthcare diagnostics, would be immensely complex. The risk is a lowest-common-denominator approach that protects no one adequately and constrains innovation unnecessarily.

AI isn’t used the same way in every industry, so it’s almost impossible for one rulebook to fit all. Each sector faces different risks and realities. That’s why specialist regulators work well. They understand their markets and can set rules that protect people while supporting innovation. AI used to verify a player’s age on a betting platform isn’t the same as AI making lending decisions in banking or diagnosing a tumour in healthcare.

Equally, trying to police how AI is built is difficult. The data these systems train on is vast, often public or legitimately purchased, and constantly shifting. Regulating this would mean a government agency attempting to audit petabytes of data and proprietary algorithms – a bureaucratic and technical challenge, and one whose rules would quickly become obsolete as the technology evolves.


Regulating use, not creation

For me, the most practical answer lies in balance: government should set the framework, but industries should define the specifics. Leave development to innovators, but regulate use at an industry level, guided by shared national principles.

Every sector already has its own regulator setting boundaries for safe and responsible practice. This model works well in mature, highly regulated industries such as gambling and financial services. However, not all sectors have the same depth of oversight or compliance culture. That’s where coordination at a national level could add real value, aligning standards without dictating innovation.

The principles are already there in the form of safeguarding customers, protecting data and ensuring fairness, so we’re not reinventing the wheel. Take age verification as an example. In theory, AI could help confirm a customer is over 18, but only if it meets strict accuracy standards.
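
To make that idea concrete, here is a minimal, hypothetical sketch of what ‘meets strict accuracy standards’ could mean in practice: before a regulator certifies an age-verification model, its results on a labelled test set would have to clear agreed thresholds. The function name, metrics and thresholds below are illustrative assumptions, not any regulator’s actual criteria.

```python
# Hypothetical certification check for an AI age-verification model.
# Thresholds and names are illustrative assumptions, not real regulatory limits.

def meets_certification_bar(predictions, labels,
                            min_accuracy=0.99,
                            max_minor_pass_rate=0.001):
    """predictions/labels are booleans: True means 'verified as over 18'."""
    total = len(labels)
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / total

    # The riskiest error: a minor the model wrongly clears as an adult.
    minors = [(p, y) for p, y in zip(predictions, labels) if not y]
    minor_passes = sum(p for p, _ in minors)
    minor_pass_rate = minor_passes / max(len(minors), 1)

    return accuracy >= min_accuracy and minor_pass_rate <= max_minor_pass_rate


# Example: evaluate a model's decisions on a small labelled sample.
predictions = [True, True, False, False, True]
labels      = [True, True, False, True,  True]
print(meets_certification_bar(predictions, labels))  # False: accuracy too low
```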

The gambling industry already uses random number generators (RNGs) to ensure games are fair. We must prove those systems work by putting millions of test spins through them. Why not apply the same logic to AI? Let us prove it works. Certify it independently. Regulate its use, not its existence. That kind of collaboration between industry and regulator creates accountability without killing innovation.
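
As a rough illustration of the kind of evidence involved, the sketch below runs simulated spins through a stand-in RNG and applies a chi-square goodness-of-fit test for uniformity. It is a simplified assumption of how such testing might look, not the actual certification procedure any operator or regulator uses.

```python
import random
from collections import Counter
from scipy.stats import chisquare  # assumes SciPy is available

# Stand-in for a game RNG: a fair six-position reel (illustrative only).
def spin():
    return random.randint(0, 5)

N = 1_000_000  # "millions of test spins", as described above
counts = Counter(spin() for _ in range(N))
observed = [counts[i] for i in range(6)]

# Chi-square goodness-of-fit test against a uniform distribution.
statistic, p_value = chisquare(observed)
print(f"chi-square: {statistic:.2f}, p-value: {p_value:.3f}")

# A certifying body might require results like this to stay within agreed
# bounds across many independent runs before approving the RNG for live use.
if p_value < 0.01:
    print("Spin distribution deviates significantly from uniform")
```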


Lessons from the internet

When regulation is done right, it doesn’t just set boundaries; it sparks innovation and forces businesses to find smarter, safer ways to achieve the same outcomes.

Take the Online Safety Act, for example. It’s a well-intentioned law – an attempt to protect vulnerable users in a rapidly evolving digital world – but it shows how difficult it is to strike the right balance. By applying broad age restrictions to almost everything, it limits access to useful content while doing little to stop bad actors.

The lesson isn’t that government should stay out of regulation – far from it – but that regulation is most effective when shaped with industry input and technical expertise. When government sets the goals and regulators and companies collaborate on how to achieve them, we get both protection and progress. That’s what good regulation does: it sets the goal, not the method, and trusts businesses to develop the technology that delivers it.


A practical framework

So, what would a balanced framework look like? In my view, it should be layered and collaborative:


  1. Government-led principles – national laws should focus on outcomes such as privacy, transparency and accountability, establishing a shared ethical baseline.
  2. Industry-led regulation – sector regulators (like the Gambling Commission or FCA) work directly with businesses to define safe, effective use cases and testing standards.
  3. Collaborative oversight – regulators and companies jointly test and certify use cases, sharing evidence on what works and where the risks lie.


Ultimately, neither government nor industry can manage this alone. Government provides the framework; regulators and businesses supply the expertise. Together, they can ensure innovation happens safely, transparently and for the public good.

AI is a phenomenal tool, no more inherently good or evil than a hammer. We don’t regulate the manufacture of hammers; we regulate their responsible use. Similarly, we must regulate the application of AI within established industry contexts, ensuring guardrails evolve at the same pace as the technology.

Regulate the outcomes, not the invention, and we can strike the right balance between progress and protection.


This article originally appeared in Information Security Magazine; all rights reserved.
