
Rules of engagement: do you need an AI policy?

In short, yes. But that answer would make this a really brief blog 😉.

With the UK Government estimating that one in six UK organisations – nearly half a million companies – have embraced at least one AI technology, the chances are your employees are already using it, whether officially or unofficially.

One thing we discovered when we started talking to the team about using AI in our daily work was the variety of ways it was already being used to improve what we do and offer to clients. But when we came to write a policy back in the spring, we appeared to be ahead of the game: it was hard to find any information on what a policy should contain.

So I did what anyone would do: I got ChatGPT to write me one. It wasn’t perfect, but it did identify the key areas we needed to address. And while we’re a B2B PR agency, most of these apply to all businesses, so I thought I’d share some of our lessons here.

What we learnt

Firstly, identify where and how you’re using AI (and where you may use it in the future). Most AI tools are there to help you be more productive, but they’re also incredibly helpful for problem solving, language understanding and learning – so if you’re doing any of those, it’s likely AI is involved somewhere.

Then think about the AI journey in your organisation. Treat this as setting the rules of engagement: defining where AI can bolster your efforts and where it might fall short. In effect, you’re mapping out a strategy that highlights the areas where AI can add efficiency and those where it can’t.

What needs to go in it

Like any other internal or external policy, you need to think about how AI is interacting with the organisation. Specifically, consider:

Data security: do you hold sensitive or confidential data on behalf of clients? If so, you probably protect it through NDAs. Those protections need to extend to AI-driven processes too, to ensure you remain GDPR-compliant.

Ethical issues: AI could be used to manipulate or deceive the public, misrepresent information, or engage in activities that could damage your company’s reputation or integrity. Be clear on how it can and should be used, as well as where it shouldn’t.

Transparency and accountability: consider mandating that employees should clearly disclose the involvement of AI technology where relevant. This will help build trust with customers.

Bias and fairness: much has been written about the potential for bias in AI. Make sure employees are aware of the risks and take the steps needed to mitigate them.

Compliance and legal considerations: given how few AI-specific laws currently exist, it’s important to remind staff to seek help from line managers when uncertain about the ethical or legal implications of AI usage. After drafting our AI policy, I ran it past our legal advisors – and we were apparently the first company to do so.

AI is multifaceted and, outside of a policy, it’s worth staying informed about the latest developments in AI and their impact on your organisation. Consider regular training sessions and updates on AI-related issues to promote ongoing learning and improvement.

Remember, though, that as AI continues to develop at a rapid pace, this won’t be the last policy you need to write on the subject!
