
The Generation Game: Points to consider when leveraging generative AI in business

Many of us will by now have played around with ChatGPT, DALL·E and other generative AI products. We will have seen how powerful and impressive these tools are. We have probably also wondered exactly where this is all heading, what it means for our business, and how quickly things will change. While the answers to the first two questions are open to speculation, the third is easier: things are moving fast.

What is also clear is that tools of this nature are great servants but dangerous masters. Tempting as it might be to “plug and play” these tools into a business, once experimentation with generative AI moves beyond the test bench and into the real world of developing products and services for sale, things get real very quickly. Organisations could be storing up significant problems if they don’t handle this in the right way.

The issue is simple to state: as generative AI becomes embedded in the toolset of everyday work, organisations must take a moment to think about what they are ingesting and outputting. The ‘winners’ will be the businesses that implement structures ensuring they can harness the benefits of AI without exposing themselves to undue liability and risk.

Where a business fails to do this, there is a danger that generative AI will be used within that organisation as a short cut to producing plausible but unreliable outputs, which are then let loose in the wider world with barely a second thought.

In concrete terms, organisations should be mindful of the following key risks:

  • Reputational damage – using generative AI that creates biased or low-quality outcomes for customers runs the risk of severe reputational harm.
  • Project delays – using generative AI without proper monitoring gives rise to a meaningful risk that projects relying on it will need to be scrapped or redone, with all the cost and time implications this entails.
  • Loss of valuable corporate information – colleagues entering text into these tools in breach of normal rules on the use of corporate information risk the loss of valuable corporate data.
  • Reliance risk – using untried, unverified technologies might at best cause embarrassment and at worst lead to liability for decisions based on errors. At least in their current form, many generative AI tools are capable of (convincingly) presenting inaccurate information as if it were fact. As a result, any output needs to be carefully fact-checked and reviewed.
  • IP infringement – generative AI can create new content, code, text and images in a heartbeat, but how do you know whether this infringes third-party intellectual property rights?
  • Regulatory breaches – there is an emerging body of worldwide regulation with which all uses of generative AI will need to comply. This covers acceptable and unacceptable use cases, the information that must be given to customers when AI is used, and even registration requirements. Any use outside the terms of these regulations runs the risk of landing an organisation with regulatory fines or compensation claims from customers who have been impacted.
  • Security and data – generative AI products ingest sensitive and personal data from within organisations as well as from outside sources. The risk of data loss or misuse is significant if this use is not properly understood and managed. In addition, any sharing of content containing personal data with such generative AI systems is likely to fall foul of privacy laws.
  • Contractual breaches – everything from confidentiality clauses to subcontracting requirements could be breached if an organisation seeks to use cloud-based generative AI systems as part of its delivery model to customers.
  • Availability and service levels – while ‘professional’ versions of some of these tools are being released with availability commitments and service levels, it would be dangerous to try to build a business function around the continued availability of free tools which might easily be offline at critical periods.
  • Industrial relations issues – if employees see AI being used, this might give rise to questions about roles and security of employment.

Given these risks, what then should be the focus for organisations in the first part of 2023 when considering the use of generative AI? In our view it is all about good governance.

  • Ensuring that any new product or service using generative AI is subject to proper scoping and risk/benefit assessment, so that it can be used in a compliant way and the necessary safeguards are understood.
  • Taking time to understand how the system works, its capabilities and its limitations. How has it been trained? How up to date is the training data set? What biases have been (potentially inadvertently) encoded in the AI by that training data?
  • Carrying out testing on the product or service before it is launched, and on an ongoing basis throughout its lifetime, to make sure it is operating as intended and within the scope of the law.
  • Ensuring that there is human oversight of the output of generative AI before it is embedded in a product or service.
  • Putting in place safeguards against the product or service being used in a way that was not intended when initially launched.
  • Ensuring all regulatory requirements are met, including, where necessary, record keeping, audit trails and product registration.

This doesn’t mean that organisations should slow down their exploration and use of generative AI. What it does mean is that, in parallel with that exploration, organisations must ensure they have the right checks and balances in place to use it safely.

If you’d like to know more about any of the issues raised, and the steps you should be taking, please visit our website.
