By Kate Wilson, Head of Community & Membership, Manchester Digital
The question of how to approach AI in coding interviews is one many employers are now grappling with, but very few feel they have fully answered.
This topic first came up in a conversation I had with Geraint North at Arm, who highlighted a tension that will feel familiar to many. Candidates are increasingly using AI tools in their day-to-day work and education, yet interview processes often still expect them to operate without them.
It felt like the kind of question that didn’t have a simple answer, but would really benefit from bringing people together. So we did exactly that.
We recently hosted a Manchester Digital member roundtable, bringing together employers to share how they are currently approaching AI use in technical interviews and, just as importantly, where they are still figuring things out.
The discussion brought together representatives from organisations including Arm, Auto Trader, Co-op, AJ Bell, the Information Commissioner’s Office, UK Biobank and others from across the Manchester Digital community.
What quickly became clear is that there is no single agreed-upon approach. Instead, there is a lot of thoughtful experimentation and a real openness to learning from each other.
A changing assessment landscape
For many organisations, the challenge is easy to describe but much harder to solve in practice.
AI tools are now widely accessible. Candidates may already be using them in their education, in current roles, or as part of their preparation for applications. In many workplaces, AI literacy is becoming an increasingly important capability.
Yet in interview settings, particularly technical assessments, employers are still working out what should and should not be allowed.
One theme that came through strongly was a growing lack of confidence in some traditional assessment methods. Take-home coding tasks, in particular, were described as increasingly difficult to evaluate fairly, especially where there is limited visibility into how much assistance a candidate has received.
At the same time, there was a clear sense that core technical ability still matters. Many organisations are not ready to move away from assessing fundamentals such as coding competency, reasoning and problem solving. The challenge is how to do that in a way that reflects the reality of modern engineering work.
Different organisations, different needs
One of the most useful parts of the discussion was hearing just how different approaches are depending on context.
For some organisations, particularly those operating in highly regulated or technically specialised environments, there is still a strong need to assess deep coding capability directly.
For others, especially in fast-moving digital settings, there is growing recognition that the role of the engineer is evolving, and that the ability to use AI effectively may itself become part of what good looks like.
There were also some very practical considerations shared. Recruitment volume, whether teams work remotely or in person, and how much time can realistically be spent reviewing technical exercises all play a role in shaping approach.
What this means in practice is that organisations are taking different paths. Some are currently not allowing AI use in interviews. Others are discouraging it, but without a formalised position. Some are starting to explore how AI might be incorporated in a more controlled and intentional way.
From banning AI to assessing judgment
One of the most interesting shifts in the conversation was a move away from simply asking “should we allow AI?” to something more nuanced.
A number of contributors suggested that the real question is how candidates use AI, rather than whether they use it at all.
That opens up a different way of thinking about assessment. Instead of focusing purely on prevention, there may be more value in understanding how someone thinks. Can they interrogate an AI-generated solution? Can they challenge it, sense-check it, and explain it? Can they spot where it might go wrong?
This feels much closer to how many teams are already working in practice.
It also reflects a broader shift that has been happening for some time. Several members spoke about the move away from testing for knowledge of a specific language, and towards assessing problem solving, communication and engineering judgement.
In that sense, AI is not creating an entirely new challenge, but accelerating a change that was already underway.
Fairness, transparency and candidate experience
Alongside the technical discussion, there was a lot of thoughtful reflection on fairness and candidate experience.
There was broad agreement that candidates need clear guidance in advance: not just on whether AI is allowed, but on what good use looks like in that context.
At the same time, there was a recognition that this is a delicate balance. Employers want to be transparent, but not create a sense of mistrust. They want to set guardrails, but avoid being overly rigid in an area that is still evolving.
It was also noted that guidance for interviewers is just as important as guidance for candidates, particularly when it comes to consistency and avoiding assumptions.
Where members seem to agree
While there were a range of perspectives, a few common threads stood out.
Most organisations are still in an exploratory phase. Very few feel they have fully “solved” this yet.
Many are, for now, sticking with approaches they trust, such as live exercises, pair programming or structured technical discussion, while starting to think about how these might evolve.
There was also a strong sense that before deciding how to assess, organisations need to be really clear on what they are assessing for. That might include technical depth, problem solving, AI literacy, communication or judgement, depending on the role.
And perhaps most importantly, there was a shared recognition that this conversation is not going away any time soon.
Continuing the conversation
One of the things I took away from this session is just how valuable it is to create space for these kinds of open, honest conversations.
No one had all the answers, but everyone had something useful to share. That is exactly what makes this community so valuable.
We will be continuing to run member-led discussions like this, creating space to explore the topics that don't yet have clear answers.
If there is something you are currently grappling with, or a topic you would find useful to explore with peers, I would genuinely love to hear from you.
________________________________________________
Comments shared during the session are anonymised and do not represent the views of specific individuals or organisations.
With thanks to Ed Kirby and Geraint North at Arm for helping to initiate this discussion and for sharing their perspective so openly.