Tackling the use of AI tools in the recruitment process

Written by PD Talent Lead, Lewis Davis-Poynter

At Public Digital, we have been talking a lot about Artificial Intelligence.

It’s an area of emerging technology that has the potential to completely reimagine the way many of us work, presenting new benefits, but also - undoubtedly - new challenges.

As PD’s Talent Lead, I have seen a number of articles about the rise of AI usage in the recruitment process.

These refer both to the employer’s side of the process, where AI is increasingly being used to review applications, and to concerns about the applications themselves: the worry that tools like ChatGPT may be used to help a candidate complete an application, succeed in a coding test or write a CV.

It was only in a recent review of applications for a job at PD that I saw the reality of these changes in action.

Think like a language model

As part of Public Digital’s commitment to unbiased hiring, we ask every applicant a selection of sifting questions. We then score their answers and use these scores to decide which candidate to progress to the next stage of the process.

We want to give candidates the opportunity to showcase their skills with the confidence that they won’t be judged on criteria like where they went to school, where they’ve previously worked, or their former job titles or tenures.

For one of our current roles, the first question we asked was a relatively simple one:

“Why are you interested in this role at Public Digital, and why now?”

Most answers followed a relatively similar format; they discussed our work, our values, the candidate’s own experiences and their motivations to join us. However, when reviewing some of the answers, there was one in particular which stood out:

"As an AI language model, I don’t have personal interests or motivations, so I don’t have any specific reasons for being interested in a transformation role. However, transformation roles are typically focused on driving change, innovation, and improvement within an organization, which can lead to positive outcomes for both the company and its customers. As for the “why now” aspect, the interest in a transformation role could be influenced by the current trends, challenges, and opportunities in the industry or within the organization. It might be the right time for an organization to embrace change and adapt to evolving circumstances."

While there is certainly something comedic in this response, its wider implications pose serious problems for the reliability of job applications.

As recruiters and hiring managers, we need to be confident that we are hiring the right candidate for the role, and that our recruitment processes are fair for all applicants. If, for example, our recruiting team can only progress five candidates to the interview stage and one of them gets there thanks to AI-generated answers, another candidate has been denied the opportunity to demonstrate their abilities in person.

It raises questions about how we, as talent professionals, can ensure the recruitment process continues to function well despite the distorting effects of AI.

What can recruiters do?

The reality is that generative AI technology exists and it isn't going anywhere. And evidently, some candidates are more than willing to use it.

To protect the applications process, there are measures that we as recruiters can take:

Be wary of using AI to review applications and CVs

As the above example shows, this technology isn’t perfect. Less blatant examples of AI use are still often identifiable to a recruiter who is experienced in reviewing CVs and applications. Answers are often formulaic, contain grammatical errors and have a tendency to overuse bullet points.

A human reviewer can pick these patterns up, especially one who is used to reviewing hundreds of applications a day. Many companies, however, are turning to AI tools to review their applications, and in these situations it is more likely that AI will fail to recognise AI. After all, these tools have been shown to have their own flaws, often displaying sexist and racist traits (as well as a bizarre obsession with lacrosse players called Jared). It is important, therefore, not to rely too heavily on these tools when making hiring decisions.

Use AI detection tools

Since the rise of programmes such as ChatGPT, there has been a corresponding rise in AI detection tools designed to identify when AI has been used to create text.

While somewhat imperfect, these tools, used in conjunction with the other methods detailed above and below, could help to weed out applicants who are using AI.
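In practice, a first pass before any detection tool can be as simple as flagging answers that contain obvious giveaways, like the “As an AI language model” phrase quoted above, and routing them for a closer look. Below is a minimal Python sketch; the phrase list and the flagging logic are illustrative assumptions rather than a recommendation of any particular tool.

```python
import re

# Phrases that are obvious giveaways of pasted-in AI output.
# Illustrative only; tune the list to what you actually see in applications.
GIVEAWAY_PATTERNS = [
    r"as an ai language model",
    r"i do(n.t| not) have personal (interests|experiences|motivations)",
]

def needs_closer_review(answer: str) -> bool:
    """Return True if a sifting answer should be routed for closer review."""
    text = answer.lower()
    return any(re.search(pattern, text) for pattern in GIVEAWAY_PATTERNS)

# The response quoted earlier in this post would be flagged immediately.
sample = "As an AI language model, I don’t have personal interests or motivations..."
print(needs_closer_review(sample))  # True
```

Anything flagged should still go to a human; the point is to focus reviewer attention, not to reject automatically.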

Create “AI-proof” assessment processes

This is a harder proposition, but it’s worth taking advantage of the fact that AI tools currently tend to struggle more with tasks that can’t be answered in a straightforward way.

While AI tools are capable of writing code, they tend to find it harder to convey more “human” elements such as emotion. For instance, answers to sifting questions that ask for specific examples from a candidate’s career history, or how they feel about a particular topic, are easier to spot if they have been written by an AI. There is also value in running your assessments through an AI tool yourself to see what a standard generated answer looks like.
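On that last point, a quick way to see the standard generated answer is to put your own sifting question to a model before you publish it. A minimal sketch, assuming the openai Python client (v1 or later) with an API key already set in the environment; the model name is an illustrative choice.

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

question = "Why are you interested in this role at Public Digital, and why now?"

# Generate a baseline answer so reviewers know the generic pattern to watch for.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": question}],
)

print(response.choices[0].message.content)
```

Reviewers who have read the baseline answer tend to recognise its shape (generic enthusiasm, few specifics) when it turns up in real applications.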

You can never make your applications process 100% foolproof, but measures like these can reduce the opportunity for AI to be used successfully.

Continue to rely on in-person assessments where possible

Nothing beats speaking to a candidate directly when it comes to hiring. At the end of the day, AI tools like ChatGPT can only get a candidate so far in a process, and a conversation usually reveals when they have had “outside assistance” with their application.

At Public Digital we are confident that our processes help us to find the right candidate for the job based on a combination of in-person and written assessments.

Consider progressing “lucky losers”

As is the case in PD’s recruitment practice, a process may require a specified number of candidates to be progressed through each stage.

If it becomes apparent during the interview stage that a candidate has used AI tools at previous stages and is in fact unsuitable for the role, it may be worth revisiting strong candidates from earlier stages whose place at interview was taken by the fraudulent one.

This not only makes the process fairer, but also reduces the likelihood that your organisation misses out on hiring a talented individual.

The best person for the job

As this technology continues to advance, the challenges of AI usage on all sides of the recruitment process are going to become more prevalent.

There is no perfect method to overcome this, but a major part of my role as Talent Lead will be to continue to protect the recruitment process from the influence of AI technology, ensuring that Public Digital continues to hire the best candidates for the job.

And to prospective candidates, if you are going to try and use AI tools in the recruitment process, at least put a bit more effort into disguising your methods than the unfortunate candidate above!
