The fight for the soul of the European labour market is underway. As more decision-making power is handed to technologies such as AI-based algorithms, stakeholders across the board are becoming more conscious and curious about the intricacies of these technologies. The recent surge in demand for software-based recruitment tools is driving a paradigm shift in hiring. It is a shift that offers a genuine silver lining, but also risks a grim future if appropriate safeguards are not put in place. The significance of this new paradigm resonates across the fabric of EU society, affecting industries and income brackets alike and touching on hot-button socio-political issues.
The Evolution of Recruitment Processes in the EU Job Market
Advances in artificial intelligence and machine learning are fueling the widespread adoption of software for recruiting junior-level personnel. From filtering CVs to rating candidates' technical and non-technical skills, and even conducting online interviews, these robo-recruiters are being put to use at many stages of the recruitment process. The EU is thus on the cusp of an era in which job seekers may have to learn to impress machines rather than HR managers.
This trend has accelerated over the past few years, accompanied by a plethora of tech startups offering customisable software for recruitment tasks. In Germany, for instance, a study found that 7 out of every 10 of the top 1,000 companies have expressed interest in developing and integrating automation technologies into their recruitment processes. In a similar study by CareerBuilder, a talent software company, some 55% of human resources managers in the US said they believe AI will become a standard part of their toolbox within the next five years.
Recruitment software can mean anything from a simple algorithm for filtering search results to a sophisticated AI chatbot that rates CVs, organizes interviews, and generates a collection of insightful information about candidates. This software is often capable of analyzing CVs and cover letters, scheduling trial and interview sessions, rating candidates based on their level of education, technical or professional experience, soft skills, etc., and also guiding candidates closely through the recruitment process.
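At the simple end of that spectrum, a CV filter can be little more than weighted keyword matching with a cutoff. The sketch below is purely illustrative: the keywords, weights, and cutoff are invented for a hypothetical junior developer role and do not reflect any real product's logic.

```python
# Illustrative rule-based CV screener: score each CV against weighted
# keywords, then shortlist candidates who clear a minimum-score cutoff.
# All criteria here are hypothetical, chosen only for the example.

KEYWORD_WEIGHTS = {   # assumed criteria for a junior developer role
    "python": 3,
    "sql": 2,
    "teamwork": 1,
    "internship": 1,
}
CUTOFF = 4

def score_cv(text: str) -> int:
    """Sum the weights of the keywords found in the CV text."""
    words = text.lower().split()
    return sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in words)

def shortlist(cvs: dict) -> list:
    """Return candidate names whose CVs meet the cutoff, best first."""
    scored = {name: score_cv(text) for name, text in cvs.items()}
    passed = [n for n, s in scored.items() if s >= CUTOFF]
    return sorted(passed, key=lambda n: scored[n], reverse=True)

cvs = {
    "Ana": "Python and SQL internship with strong teamwork",
    "Ben": "Retail experience and teamwork",
}
print(shortlist(cvs))  # Ana scores 3+2+1+1=7 and passes; Ben scores 1
```

Even this toy version shows why the choice of keywords and weights matters: the shortlist is entirely determined by what the designer decided counts as "good."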
The final decision is always left to humans, typically after the software has worked through a long list of tasks to narrow the field of candidates. This saves time for recruiters and candidates alike. Evidence also suggests that these systems can be less prone to bias than humans in hiring decisions. Much of that evidence comes from a study published by the National Bureau of Economic Research, a US-based research organisation, which found that respondents recruited through a software-based process held their jobs 15% longer than those hired through a process without algorithms.
However, the extent to which this software should be allowed to intervene in the recruitment process is currently a hot topic for debates.
The Pitfalls of Automation in the Recruitment Process
Experts disagree on where to draw the line for machine intervention in the recruitment process. Critics argue that automation might actually reinforce the very flaws that undermine a credible recruitment process. Their concerns centre on two main points:
The Loopholes for Bias and Discrimination
Discrimination against job candidates remains a reality in European job markets, as many recruiters are still prejudiced against people with whom they share little or nothing in common. In the 2015 Barometer survey by the International Labour Organization, for instance, 85% of polled job seekers said they believed recruitment bias was still commonplace in France.
Many studies show that machines have a lower tendency towards bias than humans, but that does not rule out bias on a machine's part. A system is only as good as its programming and can only identify the best candidates based on what it has been designed to recognise as “good.” Even if its design does not openly encode discriminatory criteria, its machine learning capabilities can pick up on the dominant characteristics of an existing workforce, which may themselves be the product of bias.
A case in point is the recruitment algorithm Amazon deployed a few years ago. The software drew inferences from data amassed from existing employees over the previous ten years. Since those employees were mostly Caucasian males, the system struggled to assess candidates in a gender-neutral, colour-blind way, downgrading CVs that featured words and phrases relating to women. The backlash and scrutiny from stakeholders were such that the program was abandoned entirely in 2017, despite developers' efforts to correct it.
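The mechanism behind this kind of failure is easy to reproduce in miniature. The sketch below uses fabricated toy data (assumed purely for illustration, not Amazon's actual data or method): token weights are computed as smoothed log-odds of appearing in hired versus rejected CVs, so a word that merely co-occurred with past rejections ends up penalised, even though it says nothing about job performance.

```python
# Minimal sketch of how a model trained on past hiring outcomes absorbs
# historical bias. The "history" below is fabricated toy data skewed the
# way a male-dominated workforce would skew it.
import math
from collections import Counter

history = [  # (CV text, hired?)
    ("captain of chess club strong coder", True),
    ("lead developer strong coder", True),
    ("strong coder systems experience", True),
    ("captain of women's chess club strong coder", False),
    ("women's coding society strong coder", False),
]

hired = Counter(w for text, h in history if h for w in text.split())
rejected = Counter(w for text, h in history if not h for w in text.split())

def weight(word: str) -> float:
    """Smoothed log-odds of the word appearing in hired vs rejected CVs."""
    return math.log((hired[word] + 1) / (rejected[word] + 1))

# "women's" only ever appeared alongside rejections in the training data,
# so the learned weight is negative, while neutral skill words stay positive.
print(weight("women's") < 0, weight("coder") > 0)
```

Nothing in the code mentions gender as a criterion; the penalty emerges entirely from correlations in the historical outcomes, which is exactly why such systems can look impartial on paper.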
In other, similar cases the tendency to discriminate is not as blatant, and companies wrongly assume their recruitment systems are impartial.
The Loss of the ‘X’ Factor of Human Touch
Building relationships is a crucial part of talent acquisition. In a 2016 survey by the American Staffing Association, 77% of respondents who had searched for a job in the previous five years said human interaction is a crucial factor in the recruitment process. The absence of a human touch through most of the process can disenchant candidates: machines lack the emotional intelligence and language skills that humans wield, which can leave job seekers with a negative impression.
HR departments should therefore be wary of relying too heavily on machine-generated data. Human agents still need to engage deeply and apply their instincts and judgment throughout the recruitment process to reach credible decisions.
The Benign Potential of Robo-Recruiters
By staying alert to these downsides, companies can realise the benign potential of automation in their recruitment processes. The emerging workforce of millennials and Gen Z is already at home with computing, even in job hunts: most search and apply for jobs on social media and other online platforms from their mobile devices, and they are comfortable with video interviews and online tests. As a result, the recruitment process is gradually taking less time.
All of this suggests that automation and algorithms are nothing strange to the emerging dominant demographic in the EU's labour market. Companies can therefore use these technologies to gather data on recruitment criteria such as productivity, retention potential, soft skills, and cultural fit. AI gives algorithms the capacity to draw insights from dense volumes of data, such as speech patterns, facial expressions, and word choices. The outcome can pay off handsomely by saving time for recruiters and candidates alike, so long as the software is built correctly.
Kate Glazebrook, chief executive of Applied, a hiring platform, has pointed out that one of the common sources of bias in robo-recruiters is the use of “proxies for quality” such as education. She says the remedy for this is evidence-oriented methods. “In general, the more you can make the hiring process relevant, the more likely that you will get the right person for the job,” she noted.
The Need for Transparency
As people grow more conscious of the effects of digitisation, policymakers across Europe are coming under increasing pressure to oversee the role of robo-recruiters in the EU's job markets. Calls for data privacy laws have intensified over the past few years, touching on the protection of human rights.
Legislation is needed that sets out a guiding framework for developing and deploying recruitment algorithms. Lawmakers should put policies in place to ensure the transparency of AI-based recruitment algorithms, and such laws should mandate regular testing of these algorithms for bias by independent observers.
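One concrete test an independent auditor could run is a selection-rate comparison across demographic groups. The sketch below applies the "four-fifths rule" from US hiring guidance, used here only as an illustrative threshold with hypothetical audit data: a group is flagged when its selection rate falls below 80% of the best-off group's rate.

```python
# Illustrative adverse-impact check (four-fifths rule) on a screening
# tool's outcomes. The group names and counts are hypothetical audit data.

def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, applicants); returns group -> rate."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact(outcomes: dict, threshold: float = 0.8) -> list:
    """Return groups whose selection rate is below threshold * best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
print(adverse_impact(outcomes))  # group_b: rate 0.30 < 0.8 * 0.50
```

A check like this is deliberately crude; it cannot prove an algorithm is fair, but run regularly by outside observers it can flag systems that deserve closer scrutiny, which is precisely the transparency such legislation would aim for.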
The use of robo-recruiters must be made as transparent as possible, which implies more inclusive development and implementation processes that incorporate input from a wider range of stakeholders.
Talent acquisition is one of the most sensitive aspects of business to automate. Companies should therefore tread carefully when deploying these technologies, with the emphasis on upholding credibility throughout the recruitment process rather than merely cutting down human time.