Can AI or machine learning be used to effectively find good candidates for a job position?

In short, yes: AI, and even machine learning (a subset of AI), can be used to effectively find good candidates for a job. But not without a social, economic, and ethical price.

That said, neither technique will at present find all good candidates. Both will miss some good candidates, each will flag some poor candidates as potential good fits, and we are likely a long way from either being able to consistently and repeatably find the best candidate on its own.

Part of the challenge is the incomplete, imperfect data set available to the AI and ML engines. Part of the challenge is that the unconscious biases of their creators end up influencing the outcomes of such systems. And part of the challenge is that it has not been demonstrated that the economic benefits of improving such systems warrant the investment necessary (money, inconvenience, cultural and psychological impediments, and so on) to greatly improve them.

For example, if I told you my ML application could find you the top 99th percentile candidate but it would cost you a million dollars per candidate to produce that result, you might decide it wasn’t worth it.

If I told you my system could work for you, but we would need to monitor every spoken word, email, text message, and business outcome of existing staff, to evaluate and improve the ML system’s knowledge base of which candidates work out, you might worry about impacting current employees and workplace culture.

If I told you that resumes alone wouldn’t be enough to evaluate candidates, but I needed access to all of their social media accounts, bank accounts, credit history, family history, genetic makeup, medical history, voting history, reading history, television viewing history, internet browsing history, educational history, and so on, you might worry about candidates refusing to apply.

Yet these are some of the kinds of data (and associated “costs,” both monetary and social/ethical) that would be useful for an ML engine to have access to in order to get better than humans at sourcing candidates. And these are just a few examples, not an exhaustive list.

The good news is that over time, we may find that much of this doesn’t matter. But one of the dangers is that such a system will perpetuate existing social problems, in large part because its reasoning becomes tautological: it confirms that structural inequality is real and then reinforces it. There is a danger that this approach both finds good candidates and perpetuates historical biases.

Chances are good that the system would learn that people from wealthier families, with better health, with “better” education, who are physically, culturally, economically, and socially most like the people they interview and work with, tend to thrive and produce better results in the workplace. The already limited opportunities for social mobility could be further reduced. The existing mono-culture of corporate leadership in global companies could be further reinforced. And the disenfranchised could be further excluded from the “meritocracy,” which may increasingly be just a fancy word for “opportunities proportional to one’s position in society.”
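To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python. It uses synthetic data and hypothetical feature names (family_wealth, elite_school, skill) that are not drawn from any real hiring system; it simply shows how a model trained on historical "thrived here" labels can end up weighting a proxy for background over actual skill.

```python
# Toy sketch: a screening model trained on biased historical outcomes
# learns to rank candidates by a proxy for family background.
# All data and feature names here are synthetic assumptions, not a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Background attribute the model never sees directly.
family_wealth = rng.normal(size=n)

# Visible resume feature that happens to correlate with family wealth.
elite_school = (family_wealth + rng.normal(scale=0.5, size=n) > 0.8).astype(float)

# Actual job skill, independent of wealth in this toy world.
skill = rng.normal(size=n)

# Historical "thrived at the company" labels were partly driven by similarity
# to incumbents (proxied by elite_school), not only by skill.
thrived = (0.7 * skill + 1.5 * elite_school + rng.normal(size=n) > 1.0).astype(int)

# The model only sees resume features -- never wealth directly.
X = np.column_stack([elite_school, skill])
model = LogisticRegression().fit(X, thrived)

print("learned weights [elite_school, skill]:", model.coef_[0])
# The elite_school weight dominates, so the model effectively ranks candidates
# by a proxy for family wealth, reproducing the historical pattern it was shown.
```

In this toy setup, nothing in the code is malicious; the bias arrives entirely through the labels and the correlated proxy feature, which is the pattern described above.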

So maybe the most important question isn’t whether it can be done, but whether it should be.
