If I were a computer, I wouldn’t hire me.

I’m glad I’m not looking for a job and it’s not five years in the future. If I were and it was? I think I’d be in trouble.

Both AI recruitment technology and an influx of “behaviour testing” have people scrambling to find relevance.

And although culture-based hiring is my business and passion, AI hiring tools are not the type of thing I expected to wake me up in the middle of the night. Profit, pipeline and real progress – maybe.

Turns out – as with most of our elevated responses – it’s personal. I simply wouldn’t be easily hired.

Shorter than preferred job stints. A CV definitely lacking in supposedly helpful ‘keywords’. A degree that doesn’t directly correlate. And then there’s the time I was profiled as an “overly aggressive sales candidate who is unlikely to share success with others”.

I think my former sales managers will attest that if anything, they’d prefer I be more aggressive on the sales front and less concerned with enabling wider success. At any rate, I missed out on the job, but have a humorous story I still tell today.

Data interpretation is completely subjective at its origin.

Just because a computer is running it doesn’t mean it’s factual and free of bias.

Way back at the inception of this technology, there were still people creating the parameters and assessment factors. And whilst they may be behavioural experts or otherwise, they should be questioned (hard) on their own methods, assumptions, demographic surveys and – frankly – how wide their community input really was.

I don’t remember being surveyed about my motivations, career movements or “why” before any of this technology was created. Do you?

What can we trust at the moment?

I can’t vouch for it from personal use, but I was initially impressed when I met representatives from “Fama” at the recent Culture First conference in San Francisco.

Far from making assumptions based on job moves or formal psychometric responses, they use real data straight from the horse’s mouth. Trawling social media profiles – our very own words – they flag risks with potential employees when it comes to sexism, bigotry and crime.

Again – the data used to determine parameters will always have a subjective component, but it’s “real time” and creates a more trustworthy story of how we conduct ourselves as people and 24/7 individuals.

Apart from this, you can use chatbots and automated screening tools. But they’re really just there to speed things up, not to revolutionise ‘culture first’ hiring.

How do we navigate all the new “stuff”?

With extreme caution, I’d suggest.

And I’m hardly jumping on the AI discussion for exposure. I do this work day in, day out, and can attest that around 70% of the CVs I review would be overlooked by technology or a standard recruitment keyword search.

Of the remaining 30%, at least half of the “perfect backgrounds” fall short when reviewed for culture add, values and collaborative techniques.

I love technology, but it has to be relevant. We’re less likely to question almighty data, yet we can still trace it back to its personal origin.

Be less impressed and more curious.

Kim

(PS – one of the places “automation” can be useful is in standard communication. But it can also be abused. To see how we navigate this when hiring, have a read of “Use Your Voice”.)