Monday 31 March 2014

The Itchy Security Blanket of Recruitment Metrics

The rise of more intuitive technology enabling the recruitment process has had an interesting corollary: a rise in an organisation's ability to collect and report data connected to that process.  An increasingly data-driven, programmatic approach to recruitment can do much to aid in the design and selection of a recruitment strategy.  Seemingly small changes can be tracked to measure their impact on the success or failure of a decision.

The growth in our ability to collect these metrics has been matched by a hunger for them across the whole stakeholder set.  Once a hiring manager has seen a report that gives seemingly scientific insight into the hiring process, it will be almost impossible to revert to something which grants them less insight.  I'm not advocating that we take metrics away from these managers; rather that we give them the access and supply the relevant context.  The greatest danger of data collection lies not in the information, but in its interpretation.

So what metrics are appropriate to measure?  What metrics can offer us certainty without falling into the traps of selection or confirmation bias?  There are already plenty of hyperbolic blog posts like "The Top 10 Metrics You Must Have" or "7 Recruitment Metrics to Win", and these miss the point.  The metrics of recruitment are best used for experimentation - tied to the continuous improvement of the team.  If you are producing metrics that will sit unopened in a spreadsheet to appease a hiring manager, you are guilty of security blanket metrics.  Whilst you will feel all warm and fuzzy because you can prove that some *thing* is happening, they will be of no real practical value - like butterflies pinned to a board under glass, nice to look at but not useful.

So what's the alternative?  When done correctly, the term "metrics" is a misnomer.  The gathering of data around recruitment will give you a dataset which you can use to provide insight into historical performance and to measure the specific efficacy of the projects the team undertakes.  In this way it's possible to see results in real time - does that new advert copy lead to more applications? You can see that! Which website is best to advertise on? You can test that! Did that rival company's announcement affect your response rate? You'll be able to see!  Did adding that photo of a cat to your website make it better?  Of course it did! You don't need metrics to tell you that!
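To show what I mean by experimentation rather than report-filing, here's a minimal sketch of the advert copy question.  All the numbers and the function name are made up for illustration; the point is simply that once the dataset exists, asking "did the new copy really do better, or did it just look better?" is a quick calculation rather than a gut feel.

```python
# A minimal, illustrative sketch: did the new advert copy genuinely lead to
# more applications? A simple two-proportion z-test on application rates.
from math import sqrt, erf

def application_rate_test(views_a, applies_a, views_b, applies_b):
    """Return the two application rates and a two-sided p-value for the difference."""
    rate_a = applies_a / views_a
    rate_b = applies_b / views_b
    pooled = (applies_a + applies_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (rate_b - rate_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return rate_a, rate_b, p_value

# Hypothetical numbers: old copy seen 1200 times (54 applications),
# new copy seen 1150 times (78 applications).
old_rate, new_rate, p = application_rate_test(1200, 54, 1150, 78)
print(f"old copy: {old_rate:.1%}, new copy: {new_rate:.1%}, p-value: {p:.3f}")
```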

What can't metrics do?  Predict the future.  In many of the articles I've read about recruitment metrics I've seen a large number of lofty claims about prediction, made without noting the limitations of the dataset we have access to.  It's the measurement of this dataset that will deliver the most business value, not fortune-teller-style inference of outcomes.  Statements like "we had 1000 applicants in 2013, so this year we will have 1500" are always going to be more wishful thinking than informed prediction.  Metrics can help in planning for the future, but knowing the limitations of the basis of those predictions is key.  If we aren't aware of the limits of prediction we risk undoing the good that data can do and reaching for the crystal ball.
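To make that concrete, here's a tiny sketch using entirely made-up applicant counts.  Even a few years of history gives a spread of year-on-year changes, and that spread is a range, not the single confident number the "1000 therefore 1500" style of claim implies.

```python
# Purely illustrative: single-point extrapolation vs. the range that a
# (hypothetical) history of applicant counts actually supports.
from statistics import mean, stdev

yearly_applicants = [620, 740, 690, 1000]  # hypothetical counts, 2010-2013

growth = [b / a for a, b in zip(yearly_applicants, yearly_applicants[1:])]
avg_growth, spread = mean(growth), stdev(growth)

last = yearly_applicants[-1]
print(f"naive 'last year plus 50%': {last * 1.5:.0f}")
print(f"history-based range: {last * (avg_growth - spread):.0f} "
      f"to {last * (avg_growth + spread):.0f} applicants")
```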

In a future post I'll list the what and why of the metrics I like to measure, both for tracking team performance and individual performance within the team.  Hopefully you'll recognise it's a list high on building a dataset with experimentation in mind and low on fluffy feel-goods and blame-dodging.
