Psychology has its own theories that arguably explain human behaviour in ways that seem impossible to disprove: ingroup-outgroup bias, cognitive-dissonance resolution, attribution error. They seem, intuitively, like the perfect explanations for why we prefer people from the same group as ourselves, how we justify our own stupid actions to ourselves, and how we err when explaining the causes of events. And like the theory of confirmation bias, once you hear about them for the first time, you see them happening everywhere you go.
But science never sleeps. We can’t simply accept a theory because it feels like common sense. Theories are never completely proven, not even the good ones that have been around for ages. Theories are tested, improved, refined, developed further, and sometimes refuted and replaced. Rival theories can spring up like seagulls around a chip shop. But theories are never proven.
There is science around hiring too, focused on finding the most effective ways of measuring a candidate to predict their future job performance. Extensive research has been conducted into different assessment methods to identify the most powerful predictors. The challenge for the hiring manager who wants to draw on this science is to understand the picture being painted by all these results and to make some sense of it all.
Here comes the science bit
A rich source of evidence for predictive hiring choices is meta-analytic research. Stay with me. This is a form of research that combines the data from loads of previous studies to come up with an overall picture of what’s going on, generally using some high-end maths kung fu. In the case of predictive hiring, this means finding as much data as possible from previous studies, looking at which recruitment tools best predict performance in the job, putting it all into one big bucket, and mathematically identifying the most powerful predictors (check out our AI in Recruitment article for more about the benefits of using predictive hiring in your recruiting).
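To make the "one big bucket" idea concrete, here is a minimal sketch of the core step in a Hunter-Schmidt-style meta-analysis: pooling the validity correlations from several studies into a single estimate, weighting each study by its sample size so bigger studies count for more. The study numbers are entirely hypothetical, and real meta-analyses add corrections (for measurement error, range restriction, and so on) that this sketch omits.

```python
# Hypothetical study results: (sample size, observed correlation between
# assessment score and later job performance). Real meta-analyses would
# also correct these for measurement error and range restriction.
studies = [
    (120, 0.32),
    (450, 0.41),
    (80,  0.18),
    (300, 0.37),
]

# Sample-size-weighted mean correlation: the basic meta-analytic estimate.
total_n = sum(n for n, _ in studies)
weighted_mean_r = sum(n * r for n, r in studies) / total_n

print(f"Sample-size-weighted mean validity: {weighted_mean_r:.3f}")
```

Notice how the small study with r = 0.18 barely moves the pooled estimate, while the large study with r = 0.41 pulls it up: that is the whole point of weighting by sample size.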
In 1998, Schmidt & Hunter published the results of their meta-analytic research, based on decades of previous assessment studies. The paper includes a handy league table which the beleaguered, time-pressured hiring manager who wants their hiring process to actually work can consult to understand which assessment methods predict job performance best.
The study, or at least the league table, is widely circulated as a means of communicating which assessments to avoid and which to use if you want to make decent hiring decisions.
But to rely on these results alone goes against science! The theory needs testing. Which is why the analysis was repeated and refined in 2004 and 2016, and revisited by other researchers in 2021. The results varied across the studies because of the different data sets and refined maths being applied. But having multiple studies gives us greater confidence in where the outcomes are solid, and shows how effects may vary in different contexts.
Who do I trust?
When a business uses science to market its products, it often presents only a single study, the results of which support its commercial interests or sharpen the axe it wants to grind. With predictive hiring, a recruitment provider might present one piece of science that best suits their message. To navigate this jungle, there are three simple takeaways for making sense of science to improve your predictive hiring processes:
- Never trust the results of a single study!
- Look at the broader scope of research, case studies, and industry best practice
- Once you’ve made your choice, do your own science to monitor the effectiveness of your predictive hiring methods
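The third takeaway, doing your own science, can be as simple as correlating the scores your chosen assessment produced at hire with how the same people were later rated on the job. Here is a minimal sketch using entirely hypothetical numbers; a real evaluation would need a larger sample and attention to range restriction (you only see performance for the people you hired).

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: assessment scores at hire, and manager performance
# ratings for the same six people a year later.
assessment_scores   = [62, 75, 58, 90, 70, 84]
performance_ratings = [3.4, 3.2, 2.9, 4.3, 3.8, 3.6]

r = pearson_r(assessment_scores, performance_ratings)
print(f"Observed validity in our own data: r = {r:.2f}")
```

Tracking this correlation over time tells you whether the method you picked is actually predicting performance in your organisation, rather than just in someone else's published data.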
If you’re wondering what the science says about what predicts job performance best, take a look at the studies in the references below. You can also sign up for our free soft-skills certification, which explores the science and practice of predictive hiring in greater detail.
References (aka the fig-leaf of credibility for a short piece like this)
Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research findings. Sage.
Sackett, P. R., Zhang, C., Berry, C. M., & Lievens, F. (2021). Revisiting meta-analytic estimates of validity in personnel selection: Addressing systematic overcorrection for restriction of range. Journal of Applied Psychology. https://doi.org/10.1037/apl0000994
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262–274.
Schmidt, F. L., & Zimmerman, R. D. (2004). A counterintuitive hypothesis about employment interview validity and some supporting evidence. Journal of Applied Psychology, 89(3), 553–561. https://doi.org/10.1037/0021-9010.89.3.553
Schmidt, F. L. (2016). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings. Working paper. https://www.researchgate.net/publication/309203898_The_Validity_and_Utility_of_Selection_Methods_in_Personnel_Psychology_Practical_and_Theoretical_Implications_of_100_Years_of_Research_Findings
Smith, G. (2008). Newton’s Philosophiae Naturalis Principia Mathematica. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2008 ed.). https://plato.stanford.edu/archives/win2008/entries/newton-principia/