If you're going to work in recruiting, it seems pretty obvious that you'll need a good sense of people. But it's worth pointing out how the nature of the industry encourages recruiters to rely on their gut instincts and snap judgments.
While the name of the game might change once you're running high-level executive searches, most people work through piles of resumes by quickly discarding the ones that don't match what they're looking for.
The problem is that this same pressure leads a lot of recruiters to rely on one of the standards of the industry, the face-to-face interview, maybe more than they should.
College admissions are a crapshoot
Probably the best evidence of the problems with the interview can be found in academia; not in a business school report, but in the admissions office. Schools slog their way through thousands upon thousands of applications with only a few unfailingly glowing recommendations, some silly essays and an interview.
Thankfully, a convenient measure of success (college GPA) lets schools rate their performance after the decision is made. Unfortunately, study after study after study for the better part of a century has shown that most of these criteria range anywhere from questionable to useless in separating good candidates from bad ones. The single most consistent measure is GPA itself: high school GPA to predict college performance, and undergraduate GPA to predict graduate performance.
Business is no different
Recruiters might like to think that they have a better grasp of what they're looking for than a college admissions office. After all, chances are you're not looking for a world-class oboist/hockey star who ran the school newspaper.
The evidence appears to disagree here as well, though.
A well-regarded meta-analysis of business recruiting research published in Psychological Bulletin put together a list of the best predictors of job performance:
The single best predictor is easily what they call "sample work" - actually getting candidates to prove they can do the job. Which makes sense. This is not always possible, of course, since some tasks are hard to test. And some positions have several different key functions, which can complicate comparisons, but the basic idea seems to be the simplest way to separate out candidates.
Next up on the list was the "general mental ability" (GMA) test - things like IQ and the like. This part proved particularly effective when paired with some of the other measures, which makes sense. Intelligent people who can also prove they can do the job are bound to work out better than those who can't, regardless of how intelligent they might be.
Interviews finally make an appearance at number three, but only what the researchers define as "structured interviews," where all candidates are asked the same questions in a standardized format. Meanwhile, the much beloved unstructured interviews ranked ninth.
Another study from the University of South Africa looked at the relationship between job performance and a whole host of different psychological measures, as well as interview ratings. Interviews were one of only two factors to achieve highly statistically significant correlations. But the relationship still ended up being weaker than "verbal abstraction," and neither explained very much of the variation in performance.
One interesting study from the University of Pennsylvania was even less impressed with interviews, ranking them behind reference checks. But more importantly, the researchers also interviewed a number of experts about how reliable they think these different methods are, and how often they use them in their evaluations.
Interviews didn't quite come in as the most reliable in the recruiters' views - that was experience, which actually ranked fifth-most reliable in the study - but interviews were cited as the most commonly used approach.
Meanwhile, the study's clear winners - cognitive ability tests followed by job tryouts - were ranked the tenth and ninth most reliable measures, respectively, and were rarely used by many of the experts.
How can you fix the interview?
It's all well and good for psychologists to say that recruiters should rely more on, well, psychologists. But most staffing departments wouldn't be comfortable giving up on interviews, even if they do adopt more scientific measures.
There are some suggestions for adapting to the evidence, though. A study from the University of Pennsylvania points out that some research has shown that snap judgments can actually prove modestly effective at predicting performance. The difference, explains Noah Eisenkraft, now an assistant professor at the University of North Carolina's Kenan-Flagler Business School, is that these studies looked at predictions in the aggregate. When individual recruiters interview a candidate, all too often individual biases throw off assessments.
So what does this suggest for recruiters looking to change up their interviews?
Get more people involved - It may be a pain, but the more people you get to assess a candidate, the closer evidence suggests you'll get to predicting their performance.
Combine with tests - Be careful about letting interviewers know about test performance, since people already have a hard enough time overcoming their biases. But it's clear that one-on-one assessments can provide some context for other measures, since GMA tests are actually more reliable in combination.
Structure interviews - It might not seem that way, but this is probably the most dramatic suggestion. Structured interviews aren't about giving recruiters some guidance, but about creating a fixed formula for conducting and grading interviews. Recruiters will certainly find this restrictive, since they won't get to probe a particular insight or follow up on an ambiguous response. But it will lead to a more consistent and reliable system. It may be limited, but recruiters will understand exactly how limited it is.
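The intuition behind the first suggestion - that pooling several interviewers' impressions beats any single one - is just noise averaging, and it can be illustrated with a toy simulation. This is a hypothetical sketch, not taken from any of the studies above: it assumes each candidate has a latent "true performance" score and that each interviewer observes it with independent random error, then compares how well a single rating versus a five-person panel average correlates with the truth.

```python
import random
import statistics

random.seed(42)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def rating_validity(n_raters, n_candidates=5000, noise=2.0):
    """Correlation between candidates' true performance and the
    mean rating of a panel of n_raters noisy interviewers.

    The noise level (2.0, i.e. error twice as spread out as real
    differences between candidates) is an arbitrary assumption
    chosen to make interviews look as unreliable as the research
    suggests."""
    true_scores = [random.gauss(0, 1) for _ in range(n_candidates)]
    panel_means = [
        statistics.mean(t + random.gauss(0, noise)
                        for _ in range(n_raters))
        for t in true_scores
    ]
    return pearson(true_scores, panel_means)

solo = rating_validity(1)   # one interviewer's snap judgment
panel = rating_validity(5)  # five independent interviewers, averaged
print(f"1 rater:  r = {solo:.2f}")
print(f"5 raters: r = {panel:.2f}")
```

Under these assumptions the panel's averaged rating tracks true performance noticeably better than any single interviewer's, even though no individual rater got any more accurate - the independent errors simply cancel out.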