Refining a Hiring Process for Product Professionals over the Course of 30+ Interviews: Product Management Case Study
Executive Summary/TL;DR
Hiring product managers is particularly difficult because interviewing them is mostly a test of soft skills. Sitting in the interviewer’s chair for over 30 interviews conducted as a client service, I noticed a number of patterns that may be helpful for anyone in the process of getting a job, or making a hire.
Introduction
Disclaimer: For the sake of confidentiality and anonymity, all details contained in this case study (especially PII such as names, addresses etc.) are fictitious and don’t contain any details about the respective client. If you’d like to learn more, reach out to me directly.
In this case, my clients were looking to ultimately standardize and streamline a process by which we would be able to a) accept or decline candidates’ applications based on fit, and b) continually expand on the definition of that process.
This started out with the usual “Here’s a bunch of vocation-relevant questions; judge the answers on a scale, then calculate the total”, but it quickly became evident why this wasn’t going to work.
Problem Statement
To start with — what does a ‘7’ mean? Or a ‘3’?
If I give a candidate a score of ‘7’ for a given question, what would someone else give them? Maybe an ‘8’? Maybe a ‘5’?
And — doesn’t it all depend on what the job/organization requires?
Hiring Product Managers is Tricky
Anyone who’s ever had to do so would agree: it’s tough.
It’s also tough for other areas of expertise — but:
when hiring developers, there’s at least the option of rating a candidate’s approach to a code challenge
when hiring designers, there’s at least the option of reviewing designs or other visual collateral
But because product management is mostly a job of soft skills, and the hard skill portion is usually
50% trivial (no one’s going to hire you primarily based on the ability to operate Jira, Miro and PowerPoint)
and 50% highly specialized (being e.g. fluent in SQL may be important for some PM jobs, but meaningless for others),
the challenge is to see how the candidate’s abilities coalesce.
Establishing a Baseline
As we needed more clarity on the challenge, we decided to just start and see where we’d land.
For clarification: we were using aptitude-based (i.e. “Demonstrate that you can do X”) and knowledge-based (i.e. “Demonstrate that you know about X”) questions.
Quickly, a few things became apparent:
For aptitude-based questions, it was much more revealing how candidates responded, rather than what they responded: were they quick to think on their feet, were they able to be concise, would they check whether the interviewer was able to follow them, and so on.
For knowledge-based questions, what stood out was less whether a candidate was experienced or proficient with the subject matter, and more how well they were able to communicate their knowledge.
Even more crucially, if a candidate was asked about subject matter they were unfamiliar with, their reaction was very telling: ideally, they would ask a follow-up question trying to understand the subject matter better and/or admit their lack of familiarity; in the worst case, they would try to ramble/filibuster around the subject and/or try to divert to a different (irrelevant) area of expertise that they could speak to.
Simultaneously, we noticed that while trying to focus on objective quality, we also had to examine the clients’ preferences for their standardization potential:
Which requirements are more ‘universal’, i.e. ones that most or every client would demand (e.g. reliability, professional appearance, etc.)?
Which requirements are more individual, i.e. ones that would apply to clients of a certain vertical, but less to others (e.g. tone-of-voice formality, leader vs. team-player personality type, etc.)?
Accordingly, the first interviews were rather time-consuming: I often had to go back and check candidates’ answers again while supporting the definition of a standard.
Solution
As we approached this iteratively, we would continually refine our definitions and questions with every interview we conducted, so after about 12 interviews, processing each interview was already much faster and much more scalable.
Standardizing Questions & Answers
For the knowledge-based portion, we fairly quickly settled on a scale of “presents as an expert”, “knows it well”, “has heard of it” and so forth, all the way down to “has never heard of it and can’t relate”, which mapped to a numeric score scale.
For the aptitude-based portion, we would decide on a handful of dimensions to factor into the calculation:
How concisely the candidate was able to provide an answer.
Whether they asked sensible questions.
Whether they were able to propose a ‘good’ solution.
Also, whether they were able to explain their process, i.e. how they got there.
This was a bit more complex to do at first, but it would eventually become routine.
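To make the scheme above concrete, here is a minimal sketch in Python. The specific level names beyond those quoted, the 0–4 numeric range, and the equal weighting of the four aptitude dimensions are all hypothetical illustrations, not the exact values we used:

```python
# Hypothetical mapping of the qualitative knowledge scale to numbers.
# The 0-4 range and the intermediate level names are illustrative assumptions.
KNOWLEDGE_LEVELS = {
    "presents as an expert": 4,
    "knows it well": 3,
    "has heard of it": 2,
    "vaguely familiar": 1,
    "has never heard of it and can't relate": 0,
}

def aptitude_score(concise: int, asked_questions: int,
                   good_solution: int, explained_process: int) -> float:
    """Average the four aptitude dimensions (each rated 0-4, equally weighted)."""
    dimensions = [concise, asked_questions, good_solution, explained_process]
    return sum(dimensions) / len(dimensions)
```

In practice, some dimensions could of course be weighted more heavily than others; the point is simply that once the dimensions are named and anchored, two interviewers scoring the same answer land much closer together.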
Ain’t Nobody Got Time For That
Time was of the essence, so we always informed candidates upfront that there would be a time limit and that brevity was recommended.
Accordingly, we considered completely disregarding this hint a red flag.
WWTCS (What Would The Client Say)
Ultimately, an important question to always have in the back of an interviewer’s head was:
“Imagining this candidate talking to a client, would they perform well as a representative of the team & the process, or rather not?”
In order to understand this, we went in-depth with the clients on how they would perceive us and what they’d expect from us. In short, the answer was along the lines of “senior-level skilled, technical, professional, modern, tech-progressive/cutting-edge”.
Three-Tiered Scorecard
All of this would culminate in a three-part response:
First, obviously: yes, or no? We’d do a score tally and recommend passing or failing the candidate.
If yes, what specifically (not) for? We’d make a point of noting the candidate’s areas of expertise as a basis for matching, e.g. ‘enterprise’, ‘mobile apps’, ‘startups’, ‘fintech’, ‘manufacturing’ etc.
Also, who specifically (not) for? We’d also make a note of the candidate’s characteristics and expressed preferences regarding clients to prime for interpersonal fit.
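The three-part response above can be sketched as a simple data structure. The field and value names here are hypothetical, chosen only to mirror the three tiers described:

```python
from dataclasses import dataclass, field

@dataclass
class Scorecard:
    """Three-tiered interview outcome: verdict, expertise match, interpersonal fit."""
    recommend: bool                                       # 1) pass or fail, from the score tally
    expertise: list[str] = field(default_factory=list)    # 2) areas for matching, e.g. 'fintech'
    preferences: list[str] = field(default_factory=list)  # 3) client traits the candidate prefers

# Example entry for a hypothetical candidate:
card = Scorecard(
    recommend=True,
    expertise=["fintech", "mobile apps"],
    preferences=["startup pace", "flat hierarchy"],
)
```

Keeping the three tiers as separate fields (rather than folding everything into one score) is what made the later client matching possible.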
Results
After around 30 interviews (plus a number of others that other interviewers contributed), we had a solid sample size, and we had optimized for efficiency.
We were able to both swiftly and thoroughly vet candidates, and deliver on the promise of having the most suitable ones presented to the clients.
It turns out that focusing on the aforementioned three key aspects yielded the results we needed:
By defining a reliable ‘quality gate’ for ourselves, we were able to quickly spot and decline candidates that we didn’t deem fitting.
By regularly auditing the clients’ structures of needs, we were able to screen applications for overlaps in areas of expertise.
By also denoting a candidate’s preferences and interpersonal fit assessment, we were able to increase satisfaction for both parties outside of the mere job specifications.
Conclusion
And now, probably the moment you’ve been waiting for:
Here are the most important lessons I was able to deduce from all of these interviews:
Give short and concise answers. I would have to fail people for talking excessively. It’s okay to be a bit elaborate/verbose as a PM, but ‘beating around the bush’, i.e. talking a lot without saying anything, is an indicator of bad prioritization skills. If you can’t even prioritize what to say in a limited-bandwidth scenario (and in a PM’s work environment, pretty much every scenario has limited bandwidth), how could you be trusted with prioritizing impactful decisions?
Always describe your process. Especially with aptitude-based questions, there wasn’t a ‘right answer’ — it rather came down to how coherently you could explain the problem, the solution, and how you’d get there. This doesn’t mean to ‘just think out loud’ — rather imagine someone asking you how you’d prepare a dish, and while different recipes may get you there, you’re supposed to compellingly present your most-recommended recipe.
Ask questions about anything that’s unclear. Little hint: sometimes, interviewers will purposely leave out relevant information to implicitly prompt you to inquire about it. As a product manager, since you can’t — and don’t need to — know everything all the time, it is important to ask (good) questions — especially uncomfortable ones.
Thank you very much for reading!
If you’d like me to help you with a similar challenge you’re facing, I’m a Freelance Product Manager & Consultant for hire — feel free to reach out to me at hi[@]bertrandrothen.com