"What else should we do? This is what everyone does."
The Industry Standard v. How Can We Do Better?
It’s 4:30 pm, no actually 4:33 pm, and I’m sitting in a candidate review meeting. The meeting is in a tiny, grungy meeting room, in a windowless space off the internal stairway, probably chosen because it was always available but also due to its proximity to the cheerfully decorated desks of the recruiting team. The primary decor in the room is a whiteboard that no longer successfully erases, leaving ghost trails of meetings past like a fragmented swap disk. The attendees thus far are spread around a giant unbalanced table that is two sizes too big and better suited for this room’s older sibling. I try to grab my regular seat, facing the door, where I can still see down the hall to the windows that frame the canyon’s trailhead, at least when the weather is clear. Today it’s a little overcast, but a welcome respite from the heat.
This is my third candidate review meeting this sprint — we’re actively trying to staff a whole new team of software developers, after all — but others are already on their fifth or sixth interview, their specialized knowledge and solitary perch atop the team hierarchy placing an outsized burden on their time, and a heavier weight on their review.
At 4:36 the last of the meeting attendees stumbles in, apologizing profusely and sweating prodigiously, a victim of a missed desktop meeting alert and a subsequent race up the stairwell. I shift into the seat next to mine so he can take my view of the outside world. Small talk stops, phones filled with notes from our candidate tracking system open, and the meeting room door closes.
Every one of the seemingly countless meetings like this I have attended has started in exactly this fashion. But the script for the following twenty minutes can vary greatly, mostly revolving around the “data points” we have all gathered on the candidate — their ability to code, their background, their communication skills — all subjective judgments squeezed out of a mere hour spent together, yet treated like scientific measures as objective as the density of the moon.
Because this meeting is inevitably filled with bright, serious-minded individuals who earnestly attempt to identify the best possible candidates to join their team, the discussion will often veer into questioning the nature of these “data points”, or the absurdity of trying to assess a candidate’s probability of success in a role by observing their ability at something completely different.
“Why do you ask them about memory management when that’s not a part of our day-to-day work?” “Because every good programmer should know this.”
“She couldn’t really talk through an approach.” “It seemed like she could code a solution, but she ran out of space on the whiteboard and she kinda started to panic.”
“Why do we even ask this question? It doesn’t help at all.” “It’s not perfect but it’s more about protecting us from false positives. We’re fine accepting false negatives.”
On any day, but perhaps particularly when overcast, or late in the day, or with a particularly polarizing candidate, this meeting could take a sharp turn into a critique of our flawed technical interviewing process. Some reflecting on the inherent unfairness to candidates or the bias hidden in all human judgment, some focusing instead on the irrelevance of others’ favorite questions and arbitrary measures. Some thinking to themselves “…our system can’t be broken, I got in here and I’m a high performer, aren’t I?” The discussion might get a little heated, but never out of hand — the room is filled with friends at best, skeptical and overworked colleagues at worst, as the most volatile interviewers have long since been filtered from the process by the recruiting team.
But it’s 4:54 pm now, and it’s time to resolve this critique of the process: “What else could we even do in an interview? This is just what everyone does.” Much like the meeting always begins in the same way, this is how the critique always ends. We can’t solve for this. Time to move on to the task at hand.
“Let’s make our decision.” Sometimes you don’t even have to ask for a final response, the answer self-evident in the air. The task of this assembled crew complete, we adjourn, leaving the gathered “data points” coalesced into a singular, simple judgment: Thumbs up? Thumbs down? If you’re the candidate, this judgment is a fact as certain as the 3.34 grams per cubic centimeter density of the moon.
We instinctively raise this question — “how can we do better?” — because it’s hammered into our brains as part of the agile software development process. It’s the voice in every developer’s head right before submitting a pull request. And in the interviewing process — this critical component of building a modern software engineering team — we find ourselves asking the same question, but struggling to define and apply solutions, especially as a small startup fish in a hiring pond full of giant whales.
The thing is, the whales know it, too. In an oft-cited interview with Adam Bryant, Laszlo Bock, former head of People Operations at Google, noted¹:
“We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess, except for one guy who was highly predictive because he only interviewed people for a very specialized area, where he happened to be the world’s leading expert.”
This stat haunts me: even in a room filled with good friends and skeptical colleagues, all carrying the good intention of “how can we make it better?”, are we locked into industry-standard practices that amount to whacking our heads against a random number generator?
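To make that “zero relationship” concrete, here is a tiny, purely hypothetical simulation (no real hiring data, just random numbers standing in for interview scores and job performance). If the scores carry no signal about the job, then “hiring” the top scorers leaves you with a team exactly as average as picking names out of a hat:

```python
import random
import statistics

# Purely illustrative: two independent random variables stand in for
# interview scores and on-the-job performance. This is the "zero
# relationship" scenario, not a model of any real company's data.
random.seed(42)
N = 10_000
performance = [random.gauss(0, 1) for _ in range(N)]
interview_score = [random.gauss(0, 1) for _ in range(N)]

# Correlation is ~0 by construction (statistics.correlation needs Python 3.10+).
corr = statistics.correlation(interview_score, performance)

# "Hire" the top 10% of candidates by interview score...
cutoff = sorted(interview_score, reverse=True)[N // 10]
hired = [perf for score, perf in zip(interview_score, performance)
         if score >= cutoff]

# ...and their average performance is indistinguishable from everyone else's.
print(f"correlation:             {corr:+.3f}")
print(f"avg performance (hired): {statistics.mean(hired):+.3f}")
print(f"avg performance (all):   {statistics.mean(performance):+.3f}")
```

On a typical run all three numbers land near zero: the interview scores sorted people just fine, they simply didn’t sort them by anything that mattered.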
And this is why you and I are here together today — not sitting around an oversized table in a tiny meeting room, but here in this email. The primary objective of this newsletter is not merely to amass grievances about the state of diversity in tech, nor about its product and policy blind spots; it’s not even merely to catalog the pains of technical interviewing. Instead, the primary objective is to identify opportunities and paths for improvement. To shine a light on what we do now and what will work tomorrow, and, in the best of situations, the reasons why they work at all.
To read aloud, by candlelight, the myriad issues that face our vocation with regard to diversity and openness. To find a way to improve our chances of inventing a future that is conceived and designed and built by the same people destined to live within it, and to point a gigantic gigawatt spotlight right at its center.
Simply, dear reader, concerning the woeful state of technical interviewing: let’s answer the critique, let’s figure out how to do better, let’s figure out what we should do. And then do it.
Thanks for reading (and if you haven’t subscribed, c’mon along). And now let’s take a peek at some relevant links and news:
The Bullet Points for June 1, 2021
On Interviewing
Owen Hughes touches on interviewing practices that reward rote coding skills and memorization over problem-solving, and on how deeply ingrained those practices have become (TechRepublic)
Taking the “Both Sides” approach to whiteboarding, a classic from Scott C Reynolds (Stack Overflow)
A funny but solid assessment (2018) of the landscape of possibilities: Whiteboard v. Computer v. Take-Home v. Project-Based (freeCodeCamp/Medium)
On Hiring/HR
Naomi Wheeless makes the case that diversity boosts both innovation and the bottom line, in 4 Lessons for Building Diverse Teams (HBR $)
On DevEx
What do developers want? Per GitHub’s Good Day Project: fewer interruptions and fewer meetings, mostly.
Meanwhile, the Stack Overflow survey for 2021 is open for your input. Ballot stuffing of “Panic” as your answer to “What do you do when you get stuck?” may now commence.
On Startups
Datapeople raises $8M for NLP-driven recruiting analytics.
¹ Worth noting that the Bock interview with Bryant is from 2013, and that Bock was referencing research done years prior. While the state of the art, especially in a behemoth like Google, has presumably improved in the years since, this is by no means a solved problem.