Does Daddy Know Best?

Reading Time: 14 minutes

In this series, I’m exploring the influence of white supremacy culture on tech (and broadly, professional) culture, following Kenneth Jones and Tema Okun’s list of characteristics as a guide. Here’s the introduction and the full series so far.

Let’s talk about the seventh element of white supremacy culture: paternalism.

Here’s the description of paternalism, verbatim:

Paternalism

  • decision-making is clear to those with power and unclear to those without it
  • those with power think they are capable of making decisions for and in the interests of those without power
  • those with power often don’t think it is important or necessary to understand the viewpoint or experience of those for whom they are making decisions
  • those without power understand they do not have it and understand who does
  • those without power do not really know how decisions get made and who makes what decisions, and yet they are completely familiar with the impact of those decisions on them

Ooooh! So this one is interesting.

Throughout this series, I’ve pointed out how elements of white supremacy culture reinforce each other. Well, folks, paternalism has joined the chat.

In an earlier post about sense of urgency as a white supremacist cultural staple, we discussed this gem of an example:

Between Android and iOS, market share in the United States is about 50-50. Globally, it’s about three quarters Android. But among Chicago companies with theoretically national or global client bases, 4 out of 5 build their MVP on iOS, most with no plans to release an Android app.

Why do you think this is? Since iOS users have a higher median income, maybe the companies are first targeting clients who can pay more. This perspective has its own issues, but at least it makes sense. This isn’t the reason.

The reason is that 4 out of 5 of the people making the what-to-build decisions use an iPhone themselves. In multiple cases I’ve seen, insurance companies prioritized iOS for their first mobile app, despite knowing that 60-70% of their end users had Android devices. Why? Because the product owner wanted to see it on his phone.

We value quick decision-making in business, but quickly-made decisions overrepresent the perspectives of the people sitting in the room. This happens even if those people have hard evidence that their perspective doesn’t line up with the perspectives of the people that the decision impacts.

Troy, Chelsea. “Haste Makes Waste.” Posted on this site in June 2020. Is it weird to cite myself?

Business leaders regularly ignore the experience of those for whom they’ve made product decisions. It’s not that they think this is fine. They don’t think about it at all. That’s what entitlement is: “it didn’t even occur to me to consider whether I ought to get to do this.”

The description of paternalism focuses on decision-making, which covers, like, everything a tech company does. So, for the rest of the piece, I’m going to focus on just one aspect of tech industry decision-making: interviewing/hiring. In which candidates are subjected to an opaque process where they don’t know how they’re being judged, and, if they are deemed worthy, are then expected to decide, based on that opaque process, where they will spend the majority of their waking lives for a period of years.

I encourage you to independently consider the role that paternalism plays in other tech industry decisions. If you write a blog post about it, I’ll link it in this one.

Who makes decisions about a company’s future talent?

Tech has a well-deserved reputation for ridiculous hiring processes. Google famously used to ask candidates trick questions like this one about buckets and water. Thousands of companies copied Google. Later, Google stopped asking those questions. But by then, everybody was doing it. Tech has a pattern of keeping outdated interview questions and generating false, retroactive justifications for them.

If these questions don’t tell us useful information, why do we ask them? In part, it’s survivorship bias: interviewers don’t have a dealbreaker-level problem with the interview format that got them hired. From a position of power, they keep making a decision that’s kind of arbitrary, and maybe even more arbitrary than they realize, if they don’t know the real reason the question got asked in the first place.

The interview process isn’t judged on its effectiveness as much as it’s judged on the interviewers’ sense of control. Let me provide some examples.

What information reveals someone’s fit for a role?

A person’s fit for a software engineering role, of course, has a lot to do with how they write software. This makes perfect sense. To judge this, companies subject every engineering candidate to a self-contained code challenge. This makes a lot less sense.

Why: these challenges are usually too simple to gauge software execution capacity. The most complex a code challenge tends to get is one to three classes. This is way under what an engineer would be expected to understand on the job. The challenge demonstrates basic syntax proficiency and whether the taker writes tests (maaaybe). It’s an objectively poor proxy for anything bigger than that, but interviewers regularly use it as such. They make massive extrapolations based on 100 lines produced under unrealistic time pressure. Massive extrapolations are super easy places to hide bias.

I’m a bit of an unusual case, but because of that, I can detect the extremes on this. I have written 350 blog posts and two books about tech. I maintain six open source projects, and anybody with an internet connection can watch me code on them in real time for 40 hours via this playlist. Or these 5 hours of tutorials. Or these 3 hours of other tutorials. At this point, there is evidence documenting what I can do.

A lot of companies accept that from me as the prerequisite for an in-person interview in lieu of a code challenge. The ones that don’t have lost sight of the desired result. They instead feel the need to dictate how I should prove that I can code. Is their test a better proxy than my public works for what they would ask me to do on the job? No: it’s usually an appreciably worse one.

When I challenge this requirement, they say “OK yeah, I guess that would work, but we asked all the other engineers to do it this way, so waiving the challenge for you wouldn’t be fair to them.” I bet a lot of those engineers also had copious proof that they could code, which the company could have used in lieu of the challenge but refused to consider. They might as well say “That’s not how it works: the way it works is, we get to tell you what to do with your Saturday, and then we judge you on what you come up with.” It’s the corporate version of “Daddy said so.”

Is a code challenge inherently a bad thing? I’m supposed to say “no” here. But most code challenges are, full stop, shite proxies. Companies need to put orders of magnitude more thought into how they’ll ascertain someone’s technical capacity. As far as I’m concerned, “assume anyone who says they can code is telling the truth” is a better procedure than most code challenges.

What’s the biggest bad-hire risk?

It’s not accidentally somehow hiring someone who doesn’t know how to code. The biggest risk is that the candidate doesn’t work well with the team, or doesn’t have the same goals as the team.

The way to catch that risk, of course, would be for the candidate to talk to the team. And not in a “You test me” situation. In a normal conversation.

Hiring managers often push back on this. I interviewed for a role where I was supposed to provide code review to a team of engineers who had explicitly requested it. I had to convince the company that I should speak to those engineers before we decided whether I’d take on the role they had asked for.

When a candidate does talk to the team, it’s usually the team members asking the questions, and then, in the last five minutes of an hour-long slot, saying “So, do you have any questions for me?” Suppose you went on a first date and the person grilled you for the first 90% of it, then asked you if you had questions for them. Would you go on a second date?

Both parties need some say in the hiring process, and both parties should get time to ask questions. I come in with ten questions that I ask by default. That’s 45 minutes at least right there.

On top of that, I need to ask what’s working on the team, what isn’t working on the team, and what the team’s needs and goals are. I might need them to walk me through their high-level architecture, and ask more questions from there. This is not a five-minute conversation, and it would require the team to pretend they believe I am someone who could help, rather than fancying themselves my test proctors or interrogators. I call this phenomenon “the flip,” and I call it out right here.

The flip indicates that the process is not set up for the best result. It’s set up to preserve the interviewers’ perception of power and to reduce transparency for the candidate.

Everybody wants passion

Multiple companies want “passionate” people and demand that candidates be intimately familiar with the product—preferably already a power-user. This is true regardless of whether the candidate falls into the product’s intended audience. It’s often the same companies where the engineering team doesn’t even interact with the product team.

The reason “dogfooding” gets such a good reputation is that, if the engineers are using the product, then the company can survive engineers’ lack of empathy or a crappy product-engineering communication process. Relying on dogfooding for engineers to know and care about customer needs means the rest of the house is out of order.

Looking for power-users or superfans in the interview process doesn’t prove people can do the job, but it does give interviewers a whole slate of questions that they intimately know the answers to, because they work there. It’s pleasant for interviewers. So it stays.

How do we fix tech hiring?

Look, lemme level with you. Even the people who do tech hiring know it’s broken, and it wouldn’t still be broken if this question were easy to answer. People are trying (well, some people are trying, anyway).

Let’s look at that question under a very specific lens: if paternalism explains some of the clusterfumblery that we see in tech hiring, then how can the antidotes to paternalism inform how we fix it?

Here are Jones and Okun’s antidotes:

  • make sure that everyone knows and understands who makes what decisions in the organization
  • make sure everyone knows and understands their level of responsibility and authority in the organization
  • include people who are affected by decisions in the decision-making

What if companies published the stakeholders for each position?

Companies sometimes publish either the job “requirements” or the hiring process. This info often ends up being inaccurate, because the requirements aren’t real or the process changes from candidate to candidate. When it is accurate, the requirements become rigid rules that get used to justify, for example, why the maintainer of a massive open source library needs to do a HackerRank to prove they can code. The hiring process is not an end. It’s a means to an end. That end is to make the stakeholders happy. So: who cares about this position, and what do they want? Example:

  • Hiring manager: currently struggling under the weight of liaison responsibilities with every other team. Wants someone who can help with that. So, consulting experience a plus.
  • Junior engineer: needs someone to help them learn to formulate informative pull requests, and wants timely, actionable feedback on their code.
  • Data scientist: Expert in statistics, knows just enough Python. Needs support producing models that can get packaged for deployment.
  • Whole team: We are making a product where people interact socially, and our team doesn’t have much experience identifying vectors of abuse in such a product. We’d like someone who can help us find these vectors and keep our constituency safe.

Candidates know exactly where the job requirements come from. They can figure out whether they’re a fit and decide how to demonstrate that. The company also now has a convenient list of who should get a say in whether this person comes to work here.

What if we got explicit about who has authority?

Many companies already try to do this, until somebody with power doesn’t want to do it. Example: I worked at a company where, as standard practice, if you referred someone, you weren’t one of their interviewers. That fell apart when the VP of data science wanted to hire his business partner into a senior role.

The whole team got to interview the guy because “we should get to have a say,” but everyone knew it was a sham. The VP of data science not only didn’t recuse himself from the meeting about whether to hire the guy; he presided over it. When three team members expressed concerns about hiring his business partner, the VP of data science, who had the power to fire everyone in the room, started crying in the meeting. Psychologists agree: crying to process emotions is healthy. Crying to persuade or coerce other people, on the other hand, is emotional manipulation. Later, when the hired senior person submitted models with fundamental errors in them, the VP of data science closed the PRs pointing out the errors and PIPped the employees who had opened them.

You know what I druther in this situation? We shouldn’t have interviewed the guy at all. One person made this decision. Why not be explicit about it? Why force other people to either agree with it or put their positions on the line? Let’s just say: the person with all the power knew who he wanted, and got him. Believe it or not, in this kind of situation, explicitness causes less damage than a sham process that ends up in the same spot, but with more people pissed off.

What if we let candidates participate in the process?

What if, instead of saying “this is our hiring process,” companies expressed who the stakeholders are and what they want, and then worked with candidates to decide whether they meet each other’s needs? This isn’t a farfetched idea: it’s how companies decide on contracts and partnerships, and those happen all the time. The difference is that companies perceive one another as having power. They aren’t focused on themselves having all the power and the other party having none, as often happens with hiring.

I think this would often be easier than trying to ram every candidate through the same process. Rather than “we, the company, made this one-size-fits-all obstacle course for you to traverse,” companies can say “you, the candidate, can propose how to proceed. We’ll decide if we think it shows us what we need to see.”

This does not have to be American Ninja Warrior.

When power is transparent, we have access to better solutions.

Candidates can participate in a hiring conversation that helps them find a good fit and shows the company their skills—often via better proxies than code challenges or a one-sided interrogative variety show.

But for that to work, companies need to get explicit about who gets a say in this decision and how much. They also need to delineate who should probably get a say in this decision, and acknowledge whether that differs from the first list. Then, they can allow the candidate to engage in the process of figuring out whether they’re right for the role or not.

If you liked this piece, you might also like:

The techtivism series, about how to effect your values as a tech producer and consumer

The inclusion category, which I reluctantly included on this site after like a million requests

Verifying complex data models with Alloy (totally unrelated, but I am proud of this one)
