Things I learned hiring a data science leader.
I’m ending the year by pushing out a handful of drafts that have something useful in them but that I’ll probably never finish. I’ve dubbed this draft week, and this is one of them!
Earlier this year I spent time designing, running, and tweaking the hiring process for a Head of Data Science. While I’ve spent a lot of time thinking about structuring and sizing engineering teams, I’ve come to appreciate that effective approaches are fact-specific. I wanted to avoid assuming that what I knew about engineering would apply directly to data science, so I approached the hiring process as a learning exercise: a way to better understand data science leadership while also building confidence in our hiring decision.
Throughout the process I learned more about the data science function than I had in the preceding decade, and I wanted to write down what I learned, particularly as it relates to the problem of hiring a data science leader. That said, I’ll be explicit in caveating that this is a topic where I still have a lot left to learn, and I imagine I’ve missed some nuances and am wrong on some aspects.
Classifying leaders
Based on my experience chatting with about fifty data science leaders throughout the search, I’ve come to believe you can classify leaders based on three core beliefs:
- Do they view the role of analyst as a peer role to data scientists? We wanted someone who viewed analysts as a peer role with a different skill set, not as a secondary or unnecessary role
- Is the ideal data science staffing model embedded, centralized, or a hybrid? We wanted someone who was open to all these staffing models but had a strong perspective on why a given model would work for a given situation
- Should data engineering be in the same organization as data science? We wanted someone who was “data science first” rather than a more generalized leader who stretched their attention over both fields
If you wanted to add one more dimension, you could also classify folks based on whether they believe machine learning engineering should be in the same organization as data science, although the candidates I chatted with fairly uniformly believed that it should be.
Once you’ve classified the leader using the above features, you probably have a good sense of whether they’re a good candidate for your particular role from a beliefs perspective. For example, I was looking for someone who:
- Valued analysts as peers with different expertise rather than viewing them as a secondary role
- Focused on matching the embedded, centralized, or hybrid model to the circumstances rather than believing exclusively in any one model
- Was passionate about data science informing product direction, rather than someone with a more generalized skillset who felt strongly about also leading data engineering
With those search criteria, candidates were able to understand what we were looking for and decide if it matched their interests. At this point we hadn’t filtered for capability, but if we did move forward, we were confident the work would be a match for their approach. More specifically, this happened in the initial phone screen that I conducted.
Evaluating candidates
The foundation of the interview loop, for folks who decided to move forward with us past the initial phone screen, was:
- Digging into the technical dimensions of data science work, in interviews designed and run directly by our data science team
- A presentation to our executive team on their approach to starting at the company
- An interview with a non-engineering stakeholder they’d work with closely (who varied a bit depending on calendars, but potentially a leader from our user acquisition or product management teams)
- A discussion of people management (hiring, growth, performance management, and so on)
Beyond that core of evaluation, I spent time with the candidates digging in on a few additional dimensions:
- Creating non-linear impact. The impact of a core function like data science shouldn’t be primarily limited by how many folks are in the function; instead it should be derived from the highest-impact projects we deliver. How do they identify high-impact projects? How do they prevent recurring, routine work from crowding those high-impact projects out?
- Sizing teams. How do they think about sizing data science teams? How would they know that a team is too large? Too small? A ratio-driven approach is a fine starting point, but I wanted to push a layer deeper to understand how the specifics of the data science function at a given company impact that ratio
- Centralized learning. How do they support shared learning to avoid isolated work within the team? How do they support continued learning within the team?
- Seniority mix and hiring. How do they think about the mix of seniority within their team? How do they support growth for senior hires? How do they create an onramp for less experienced hires? How would they hire folks at those various seniority levels?
- What would they focus on in their first ninety days? A typical question, but a useful one for understanding their thought process and their openness to learning the current context before trying to change things
Pulling that all together, we had a pretty straightforward process for ensuring that folks understood the sort of data science leader we were looking for, and for ensuring they were likely to succeed at our company. Certainly nothing life changing in these notes, but hopefully interesting for someone else conducting a similar search.