Using cultural survey data
When I was at Stripe, I reworked the hiring process for Director-plus engineering managers. My goal was to better evaluate polished senior leaders who always said the right thing: to find the real beliefs and behaviors underneath all the polish. One interview focused on a direct report sharing a mediocre strategy proposal for review, testing whether the candidate could give useful feedback on improving the document. I knew that interview worked when a candidate said they wouldn’t give feedback at all, because the proposal’s quality was below their bar for discussion. Another interview I created focused on how candidates reviewed cultural survey results.
Almost every scaled company runs some sort of cultural survey. There are a variety of vendors, like Culture Amp or Lattice, and some companies build their own tools in-house. Cultural surveys ask questions about the experience of working at the company, cross-team collaboration, compensation, and so on. The results are tallied, each manager gets a report about their team, and the executive team gets a report about the company overall.
Relative to most other running-the-business responsibilities, senior leaders usually don’t spend much time on cultural survey results, but those results are a great lens into a leader’s data literacy and into how they prioritize limited resources against a very broad problem space. This is just as true of your existing executive team (and even of you) as it is of potential hires. Surveys are also a valuable mechanism for earning credibility with your team, by following through on addressing their concerns.
We’ll work through:
- Reading survey results
- Taking action on survey data
- Whether to modify survey questions
- When to start and how frequently to run surveys
Even if you haven’t run a cultural survey before, by the end you’ll have a good grasp of how to take advantage of this sort of data.
This is an unedited chapter from O’Reilly’s The Engineering Executive’s Primer.
Reading results
My quick advice for effectively reviewing survey data:
- Spend a couple of hours digesting the results
- Focus more on absolute scores than on relative scores: a high score is still high, even if it’s down a bit from last time
- Take the time to identify all the issues, rather than getting caught up in the first couple you notice
You can do reasonably well by following those three steps, but as I’ve reviewed more and more of these surveys, and managed teams who reviewed results of their own, I’ve formalized the details a bit more. The approach that I follow today is:
1. Before starting, check whether you have whole-company access or only access to the engineering report. If it’s the latter, raise it at the next executive team meeting. If your executive team leads the company but isn’t trusted to view the whole company’s survey results, something doesn’t make sense there, and it should be fixed.

    The executive team may find it uncomfortable to see each other’s survey results, but that transparency is part of an executive team gelling together. Further, it’s very helpful to see whether, for example, Engineering and Product in a given business unit are both frustrated, rather than having to reverse-engineer that narrative by hand. The broader comments are also helpful, as you may identify areas where other functions are frustrated working with Engineering, which is valuable feedback that’s easily lost without full report access.

2. Start by creating a private document to collect your notes on the survey. For each major theme you identify, take a screen capture of the data that raises the issue, and add a few sentences of commentary. This is your staging ground for analysis, not your final product, so don’t worry about keeping it tidy.

3. Get a sense of the size of the populations in the report. This helps you build intuition about which results might be statistically significant; there’s a brief sketch of that arithmetic after this list. Most changes will be significant in an engineering organization of 2,000, but if your technical program manager team is only seven people, then even an extreme change probably isn’t significant.

4. Remember that survey tools are strongly incentivized to produce narratives, even when the narrative is relatively weak. This means you have to be cautiously skeptical of their analysis, particularly any that lacks statistical significance. That’s not to say you should ignore everything that lacks significance, but rather that you should use the right sort of data for it: comments, follow-ups, and so on, rather than raw numbers.

5. Skim through the entire report, and start grouping insights into three categories: things to celebrate, things to proactively address, and things to acknowledge. The groupings aren’t final, so don’t spend too much time on them. Make sure to capture highlights in addition to concerns: especially when things are going poorly, you never want to present a purely negative update.

6. If you identified areas of investment after your last survey, check how you performed there. If you saw an improvement, highlight the success. If you didn’t, form a hypothesis for why you didn’t have an impact.

7. Focus on your highest and lowest absolute ratings. Have these changed? Did you expect them to change?

8. Focus on the ratings that are changing fastest. What’s driving the rapid shift?

9. Identify what stands out when you compare across cohorts. How do different managers perform? How about cohorts of tenure, gender, and location?

10. Read every single comment, and copy the most relevant ones over to your notes document. What are the subjective pieces of feedback that change how you interpret the data? Which comments might you quote when sharing a summary?

11. End the analysis by spending an hour with a peer, e.g. the head of Product, to talk through your findings. Do this after they’ve spent time with their own report. Sometimes you’ll identify connected themes across the orgs, but more importantly, you’ll often find gaps in your analysis by talking it through.

    You can, in theory, do this with someone on your team, but you’ll end up either putting them in a messy position discussing their peers or having to avoid the thorniest issues, both of which are bad outcomes.
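To make the population-size intuition from step 3 concrete, here’s a minimal sketch in Python. It assumes, purely for illustration, that your survey tool reports a count of favorable responses per question; the team sizes, scores, and the `change_p_value` helper are all invented for this sketch, and a two-proportion z-test (using SciPy for the normal tail probability) is just one simple way to check whether a shift in %-favorable is signal or noise:

```python
# A sketch of the population-size intuition. Assumes the survey tool
# reports counts of favorable responses per question; all numbers
# below are invented for illustration.
from math import sqrt

from scipy.stats import norm


def change_p_value(fav_a: int, n_a: int, fav_b: int, n_b: int) -> float:
    """Two-sided two-proportion z-test for a shift in %-favorable."""
    pooled = (fav_a + fav_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (fav_a / n_a - fav_b / n_b) / se
    return 2 * norm.sf(abs(z))


# A 71% -> 57% favorable drop on a seven-person TPM team:
# p ~= 0.58, indistinguishable from noise.
print(change_p_value(5, 7, 4, 7))

# The same drop across a 2,000-person engineering org:
# p ~= 3e-20, unambiguously significant.
print(change_p_value(1420, 2000, 1140, 2000))
```

The specific test matters less than the habit: before building a narrative around a moving score, check whether the cohort is large enough for the movement to mean anything.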
At this point, you will have a good sense of the data, and it’s time to move on to the fun part: figuring out what to do with it.
Taking action on the results
You’ll often see teams spend a great deal of time running the survey, and even more time analyzing the results, but never actually use that analysis for anything. This frustrates the team, who become trained to ignore future surveys. At the other extreme, very earnest leaders commit to fixing everything. Your team will initially love you for this, but it won’t hit quite the same way six months later, when you have to report that you’ve made very little progress. Success is walking the middle path: identify a few important areas where you believe you can make real progress, and then actually do the work.
You probably won’t be surprised that I’ve developed a standard pattern here as well:
1. If you identify any acutely serious issues, take action immediately: for example, a manager whose team is in open revolt, or an ethical issue being raised.

2. Use your analysis notes to select two to three areas to invest in until the next survey runs. For example, increasing representation for Staff-plus engineers in key decision-making forums, or increasing the sense of belonging for remote employees.

3. Edit your notes and new investment areas into a document you’re comfortable sharing with the broader organization. I would err toward transparency, but a very optimistic sort of transparency: most things that are relatively down are still absolutely high, a recent downward trend is important but we’re still significantly above historical averages, and so on.

4. Review this document with your direct reports, likely in your team meeting. Review it with your People or Human Resources partner. Review it with your peers on the executive team. Finally, review it with two or three trusted individual contributors within Engineering, who will likely be more sensitive to the kinds of issues that are easy to miss as an executive.

5. As you get feedback from reviewers, think it through and incorporate it using your best judgment. Sometimes reviewers manufacture feedback just to have something to say, but my reviewers have always found something important that I’d missed, so it’s important to listen carefully, even if the feedback doesn’t initially resonate.

6. For each area you want to invest in, make sure you have explicit, verifiable actions to take. If these are too abstract, it will be hard to build your team’s trust in you, because it will be ambiguous whether you actually followed through on your commitments.

7. Share the document with your organization (via email, chat, or whatnot) and schedule a meeting to discuss the findings and priorities with the entire engineering organization. I often repurpose an Engineering Q&A session for this discussion.

8. Follow up monthly on progress against your action items; I often fold these updates into my weekly updates to the organization.

9. Make sure to mention these improvements when you introduce the next round of cultural surveying. The team will take the survey much more seriously if they know you took action last time.

    From a mechanical perspective, this will also help you score higher on the survey questions about whether you act on feedback. Without it, any new hire simply won’t know whether you’ve made any improvements.
Although this is a straightforward process, following it will build the team’s trust in you and strengthen the team’s culture. Executives are rarely held accountable unless they expose themselves to accountability, and this is a valuable opportunity to hold yourself accountable to your team. As you hold yourself accountable, it becomes increasingly easy to hold your team accountable as well.
Questions to ask
Some executive teams will debate adding questions every time a cultural survey is about to run. I’m sympathetic to the underlying concern: questions may be poorly worded, and the defaults will never ask about whatever topic is particularly top of mind. While it’s important to ensure your questions get good coverage, I generally think the default questions are good enough, and that it’s not worthwhile to change them.
To explain why, think about the two types of data from cultural surveys:
- Ratings on various questions, which are valuable because you can compare them across cohorts, and which become even more useful once you can see their trend over time
- Free-form responses, which provide subjective context
The first sort of question is expensive to change, because each change wipes out the question’s historical context. Moving away from the default questions also means you can’t compare your results against wider industry benchmarks. I would generally recommend against changing this type of question unless folks taking the survey are frequently confused by a particular one. For example, you’ll often hear confusion between terms like “the Management Team” and “your manager,” because the wider company doesn’t know that the executive team refers to itself as “the Management Team.” That kind of confusion has already invalidated the results, so a change that increases clarity is still more valuable than preserving continuity.
The second sort of question is relatively cheap to change, because there’s no history to wipe out. However, my experience is that if folks have something to say, they’ll say it anywhere, even in response to a somewhat unrelated question. Adding more free-form questions doesn’t change that, so as long as you have a handful of open-ended questions, I wouldn’t worry about adding more.
Taken in sum: it’s worth evaluating the questions before your first rollout, but repeated arguments about changing them are just executive bikeshedding.
Starting and frequency
In a small organization where you have a genuine relationship with every member of the team, you probably won’t get much from running a cultural survey. As your organization grows, however, surveys become more and more useful. I find Dunbar’s number, roughly 150, to be a good threshold at which to make sure you’re running one.
Most organizations run these twice a year, which is a reasonable cadence. If you want a more principled way to determine the right frequency, consider how much time you’re willing to invest in addressing the issues that get raised. If you’re already struggling to address the concerns from biannual surveys, it doesn’t make sense to run them more often: you’ll just annoy the folks who take them, who will complain that you’re running another survey before addressing the previously raised concerns.
It can be uncomfortable
While I’ve often been proud of the cultural survey scores I’ve received, sometimes I’ve been pretty embarrassed. I’ve been called out for not paying attention to the team’s priorities. I’ve tried to prioritize the remote working experience, only to see it regress rather than improve. I’ve had below-average scores and outright bad scores. Take a moment, and be uncomfortable with it; it means that you care! Then lean into the feedback, and use it to motivate you to address the issues at hand.