October 18, 2020.
While writing Managing technical quality in a codebase, I wanted to find a good reference on running developer productivity surveys, but could only find one related article, How to survey your software developers about their tools. That’s a totally fine article, but its advice is focused on running internal surveys in general rather than developer productivity surveys in particular, so I decided to jot down some notes.
A developer productivity survey is a survey you send out to folks writing software, plus adjacent roles, within your company. You’ll ask questions about which tools are and are not supporting their needs.
Start simple. Use a Google Form or SurveyMonkey or something like that. Don’t prematurely optimize.
I’ve found them most useful after a large hiring push, because new hires are the most attuned to your workflow and tooling problems. Asking the same folks the same questions is less helpful.
You also need someone who’s going to actually use the results for something.
You pick the right projects to improve the experience of being a developer at your company. Your organization gets faster and does more important work. You’re promoted and actually like the sort of work at that level, I guess?
People ignore the survey and don’t fill it out. People feel like you’re disrespecting their time. People raise real issues and you ignore them, losing trust with the organization.
You should never send surveys out more frequently than you’re able to make significant improvements on the previously raised issues. Generally, quarterly is the maximum, but it really depends on your company size. Some companies are large enough that they can survey a small segment of the company every week rather than the entire organization every quarter. That’s a pretty nifty way to get weekly data, but you’re going to need thousands of folks to make it work well.
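To make the rotating-segment idea concrete, here’s a minimal sketch of one way it could work, assuming you assign people to cohorts by hashing their email address (the helper name and cohort count are hypothetical, not from the article):

```python
import hashlib


def in_this_weeks_cohort(email: str, week: int, num_cohorts: int = 13) -> bool:
    """Return True if this person should receive this week's survey.

    Each email hashes into one of `num_cohorts` stable buckets, and we
    rotate through the buckets one week at a time. With 13 cohorts,
    everyone is surveyed exactly once per quarter, but you still collect
    a fresh batch of responses every week.
    """
    digest = hashlib.sha256(email.lower().encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % num_cohorts
    return bucket == week % num_cohorts
```

Because the bucket is derived from a stable hash rather than stored state, the same person lands in the same week of each quarter, which keeps their response cadence predictable.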
It really depends on what you’re trying to answer. To find the right questions, figure out what you want to learn, and reason backwards from that.
No, I don’t have a canonical list of questions to share.
Test-run the questions with a few of the folks you want to reply to the survey. You’ll find a bunch of gaps in questions you should have asked, along with questions that don’t make as much sense as you thought they did.
Also, try to focus on qualitative questions more than quantitative.
It’s easy to fall in love with the quantitative aspects of surveys, but generally speaking I’ve not found the scores to be particularly useful. Internal surveys are often filled out by so few folks that changes are not statistically significant. Folks often get bored of filling out surveys, so numbers can reflect who participates more than general sentiment. And folks become accustomed to how something works over time, no longer perceiving the friction they’re living with.
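To see why small-sample score changes are so noisy, here’s a quick simulation under assumed numbers (40 respondents rating 1–5, with no real change in sentiment between quarters; the figures are illustrative, not from the article):

```python
import random
import statistics


def quarter_over_quarter_swings(n_respondents: int = 40,
                                trials: int = 2000,
                                seed: int = 1) -> list[float]:
    """Simulate the change in a survey's mean score between two quarters
    when underlying sentiment is identical (uniform 1-5 ratings both times).

    Any observed swing is pure sampling noise.
    """
    random.seed(seed)
    swings = []
    for _ in range(trials):
        q1 = [random.randint(1, 5) for _ in range(n_respondents)]
        q2 = [random.randint(1, 5) for _ in range(n_respondents)]
        swings.append(statistics.mean(q2) - statistics.mean(q1))
    return swings
```

With 40 respondents, swings of ±0.3 on a 5-point scale show up regularly even though nothing changed, which is exactly the kind of movement people are tempted to celebrate or panic over.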
For all those reasons, I recommend against setting a direct goal on survey results. Something we experimented with at Stripe was setting a goal on how long a given item remained in the top three complaints (on the theory that going from “top three” to “not top three” correlates with a quality improvement), but it was too complex to explain to work well as a metric.
These results are more useful as a map than a ruler.
Yes, absolutely. Using on-demand surveys to answer a particular question after a particular change can be more helpful than generic, periodic surveys. Don’t feel pressured to keep sending surveys every three months if they’re not serving a useful purpose. Internal surveys are almost guaranteed to be invalid and inaccurate in the details, so don’t get caught up pursuing a facsimile of science.
Well, I think so. I’ve never regretted doing these surveys once or twice at a given company, although I think the desire to run them frequently tends to get folks in trouble, particularly at companies with an abundance of email to work through. If you’re not sure it’s a good idea, give it a try.