While writing Managing technical quality in a codebase, I wanted to find a good reference on running developer productivity surveys, but could only find one related article, How to survey your software developers about their tools. That’s a totally _fine_ article, but its advice is much more focused on running an internal survey in general than on running a developer productivity survey specifically, so I decided to jot down some notes.
What is a developer productivity survey?
A survey you send out to folks writing software, plus adjacent roles, within your company. You’ll ask questions about which tools are and are not supporting their needs.
How should you do the survey?
Start simple. Use a Google Form or SurveyMonkey or something like that. Don’t prematurely optimize.
When should a company do a dev prod survey?
I’ve found them most useful after a large hiring push, because new hires are the most attuned to your workflow and tooling problems. Asking the same folks the same questions is less helpful.
You also need someone who’s going to actually use the results for something.
What’s the best-case outcome?
You pick the right projects to improve the experience of being a developer at your company. Your organization gets faster and does more important work. You’re promoted and actually like the sort of work at that level, I guess?
What’s the worst-case outcome?
People ignore the survey and don’t fill it out. People feel like you’re disrespecting their time. People raise real issues and you ignore them, losing trust with the organization.
How often should you run them?
You should never send surveys out more frequently than you’re able to make significant improvements on the previously raised issues. Generally, quarterly is the maximum, but it really depends on your company size. Some companies are large enough that they can survey a small segment of the company every week rather than the entire organization every quarter. That’s a pretty nifty way to get weekly data, but you’re going to need thousands of folks to make it work well.
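The segmented cadence described above can be sketched with a stable hash that assigns each person to exactly one weekly cohort, so some cohort is surveyed every week while each individual is only surveyed once per cycle. This is a minimal sketch under assumed details: a hypothetical list of employee emails and a 13-week (roughly quarterly) cycle, not any particular company’s tooling.

```python
import hashlib


def weekly_cohort(email: str, n_cohorts: int = 13) -> int:
    """Assign an employee to one of n_cohorts stable buckets.

    Hashing the email keeps assignment deterministic: the same person
    always lands in the same cohort, so nobody is surveyed twice per cycle.
    """
    digest = hashlib.sha256(email.lower().encode()).hexdigest()
    return int(digest, 16) % n_cohorts


def this_weeks_sample(emails, week_number, n_cohorts=13):
    """Everyone whose cohort matches the current week of the cycle."""
    return [e for e in emails if weekly_cohort(e, n_cohorts) == week_number % n_cohorts]
```

Over a full 13-week cycle, the cohorts partition the whole organization, which is why this only produces a meaningful weekly signal once each cohort is large enough on its own.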
What are some good questions?
It really depends on what you’re trying to answer, but some ideas:
- How would you rate our X process from 1 to 10? Where X is every common workflow at the company: test, code review, build, deploy, release, feature flagging, running experiments, reverting, incident management, on-call, debugging, and so on.
- What tools that you’ve previously used do you find yourself missing?
- Where do you feel like you lose time every week?
- Are there tools or areas of our code that you avoid when possible?
- Are there tasks or activities that feel unreasonably hard to accomplish?
- What feels hard that you know would be easy if you were doing it yourself as a hobby project?
- If you could wave a wand and fix anything about developing software at our company, what would you change?
- Is there anything you’d like to say that didn’t fit elsewhere?
To find the right questions, figure out what you want to learn, and reason backwards from that.
Do you have a template we can use?
No, I do not.
Any more advice on picking questions?
Test run the questions with a few different folks who you want to reply to the survey. You’ll find gaps in your coverage, questions you should have asked, along with questions that don’t make as much sense as you thought they did.
Also, try to focus on qualitative questions more than quantitative.
Should we set a goal on our survey results?
It’s easy to fall in love with the quantitative aspects of surveys, but generally speaking I’ve not found the scores to be particularly useful. Internal surveys are often filled out by so few folks that changes are not statistically significant. Folks often get bored of filling out surveys so numbers can reflect who participates more than general sentiment. Folks become accustomed to how something works over time, no longer perceiving the friction they’re existing within.
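One way to see why quarter-over-quarter score changes on a small internal survey rarely mean much is to compare approximate confidence intervals for the mean scores. The numbers below are entirely made up for illustration, and the normal-approximation interval is a rough sketch, not rigorous statistics.

```python
import math
import statistics


def mean_ci(scores, z=1.96):
    """Approximate 95% confidence interval for a mean 1-10 score."""
    n = len(scores)
    mean = statistics.fmean(scores)
    se = statistics.stdev(scores) / math.sqrt(n)
    return mean - z * se, mean + z * se


# Hypothetical results: 18 respondents last quarter, 15 this quarter.
last_q = [6, 7, 5, 8, 6, 7, 4, 6, 7, 5, 8, 6, 7, 6, 5, 7, 6, 8]
this_q = [7, 6, 8, 7, 5, 7, 6, 8, 7, 6, 9, 7, 6, 8, 7]

lo1, hi1 = mean_ci(last_q)
lo2, hi2 = mean_ci(this_q)

# The mean "improved" from ~6.3 to ~6.9, but the intervals overlap:
# with this few respondents, the change may well be noise.
overlap = lo2 <= hi1
```

With response counts this small, even a half-point swing in the average sits comfortably inside the noise, which is one reason the scores work better as a map than a ruler.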
For all those reasons, I recommend against setting a direct goal on survey results. Something we experimented with at Stripe was setting a goal on how long a given item remained in folks’ top three complaints (on the theory that moving from “top three” to “not top three” correlates with quality improvement), but it suffered from being too complex to explain to work well as a metric.
These results are more useful as a map than a ruler.
Can I please stop sending out these surveys?
Yes, absolutely. Using on-demand surveys to answer a particular question after a particular change can be more helpful than generic, periodic surveys. Don’t feel pressured to keep sending surveys every three months if they’re not serving a useful purpose. Internal surveys are almost guaranteed to be invalid and inaccurate in the details, so don’t get caught up pursuing a facsimile of science.
Is this worth doing?
Well, I think so. I’ve never regretted doing these surveys once or twice at a given company, although I think the desire to run them frequently tends to get folks in trouble, particularly at companies with an abundance of email to work through. If you’re not sure it’s a good idea, give it a try.