Many of the most important questions for running an organization don’t have clear answers. In most engineering organizations, both the teams working on infrastructure and the teams working on product feel they are undersized. It’s also true that most individuals feel they are undercompensated. In boom times, there is often enough investor money lying around to say yes to all of these requests, but many leaders are now acutely learning the long-term costs of expanding their budgets too far.
While there is no perfect way to answer these questions, the best approach is benchmarks. By collecting a reasonable set of data points, you can understand where similar companies operate, and ensure that you either operate in a similar place or have a clear rationale for choosing a different one.
Here are some concrete benchmarking examples from my work over the past couple of months:
- To better understand how companies size their engineering teams, we benchmarked our R&D costs as a percentage of revenue over the past 12 months against other companies’. If your company is similar in size to public technology companies like GitLab or Datadog, you can gather this data from their quarterly profit & loss statements. If you’re smaller, you can ask your investors for their datasets, which they will almost always be glad to provide because it helps you run a more efficient business.
- Abi Noda of DX has an upcoming blog post that explores the size of platform engineering teams at different companies. This sort of data, which you can collect by asking your network, is extremely helpful for understanding whether your platform investment is typical or whether you need a more nuanced defense because it’s abnormally high.
- On the most structured side of things, compensation bands are aggregated by a number of companies (including my employer’s offering, Carta Total Compensation).
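The first example above is simple enough to sketch in code. This is a minimal illustration of the comparison, not a real dataset: all figures and peer names are hypothetical placeholders, and real numbers would come from quarterly P&L statements or investor datasets.

```python
# Sketch of comparing R&D spend as a percentage of revenue against peers.
# All figures are hypothetical placeholders, not real company data.

def rd_pct_of_revenue(rd_cost: float, revenue: float) -> float:
    """R&D cost as a percentage of trailing-twelve-month revenue."""
    return 100.0 * rd_cost / revenue

# Hypothetical peer benchmarks (in $M), as if pulled from public P&Ls.
peers = {
    "peer_a": rd_pct_of_revenue(250, 1000),  # 25.0%
    "peer_b": rd_pct_of_revenue(400, 1000),  # 40.0%
    "peer_c": rd_pct_of_revenue(150, 1000),  # 15.0%
}

# Our own (hypothetical) trailing-twelve-month figures.
ours = rd_pct_of_revenue(300, 900)

# Where do we sit relative to the peer set?
below_us = sum(1 for pct in peers.values() if pct < ours)
print(f"Our R&D spend: {ours:.1f}% of revenue")
print(f"Peers spending a smaller share than us: {below_us} of {len(peers)}")
```

The output doesn’t answer anything on its own; it just tells you where you sit relative to the peer set, which is the starting point for the questions discussed below.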
It’s easy to lean too heavily on benchmarks by believing that they answer questions: they don’t. Benchmarks only raise questions; they never answer them. It’s up to whoever is using the benchmarks to extract the questions and do their own work to answer them. If you look at “R&D costs as a percentage of revenue” across companies, you’ll notice that some are four or five times higher than others. Are the high spenders early in making a calculated bet on releasing a new service, or are they just inefficient? Either, or both, could be true, and that’s the sort of interesting question-answer pair to work through when using benchmarks to evaluate.
Finally, it’s also worth noting that sometimes people don’t want data, and that’s a different problem. I previously worked at a company that wanted to improve its cost structure, and I worked with another executive to bring in benchmarks on the appropriate spend for each function, with the goal that we’d all set per-function targets to move toward. In that case, there was active disinterest in looking at the data; instead, the wider team wanted to make individual pitches to the CEO about increasing their own function’s budget.