Irrational Exuberancehttps://lethain.com/Recent content on Irrational ExuberanceHugo -- gohugo.ioen-usWill LarsonSun, 07 Dec 2025 08:00:00 -0700Facilitating AI adoption at Imprinthttps://lethain.com/company-ai-adoption/Sun, 07 Dec 2025 08:00:00 -0700https://lethain.com/company-ai-adoption/<p>I&rsquo;ve been working on internal &ldquo;AI&rdquo; adoption, which is really LLM-tooling and agent adoption, for the past 18 months or so. This is a problem that I think is, at minimum, a side-quest for every engineering leader in the current era. Given the sheer number of folks working on this problem within their own company, I wanted to write up my “working notes” of what I’ve learned.</p> <p>This isn&rsquo;t a recommendation about what you should do, merely a recap of how I&rsquo;ve approached the problem thus far, and what I&rsquo;ve learned through ongoing iteration. I hope the thinking here will be useful to you, or at least validates some of what you&rsquo;re experiencing in your rollout. The further you read, the more specific this will get, ending with cheap-turpentine-esque topics like getting agents to reliably translate human-readable text representations of Slack entities into <code>mrkdwn</code> formatting of the correct underlying entity.</p> <div class="bg-light-gray br4 ph3 pv1"> <p><strong>I am hiring:</strong> If you&rsquo;re interested in working together with me on internal agent and AI adoption at Imprint, we are hiring our founding <a href="https://jobs.ashbyhq.com/imprint/2c83c162-40d4-4b27-91e5-a1a2e81262ab">Senior Software Engineer, AI</a>. The ideal candidate is a product engineer who&rsquo;s spent some time experimenting with agents, and wants to spend the next year or two digging into this space.</p> </div> <h2 id="prework-building-my-intuition">Prework: building my intuition</h2> <p>As technologists, I think one of the basics we owe our teams is spending time working directly with new tools to develop an intuition for how they do, and don&rsquo;t work. AI adoption is no different.</p> <p>Towards that end, I started with a bit of reading, especially Chip Huyen&rsquo;s <em><a href="https://www.amazon.com/AI-Engineering-Building-Applications-Foundation/dp/1098166302">AI Engineering</a></em>, and then dove in a handful of bounded projects: building <a href="https://lethain.com/our-own-agents-our-own-tools/">my rudimentary own agent platform</a> using Claude code for implementation, creating <a href="https://lethain.com/library-mcp/">a trivial MCP for searching my blog posts</a>, and <a href="https://lethain.com/notion-agent/">an agent to comment on Notion documents</a>.</p> <p>Each of these projects was two to ten hours, and extremely clarifying. Tool use is, in particular, something that seemed like magic until I implemented a simple tool-using agent, at which point it become something extremely non-magical that I could reason about and understand.</p> <h2 id="our-ai-adoption-strategy">Our AI adoption strategy</h2> <p>Imprint&rsquo;s general approach to refining AI adoption is <a href="https://lethain.com/testing-strategy-iterative-refinement/">strategy testing</a>: identify a few goals, pick an initial approach, and then iterate rapidly in the details until the approach genuinely works. 
In an era of crushing optics, senior leaders immersing themselves in the details is one of our few defenses.</p> <div class="ba b--light-gray"> <p><img src="https://lethain.com/static/blog/2025/ai-strategy.png" alt="First draft of Imprint&rsquo;s strategy for AI adoption"></p> </div> <p>Shortly after joining, I partnered with the executive team to draft the above strategy for AI adoption. After a modest amount of debate, the pillars we landed on were:</p> <ol> <li><strong>Pave the path to adoption</strong> by removing obstacles to adoption, especially things like having to explicitly request access to tooling. There&rsquo;s significant internal and industry excitemetn for AI adoption, and we should believe in our teams. If they aren&rsquo;t adopting tooling, we predominantly focus on making it <em>easier</em> rather than spending time being skeptical or dismissive of their efforts towards adoption.</li> <li><strong>Opportunity for adoption is everywhere</strong>, rather than being isolated to engineering, customer service, or what not. To become a company that widely benefits from AI, we need to be solving the problem of adoption across all teams. It&rsquo;s not that I believe we should take the same approach everywhere, but we need <em>some</em> applicable approach for each team.</li> <li><strong>Senior leadership leads from the front</strong> to ensure what we&rsquo;re doing is genuinely useful, rather than getting caught up in what we&rsquo;re measuring.</li> </ol> <p>As you see from those principles, and my earlier comment, my biggest fear for AI adoption is that they can focus on creating the impression of adopting AI, rather than focusing on creating additional productivity. Optics are a core part of any work, but almost all interesting work occurs where optics and reality intersect, which these pillars aimed to support.</p> <hr> <p>As an aside, in terms of the <a href="https://lethain.com/components-of-eng-strategy/">components of strategy</a> in <em><a href="https://craftingengstrategy.com/">Crafting Engineering Strategy</a></em>, this is really just the <a href="https://lethain.com/policy-for-strategy/">strategy&rsquo;s policy</a>. In addition, we used <a href="https://lethain.com/testing-strategy-iterative-refinement/">strategy testing</a> to refine our approach, defined a concrete set of initial actions to operationalize it (they&rsquo;re a bit too specific to share externally), and did some brief <a href="https://lethain.com/exploring-for-strategy/">exploration</a> to make sure I wasn&rsquo;t overfitting on my prior work at Carta.</p> <h2 id="documenting-tips--tricks">Documenting tips &amp; tricks</h2> <p>My first step towards adoption was collecting as many internal examples of tips and tricks as possible into a single Notion database. I took a very broad view on what qualified, with the belief that showing many different examples of using tools&ndash;especially across different functions&ndash;is both useful and inspiring.</p> <div class="ba b--light-gray"> <p><img src="https://lethain.com/static/blog/2025/ai-tips.png" alt="The image is a table listing AI tips and trainings with columns for the name of the tip and the relevant team. 
It includes topics like using Claude Code, adding Slack bots, and employing ChatGPT for marketing."></p> </div> <p>I&rsquo;ve continued extending this, with contributions from across the company, and it&rsquo;s become a useful resource for both humans and bots alike to provide suggestions on approaching problems with AI tooling.</p> <h2 id="centralizing-our-prompts">Centralizing our prompts</h2> <p>One of my core beliefs in our approach is that making prompts discoverable within the company is extremely valuable. Discoverability solves five distinct problems:</p> <ol> <li>Creating visibility into what prompts can do (so others can be inspired to use them in similar scenarios). For example, you can use our agents to comment on a Notion doc when it&rsquo;s created, respond in Slack channels effectively, triage Jira tickets, etc</li> <li>Showing what a good prompt looks like (so others can improve their prompts). For example, you can start moving complex configuration into tables and out of lists which are harder to read and accurately modify</li> <li>Serving as a repository of copy-able sections to reuse across prompts. For example, you can copy one of our existing &ldquo;Jira-issue triaging prompts&rdquo; to start triaging a new Jira project</li> <li>Prompts are joint property of a team or function, not the immutable construct of one person. For example, anyone on our Helpdesk team can improve the prompt responding to Helpdesk requests, not just one person with access to the prompt, and it&rsquo;s not locked behind being comfortable with Git or GitHub (although I do imagine we&rsquo;ll end up with more restrictions around editing our most important internal agents over time)</li> <li>Identifying repeating prompt sub-components that imply missing or hard-to-use tools. For example, earlier versions of our prompts had a lot of confusion around how to specify Slack users and channels, which I got comfortable working around, but others did not</li> </ol> <p>My core approach is that <em>every</em> agent&rsquo;s prompt is stored in a single Notion database which is readable by everyone in the company. <em>Most</em> prompts are editable by everyone, but some have editing restrictions.</p> <p>Here&rsquo;s an example of a prompt we use for routing incoming Jira issues from Customer Support to the correct engineering team.</p> <div class="ba b--light-gray"> <p><img src="https://lethain.com/static/blog/2025/ai-routing-prompt.png" alt="The image provides instructions for triaging Jira tickets, detailing steps for retrieving comments, updating labels, and determining responsible teams. It includes guidelines for using Slack for communication and references, and lists teams with their on-call aliases and areas of responsibility."></p> </div> <p>Here&rsquo;s a second example, this time of responding to requests in our Infrastructure Engineering team&rsquo;s request channel.</p> <div class="ba b--light-gray"> <p><img src="https://lethain.com/static/blog/2025/ai-infra-prompt.png" alt="The image contains detailed instructions for service desk agents on handling Slack messages related to access requests for tools such as AWS, VPN, NPM, and more. It provides step-by-step guidelines for different scenarios, including retrieving user IDs, handling specific requests, and directing users to appropriate resources or teams."></p> </div> <p>Pretty much all prompts end with an instruction to include a link to the prompt in the generated message.
This ensures it&rsquo;s easy to go from a mediocre response to the prompt-driving the response, so that you can fix it.</p> <h2 id="adopting-standard-platform">Adopting standard platform</h2> <p>In addition to collecting tips and prompts, the next obvious step for AI adoption is identifying a standard AI platform to be used within the company, e.g. ChatGPT, Claude, Gemini or what not.</p> <p>We&rsquo;ve gone with OpenAI for everyone. In addition to standardizing on a platform, we made sure account provisioning was automatic and in place on day one. To the surprise of no one who&rsquo;s worked in or adjacent to IT, a lot of revolutionary general AI adoption is&hellip; really just account provisioning and access controls. These are the little details that can so easily derail the broader plan if you don&rsquo;t dive into them.</p> <p>Within Engineering, we also provide both Cursor and Claude. That said, the vast majority of our Claude usage is done via AWS Bedrock, which we use to power Claude Code&hellip; and we use Claude Code quite a bit.</p> <h2 id="other-ai-tooling">Other AI tooling</h2> <p>While there&rsquo;s a general industry push towards adopting more AI tooling, I find that a significant majority of &ldquo;AI tools&rdquo; are just SaaS vendors that talk about AI in their marketing pitches. We have continued to adopt vendors, but have worked internally to help teams evaluate which &ldquo;AI tools&rdquo; are meaningful.</p> <p>We&rsquo;ve spent a fair amount of time going deep on integrating with AI tooling for chat and IVR tooling, but that&rsquo;s a different post entirely.</p> <h2 id="metrics">Metrics</h2> <p>Measuring AI adoption is, like all measurement topics, fraught. Altogether, I&rsquo;ve found measuring tool adoption very useful for identifying the right questions to ask. Why <em>haven&rsquo;t</em> you used Cursor? Or Claude Code? Or whatever? These are fascinating questions to dig into. I try to look at usage data at least once a month, with a particular focus on two questions:</p> <ol> <li>For power adopters, what are they actually doing? Why do they find it useful?</li> <li>For low or non-adopters, why aren&rsquo;t they using the tooling? How could we help solve that for them?</li> </ol> <p>At the core, I believe folks who aren&rsquo;t adopting tools are rational non-adopters, and spending some time understanding the (appearance of) resistance goes further than top-down mandate. I think it&rsquo;s often an education gap that is bridged easily enough. Conceivably, at some point I&rsquo;ll discover a point of diminishing returns, where the lack of progress is stymied on folks who are rejecting AI tooling&ndash;or because the AI tooling isn&rsquo;t genuinely useful&ndash;but I haven&rsquo;t found that point yet.</p> <h2 id="building-internal-agents">Building internal agents</h2> <p>The next few sections are about building internal agents. The core implementation is a single stateless lambda which handles a wide variety of HTTP requests, similar-ish to Zapier. This is currently implemented in Python, and is roughly 3,000 lines of code, much of it dedicated to oddities like formatting Slack messages, etc.</p> <p>For the record, I did originally attempt to do this <a href="https://lethain.com/commenting-notion-open-ai-zapier/">within Zapier</a>, but I found that Zapier simply doesn&rsquo;t facilitate the precision I believe is necessary to do this effectively. 
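</p> <p>To make the custom implementation concrete: the lambda is essentially a dispatcher from inbound webhooks to agent runs configured per source. Here&rsquo;s a rough sketch of that shape (the routes and helper are hypothetical illustrations, not the actual code):</p> <pre><code>import json

def handler(event, context):
    """Single stateless entrypoint: route each inbound webhook to an agent run."""
    body = json.loads(event.get("body") or "{}")
    path = event.get("rawPath", "")
    if path == "/slack/events":        # hypothetical routes, for illustration
        run_agent(source="slack", payload=body)
    elif path == "/jira/webhook":
        run_agent(source="jira", payload=body)
    elif path == "/notion/webhook":
        run_agent(source="notion", payload=body)
    return {"statusCode": 200, "body": "ok"}

def run_agent(source, payload):
    # Load the prompt and allowed tools configured for this source, call the
    # model, execute any requested tool calls, and post the result back.
    ...
</code></pre> <p>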
I also think that Zapier isn&rsquo;t particularly approachable for a non-engineering audience.</p> <h3 id="what-has-fueled-adoption-especially-for-agents">What has fueled adoption (especially for agents)</h3> <p>As someone who spent a long time working in platform engineering, I still <em>want to believe</em> that you can build a platform, and users will come. Indeed, I think it&rsquo;s true that a small number of early adopters <em>will come</em>, if the problem is sufficiently painful for them, as was the case for <a href="https://lethain.com/uber-service-migration-strategy/">Uber&rsquo;s service migration (2014)</a>.</p> <p>However, what we&rsquo;ve found effective for driving adoption is basically the opposite of that. What&rsquo;s really worked is the intersection of platform engineering and old-fashioned product engineering:</p> <ol> <li>(product eng) find a workflow with a lot of challenges or potential impact</li> <li>(product eng) work closely with domain experts to get the first version working</li> <li>(platform eng) ensure that working solution is extensible by the team using it</li> <li>(both) monitor adoption as an indicator of problem-solution fit, or lack thereof</li> </ol> <p>Some examples of the projects where we&rsquo;ve gotten traction internally:</p> <ul> <li>Writing software with effective <code>AGENTS.md</code> files guiding use of tests, typechecking and linting</li> <li>Powering initial customer questions through chat and IVR</li> <li>Routing chat bots to steer questions to solve the problem, provide the answer, or notify the correct responder</li> <li>Issue triaging for incoming tickets: tagging them, and assigning them to the appropriate teams</li> <li>Providing real-time initial feedback on routine compliance and legal questions (e.g. questions which occur frequently and with little deviation)</li> <li>Writing weekly priorities updates after pulling a wide range of resources (Git commits, Slack messages, etc)</li> </ul> <p>For all of these projects that have worked, the formula has been the opposite of &ldquo;build a platform and they will come.&rdquo; Instead it&rsquo;s required deep partnership from folks with experience building AI agents and using AI tooling to make progress. The learning curve for effective AI adoption in important or production-like workflows remains meaningfully high.</p> <h3 id="configuring-agents">Configuring agents</h3> <p>Agents that use powerful tools represent a complex configuration problem. First, exposing too many tools&ndash;especially tools that the prompt author doesn&rsquo;t effectively understand&ndash;makes it very difficult to create reliable workflows. For example, we have an <code>exit_early</code> command that allows terminating the agent early: this is very effective in many cases, but it also makes it easy to break your bot. Similarly, we have a <code>slack_chat</code> command that allows posting across channels, which can support a variety of useful workflows (e.g. warm-handoffs of a question in one channel into a more appropriate alternative), but can also spam folks. Second, as tools get more powerful, they can introduce complex security scenarios.</p> <p>To address both of these, we currently store configuration in a code-reviewed Git repository.
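</p> <p>Each entry is a small piece of Python rather than a blob of JSON. As a hedged sketch of the general shape (the field names are inferred from the screenshots below, not copied from the real repository):</p> <pre><code>from dataclasses import dataclass

# Illustrative sketch of a statically typed agent configuration entry; the
# field names are guesses based on the screenshots below, not the real code.
@dataclass(frozen=True)
class AgentConfig:
    prompt_id: str                    # Notion page that holds the prompt
    allowed_tools: tuple[str, ...]    # explicit per-agent tool allowlist
    model: str = "gpt-4.1"
    respond_to_issue: bool = False    # e.g. triage-only Jira agents

JIRA_TRIAGE = AgentConfig(
    prompt_id="notion-prompt-page-id",
    allowed_tools=("notion_search", "slack_chat"),
)
</code></pre> <p>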
Here&rsquo;s an example of a JIRA project.</p> <div class="ba b--light-gray"> <p><img src="https://lethain.com/static/blog/2025/ai-jira-config.png" alt="This image shows a configuration script for a Jira setup with specified project keys, a prompt ID, a list of allowed tools such as &ldquo;notion_search&rdquo; and &ldquo;slack_chat,&rdquo; and a model set to &ldquo;gpt-4.1&rdquo;. The configuration also has a setting &ldquo;respond_to_issue&rdquo; set to False."></p> </div> <p>Here&rsquo;s another for specifying a Slack responder bot.</p> <div class="ba b--light-gray"> <p><img src="https://lethain.com/static/blog/2025/ai-slack-config.png" alt="This image shows a code snippet configuring a channel in Slack for &ldquo;eng-new-hires,&rdquo; with specified Slack channel IDs, a Notion prompt ID, and a list of allowed tools like &ldquo;notion_search&rdquo; and &ldquo;jira_search_jql.&rdquo; The model specified is &ldquo;gpt-4.1.&rdquo;"></p> </div> <p>Compared to a JSON file, we can statically type the configuration, and it&rsquo;s easy to extend over time. For example, we might want to extend <code>slack_chat</code> to restrict which channels a given bot is allowed to publish into, which would be easy enough. For most agents today, the one thing not under Git-version control is the prompts themselves, which are versioned by Notion. However, we can easily require specific agents to use prompts within the Git-managed repository for sensitive scenarios.</p> <p>After passing tests, linting and typechecking, the configurations are automatically deployed.</p> <h3 id="resolving-foreign-keys">Resolving foreign keys</h3> <p>It&rsquo;s sort of funny to mention, but one thing that has in practice really interfered with easily writing effective prompts is making it easy to write things like <code>@Will Larson</code> which is then translated into <code>&lt;@U12345&gt;</code> or whatever the appropriate Slack identifier is for a given user, channel, or user group. The same problem exists for Jira groups, Notion pages and databases, and so on.</p> <p>This is a good example of where centralizing prompts is useful. I got comfortable pulling the unique identifiers myself, but it became evident that most others were not. This eventually ended with three tools for Slack resolution: <code>slack_lookup</code> which takes a list of references to lookup, <code>slack_lookup_prefix</code> which finds all Slack entities that start with a given prefix (useful to pull all channels or groups starting with <code>@oncall-</code>, for example, rather than having to hard-code the list in your prompt), and <code>slack_search_name</code> which uses string-distance to find potential matches (again, useful for dealing with typos).</p> <p>If this sounds bewildering, it&rsquo;s largely the result of Slack not exposing relevant APIs for this sort of lookup. Slack&rsquo;s APIs want to use IDs to retrieve users, groups and channels, so you have to maintain your own cache of these items to perform a lookup. Performing the lookups, especially for users, is itself messy. Slack users have a minimum of three ways they might be referenced: <code>user.profile.display_name</code>, <code>user.name</code>, and <code>user.real_name</code>, only a subset of which are set for any given user. The correct logic here is, as best I can tell, to find a match against <code>user.profile.display_name</code>, then use that if it exists. Then do the same for <code>user.name</code> and finally <code>user.real_name</code>. 
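</p> <p>A minimal sketch of that resolution order, assuming you already maintain a local cache of Slack user records (the function and field handling here are illustrative, not our actual implementation):</p> <pre><code>def resolve_slack_user(reference, cached_users):
    """Illustrative sketch: map '@Will Larson' to a Slack user ID like 'U12345'."""
    wanted = reference.lstrip("@").strip().lower()
    # Exhaust each field across all users before falling back to the next field,
    # rather than taking the first user that matches any of the three fields.
    for field in ("display_name", "name", "real_name"):
        for user in cached_users:
            if field == "display_name":
                value = user.get("profile", {}).get("display_name")
            else:
                value = user.get(field)
            if value and value.lower() == wanted:
                return user["id"]
    return None  # caller falls back to the prefix or string-distance lookup tools
</code></pre> <p>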
If you take the first user that matches one of those three, you&rsquo;ll use the wrong user in some scenarios.</p> <p>In addition to providing tools to LLMs for resolving names, I also have a final mandatory check for each response to ensure the returned references refer to real items. If not, I inject which ones are invalid into the context window and perform an additional agent loop with only entity-resolution tools available. This feels absurd, but it was only at this point that things really started working consistently.</p> <hr> <p>As an aside, I was embarrassed by these screenshots, and earlier today I made the same changes for Notion pages and databases as I had previously for Slack.</p> <h2 id="formatting">Formatting</h2> <p>Similar to foreign entity resolution, there&rsquo;s a related problem with <a href="https://docs.slack.dev/messaging/formatting-message-text/">Slack&rsquo;s <code>mrkdwn</code> variant of Markdown</a> and JIRA&rsquo;s <a href="https://developer.atlassian.com/cloud/jira/platform/apis/document/structure/">Atlassian Document Format</a>: they&rsquo;re both strict.</p> <p>The tools that call into those APIs now have strict instructions on formatting. These <em>had</em> been contained in individual prompts, but they started showing up in every prompt, so I knew I needed to bring them into the agent framework itself rather than forcing every prompt-author to understand the problem.</p> <p>My guess is that I need to add a validation step similar to the one I added for entity-resolution, and that until I do so, I&rsquo;ll continue to have a small number of very infrequent but annoying rendering issues. To be honest, I personally don&rsquo;t mind the rendering issues, but that creates a lot of uncertainty for others using agents, so I think solving them is a requirement.</p> <h3 id="logging-and-debugging">Logging and debugging</h3> <p>Today, all logs, especially tool usage, are fed into two places. First, they go into Datadog for full logging visibility. Second, and perhaps more usefully for non-engineers, they feed into a Slack channel, <code>#ai-logs</code>, which creates visibility into which tools are used and with which (potentially truncated) parameters.</p> <p>Longer term, I imagine this will be exposed via a dedicated internal web UX, but generally speaking I&rsquo;ve found that the subset of folks who are actively developing agents are pretty willing to deal with a bit of cruft. Similarly, the folks who aren&rsquo;t developing agents directly don&rsquo;t really care: they want it to work perfectly every time, and aren&rsquo;t spending time looking at logs.</p> <h2 id="biggest-remaining-gap-universal-platform-for-accessing-user-scope-mcp-servers">Biggest remaining gap: universal platform for accessing user-scope MCP servers</h2> <p>The biggest internal opportunity that I see today is figuring out how to get non-engineers an experience equivalent to running Claude Code locally with all their favorite MCP servers plugged in. I&rsquo;ve wanted ChatGPT or Claude.ai to provide this, but they don&rsquo;t <em>really</em> quite get there. Claude Desktop is close, but it&rsquo;s somewhat messy to configure as we think about finding a tool that we can easily allow everyone internally to customize and use on a daily basis.</p> <p>I&rsquo;m still looking for what the right tool is here.
If anyone has any great suggestions that we can be somewhat confident will still exist in two years, and don&rsquo;t require sending a bunch of internal data to a very early stage company, then I&rsquo;m curious to hear!</p> <h2 id="whats-next">What&rsquo;s next?</h2> <p>You&rsquo;re supposed to start a good conclusion with some sort of punchy anecdote that illuminates your overall thesis in a new way. I&rsquo;m not sure if I can quite meet that bar, but the four most important ideas for me are:</p> <ol> <li>We are still very early on AI adoption, so focusing on rate of learning is more valuable than anything else</li> <li>If you want to lead an internal AI initiative, you simply <em>must</em> be using the tools, and <em>not</em> just ChatGPT, but building your own tool-using agent using only an LLM API</li> <li>My experience is that <em>real</em> AI adoption on <em>real</em> problems is a complex blend of: domain context on the problem, domain experience with AI tooling, and old-fashioned IT issues. I&rsquo;m deeply skeptical of any initiative for internal AI adoption that doesn&rsquo;t anchor on all three of those. This is an advantage of earlier stage companies, because you can often find aspects of all three of those in a single person, or at least across two people. In larger companies, you need three different <em>organizations</em> doing this work together, this is just objectively hard</li> <li>I think model selection matters a lot, but there are only 2-3 models you need at any given moment in time, and someone can just tell you what those 2-3 models are at any given moment. For example, <a href="https://platform.openai.com/docs/models/gpt-4.1">GPT-4.1</a> is just exceptionally good at following rules quickly. It&rsquo;s a <em>great</em> model for most latency-sensitive agents</li> </ol> <p>I&rsquo;m curious what other folks are finding!</p>Coding at work (after a decade away).https://lethain.com/coding-at-work/Wed, 19 Nov 2025 08:00:00 -0700https://lethain.com/coding-at-work/<p>Since joining Imprint a bit over six months ago as the CTO of a ~50 engineer team, I&rsquo;ve merged 104 pull requests, which is slightly over four per week. Many of them are very minimal configuration and documentation tweaks, and none were <em>the hardest</em> or even <em>most time-sensitive</em> task available at any given time; I&rsquo;m much more of a pull request scavenger finding opportunities that don&rsquo;t disrupt the operating teams&rsquo; rhythms.</p> <p>That said, a decent chunk represent meaningful software development tasks, and these 104 pull requests are more pull requests than I&rsquo;ve completed <em>combined</em> in my prior decade of work, where Uber was the last time I was substantially submitting pull requests (well, whatever Phabricator called its pull request equvialent). 
Since then, I&rsquo;ve been predominantly managing managers, and <em>not</em> writing software with my own hands at work.</p> <p>This has been a fascinating shift, and I wanted to write down some thoughts on whether this is good for software, good for me, whether it&rsquo;s fun, and how I&rsquo;ve personally worked to adapt to <a href="https://lethain.com/good-eng-mgmt-is-a-fad/">the current era</a>.</p> <hr> <p><em>Unrelatedly, I did enjoy Peter Seibel&rsquo;s <a href="https://codersatwork.com/">Coders at Work</a> (2009).</em></p> <h2 id="dubious-return-on-effort-of-manager-coding">Dubious return-on-effort of manager coding</h2> <p>I was a computer science major, and worked as a software engineer prior to becoming an engineering manager. I continued to write software when I was a line manager, and continued to write small projects in my free time after becoming a manager. The idea of writing software at work has always appealed to me.</p> <p>However, when I became a manager of managers, I stopped writing software at work. I <em>wanted</em> to write software, but it simply felt like a lower return on time than doing something else. For example, if I focused my time on hiring another engineer onto the team, they would undoubtedly do more than I would. Similarly, if I spent time improving our plans, that would have a higher impact than me writing a piece of software. Ultimately, I got stuck in the trap that it was always clear that writing software would be valuable, but it would always be less valuable than another endeavor. Each hour I spent writing software was bad for the business overall, and a sign of questionable judgment.</p> <p>The same wasn&rsquo;t true when it came to <em>understanding</em> the codebase, and I&rsquo;ve tried&ndash;to varying degrees of success&ndash;to build a workable degree of awareness of the codebases I&rsquo;ve managed. That being said, since I left line management roles, I&rsquo;ve never been truly successful at doing this beyond the superficial level. My experience is that only writing software can build a truly effective understanding of <a href="https://lethain.com/reclaim-unreasonable-software/">unreasonable software</a>, and that most startup software is unreasonable due to its origin of being iteratively designed by a shifting cast of characters with an evolving understanding of the domain they are modeling.</p> <p>You can get very far with a minimal understanding of a codebase by surrounding yourself with strong engineers who you can ask useful questions and get honest answers from. For the big things, this is generally sufficient, but it means you can&rsquo;t really engage with the small questions without spending engineers&rsquo; time on it or making low-judgment decisions.</p> <p>Whether manager coding is valuable comes down to whether you believe making large decisions more quickly with fewer interrupts for context pulling&ndash;and the ability to make numerous small decisions that are otherwise too expensive to make effectively&ndash;builds into meaningfully more impactful management.
This isn&rsquo;t a proclamation for others, but for me, I&rsquo;ve really enjoyed getting back to writing software at work, in large part because the time commitment to do so has dropped significantly over the past couple of years.</p> <h2 id="finding-small-pockets-of-time-to-write-software">Finding small pockets of time to write software</h2> <p>The biggest specific challenge for me when it came to writing software as a manager was finding blocks of thinking time to either understand a problem, or to implement the solution. Even simple fixes require an effective mental model of the codebase being worked on.</p> <p>This becomes inescapably true when you&rsquo;re focused on the long-term impact of your work. For example, adding a bunch of tests <em>might</em> be useful, but it&rsquo;s often the case that poorly designed tests get thrown away over time because they overlap with other existing tests, are flaky, or are slow in aggregate. I think traditionally, a lot of manager coding has fallen into this bucket of optically useful with somewhat dubious long-term value. Doing high quality work simply requires too complete a mental model for folks jumping in and out of writing software.</p> <p>The new wave of AI tooling like Claude Code or OpenAI Codex is extremely prone to creating low-quality commits, but my experience is that used effectively they also provide several opportunities for creating useful code contributions as well. They are effective at:</p> <ol> <li>answering questions about a codebase, e.g. &ldquo;what are our most common patterns for working with authn and authz? what are good examples of each?&rdquo;</li> <li>writing code that fits the existing codebase&rsquo;s patterns and structure, particularly with guidance from a well-written <code>AGENTS.md</code></li> <li>taking general feedback to revise the approach, e.g. &ldquo;look for existing utilities that solve date math within our codebase and reuse those&rdquo;</li> </ol> <p>Most importantly, you can do each of those in a few minutes at a time. Between meetings at work, I generally pop back into one of several Claude Code sessions to see where it got to on a given task, review the code, and suggest next steps.</p> <p>It&rsquo;s worth acknowledging that there&rsquo;s a significant learning curve to doing this well. I&rsquo;ve spent a meaningful amount of time in the last year learning to write software this way, and each month there are new caveats that I&rsquo;ve had to understand. Slowly but surely, I&rsquo;ve built a mental model of both how writing software with AI works, and how Imprint&rsquo;s codebases work.</p> <p>As I&rsquo;ve gotten more knowledgeable about both, I&rsquo;ve refound my ability to write software at work because I can make progress in the small chunks of time between other projects, combined with an hour or two over the week to think more deeply about my approach to more complex issues.</p> <h2 id="judgment--problem-selection">Judgment / problem selection</h2> <p>While new AI tools make it easier than ever for managers to write useful software at work, they also make it easier than ever to write unhelpful software at work.
Particularly in senior roles, it&rsquo;s very easy to write software that makes you feel helpful, but is genuinely unhelpful to the team, and leaves them busier than they were before.</p> <p>The handful of rules that I&rsquo;ve found useful for myself here are:</p> <ol> <li> <p>Never contribute to something that is truly time sensitive unless it&rsquo;s a very constrained thing that I can solve end-to-end <em>today</em>. When that isn&rsquo;t the case, I&rsquo;m usually going to slow things down despite trying to help.</p> </li> <li> <p>Prioritize projects that are hard for teams to get to, but are obviously valuable over time. For example, technical debt clean up, small user-requested features, or missing instrumentation.</p> <p>Infrequently take on a strategic company project which doesn&rsquo;t have an owner, and which I am able to complete <em>myself</em>. A real example of this has been writing our first pass of a new <a href="https://en.wikipedia.org/wiki/Interactive_voice_response">Interactive Voice Response</a> built on an AI agent, which was clearly valuable but difficult to prioritize over the team&rsquo;s existing work.</p> </li> <li> <p>Hold myself to a higher bar than I would hold others in terms of fully releasing my software, monitoring its release, and solving bugs it creates. If I don&rsquo;t have time to do those things, then I am stealing time from the team rather than helping them.</p> </li> <li> <p>Err towards implementing feedback in pull requests, even if I consider it generally neutral. The cost of giving feedback to me is so high for folks, that I&rsquo;m responsible for going out of my way to incorporate it when it is given.</p> </li> </ol> <p>These rules <em>have</em> meant that I didn&rsquo;t work on some projects that I wanted to, but I think they&rsquo;ve done a fairly good job of allowing me to build my judgment about how our software works without getting in the way of the teams who are doing the vast majority of the heavy lifting. I&rsquo;m sure there are better versions of these rules, but in general I&rsquo;d guess that managers ignoring them are close to the border of being unhelpful.</p> <h2 id="should-you-be-coding-at-work">Should you be coding at work?</h2> <p>I&rsquo;m pretty sure that &ldquo;Should managers be coding at work?&rdquo; isn&rsquo;t nearly as interesting a question as folks want it to be, and a meaningful answer depends on each situation&rsquo;s details. However, what&rsquo;s been clearly true for me is that the overhead of writing software at work is substantially lower than it was a few years ago. If you weren&rsquo;t writing software at work because it simply took too much time away from managing the team directly: the constraints have shifted in a profound way after you learn the new wave of tooling.</p> <p>The learning loop for writing software with agents is definitely not zero, but it&rsquo;s something you can learn for a few dollars of tokens and a couple dozen hours spent writing personal projects that are safe to throw away afterwards. That feels like a worthwhile investment to remain effective in one&rsquo;s chosen profession.</p>"Good engineering management" is a fadhttps://lethain.com/good-eng-mgmt-is-a-fad/Sun, 26 Oct 2025 04:00:00 -0700https://lethain.com/good-eng-mgmt-is-a-fad/<p>As I get older, I increasingly think about whether I&rsquo;m spending my time the right way to advance my career and my life. 
This is also a question that your company asks about you every performance cycle: is this engineering manager spending their time effectively to advance the company or their organization?</p> <p>Confusingly, in my experience, answering these nominally similar questions has surprisingly little in common. This piece spends some time exploring both questions in the particularly odd moment we live in today, where managers are being told they&rsquo;ve spent the last decade doing the wrong things, and need to engage with a new model of engineering management in order to be valued by the latest iteration of the industry.</p> <div class="bg-light-gray br4 ph3 pv1"> <p>If you&rsquo;d be more interested in a video version of this, here is the recording of a practice run I gave for a talk centered on these same ideas (<a href="https://docs.google.com/presentation/d/17lTreuVdYMNOr7k2XLzrshEJnB-StaNUzAyh9tE0b5w/edit?slide=id.g39f551c2725_0_0#slide=id.g39f551c2725_0_0">slides from talk</a>).</p> <iframe width="560" height="315" src="https://www.youtube.com/embed/IJlrX4Z4QWs?si=m9HgL7y0AvdOSUq6" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> </div> <h2 id="good-leadership-is-a-fad">Good leadership is a fad</h2> <p>When I started my software career at Yahoo in the late 2000s, I had two 1:1s with my manager over the course of two years. The first one came a few months after I started, and he mostly asked me about a colleague&rsquo;s work quality. The second came when I gave notice that I was leaving to <a href="https://lethain.com/digg-v4/">join Digg</a>. A modern evaluation of this manager would be scathing, but his management style closely resembled that of the team leader in <a href="https://www.amazon.com/Soul-New-Machine-Tracy-Kidder/dp/0316491977"><em>The Soul of A New Machine</em></a>: identifying an important opportunity for the team, and navigating the broader organization that might impede progress towards that goal. He was, in the context we were working in, an effective manager.</p> <p>Compare that leadership style to the expectations of the 2010s, where attracting, retaining, and motivating engineers was emphasized as the most important leadership criteria in many organizations. This made sense in <a href="https://lethain.com/productivity-in-the-age-of-hypergrowth/">the era of hypergrowth</a>, where budgets were uncapped and many companies viewed hiring strong engineers as their constraint on growth. This was an era where managers were explicitly told to stop writing software as the first step of their transition into management, and it was good advice! Looking back we can argue it was bad guidance by today&rsquo;s standards, but it aligned the managers with the leadership expectations of the moment.</p> <p>Then think about our current era, that started in late 2022, where higher interest rates killed <a href="https://www.readmargins.com/p/zirp-explains-the-world">zero-interest-rate-policy (ZIRP)</a> and productized large language models are positioned as killing deep Engineering organizations. We&rsquo;ve flattened Engineering organizations where many roles that previously focused on coordination are now expected to be hands-on keyboard, working deep in the details. 
Once again, the best managers of the prior era&ndash;who did exactly what the industry asked them to do&ndash;are now reframed as bureaucrats rather than integral leaders.</p> <p>In each of these transitions, the business environment shifted, leading to a new formulation of ideal leadership. That makes a lot of sense: of course we want leaders to fit the necessary patterns of today. Where things get weird is that in each case a morality tale was subsequently superimposed on top of the transition:</p> <ul> <li>In the 2010s, the morality tale was that it was all about empowering engineers as a fundamental good. Sure, I can get excited for that, but I don&rsquo;t really believe that narrative: it happened because hiring was competitive.</li> <li>In the 2020s, the morality tale is that bureaucratic middle management have made organizations stale and inefficient. The lack of experts has crippled organizational efficiency. Once again, I can get behind that&ndash;there&rsquo;s truth here&ndash;but the much larger drivers aren&rsquo;t about morality, it&rsquo;s about ZIRP-ending and optimism about productivity gains from AI tooling.</li> </ul> <p>The conclusion here is clear: the industry will want different things from you as it evolves, and it will tell you that each of those shifts is because of some complex moral change, but it&rsquo;s pretty much always about business realities changing. If you take any current morality tale as true, then you&rsquo;re setting yourself up to be severely out of position when the industry shifts again in a few years, because &ldquo;good leadership&rdquo; is just a fad.</p> <div class="bg-light-gray br4 ph3 pv1"> <p>Earlier this summer, I also gave a presentation at the YCombinator CTO summit on this specific topic of the evolution of engineering management. You can watch a <a href="https://www.youtube.com/watch?v=2Q98TAMoiMI">recorded practice run of <strong>that</strong> talk on YouTube</a> as well, and <a href="https://docs.google.com/presentation/d/1Du-oNGoN92mMQWj8EeS36ktOrDqxZbbhAmNexYkjod4/edit?slide=id.g24afdb76500_0_415#slide=id.g24afdb76500_0_415">see the slides</a>.</p> </div> <h2 id="self-development-across-leadership-fads">Self-development across leadership fads</h2> <p>If you accept the argument that the specifically desired leadership skills of today are the result of fads that frequently shift, then it leads to an important followup question: what are the right skills to develop in to be effective today and to be impactful across fads?</p> <p>Having been and worked with engineering managers for some time, I think there are <a href="https://lethain.com/categories-engineering-leadership/">eight foundational engineering management skills</a>, which I want to personally group into two clusters: core skills that are essential to operate in all roles (including entry-level management roles), and growth skills whose presence&ndash;or absence&ndash;determines how far you can go in your career.</p> <p>The core skills are:</p> <ol> <li> <p><strong>Execution</strong>: lead team to deliver expected tangible and intangible work. Fundamentally, management is about getting things done, and you&rsquo;ll neither get an opportunity to begin managing, nor stay long as a manager, if your teams don&rsquo;t execute.</p> <p><em>Examples</em>: ship projects, manage on-call rotation, sprint planning, manage incidents</p> </li> <li> <p><strong>Team</strong>: shape the team and the environment such that they succeed. 
This is <em>not</em> working for the team, nor is it working for your leadership; it is finding the balance between the two that works for both.</p> <p><em>Examples</em>: hiring, coaching, performance management, advocate with your management</p> </li> <li> <p><strong>Ownership</strong>: navigate reality to make consistent progress, even when reality is difficult. Finding a way to get things done, rather than finding a way for it to be someone else&rsquo;s fault that it is not getting done.</p> <p><em>Examples</em>: doing hard things, showing up when it&rsquo;s uncomfortable, being accountable despite systemic issues</p> </li> <li> <p><strong>Alignment</strong>: build shared understanding across leadership, stakeholders, your team, and the problem space. Finding a realistic plan that meets the moment, without surprising or being surprised by those around you.</p> <p><em>Examples</em>: document and share top problems, and updates during crises</p> </li> </ol> <p>The growth skills are:</p> <ol> <li> <p><strong>Taste</strong>: exercise discerning judgment about what &ldquo;good&rdquo; looks like—technically, in business terms, and in process/strategy. Taste is a broad church, and my experience is that broad taste is a somewhat universal criterion for truly senior roles. In some ways, taste is a prerequisite to Amazon&rsquo;s <a href="https://www.amazon.jobs/content/en/our-workplace/leadership-principles">Are Right, A Lot</a>.</p> <p><em>Examples</em>: refine proposed product concept, avoid high-risk rewrite, find usability issues in team’s work</p> </li> <li> <p><strong>Clarity</strong>: your team, stakeholders, and leadership know what you&rsquo;re doing and why, and agree that it makes sense. In particular, they understand how you are overcoming your biggest problems. So clarity is not, &ldquo;Struggling with scalability issues&rdquo; but instead &ldquo;Sharding the user logins database in a new cluster to reduce load.&rdquo;</p> <p><em>Examples</em>: identify levers to progress, create plan to exit a crisis, show progress on implementing that plan</p> </li> <li> <p><strong>Navigating ambiguity</strong>: work from complex problem to opinionated, viable approach. If you&rsquo;re given an extremely messy, open-ended problem, can you still find a way to make progress? (I&rsquo;ve <a href="https://lethain.com/navigating-ambiguity/">written previously about this topic</a>.)</p> <p><em>Examples</em>: launching a new business line, improving developer experience, going from 1 to N cloud regions</p> </li> <li> <p><strong>Working across timescales</strong>: ensure your areas of responsibility make progress across both the short and long term. There are many ways to appear successful by cutting corners today that end in disaster tomorrow. Success requires understanding, and being accountable for, how different timescales interact.</p> <p><em>Examples</em>: have an explicit destination, ensure short-term work steers towards it, be long-term rigid and short-term flexible</p> </li> </ol> <p>Having spent a fair amount of time pressure testing these, I&rsquo;m pretty sure most effective managers, and manager archetypes, can be fit into these boxes.</p> <h3 id="self-assessing-on-these-skills">Self-assessing on these skills</h3> <p>There&rsquo;s no perfect way to measure anything complex, but here are some thinking questions for you to spend time with as you assess where you stand on each of these skills:</p> <ol> <li><strong>Execution</strong> <ul> <li>When did your team last have friction delivering work?
Is that a recurring issue?</li> <li>What’s something hard you shipped that went really, really well?</li> <li>When were you last pulled onto solving a time-sensitive, executive-visible project?</li> </ul> </li> <li><strong>Team</strong> <ul> <li>Who was the last strong performer you hired?</li> <li>Have you retained your strongest performers?</li> <li>What strong performers want to join your team?</li> <li>Which peers consider your team highly effective?</li> <li>When did an executive describe your team as exceptional?</li> </ul> </li> <li><strong>Ownership</strong> <ul> <li>When did you or your team overcome the odds to deliver something important? (Would your stakeholders agree?)</li> <li>What’s the last difficult problem you solved that stayed solved (rather than reoccurring)?</li> <li>When did you last solve the problem first before addressing cross-team gaps?</li> </ul> </li> <li><strong>Alignment</strong> <ul> <li>When was the last time you were surprised by a stakeholder? What could you do to prevent that reoccuring?</li> <li>How does a new stakeholder understand your prioritization tradeoffs (incl rationale)?</li> <li>When did you last disappoint a stakeholder without damaging your relationship?</li> <li>What stakeholders would join your company because they trust you?</li> </ul> </li> <li><strong>Taste</strong> <ul> <li>What’s a recent decision that is meaningfully better because you were present?</li> <li>If your product counterpart left, what decisions would you struggle to make?</li> <li>Where’s a subtle clarification that significantly changed a design or launch?</li> <li>How have you inflected team’s outcomes by seeing around corners?</li> </ul> </li> <li><strong>Clarity</strong> <ul> <li>What’s a difficult trade-off you recently helped your team make?</li> <li>How could you enable them to make that same trade-off without your direct participation?</li> <li>What’s a recent decision you made that was undone? How?</li> </ul> </li> <li><strong>Navigating ambiguity</strong> <ul> <li>What problem have you worked on that was stuck before assisted, and unstuck afterwards?</li> <li>How did you unstick it?</li> <li>Do senior leaders bring ambiguous problems to you? Why?</li> </ul> </li> <li><strong>Working across timescales</strong> <ul> <li>What’s a recent trade off you made between short and long-term priorities?</li> <li>How do you inform these tradeoffs across timescales?</li> <li>What long-term goals are you protecting at significant short-term cost?</li> </ul> </li> </ol> <p>Most of these questions stand on their own, but it&rsquo;s worth briefly explaining the &ldquo;Have you ever been pulled into a SpecificSortOfProject by an executive?&rdquo; questions. My experience is that in most companies, executives will try to poach you onto their most important problems that correspond to your strengths. So if they&rsquo;re never attempting to pull you in then either you&rsquo;re not considered as particularly strong on that dimensions, or you&rsquo;re already very saturated with other work such that it doesn&rsquo;t seem possible to pull you in.</p> <h3 id="are-core-skills-the-same-over-time">Are &ldquo;core skills&rdquo; the same over time?</h3> <p>While those groupings of &ldquo;core&rdquo; and &ldquo;growth&rdquo; skills are obvious groupings to me, what I came to appreciate while writing this is that some skills swap between core to growth as the fads evolve. 
Where <em>execution</em> is a foundational skill today, it was less of a core skill in the hypergrowth era, and even less in the investor era.</p> <p>This is the fundamentally tricky part of succeeding as an engineering manager across fads: you need a sufficiently broad base across each of these skills to be successful, otherwise you&rsquo;re very likely to be viewed as a weak manager when the eras unpredictably end.</p> <h2 id="stay-energized-to-stay-engaged">Stay energized to stay engaged</h2> <p>The &ldquo;<a href="https://lethain.com/frameworks-decision-making/">Manage your priorities and energy</a>&rdquo; chapter in <a href="https://www.amazon.com/Engineering-Executives-Primer-Impactful-Leadership/dp/1098149483/"><em>The Engineering Executive&rsquo;s Primer</em></a> captures an important reality that took me too long to understand: the perfect allocation of work is not the mathematically ideal allocation that maximizes impact. Instead, it&rsquo;s the balance between that mathematical ideal and doing things that energize you enough to stay motivated over the long haul. If you&rsquo;re someone who loves writing software, that might involve writing a bit more than helpful to your team. If you&rsquo;re someone who loves streamlining an organization, it might be improving a friction-filled process that is a personal affront, even if it&rsquo;s not causing <em>that much</em> overall inefficiency.</p> <h2 id="forty-year-career">Forty-year career</h2> <p>Similarly to the question of prioritizing activities to stay energized, there&rsquo;s also understanding where you are in your career, an idea I explored in <a href="https://lethain.com/forty-year-career/">A forty-year career</a>.</p> <p><img src="https://lethain.com/static/blog/2019/40-year-hero.png" alt="Diagram of different ways to prioritize roles across your career"></p> <p>For each role, you have the chance to prioritize across different dimensions like pace, people, prestige, profit, or learning. There&rsquo;s no &ldquo;right decision,&rdquo; and there are always tradeoffs. The decisions you make early in your career will compound over the following forty years. You also have to operate within the constraints of your life today and your possible lives tomorrow. Early in my career, I had few responsibilities to others, and had the opportunity to work extremely hard at places like Uber. Today, with more family responsibilities, I am unwilling to make the tradeoffs to consistently work that way, which has real implications on how I think about which roles to prioritize over time.</p> <p>Recognizing these tradeoffs, and making them deliberately, is one of the highest value things you can do to shape your career. Most importantly, it&rsquo;s extremely hard to have a career at all if you don&rsquo;t think about these dimensions and have a healthy amount of self-awareness to understand the tradeoffs that will allow you to stay engaged over half a lifetime.</p>Crafting Engineering Strategy!https://lethain.com/crafting-engineering-strategy/Sat, 25 Oct 2025 04:00:00 -0700https://lethain.com/crafting-engineering-strategy/<p>On November 3rd, 2023, I posted <a href="https://lethain.com/publishing-eng-execs-primer/">Thoughts on writing and publishing <em>Primer</em></a> to celebrate the completion of my work on my prior book, <em><a href="https://www.amazon.com/Engineering-Executives-Primer-Impactful-Leadership/dp/1098149483/">The Engineering Executive&rsquo;s Primer</a></em>. 
Three weeks later, I posted <a href="https://lethain.com/strategy-notes/">Engineering strategy notes</a> on November 21st, 2023, as I started to pull together thoughts to write my upcoming book, <em><a href="https://craftingengstrategy.com/">Crafting Engineering Strategy</a></em>.</p> <p>Those initial thoughts turned into my first chapter draft, <a href="https://lethain.com/llm-adoption-strategy/">How should you adopt LLMs?</a> on May 14th, 2024. Writing continued all the way through the <a href="https://lethain.com/api-deprecation-strategy/">Stripe API deprecation strategy</a>, which was my final <em>draft</em>, completed the afternoon of April 5th, 2025. In between there were another 35 chapters: two of which were <a href="https://lethain.com/wardley-mapping/">Wardley maps</a>, four of which are <a href="https://lethain.com/strategy-systems-modeling/">systems models</a>, and two of which are short section introductions.</p> <p>This post is a collection of notes on writing <em>Crafting Engineering Strategy</em>, how I decided to publish it, ways that I did and did not use foundational models in writing it, and so on.</p> <div class="bg-light-gray br4 ph3 pv1"> <p><em>Buy on <a href="https://www.amazon.com/dp/B0FBRJY116">Amazon</a>. Read online on <a href="https://craftingengstrategy.com/">craftingengstrategy.com</a>.</em></p> </div> <h2 id="why-write-this-book">Why write this book?</h2> <p>One of my decade goals for the 2020s, last updated in my <a href="https://lethain.com/2024-in-review/">2024 year in review</a>, is to write three books on some aspect of engineering. I published <em>An Elegant Puzzle</em> in 2019, so that one doesn&rsquo;t count, but have two other books published in the 2020s: <em>Staff Engineer</em> in 2021, and <em>The Engineering Executive&rsquo;s Primer</em> in 2024. So I knew I needed one more to complete this decade goal. At one point, I was planning on finishing <em><a href="https://infraeng.dev/">Infrastructure Engineering</a></em> as my fourth book, but honestly I&rsquo;ve lost traction on that topic a bit over the past few years.</p> <p>Instead, the thing I&rsquo;d been thinking about a lot was engineering strategy. I&rsquo;ve written chapters on this topic in my last two books, and each of those chapters was tortured by the sheer volume of things I wanted to write. By the end of 2024, I&rsquo;d done enough strategy work myself that I was pretty confident I could take the best ideas from <em>Staff Engineer</em>&ndash;anchoring the essays in amazing stories&ndash;and the topic I&rsquo;d spent so much time on over the past three years, and turn it into a pretty good book.</p> <p>It&rsquo;s also a topic that, like <em>Staff Engineer</em> in 2020 when I started writing it, was missing an anchor book pulling the ideas together for discussion. There are <em>so many</em> great books about strategy out there. There <em>are</em> some great books on engineering strategy, but few of them are widely read. My aspiration in writing this book was to potentially write that anchor book. It&rsquo;ll take a decade or two to determine whether or not I&rsquo;ve succeeded. At the minimum I think this book will either become that anchor or annoy someone enough that they write a better anchor instead.
Either way, I&rsquo;ll mark it as a win for advancing the industry&rsquo;s discussion a bit.</p> <h2 id="llm-optimized-edition">LLM-optimized edition</h2> <div class="bg-light-gray br4 ph3 pv1"> <p><em>Buy the <a href="https://www.amazon.com/dp/B0FXN2J4PJ">AI Companion to Crafting Engineering Strategy</a> on Amazon.</em></p> </div> <p>While I was writing this book, I was also increasingly using foundational models at work. Then I was using them to write tools for managing this book. Over time, I became increasingly fascinated with the idea of having a version of this book optimized for usage with large language models.</p> <p>For this book in particular, the idea that you could collaborate with it on creating your own engineering strategy was a very appealing idea. I&rsquo;m excited that this ultimately translated into an LLM-optimized edition that <a href="https://www.amazon.com/dp/B0FXN2J4PJ">you can also purchase on Amazon</a>, or you can read the <a href="https://craftingengstrategy.com/aic/preface/">AI companion&rsquo;s text on craftingengineeringstrategy.com</a>, although you&rsquo;ll have to buy the actual book to get the foundational model optimized file itself (e.g. a markdown version of the book that plays well with foundational models).</p> <p>This is the culmination of <a href="https://lethain.com/competitive-advantage-author-llms/">my thinking about the advantage of authors in the age of LLMs</a>, where I see a path to book turning into things that are both read and also collaborated with.</p> <p>Regarding the foundational model optimized-version, at its core, this is just running <a href="https://repomix.com/">repomix</a> against the repository of Markdown files, but there are a number of interesting optimizations for a better product. For example, it&rsquo;s adding absolute paths to every referenced file and link, including adding a link to each chapter at its beginning to help models detect to them as references.</p> <p>It&rsquo;s also translating images into text descriptions. For example, here&rsquo;s an image from one of the chapters.</p> <p><img src="https://lethain.com/static/blog/strategy/uber-provis-model-errors.png" alt="Original image of a systems model"></p> <p>Then here is the representation after I wrote a script to pull out every image and replace it with a description.</p> <p><img src="https://lethain.com/static/blog/2025/txt-description-image-llm.png" alt="Example of gpt-4o translating an image from book into text description"></p> <p>My guess is that many books are going to move in the direction of having an LLM-optimized edition, and I&rsquo;m quite excited to see where this goes. I&rsquo;ll likely work on an LLM-optimized edition of <em>Staff Engineer</em> at some point in the near-ish future as well.</p> <h2 id="role-of-llms">Role of LLMs</h2> <p>When non-authors talk about LLMs making it easier to write books, they often think about LLMs <em>literally writing books</em>. As a fairly experienced author, I have absolutely no interest in an LLM writing any part of my book. If I don&rsquo;t bring something unique to the book, the sort of thing that an LLM would not bring, then generally either it isn&rsquo;t a book worth writing or I am the wrong author to write it. As a result, I wrote every word in this book. I didn&rsquo;t use LLMs to expand bullets into paragraphs. I didn&rsquo;t use LLMs to outline or assemble topics. I didn&rsquo;t use LLMs to find examples to substantiate my approach. 
<p>My guess is that many books are going to move in the direction of having an LLM-optimized edition, and I&rsquo;m quite excited to see where this goes. I&rsquo;ll likely work on an LLM-optimized edition of <em>Staff Engineer</em> at some point in the near-ish future as well.</p> <h2 id="role-of-llms">Role of LLMs</h2> <p>When non-authors talk about LLMs making it easier to write books, they often think about LLMs <em>literally writing books</em>. As a fairly experienced author, I have absolutely no interest in an LLM writing any part of my book. If I don&rsquo;t bring something unique to the book, the sort of thing that an LLM would not bring, then generally either it isn&rsquo;t a book worth writing or I am the wrong author to write it. As a result, I wrote every word in this book. I didn&rsquo;t use LLMs to expand bullets into paragraphs. I didn&rsquo;t use LLMs to outline or assemble topics. I didn&rsquo;t use LLMs to find examples to substantiate my approach. Again, this is what I know how to do fairly well at this point.</p> <p>However, there are many things that I&rsquo;m not that good at, where I relied heavily on an LLM. In particular, I used LLMs to copy-edit for typos, grammatical errors, and hard-to-read sentences. I also used LLMs to write several scripts that I used in writing this book:</p> <ol> <li> <p><code>grammar.py</code> which sent one or more chapters to an LLM for grammatical and spelling correction, returning the identified errors as regular expressions that I could individually accept or reject (a rough sketch of this follows the list below)</p> </li> <li> <p><code>import.py</code> which translated from my blog post version of the posts into the <a href="https://craftingengstrategy.com/">craftingengstrategy.com</a> optimized version</p> </li> <li> <p><a href="https://lethain.com/links-script-book/"><code>links.py</code></a> which I used for standardizing the format of chapter references, and balancing the frequency that strategies were referenced</p> </li> <li> <p>Generated <a href="https://craftingengstrategy.com/faq/">craftingengstrategy.com&rsquo;s FAQ</a> by attaching the full LLM-optimized version to Anthropic&rsquo;s Claude 3.7 (with extended thinking) and running this prompt:</p> <pre><code> I want to make a frequently asked questions featuring topics from this book. What are twenty good question and answer pairs based on this book, formatted as markdown (questions should be H2, and answers in paragraphs). Answers should include links to relevant chapters at the end of each answer. Some example questions that I think would be helpful are: 1. Is engineering strategy a real thing? 2. How is engineering strategy different from strategy in general? 3. What are examples of engineering strategy? 4. What template should I use to create an engineering strategy? 5. Can engineers do engineering strategy? Is engineering strategy only for executives? 6. How to get better at engineering strategy? 7. Are there jobs in engineering strategy? 8. What are other engineering strategy resources? Please directly answer those 8 questions, and then include another 10-15 question/answer pairs as well. </code></pre> <p>This worked quite well, in my opinion, generating a better FAQ than <a href="https://staffeng.com/faq/">Staff Engineer&rsquo;s FAQ</a> in a very small amount of time. SEO seems well and truly &ldquo;cooked&rdquo; based on this experience.</p> </li> </ol>
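<p>The post only describes <code>grammar.py</code> at a high level, so here is a minimal sketch of the shape such a script could take, assuming the Anthropic Python SDK. The prompt, the model alias, and the JSON output contract are assumptions for illustration, not the actual script:</p> <pre><code># Hypothetical sketch of a grammar.py-style copy-editing helper.
# Assumes the Anthropic Python SDK; prompt and output format are illustrative.
import json
import re
import sys

import anthropic

client = anthropic.Anthropic()

PROMPT = (
    'Copy-edit the following chapter. Return a JSON list of corrections, each '
    'with keys: find (a regular expression), replace (replacement text), and '
    'reason. Only flag genuine typos, grammar errors, or hard-to-read sentences.'
)

def propose_corrections(chapter_text):
    message = client.messages.create(
        model='claude-3-7-sonnet-latest',  # model alias is an assumption
        max_tokens=4000,
        messages=[{'role': 'user', 'content': PROMPT + '\n\n' + chapter_text}],
    )
    return json.loads(message.content[0].text)

def review(path):
    text = open(path).read()
    for fix in propose_corrections(text):
        # Accept or reject each suggested correction interactively.
        answer = input(fix['reason'] + ': s/' + fix['find'] + '/' + fix['replace'] + '/ ? [y/N] ')
        if answer.lower() == 'y':
            text = re.sub(fix['find'], fix['replace'], text)
    open(path, 'w').write(text)

if __name__ == '__main__':
    for chapter_path in sys.argv[1:]:
        review(chapter_path)
</code></pre>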
<p>Just like I don&rsquo;t see software engineers being replaced by LLMs, I don&rsquo;t see authors being replaced either.</p> <h2 id="craftingengstrategycom">craftingengstrategy.com</h2> <p>For <em>Staff Engineer</em>, I put up <a href="https://staffeng.com/">staffeng.com</a>. For <em>The Engineering Executive&rsquo;s Primer</em> I just posted things on this blog, without making a dedicated, standalone site. For <em>Crafting Engineering Strategy</em>, I decided to put together a dedicated site again, which maybe deserves an explanation.</p> <p>My writing over the past few years is anchored in my goal of <a href="https://lethain.com/advancing-the-industry/">advancing the industry</a>, and I think that having a well-structured, referenceable version of the book online is a valuable part of that. If people want to recommend someone read <em>Staff Engineer</em>, they can simply point them to staffeng.com, and they can read the whole book. They might also buy it, which is great, but it&rsquo;s fine if they don&rsquo;t. <em>The Engineering Executive&rsquo;s Primer</em> is just much less referenceable online, so someone ultimately has to decide to buy it before testing the content, which undermines its ability to advance the industry.</p> <p>For the record, that wasn&rsquo;t an O&rsquo;Reilly limitation, just poor planning on my part. <em>An Elegant Puzzle</em> didn&rsquo;t require a standalone site, so I thought that <em>Primer</em> wouldn&rsquo;t either, but I think that&rsquo;s more a reflection of <em>An Elegant Puzzle</em> being a collection of this blog&rsquo;s writing in the 2010s rather than the best way to support the typical book.</p> <p>At this point, my experience is that most books benefit from having a dedicated site, and that it doesn&rsquo;t detract from sales numbers. Rather, if properly done with clear calls to action on the site, it supports sales very effectively.</p> <h2 id="decision-to-publish-vs-self-publish">Decision to publish vs self-publish</h2> <p>I worked with O&rsquo;Reilly to publish this book, same as I did for my previous book, <em>The Engineering Executive&rsquo;s Primer</em>. My continued experience is that it&rsquo;s harder to create a book with a publisher and the financial outcomes are significantly muted, but the book itself is almost always much better than what you would have created on your own.</p> <p>For me, that was the right tradeoff for this book.</p> <h2 id="my-last-book-of-2020s">My last book of the 2020s</h2> <p>I&rsquo;m not at all sure this is the last book that I&rsquo;ll ever write, but I&rsquo;ve completed my decade writing goal for the 2020s, and I&rsquo;m committed to this being my last book of the 2020s. For the past seven years I have been continually writing, editing, or promoting a book, and I am exhausted with that process. I&rsquo;ve <em>loved</em> getting to do it, and am grateful for having had the chance to do this four times. But, I&rsquo;m still exhausted with it!</p> <p>I could imagine eventually having another book to write, but that time is definitely not now. Instead I want to spend more time with my family and my son, more time writing software professionally myself (rather than exclusively leading teams doing the writing), and more time on other projects that are anything but writing another book.</p>Commenting on Notion docs via OpenAI and Zapier.https://lethain.com/commenting-notion-open-ai-zapier/Sun, 20 Jul 2025 04:00:00 -0700https://lethain.com/commenting-notion-open-ai-zapier/<p>One of my side quests at work is to get a simple feedback loop going where we can create knowledge bases that comment on Notion documents. I was curious if I could hook this together following these requirements:</p> <ol> <li>No custom code hosting</li> <li>Prompt is editable within Notion rather than requiring understanding of Zapier</li> <li>Should come together fairly quickly</li> </ol> <p>Ultimately, I was able to get it working.
So here&rsquo;s a quick summary of how it works, some comments on why I don&rsquo;t particularly like this approach, and then some more detailed notes on getting it working.</p> <h2 id="general-approach">General approach</h2> <p>Create a Notion database of prompts.</p> <p><img src="https://lethain.com/static/blog/2025/notion-prompts.png" alt=""></p> <p>Create a specific prompt for providing feedback on RFCs.</p> <p><img src="https://lethain.com/static/blog/2025/notion-rfc-prompt.png" alt=""></p> <p>Create a Notion database for all RFCs.</p> <p><img src="https://lethain.com/static/blog/2025/notion-rfcs.png" alt=""></p> <p>Add an automation into this database that calls a Zapier webhook.</p> <p><img src="https://lethain.com/static/blog/2025/notion-rfcs-automation.png" alt=""></p> <p>The Zapier webhook does a variety of things that culminate in using the RFC prompt to provide feedback on the specific RFC as a top-level comment in the RFC.</p> <p><img src="https://lethain.com/static/blog/2025/notion-rfc-comment.png" alt=""></p> <p>Altogether this works fairly well.</p> <h2 id="the-challenges-with-this-approach">The challenges with this approach</h2> <p>The best thing about this approach is that it actually works, and it works fairly well. However, as we dig into the implementation details, you&rsquo;ll also see that a series of things are unnaturally difficult with Zapier:</p> <ol> <li>Managing rich text in Notion, because it requires navigating the blocks data structure</li> <li>Looping over API calls, for example to leave multiple comments on specific blocks rather than a single top-level comment</li> <li>Notion only allows up to 2,000 characters per block, but chunking into multiple blocks is moderately unnatural. In a true Python environment, it would be trivial to translate to and from Markdown using something like <a href="https://pypi.org/project/md2notion/"><code>md2notion</code></a> (a rough sketch of that chunking follows this list)</li> </ol>
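<p>To make that third point concrete, here is a minimal sketch of the chunking in plain Python, splitting text into Notion paragraph blocks that respect the 2,000-character limit. The block shape follows Notion&rsquo;s public block API as I understand it, but treat the details as illustrative and verify them against the current documentation:</p> <pre><code># Illustrative sketch: split text into Notion paragraph blocks under the
# 2,000-character-per-block limit. Verify the block shape against Notion's docs.
NOTION_BLOCK_LIMIT = 2000

def chunk_text(text, limit=NOTION_BLOCK_LIMIT):
    chunks, current = [], ''
    for paragraph in text.split('\n\n'):
        candidate = (current + '\n\n' + paragraph).strip()
        if len(candidate) &lt;= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            # A single oversized paragraph still needs a hard split.
            while len(paragraph) &gt; limit:
                chunks.append(paragraph[:limit])
                paragraph = paragraph[limit:]
            current = paragraph
    if current:
        chunks.append(current)
    return chunks

def to_notion_blocks(text):
    return [
        {
            'object': 'block',
            'type': 'paragraph',
            'paragraph': {'rich_text': [{'type': 'text', 'text': {'content': chunk}}]},
        }
        for chunk in chunk_text(text)
    ]
</code></pre>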
<p>Ultimately, I could only recommend this approach as an initial validation. It&rsquo;s definitely not the right long-term resting place for this kind of workflow.</p> <h2 id="zapier-implementation">Zapier implementation</h2> <p>I already covered the Notion side of the integration, so let&rsquo;s dig into the Zapier pieces a bit. Overall it had eight steps.</p> <p><img src="https://lethain.com/static/blog/2025/zapier-flow.png" alt=""></p> <p>I&rsquo;ve skipped the first step, which was just a default webhook receiver. The second step was retrieving a statically defined Notion page containing the prompt. (In later steps I just use the Notion API directly, which I would do here if I was redoing this, but this worked too. The advantage of the API is that it returns a real JSON object, while this doesn&rsquo;t, probably because I didn&rsquo;t specify the <code>content-type</code> header or some such.)</p> <p><img src="https://lethain.com/static/blog/2025/zapier-2a.png" alt=""></p> <p>This is the configuration page of step 2, where I specify the prompt&rsquo;s page explicitly.</p> <p><img src="https://lethain.com/static/blog/2025/zapier-2b.png" alt=""></p> <p>Probably because I didn&rsquo;t set <code>content-type</code>, I think I was getting form-encoded POST data here, so I just regular expressed the data out. It&rsquo;s a bit sloppy, but hey it worked, so there&rsquo;s that.</p> <p><img src="https://lethain.com/static/blog/2025/zapier-3.png" alt=""></p> <p>Here we use the Notion API request tool to retrieve the updated RFC (as opposed to the prompt, which we already retrieved).</p> <p><img src="https://lethain.com/static/blog/2025/zapier-4.png" alt=""></p> <p>The API request returns a JSON object that you can navigate without writing regular expressions, so that&rsquo;s nice.</p> <p><img src="https://lethain.com/static/blog/2025/zapier-5.png" alt=""></p> <p>Then we send both the prompt as system instructions and the RFC as the user message to OpenAI.</p> <p><img src="https://lethain.com/static/blog/2025/zapier-6.png" alt=""></p> <p>Then we pass the response from OpenAI to <code>json.dumps</code> to encode it for being included in an API call. This is mostly solving for newlines being <code>\n</code> rather than literal newlines.</p> <p><img src="https://lethain.com/static/blog/2025/zapier-7.png" alt=""></p> <p>Then we format the response into an API request to add a comment to the document.</p> <p><img src="https://lethain.com/static/blog/2025/zapier-8.png" alt=""></p> <p>Anyway, this wasn&rsquo;t beautiful, and I think you could do a <em>much</em> better job by just doing all of this in Python, but it&rsquo;s a workable proof of concept.</p>An agent to use Notion docs as prompts to comment on Notion docs.https://lethain.com/notion-agent/Sun, 20 Jul 2025 04:00:00 -0700https://lethain.com/notion-agent/<p>Last weekend, I wrote a bit about <a href="https://lethain.com/commenting-notion-open-ai-zapier/">using Zapier to load Notion pages as prompts to comment on other Notion pages</a>. That worked <em>well enough</em>, but not <em>that well</em>. This weekend I spent some time getting the next level of this working, creating an agent that runs as an AWS Lambda. This, among other things, allowed me to rely on agent tool usage to support both page and block-level comments, and altogether I think the idea works extremely well.</p> <p>This was mostly implemented by <a href="https://www.anthropic.com/claude-code">Claude Code</a>, and I think the code is actually fairly bad as a result, but you can see the full working implementation at <a href="https://github.com/lethain/basic-notion-agent">lethain/basic-notion-agent</a> on GitHub. Installation and configuration options are there as well.</p> <hr> <p><a href="https://www.youtube.com/watch?v=nqft66gHAMI&amp;ab_channel=WillLarson">Watch a quick walkthrough of the project I recorded on YouTube</a>.</p> <h2 id="screenshots">Screenshots</h2> <p>To give a sense of what the end experience is, here are some screenshots.
You start by creating a prompt in a Notion document.</p> <p><img src="https://lethain.com/static/blog/2025/notion-prompt-example.png" alt=""></p> <p>Then it will provide inline comments on blocks within your document.</p> <p><img src="https://lethain.com/static/blog/2025/notion-inline-comment.png" alt=""></p> <p>It will also provide a summary comment on the document overall (although this is configurable if you only want in-line comments).</p> <p><img src="https://lethain.com/static/blog/2025/notion-page-comment.png" alt=""></p> <p>A feature I particularly like is that the agent is aware of existing comments on the document, and who made them, and will reply to those comments.</p> <p><img src="https://lethain.com/static/blog/2025/notion-comment-replies.png" alt=""></p> <p>Altogether, it&rsquo;s a fun little project and works surprisingly well, as almost all agents do with enough prompt tuning.</p>Moving from an orchestration-heavy to leadership-heavy management role.https://lethain.com/orchestration-heavy-leadership-heavy/Sat, 19 Jul 2025 04:00:00 -0700https://lethain.com/orchestration-heavy-leadership-heavy/<p>For managers who have spent a long time reporting to a specific leader or working in an organization with well‑understood goals, it&rsquo;s easy to develop skill gaps without realizing it. Usually this happens because those skills were not particularly important in the environment you grew up in. You may become extremely confident in your existing skills, enter a new organization that requires a different mix of competencies, and promptly fall on your face.</p> <p>There are a few common varieties of this, but the one I want to discuss here is when managers grow up in an organization that operates from top‑down plans (“orchestration‑heavy roles”) and then find themselves in a sufficiently senior role, or in a bottom‑up organization, that expects them to lead rather than orchestrate (“leadership‑heavy roles”).</p> <h2 id="orchestration-versus-leadership">Orchestration versus leadership</h2> <p>You can break the components of solving a problem down in a number of ways, and I&rsquo;m not saying this is the perfect way to do it, but here are six important components of directing a team&rsquo;s work:</p> <ol> <li><strong>Problem discovery</strong>: Identifying which problems to work on</li> <li><strong>Problem selection</strong>: Aligning with your stakeholders on the problems you&rsquo;ve identified</li> <li><strong>Solution discovery</strong>: Identifying potential solutions to the selected problem</li> <li><strong>Solution selection</strong>: Aligning with your stakeholders on the approach you&rsquo;ve chosen</li> <li><strong>Execution</strong>: Implementing the selected solution</li> <li><strong>Ongoing revision</strong>: Keeping your team and stakeholders aligned as you evolve the plan</li> </ol> <p>In an orchestration‑heavy management role, you might focus only on the second half of these steps. In a leadership‑heavy management role, you work on all six steps. Folks who&rsquo;ve only worked in orchestration-heavy roles often have <em>no idea</em> that they are expected to perform all of these. 
So, yes, there&rsquo;s a skill gap in performing the work, but more importantly there&rsquo;s an awareness gap that the work actually exists to be done.</p> <p>Here are a few ways you can identify an orchestration‑heavy manager that doesn&rsquo;t quite understand their current, leadership‑heavy circumstances:</p> <ul> <li><strong>Focuses on prioritization as &ldquo;solution of first resort.&rdquo;</strong> When you&rsquo;re not allowed to change the problem or the approach, prioritization becomes one of the best tools you have.</li> <li><strong>Accepts problems and solutions as presented.</strong> If a stakeholder asks for something, questions are around priority rather than whether the project makes sense to do at all, or suggestions of alternative approaches. There&rsquo;s no habit of questioning whether the request makes sense—that&rsquo;s left to the stakeholder or to more senior functional leadership.</li> <li><strong>Focuses on sprint planning and process.</strong> With the problem and approach fixed, protecting your team from interruption and disruption is one of your most valuable tools. Operating strictly to a sprint cadence (changing plans only at the start of each sprint) is a powerful technique.</li> </ul> <p>All of these things are still valuable in a leadership‑heavy role, but they just aren&rsquo;t necessarily the most valuable things you could be doing.</p> <h2 id="operating-in-a-leadership-heavy-role">Operating in a leadership-heavy role</h2> <p>There is a steep learning curve for managers who find themselves in a leadership‑heavy role, because it&rsquo;s a much more expansive role. However, it&rsquo;s important to realize that there are no senior engineering leadership roles focused solely on orchestration. You either learn this leadership style or you get stuck in mid‑level roles (even in organizations that lean orchestration-heavy).</p> <p>Further, the technology industry generally believes it overinvested in orchestration‑heavy roles in the 2010s. Consequently, companies are eliminating many of those roles and preventing similar roles from being created in the next generation of firms. There&rsquo;s a pervasive narrative attributing this shift to the increased productivity brought by LLMs, but I&rsquo;m skeptical of that relationship—this change was already underway before LLMs became prominent.</p> <p>My advice for folks working through the leadership‑heavy role learning curve is:</p> <ol> <li> <p><strong>Think of your job&rsquo;s core loop as four steps</strong>:</p> <ol> <li>Identify the problems your team should be working on</li> <li>Decide on a destination that solves those problems</li> <li>Explain to your team, stakeholders, and executives the path the team will follow to reach that destination</li> <li>Communicate both data and narratives that provide evidence you&rsquo;re walking that path successfully</li> </ol> <p><strong>If you are not doing these four things, you are not performing your full role</strong>, <strong>even if people say you do some parts well.</strong> Similarly, if you want to get promoted or secure more headcount, those four steps are the path to doing so (I previously discussed this in <a href="https://lethain.com/how-to-get-more-headcount/">How to get more headcount</a>).</p> </li> <li> <p><strong>Ask your team for priorities and problems to solve.</strong> Mining for bottom‑up projects is a critical part of your role. 
If you wait only for top‑down and lateral priorities, you aren&rsquo;t performing the first step of the core loop.</p> <p>It’s easy to miss this expectation—it’s invisible to you but obvious to everyone else, so they don’t realize it needs to be said. If you&rsquo;re not sure, ask.</p> </li> <li> <p><strong>If your leadership chain is running the core loop for your team, it&rsquo;s because they lack evidence that you can run it yourself.</strong> That&rsquo;s a bad sign. What’s “business as usual” in an orchestration‑heavy role actually signals a struggling manager in a leadership‑heavy role.</p> </li> <li> <p><strong>Get your projects prioritized by following the core loop.</strong> If you have a major problem on your team and wonder why it isn’t getting solved, that’s on you. Leadership‑heavy roles won’t have someone else telling you how to frame your team’s work—unless they think you’re doing a bad job.</p> </li> <li> <p><strong>Picking the right problems and solutions is your highest‑leverage work.</strong> No, this is not only your product manager’s job or your tech lead’s—it is <em>your</em> job. It’s <strong>also</strong> theirs, but leadership overlaps because getting it right is so valuable.</p> <p>Generalizing a bit, your focus now is effectiveness of your team’s work, not efficiency in implementing it. Moving quickly on the wrong problem has no value.</p> </li> <li> <p><strong>Understand your domain and technology in detail.</strong> You don’t have to write all the software—but you should have written some simple pull requests to verify you can reason about the codebase. You don’t have to author every product requirement or architecture proposal, but you should write one occasionally to prove you understand the work.</p> <p>If you don’t feel capable of that, that’s okay. But you need to <em>urgently</em> write down steps you’ll take to close that gap and share that plan with your team and manager. They currently see you as not meeting expectations and want to know how you’ll start meeting them.</p> <p>If you think that gap cannot be closed or that it’s unreasonable to expect you to close it, you misunderstand your role. Some organizations will allow you to misunderstand your role for a long time, provided you perform parts of it well, but they rarely promote you under those circumstances—and most won’t tolerate it for senior leaders.</p> </li> <li> <p><strong>Align with your team and cross‑functional stakeholders as much as you align with your executive.</strong> If your executive is wrong and you follow them, it is <em>your fault</em> that your team and stakeholders are upset: part of your job is changing your executive’s mind.</p> <p>Yes, it can feel unfair if you’re the type to blame everything on your executive. But it’s still true: expecting your executive to get everything right is a sure way to feel superior without accomplishing much.</p> </li> </ol> <p>Now that I&rsquo;ve shared my perspective, I admit I&rsquo;m being a bit extreme on purpose—people who don’t pick up on this tend to dispute its validity strongly unless there is no room to debate. 
There is room for nuance, but if you think my entire point is invalid, I encourage you to have a direct conversation with your manager and team about their expectations and how they feel you’re meeting them.</p>Advancing the industry, part two.https://lethain.com/advancing-the-industry-pt2/Fri, 18 Jul 2025 04:00:00 -0700https://lethain.com/advancing-the-industry-pt2/<p>I&rsquo;m turning forty in a few weeks, and there&rsquo;s a listicle archetype along the lines of &ldquo;Things I&rsquo;ve learned in the first half of my career as I turn forty and have now worked roughly twenty years in the technology industry.&rdquo; How do you write that and make it good? Don&rsquo;t ask me. I don&rsquo;t know!</p> <p>As I considered what I <em>would</em> write to summarize my career learnings so far, I kept thinking about updating my post <a href="https://lethain.com/advancing-the-industry/">Advancing the industry</a> from a few years ago, where I described using that concept as a north star for my major career decisions. So I wrote about that instead.</p> <h2 id="recapping-the-concept">Recapping the concept</h2> <p>Adopting <a href="https://lethain.com/advancing-the-industry/">advancing the industry</a> as my framework for career decisions came down to three things:</p> <ol> <li> <p><strong>The opportunity to be more intentional:</strong> After ~15 years in the industry, I entered a &ldquo;third stage&rdquo; of my career where neither financial considerations (1st stage) nor controlling pace to support an infant/toddler (2nd stage) were my highest priorities. Although I might not be working wholly by choice, I had enough flexibility that I could no longer hide behind &ldquo;maximizing financial return&rdquo; to guide, or excuse, my decision making.</p> </li> <li> <p><strong>My decade goals kept going stale.</strong> Since 2020, I&rsquo;ve tracked against <a href="https://lethain.com/tags/year-in-review/">my decade goals for the 2020s</a>, and annual tracking has been extremely valuable. Part of that value was realizing that I&rsquo;d made enough progress on several initial goals that they weren&rsquo;t meaningful to continue measuring.</p> <p>For example, I had written and published three professional books. Publishing another book was not a <em>goal</em> for me. That’s not to say I <em>wouldn&rsquo;t</em> write another—in fact, I have—but it would serve another goal, not be a goal in itself. As a second example, I set a goal to get twenty people I&rsquo;ve managed or mentored into VPE/CTO roles running engineering organizations of 50+ people or $100M+ valuation. By the end of last year, ten people met that criteria after four years. Based on that, it seems quite likely I&rsquo;ll reach twenty within the next six years, and I&rsquo;d already increased that goal from ten to twenty a few years ago, so I&rsquo;m not interested in raising it again.</p> </li> <li> <p><strong>&ldquo;Advancing the industry&rdquo; offered a solution to both</strong>, giving me a broader goal to work toward and reframe my decade and annual goals.</p> </li> </ol> <p>That mission still resonates with me: it&rsquo;s large, broad, and ambiguous enough to support many avenues of progress while feeling achievable within two decades. 
Though the goal resonates, my thinking about the best mechanism to make progress toward it has shifted over the past few years.</p> <h2 id="writing-from-primary-to-secondary-mechanism">Writing from primary to secondary mechanism</h2> <p>Roughly a decade ago, I discovered the most effective mechanism I&rsquo;ve found to advance the industry: learn at work, write blog posts about those learnings, and then aggregate the posts into a book.</p> <p><img src="https://lethain.com/static/blog/2025/book-loop.png" alt="Loop of learning at work to writing blog posts to writing books"></p> <p><em>An Elegant Puzzle</em> was the literal output of that loop. <em>Staff Engineer</em> was a more intentional effort but still the figurative output. My last two books have been more designed than aggregated, but still generally followed this pattern. That said, as I finish up <em>Crafting Engineering Strategy</em>, I think the loop remains valid, but it&rsquo;s run its course for me personally. There are several reasons:</p> <p>First, what was energizing four books ago feels like a slog today. Making a book is a lot of work, and much of it isn&rsquo;t fun, so you need to be <em>really</em> excited about the fun parts to balance it out. I used to check my Amazon sales standing every day, thrilled to see it move up and down the charts. Each royalty payment felt magical: something I created that people paid real money for. It&rsquo;s still cool, but the excitement has tempered over six years.</p> <p>Second, most of my original thinking is already captured in my books or fits shorter-form content like blog posts. I won&rsquo;t get much incremental leverage from another book. I do continue to get leverage from shorter-form writing and will keep doing it.</p> <p>Finally, as I wrote in <a href="https://lethain.com/writers-who-operate/">Writers who operate</a>, professional writing quality often suffers when writing becomes the &ldquo;first thing&rdquo; rather than the &ldquo;second thing.&rdquo; Chasing distribution subtly damages quality. I&rsquo;ve tried hard to keep writing as a second thing, but over the past few years my topic choices have been overly pulled toward filling book chapters instead of what&rsquo;s most relevant to my day-to-day work.</p> <h2 id="if-writing-is-second-what-is-first">If writing is second, what is first?</h2> <p>My current thinking on how to best advance the industry rests on four pillars:</p> <ol> <li>Industry leadership and management practices are generally poor.</li> <li>We can improve these by making better practices more accessible (my primary focus in years past but where I&rsquo;ve seen diminishing returns).</li> <li>We can improve practices by growing the next generation of industry leaders (the rationale behind my decade goal to mentor/manage people into senior roles, but I can&rsquo;t scale it much through executive roles alone)</li> <li>We can improve practices by modeling them authentically in a very successful company and engineering organization.</li> </ol> <p>The fourth pillar is my current focus and likely will remain so for the upcoming decade, though who knows—your focus can change a lot over ten years.</p> <p>Why now? Six years ago, I wouldn&rsquo;t have believed I could influence my company enough to make this impact, but the head of engineering roles I&rsquo;ve pursued are exactly those that can. 
With access to such roles at companies with significant upward trajectories, I have the best laboratory to validate and evolve ways to advance the industry: leading engineering in great companies. Cargo-culting often spreads the most influential ideas—20% time at Google, AI adoption patterns at Spotify, memo culture at Amazon, writing culture at Stripe, etc. Hopefully, developing and documenting ideas with integrity will be even more effective than publicity-driven cargo-culting. That said, I&rsquo;d be glad to accept the &ldquo;mere&rdquo; success of ideas like 20% time.</p> <h2 id="returning-to-the-details">Returning to the details</h2> <p>Most importantly for me personally, focusing on modeling ideas in my own organization aligns &ldquo;advancing the industry&rdquo; with something I&rsquo;ve been craving for a long time now: spending more time in the details of the work. Writing for broad audiences is a process of generalizing, but day-to-day execution succeeds or fails on particulars. I&rsquo;ve spent much of the past decade translating between the general and the particular, and I&rsquo;m relieved to return fully to the particulars.</p> <p>Joining <a href="https://imprint.co">Imprint</a> six weeks ago gave me a chance to practice this: I&rsquo;ve written/merged/deployed six pull requests at work, tweaked our incident tooling to eliminate gaps in handoff with Zapier integrations, written an RFC, debugged a production incident, and generally been two or three layers deeper than at Carta. Part of that is that Imprint&rsquo;s engineering team is currently much smaller—40 rather than 350—and another part is that industry expectations in the post-ZIRP retrenchment and LLM boom pull leaders towards the details. But mostly, it&rsquo;s just where my energy is pulling me lately.</p>What can agents actually do?https://lethain.com/what-can-agents-do/Sun, 06 Jul 2025 04:00:00 -0700https://lethain.com/what-can-agents-do/<p>There&rsquo;s a lot of excitement about what AI (specifically the latest wave of LLM-anchored AI) can do, and how AI-first companies are different from the prior generations of companies. There are a lot of important and real opportunities at hand, but I find that many of these conversations occur at such an abstract altitude that they border on meaningless. Sort of like saying that your company could be much better if you <em>merely adopted more software</em>. That&rsquo;s certainly true, but it&rsquo;s not a particularly helpful claim.</p> <p>This post is an attempt to concisely summarize how AI agents work, apply that summary to a handful of real-world use cases for AI, and generally make the case that agents are a multiplier on the quality of your software and system design. If your software or systems are poorly designed, agents will only cause harm. If there&rsquo;s any meaningful definition of an AI-first company, it must be companies whose software and systems are designed with immaculate attention to detail.</p> <p>By the end of this writeup, my hope is that you&rsquo;ll be well-armed to have a concrete discussion about how LLMs and agents could change the shape of your company, and to avoid getting caught up in the needlessly abstract discussions that are often taking place today.</p> <h2 id="how-do-agents-work">How do agents work?</h2> <p>At its core, using an LLM is an API call that includes a prompt.
For example, you might call Anthropic&rsquo;s <a href="https://docs.anthropic.com/en/api/messages"><code>/v1/message</code></a> with a prompt: <code>How should I adopt LLMs in my company?</code> That prompt is used to fill the LLM&rsquo;s context window, which conditions the model to generate certain kinds of responses.</p> <p>This is the first important thing that agents can do: <strong>use an LLM to evaluate a context window and get a result.</strong></p> <p>Prompt engineering, or <a href="https://www.philschmid.de/context-engineering">context engineering</a> as it&rsquo;s being called now, is deciding what to put into the context window to best generate the responses you&rsquo;re looking for. For example, In-Context Learning (ICL) is one form of context engineering, where you supply a bunch of similar examples before asking a question. If I want to determine if a transaction is fraudulent, then I might supply a bunch of prior transactions and whether they were, or were not, fraudulent as ICL examples. Those examples make generating the correct answer more likely.</p> <p>However, composing the perfect context window is very time intensive, benefiting from techniques like <a href="https://cookbook.openai.com/examples/enhance_your_prompts_with_meta_prompting">metaprompting</a> to improve your context. Indeed, the human (or automation) creating the initial context might not know enough to do a good job of providing relevant context. For example, if you prompt, <code>Who is going to become the next mayor of New York City?</code>, then <em>you</em> are unsuited to include the answer to that question in your prompt. To do that, you would need to already know the answer, which is why you&rsquo;re asking the question to begin with!</p> <p>This is where we see model chat experiences from OpenAI and Anthropic use web search to pull in context that you likely don&rsquo;t have. If you ask a question about the new mayor of New York, they use a tool to retrieve web search results, then add the content of those searches to your context window.</p> <p>This is the second important thing that agents can do: <strong>use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool&rsquo;s response.</strong></p> <p>However, it&rsquo;s important to clarify how &ldquo;tool usage&rdquo; actually works. An LLM does not actually call a tool. (You can skim OpenAI&rsquo;s <a href="https://platform.openai.com/docs/guides/function-calling?api-mode=responses">function calling documentation</a> if you want to see a specific real-world example of this.) Instead there is a five-step process to calling tools that can be a bit counter-intuitive:</p> <ol> <li>The program designer that calls the LLM API must also define a set of tools that the LLM is allowed to suggest using.</li> <li>Every API call to the LLM includes that defined set of tools as <em>options</em> that the LLM is allowed to <em>recommend</em></li> <li>The response from the API call with defined functions is either: <ol> <li> <p>Generated text as any other call to an LLM might provide</p> </li> <li> <p>A <em>recommendation</em> to call a specific tool with a specific set of parameters, e.g. 
an LLM that knows about a <code>get_weather</code> tool, when prompted about the weather in Paris, might return this response:</p> <pre><code> [{ &quot;type&quot;: &quot;function_call&quot;, &quot;name&quot;: &quot;get_weather&quot;, &quot;arguments&quot;: &quot;{\&quot;location\&quot;:\&quot;Paris, France\&quot;}&quot; }] </code></pre> </li> </ol> </li> <li>The program that calls the LLM API then decides whether and how to honor that requested tool use. The program might decide to reject the requested tool because it&rsquo;s been used too frequently recently (e.g. rate limiting); it might check whether the associated user has permission to use the tool (e.g. maybe it&rsquo;s a premium-only tool); and it might check whether the parameters match the user&rsquo;s role-based permissions as well (e.g. the user can check weather, but only admin users are allowed to check weather in France).</li> <li>If the program does decide to call the tool, it invokes the tool, then calls the LLM API with the output of the tool appended to the prior call&rsquo;s context window.</li> </ol> <p>The important thing about this loop is that the LLM itself can still only do one interesting thing: taking a context window and returning generated text. It is the broader program, which we can start to call an agent at this point, that calls tools and sends the tools&rsquo; output to the LLM to generate more context.</p> <p>What&rsquo;s magical is that LLMs plus tools start to <em>really</em> improve how you can generate context windows. Instead of having to have a very well-defined initial context window, you can use tools to inject relevant context to improve the initial context.</p> <p>This brings us to the third important thing that agents can do: <strong>they manage flow control for tool usage.</strong> Let&rsquo;s think about two different scenarios:</p> <ol> <li><em>Flow control via rules</em> has concrete rules about how tools can be used (a short sketch of this follows the list below). Some examples: <ol> <li>it might only allow a given tool to be used once in a given workflow (or a usage limit of a tool for each user, etc)</li> <li>it might require that a human-in-the-loop approves parameters over a certain value (e.g. refunds more than $100 require human approval)</li> <li>it might run a generated Python program and return the output to analyze a dataset (or provide error messages if it fails)</li> <li>apply a permission system to tool use, restricting who can use which tools and which parameters a given user is able to use (e.g. you can only retrieve your own personal data)</li> <li>a tool to escalate to a human representative can only be called after five back-and-forths with the LLM agent</li> </ol> </li> <li><em>Flow control via statistics</em> can use statistics to identify and act on abnormal behavior: <ol> <li>if the size of a refund is higher than 99% of other refunds for the order size, you might want to escalate to a human</li> <li>if a user has used a tool more than 99% of other users, then you might want to reject usage for the rest of the day</li> <li>it might escalate to a human representative if tool parameters are more similar to prior parameters that required escalation to a human agent</li> </ol> </li> </ol>
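<p>As a concrete illustration of the rules-based variant, here is a minimal sketch of the program-side gate that sits between an LLM&rsquo;s tool recommendation and the actual tool invocation. The <code>call_llm</code> helper, the tool registry, and the specific thresholds are illustrative assumptions rather than any particular SDK&rsquo;s API:</p> <pre><code># Illustrative sketch of rules-based flow control around tool calls.
# call_llm and the tool registry are hypothetical stand-ins, not a real SDK.
HUMAN_APPROVAL_THRESHOLD = 100  # refunds above this require a human

def approve_tool_call(user, tool_name, args, usage_counts):
    if tool_name == 'refund_purchase' and args.get('amount', 0) &gt; HUMAN_APPROVAL_THRESHOLD:
        return 'needs_human'
    if tool_name == 'escalate_to_human' and usage_counts.get('llm_turns', 0) &lt; 5:
        return 'reject'  # only allow escalation after five back-and-forths
    if tool_name not in user.permitted_tools:
        return 'reject'  # role-based permission check
    if usage_counts.get(tool_name, 0) &gt;= 1:
        return 'reject'  # each tool may only run once per workflow
    return 'allow'

def run_agent_turn(user, messages, tools, usage_counts):
    response = call_llm(messages, tools)  # hypothetical LLM client call
    executed_any = False
    for requested in response.tool_calls:
        decision = approve_tool_call(user, requested.name, requested.arguments, usage_counts)
        if decision == 'allow':
            executed_any = True
            usage_counts[requested.name] = usage_counts.get(requested.name, 0) + 1
            result = tools[requested.name](**requested.arguments)
            messages.append({'role': 'tool', 'name': requested.name, 'content': str(result)})
        elif decision == 'needs_human':
            return 'handed off to a human reviewer'
        # Rejected calls are simply not executed; the LLM never sees tool output.
    if executed_any:
        # Send the tool output back to the LLM to generate the next response.
        return call_llm(messages, tools).text
    return response.text
</code></pre> <p>LLMs themselves absolutely cannot be trusted. Anytime you rely on an LLM to enforce something important, you will fail. Using agents to manage flow control is <em>the</em> mechanism that makes it possible to build safe, reliable systems with LLMs.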
Whenever you find yourself dealing with an unreliable LLM-based system, you can <em>always</em> find a way to shift the complexity to a tool to avoid that issue. As an example, if you want to do algebra with an LLM, the solution is not asking the LLM to directly perform algebra, but instead providing a tool capable of algebra to the LLM, and then relying on the LLM to call that tool with the proper parameters.</p> <p>At this point, there is one final important thing that agents do: <strong>they are software programs.</strong> This means they can do anything software can do to build better context windows to pass on to LLMs for generation. This is an infinite category of tasks, but <em>generally</em> these include:</p> <ol> <li>Building general context to add to context window, sometimes thought of as maintaining memory</li> <li>Initiating a workflow based on an incoming ticket in a ticket tracker, customer support system, etc</li> <li>Periodically initiating workflows at a certain time, such as hourly review of incoming tickets</li> </ol> <p>Alright, we&rsquo;ve now summarized what AI agents can do down to four general capabilities. Recapping a bit, those capabilities are:</p> <ol> <li>Use an LLM to evaluate a context window and get a result</li> <li>Use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool&rsquo;s response</li> <li>Manage flow control for tool usage via rules or statistical analysis</li> <li>Agents are software programs, and can do anything other software programs do</li> </ol> <p>Armed with these four capabilities, we&rsquo;ll be able to think about the ways we can, and cannot, apply AI agents to a number of opportunities.</p> <h2 id="use-case-1-customer-support-agent">Use Case 1: Customer Support Agent</h2> <p>One of the first scenarios that people often talk about deploying AI agents is customer support, so let&rsquo;s start there. A typical customer support process will have multiple tiers of agents who handle increasingly complex customer problems. So let&rsquo;s set a goal of taking over the easiest tier first, with the goal of moving up tiers over time as we show impact.</p> <p>Our approach might be:</p> <ol> <li>Allow tickets (or support chats) to flow into an AI agent</li> <li>Provide a variety of tools to the agent to support: <ol> <li>Retrieving information about the user: recent customer support tickets, account history, account state, and so on</li> <li>Escalating to next tier of customer support</li> <li>Refund a purchase (almost certainly implemented as &ldquo;refund purchase&rdquo; referencing a specific purchase by the user, rather than &ldquo;refund amount&rdquo; to prevent scenarios where the agent can be fooled into refunding too much)</li> <li>Closing the user account on request</li> </ol> </li> <li>Include customer support guidelines in the context window, describe customer problems, map those problems to specific tools that should be used to solve the problems</li> <li>Flow control rules that ensure all calls escalate to a human if not resolved within a certain time period, number of back-and-forth exchanges, if they run into an error in the agent, and so on. These rules should be both rules-based and statistics-based, ensuring that gaps in your rules are neither exploitable nor create a terrible customer experience</li> <li>Review agent-customer interactions for quality control, making improvements to the support guidelines provided to AI agents. 
Initially you would want to review every interaction, then move to interactions that lead to unusual outcomes (e.g. escalations to human) and some degree of random sampling</li> <li>Review hourly, then daily, and then weekly metrics of agent performance</li> <li>Based on your learnings from the metric reviews, you should set baselines for alerts which require more immediate response. For example, if a new topic comes up frequently, it probably means a serious regression in your product or process, and it requires immediate review rather than periodical review.</li> </ol> <p>Note that even when you&rsquo;ve moved &ldquo;Customer Support to AI agents&rdquo;, you still have:</p> <ul> <li>a tier of human agents dealing with the most complex calls</li> <li>humans reviewing the periodic performance statistics</li> <li>humans performing quality control on AI agent-customer interactions</li> </ul> <p>You absolutely can replace each of those downstream steps (reviewing performance statistics, etc) with its <em>own</em> AI agent, but doing that requires going through the development of an AI product for each of those flows. There is a recursive process here, where <em>over time</em> you can eliminate many human components of your business, in exchange for increased fragility as you have more tiers of complexity. The most interesting part of complex systems isn&rsquo;t how they work, it&rsquo;s how they fail, and agent-driven systems will fail occasionally, as all systems do, very much including human-driven ones.</p> <p>Applied with care, the above series of actions <em>will work</em> successfully. However, it&rsquo;s important to recognize that this is building an entire software pipeline, and then learning to <em>operate</em> that software pipeline in production. These are both very doable things, but they are meaningful work, turning customer support leadership into product managers and requiring an engineering team building and operating the customer support agent.</p> <h2 id="use-case-2-triaging-incoming-bug-reports">Use Case 2: Triaging incoming bug reports</h2> <p>When an incident is raised within your company, or when you receive a bug report, the first problem of the day is determining how severe the issue might be. If it&rsquo;s potentially quite severe, then you want on-call engineers immediately investigating; if it&rsquo;s certainly not severe, then you want to triage it in a less urgent process of some sort. It&rsquo;s interesting to think about how an AI agent might support this triaging workflow.</p> <p>The process might work as follows:</p> <ol> <li>Pipe all created incidents and all created tickets to this agent for review.</li> <li>Expose these tools to the agent: <ol> <li>Open an incident</li> <li>Retrieve current incidents</li> <li>Retrieve recently created tickets</li> <li>Retrieve production metrics</li> <li>Retrieve deployment logs</li> <li>Retrieve feature flag change logs</li> <li>Toggle known-safe feature flags</li> <li>Propose merging an incident with another for human approval</li> <li>Propose merging a ticket with another ticket for human approval</li> </ol> </li> <li><strong>Redundant LLM providers for critical workflows.</strong> If the LLM provider&rsquo;s API is unavailable, retry three times over ten seconds, then resort to using a second model provider (e.g. Anthropic first, if unavailable try OpenAI), and then finally create an incident that the triaging mechanism is unavailable. 
For critical workflows, we can&rsquo;t simply assume the APIs will be available, because in practice all major providers seem to have monthly availability issues.</li> <li><strong>Merge duplicates.</strong> When a ticket comes in, first check ongoing incidents and recently created tickets for potential duplicates. If there is a probable duplicate, suggest merging the ticket or incident with the existing issue and exit the workflow.</li> <li><strong>Assess impact.</strong> If production statistics are severely impacted, or if there is a new kind of error in production, then this is likely an issue that merits quick human review. If it&rsquo;s high priority, open an incident. If it&rsquo;s low priority, create a ticket.</li> <li><strong>Propose cause.</strong> Now that the incident has been sized, switch to analyzing the potential causes of the incident. Look at the code commits in recent deploys and suggest potential issues that might have caused the current error. In some cases this will be obvious (e.g. spiking errors with a traceback of a line of code that changed recently), and in other cases it will only be proximity in time.</li> <li><strong>Apply known-safe feature flags.</strong> Establish an allow list of known safe feature flags that the system is allowed to activate itself. For example, if there are expensive features that are safe to disable, it could be allowed to disable them, e.g. restricting paginating through deeper search results when under load might be a reasonable tradeoff between stability and user experience.</li> <li><strong>Defer to humans.</strong> At this point, rely on humans to drive incident, or ticket, remediation to completion.</li> <li><strong>Draft initial incident report.</strong> If an incident was opened, the agent should draft an initial incident report including the timeline, related changes, and the human activities taken over the course of the incident. This report should then be finalized by the human involved in the incident.</li> <li><strong>Run incident review.</strong> Your existing incident review process should take the incident review and determine how to modify your systems, including the triaging agent, to increase reliability over time.</li> <li><strong>Safeguard to reenable feature flags.</strong> Since we now have an agent disabling feature flags, we also need to add a periodic check (agent-driven or otherwise) to reenable the &ldquo;known safe&rdquo; feature flags if there isn&rsquo;t an ongoing incident to avoid accidentally disabling them for long periods of time.</li> </ol> <p>This is another AI agent that will absolutely work as long as you treat it as a software product. In this case, engineering is likely the product owner, but it will still require thoughtful iteration to improve its behavior over time. Some of the ongoing validation to make this flow work includes:</p> <ol> <li> <p>The role of humans in incident response and review will remain significant, merely aided by this agent. This is especially true in the review process, where an agent cannot solve the review process because it&rsquo;s about actively learning what to change based on the incident.</p> <p>You <em>can</em> make a reasonable argument that an agent could decide what to change and then hand that specification off to another agent to implement it. Even today, you can easily imagine low risk changes (e.g. 
a copy change) being automatically added to a ticket for human approval.</p> <p>Doing this for more complex, or riskier, changes is possible but requires an extraordinary degree of care and nuance: it is the polar opposite of the idea of &ldquo;just add agents and things get easy.&rdquo; Instead, enabling that sort of automation will require immense care in constraining changes to systems that cannot expose unsafe behavior. For example, one startup I know has represented their domain logic in a domain-specific language (DSL) that can be safely generated by an LLM, and is able to represent many customer-specific features solely through that DSL.</p> </li> <li> <p>Expanding the list of known-safe feature flags to make incidents remediable. To do this widely will require enforcing very specific requirements for how software is developed. Even doing this narrowly will require changes to ensure the known-safe feature flags <em>remain</em> safe as software is developed.</p> </li> <li> <p>Periodically reviewing incident statistics over time to ensure mean-time-to-resolution (MTTR) is decreasing. If the agent is truly working, this should decrease. If the agent isn&rsquo;t driving a reduction in MTTR, then something is rotten in the details of the implementation.</p> </li> </ol> <p>Even a very effective agent doesn&rsquo;t relieve you of the responsibility for careful system design. Rather, <strong>agents are a multiplier on the quality of your system design</strong>: done well, agents can make you significantly more effective. Done poorly, they&rsquo;ll only amplify your problems even more widely.</p> <h2 id="do-ai-agents-represent-entirety-of-this-generation-of-ai">Do AI Agents Represent the Entirety of this Generation of AI?</h2> <p>If you accept my definition that AI agents are any combination of LLMs and software, then I think it&rsquo;s true that there&rsquo;s not much this generation of AI can express that doesn&rsquo;t fit this definition. I&rsquo;d readily accept the argument that LLM is too narrow a term, and that perhaps foundational model would be a better term. My sense is that this is a place where frontier definitions and colloquial usage have deviated a bit.</p> <h2 id="closing-thoughts">Closing thoughts</h2> <p>LLMs and agents are powerful mechanisms. I think they will truly change how products are designed and how products work. An entire generation of software makers, and company executives, are in the midst of learning how these tools work.</p> <p>For everything that AI agents can do, there are equally important things they cannot. They cannot make restoring a database faster than the network bandwidth supports. Access to text-based judgment does not create missing tools. Nor does text-based judgment solve access controls, immediately make absent documents exist, or otherwise solve the many real systems problems that exist in your business today. It is only the combination of agents, great system design, and great software design that will make agents truly shine.</p> <p>As it&rsquo;s always been, software isn&rsquo;t magic. Software is very logical. However, what software can accomplish is magical, if we use it effectively.</p>