<p><a href="https://lethain.com/">Irrational Exuberance</a>, by Will Larson</p> <h2><a href="https://lethain.com/what-can-agents-do/">What can agents actually do?</a></h2> <p><em>Sun, 06 Jul 2025</em></p> <p>There&rsquo;s a lot of excitement about what AI (specifically the latest wave of LLM-anchored AI) can do, and how AI-first companies are different from the prior generations of companies. There are a lot of important and real opportunities at hand, but I find that many of these conversations occur at such an abstract altitude that they border on meaningless. Sort of like saying that your company could be much better if you <em>merely adopted more software</em>. That&rsquo;s certainly true, but it&rsquo;s not a particularly helpful claim.</p> <p>This post is an attempt to concisely summarize how AI agents work, apply that summary to a handful of real-world use cases for AI, and to generally make the case that agents are a multiplier on the quality of your software and system design. If your software or systems are poorly designed, agents will only cause harm. If there&rsquo;s any meaningful definition of an AI-first company, it must be a company whose software and systems are designed with immaculate attention to detail.</p> <p>By the end of this writeup, my hope is that you&rsquo;ll be well-armed to have a concrete discussion about how LLMs and agents could change the shape of your company, and to avoid getting caught up in the needlessly abstract discussions that are often taking place today.</p> <h2 id="how-do-agents-work">How do agents work?</h2> <p>At its core, using an LLM is an API call that includes a prompt. For example, you might call Anthropic&rsquo;s <a href="https://docs.anthropic.com/en/api/messages"><code>/v1/message</code></a> with a prompt: <code>How should I adopt LLMs in my company?</code> That prompt is used to fill the LLM&rsquo;s context window, which conditions the model to generate certain kinds of responses.</p> <p>This is the first important thing that agents can do: <strong>use an LLM to evaluate a context window and get a result.</strong></p> <p>Prompt engineering, or <a href="https://www.philschmid.de/context-engineering">context engineering</a> as it&rsquo;s being called now, is deciding what to put into the context window to best generate the responses you&rsquo;re looking for. For example, In-Context Learning (ICL) is one form of context engineering, where you supply a bunch of similar examples before asking a question. If I want to determine if a transaction is fraudulent, then I might supply a bunch of prior transactions and whether they were, or were not, fraudulent as ICL examples. Those examples make generating the correct answer more likely.</p>
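<p>To make ICL concrete, here&rsquo;s a minimal sketch of assembling such a prompt in Python. The transactions and labels are hypothetical, purely for illustration:</p> <pre><code># Minimal sketch of In-Context Learning (ICL): prepend labeled examples
# to the question so the model is conditioned toward similar answers.
# These example transactions and labels are hypothetical.
EXAMPLES = [
    ('$9,999 wire to a brand-new payee at 3am', 'fraudulent'),
    ('$4.50 coffee at a cafe the user visits weekly', 'not fraudulent'),
    ('$1,200 electronics order shipped to a never-seen address', 'fraudulent'),
]

def build_icl_prompt(transaction):
    lines = ['Label each transaction as fraudulent or not fraudulent.', '']
    for description, label in EXAMPLES:
        lines.append('Transaction: ' + description)
        lines.append('Label: ' + label)
        lines.append('')
    lines.append('Transaction: ' + transaction)
    lines.append('Label:')
    return '\n'.join(lines)

print(build_icl_prompt('$700 gift card purchase from a new device'))
</code></pre>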
<p>However, composing the perfect context window is very time intensive, benefiting from techniques like <a href="https://cookbook.openai.com/examples/enhance_your_prompts_with_meta_prompting">metaprompting</a> to improve your context. Indeed, the human (or automation) creating the initial context might not know enough to do a good job of providing relevant context. For example, if you prompt, <code>Who is going to become the next mayor of New York City?</code>, then <em>you</em> are poorly positioned to include the answer to that question in your prompt. To do that, you would need to already know the answer, which is why you&rsquo;re asking the question to begin with!</p> <p>This is where we see model chat experiences from OpenAI and Anthropic use web search to pull in context that you likely don&rsquo;t have. If you ask a question about the new mayor of New York, they use a tool to retrieve web search results, then add the content of those searches to your context window.</p> <p>This is the second important thing that agents can do: <strong>use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool&rsquo;s response.</strong></p> <p>However, it&rsquo;s important to clarify how &ldquo;tool usage&rdquo; actually works. An LLM does not actually call a tool. (You can skim OpenAI&rsquo;s <a href="https://platform.openai.com/docs/guides/function-calling?api-mode=responses">function calling documentation</a> if you want to see a specific real-world example of this.) Instead, there is a five-step process to calling tools that can be a bit counter-intuitive:</p> <ol> <li>The program designer that calls the LLM API must also define a set of tools that the LLM is allowed to suggest using.</li> <li>Every API call to the LLM includes that defined set of tools as <em>options</em> that the LLM is allowed to <em>recommend</em>.</li> <li>The response from the API call with defined functions is either: <ol> <li> <p>Generated text as any other call to an LLM might provide</p> </li> <li> <p>A <em>recommendation</em> to call a specific tool with a specific set of parameters, e.g. an LLM that knows about a <code>get_weather</code> tool, when prompted about the weather in Paris, might return this response:</p> <pre><code> [{ &quot;type&quot;: &quot;function_call&quot;, &quot;name&quot;: &quot;get_weather&quot;, &quot;arguments&quot;: &quot;{\&quot;location\&quot;:\&quot;Paris, France\&quot;}&quot; }] </code></pre> </li> </ol> </li> <li>The program that calls the LLM API then decides whether and how to honor that requested tool use. The program might decide to reject the requested tool because it&rsquo;s been used too frequently recently (e.g. rate limiting); it might check whether the associated user has permission to use the tool (e.g. maybe it&rsquo;s a premium-only tool); and it might check whether the parameters match the user&rsquo;s role-based permissions as well (e.g. the user can check weather, but only admin users are allowed to check weather in France).</li> <li>If the program does decide to call the tool, it invokes the tool, then calls the LLM API with the output of the tool appended to the prior call&rsquo;s context window.</li> </ol> <p>The important thing about this loop is that the LLM itself can still only do one interesting thing: taking a context window and returning generated text. It is the broader program, which we can start to call an agent at this point, that calls tools and sends the tools&rsquo; output to the LLM to generate more context.</p> <p>What&rsquo;s magical is that LLMs plus tools start to <em>really</em> improve how you can generate context windows. Instead of needing a very well-defined initial context window, you can use tools to inject relevant context to improve the initial context.</p>
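<p>Here&rsquo;s a minimal sketch of that loop in Python. The <code>llm</code> client below is a hypothetical stand-in that returns either generated text or a recommendation shaped like the <code>function_call</code> example above; it is the surrounding program that actually invokes the tool:</p> <pre><code># A minimal sketch of the five-step tool loop described above. The LLM
# only ever returns text or a tool *recommendation*; this program does
# the calling. 'llm' is a hypothetical client, not a real SDK.
import json

def get_weather(location):
    '''A toy tool; a real one would call a weather API.'''
    return {'location': location, 'forecast': 'sunny', 'high_f': 72}

TOOLS = {'get_weather': get_weather}   # step 1: tools the LLM may suggest

def run_agent(llm, context):
    while True:
        # step 2: every call offers the defined tools as options
        response = llm(context, tools=list(TOOLS))
        if response['type'] != 'function_call':
            return response['text']    # step 3a: plain generated text
        # step 3b: a recommendation to call a specific tool
        name, args = response['name'], json.loads(response['arguments'])
        if name not in TOOLS:          # step 4: the program decides whether to honor it
            context += '\nTool request rejected.'
            continue
        # step 5: invoke the tool, append its output to the context, loop
        context += '\nTool output: ' + json.dumps(TOOLS[name](**args))
</code></pre>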
<p>This brings us to the third important thing that agents can do: <strong>they manage flow control for tool usage.</strong> Let&rsquo;s think about two different scenarios:</p> <ol> <li><em>Flow control via rules</em> has concrete rules about how tools can be used. Some examples: <ol> <li>it might only allow a given tool to be used once in a given workflow (or a usage limit of a tool for each user, etc)</li> <li>it might require that a human-in-the-loop approves parameters over a certain value (e.g. refunds of more than $100 require human approval)</li> <li>it might run a generated Python program and return the output to analyze a dataset (or provide error messages if it fails)</li> <li>it might apply a permission system to tool use, restricting who can use which tools and which parameters a given user is able to use (e.g. you can only retrieve your own personal data)</li> <li>it might allow a tool that escalates to a human representative to be called only after five back-and-forths with the LLM agent</li> </ol> </li> <li><em>Flow control via statistics</em> can use statistics to identify and act on abnormal behavior: <ol> <li>if the size of a refund is higher than 99% of other refunds for the order size, you might want to escalate to a human</li> <li>if a user has used a tool more than 99% of other users, then you might want to reject usage for the rest of the day</li> <li>it might escalate to a human representative if tool parameters are more similar to prior parameters that required escalation to a human agent</li> </ol> </li> </ol> <p>LLMs themselves absolutely cannot be trusted. Anytime you rely on an LLM to enforce something important, you will fail. Using agents to manage flow control is <em>the</em> mechanism that makes it possible to build safe, reliable systems with LLMs. Whenever you find yourself dealing with an unreliable LLM-based system, you can <em>always</em> find a way to shift the complexity to a tool to avoid that issue. As an example, if you want to do algebra with an LLM, the solution is not asking the LLM to directly perform algebra, but instead providing a tool capable of algebra to the LLM, and then relying on the LLM to call that tool with the proper parameters.</p>
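<p>Here&rsquo;s a hedged sketch of what rules-based flow control might look like inside the agent program; the thresholds and tool names are hypothetical:</p> <pre><code># Hedged sketch of rules-based flow control: the surrounding program,
# not the LLM, enforces limits before honoring a tool recommendation.
# The specific thresholds and tool names here are hypothetical.
from collections import Counter

usage = Counter()

def allow_tool_call(user_id, tool_name, args):
    '''Return (allowed, reason) for an LLM-recommended tool call.'''
    usage[(user_id, tool_name)] += 1
    if usage[(user_id, tool_name)] &gt; 3:                  # per-user usage limit
        return False, 'rate limited for today'
    if tool_name == 'refund' and args.get('amount_usd', 0) &gt; 100:
        return False, 'requires human-in-the-loop approval'
    if tool_name == 'escalate_to_human' and args.get('exchanges', 0) &lt; 5:
        return False, 'only available after five exchanges'
    return True, 'ok'

print(allow_tool_call('user-1', 'refund', {'amount_usd': 250}))
</code></pre>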
<p>At this point, there is one final important thing that agents do: <strong>they are software programs.</strong> This means they can do anything software can do to build better context windows to pass on to LLMs for generation. This is an infinite category of tasks, but <em>generally</em> these include:</p> <ol> <li>Building general context to add to the context window, sometimes thought of as maintaining memory</li> <li>Initiating a workflow based on an incoming ticket in a ticket tracker, customer support system, etc</li> <li>Periodically initiating workflows at a certain time, such as hourly review of incoming tickets</li> </ol> <p>Alright, we&rsquo;ve now summarized what AI agents can do down to four general capabilities. Recapping a bit, those capabilities are:</p> <ol> <li>Use an LLM to evaluate a context window and get a result</li> <li>Use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool&rsquo;s response</li> <li>Manage flow control for tool usage via rules or statistical analysis</li> <li>Agents are software programs, and can do anything other software programs do</li> </ol> <p>Armed with these four capabilities, we&rsquo;ll be able to think about the ways we can, and cannot, apply AI agents to a number of opportunities.</p> <h2 id="use-case-1-customer-support-agent">Use Case 1: Customer Support Agent</h2> <p>One of the first scenarios people often talk about for deploying AI agents is customer support, so let&rsquo;s start there. A typical customer support process will have multiple tiers of agents who handle increasingly complex customer problems. So let&rsquo;s set a goal of taking over the easiest tier first, moving up tiers over time as we show impact.</p> <p>Our approach might be:</p> <ol> <li>Allow tickets (or support chats) to flow into an AI agent</li> <li>Provide a variety of tools to the agent, such as: <ol> <li>Retrieving information about the user: recent customer support tickets, account history, account state, and so on</li> <li>Escalating to the next tier of customer support</li> <li>Refunding a purchase (almost certainly implemented as &ldquo;refund purchase&rdquo; referencing a specific purchase by the user, rather than &ldquo;refund amount&rdquo;, to prevent scenarios where the agent can be fooled into refunding too much)</li> <li>Closing the user account on request</li> </ol> </li> <li>Include customer support guidelines in the context window: describe customer problems, and map those problems to the specific tools that should be used to solve them</li> <li>Add flow control rules that ensure all calls escalate to a human if they aren&rsquo;t resolved within a certain time period or number of back-and-forth exchanges, if they run into an error in the agent, and so on. These rules should be both rules-based and statistics-based, ensuring that gaps in your rules are neither exploitable nor create a terrible customer experience</li> <li>Review agent-customer interactions for quality control, making improvements to the support guidelines provided to AI agents. Initially you would want to review every interaction, then move to interactions that lead to unusual outcomes (e.g. escalations to a human) and some degree of random sampling</li> <li>Review hourly, then daily, and then weekly metrics of agent performance</li> <li>Based on your learnings from the metric reviews, set baselines for alerts which require more immediate response. For example, if a new topic comes up frequently, it probably means a serious regression in your product or process, and it requires immediate review rather than periodic review.</li> </ol> <p>Note that even when you&rsquo;ve moved &ldquo;Customer Support to AI agents&rdquo;, you still have:</p> <ul> <li>a tier of human agents dealing with the most complex calls</li> <li>humans reviewing the periodic performance statistics</li> <li>humans performing quality control on AI agent-customer interactions</li> </ul> <p>You absolutely can replace each of those downstream steps (reviewing performance statistics, etc) with its <em>own</em> AI agent, but doing that requires going through the development of an AI product for each of those flows. There is a recursive process here, where <em>over time</em> you can eliminate many human components of your business, in exchange for increased fragility as you have more tiers of complexity. The most interesting part of complex systems isn&rsquo;t how they work, it&rsquo;s how they fail, and agent-driven systems will fail occasionally, as all systems do, very much including human-driven ones.</p> <p>Applied with care, the above series of actions <em>will work</em> successfully. However, it&rsquo;s important to recognize that this is building an entire software pipeline, and then learning to <em>operate</em> that software pipeline in production. These are both very doable things, but they are meaningful work, turning customer support leadership into product managers and requiring an engineering team building and operating the customer support agent.</p>
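<p>To make the tooling half of this concrete, here&rsquo;s a hedged sketch of the support agent&rsquo;s tool definitions in the JSON-schema shape that function-calling APIs generally accept. The names and fields are hypothetical; note that the refund tool takes a <code>purchase_id</code> rather than an amount, per the constraint above:</p> <pre><code># Hedged sketch of the support agent's tool definitions, in the
# JSON-schema shape function-calling APIs generally accept. Names and
# fields are hypothetical; refund_purchase takes a purchase_id, not an
# amount, per the constraint described above.
SUPPORT_TOOLS = [
    {'name': 'get_customer_context',
     'description': 'Recent tickets, account history, and account state.',
     'parameters': {'type': 'object',
                    'properties': {'user_id': {'type': 'string'}},
                    'required': ['user_id']}},
    {'name': 'refund_purchase',
     'description': 'Refund a specific purchase made by this user.',
     'parameters': {'type': 'object',
                    'properties': {'purchase_id': {'type': 'string'}},
                    'required': ['purchase_id']}},
    {'name': 'escalate_to_human',
     'description': 'Hand the conversation to the next support tier.',
     'parameters': {'type': 'object',
                    'properties': {'summary': {'type': 'string'}},
                    'required': ['summary']}},
]
</code></pre>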
<h2 id="use-case-2-triaging-incoming-bug-reports">Use Case 2: Triaging incoming bug reports</h2> <p>When an incident is raised within your company, or when you receive a bug report, the first problem of the day is determining how severe the issue might be. If it&rsquo;s potentially quite severe, then you want on-call engineers immediately investigating; if it&rsquo;s certainly not severe, then you want to triage it in a less urgent process of some sort. It&rsquo;s interesting to think about how an AI agent might support this triaging workflow.</p> <p>The process might work as follows:</p> <ol> <li>Pipe all created incidents and all created tickets to this agent for review.</li> <li>Expose these tools to the agent: <ol> <li>Open an incident</li> <li>Retrieve current incidents</li> <li>Retrieve recently created tickets</li> <li>Retrieve production metrics</li> <li>Retrieve deployment logs</li> <li>Retrieve feature flag change logs</li> <li>Toggle known-safe feature flags</li> <li>Propose merging an incident with another for human approval</li> <li>Propose merging a ticket with another ticket for human approval</li> </ol> </li> <li><strong>Redundant LLM providers for critical workflows.</strong> If the LLM provider&rsquo;s API is unavailable, retry three times over ten seconds, then resort to using a second model provider (e.g. Anthropic first, if unavailable try OpenAI), and then finally create an incident noting that the triaging mechanism is unavailable. For critical workflows, we can&rsquo;t simply assume the APIs will be available, because in practice all major providers seem to have monthly availability issues. (A sketch of this retry-and-fallback behavior appears after this list.)</li> <li><strong>Merge duplicates.</strong> When a ticket comes in, first check ongoing incidents and recently created tickets for potential duplicates. If there is a probable duplicate, suggest merging the ticket or incident with the existing issue and exit the workflow.</li> <li><strong>Assess impact.</strong> If production statistics are severely impacted, or if there is a new kind of error in production, then this is likely an issue that merits quick human review. If it&rsquo;s high priority, open an incident. If it&rsquo;s low priority, create a ticket.</li> <li><strong>Propose cause.</strong> Now that the incident has been sized, switch to analyzing the potential causes of the incident. Look at the code commits in recent deploys and suggest potential issues that might have caused the current error. In some cases this will be obvious (e.g. spiking errors with a traceback of a line of code that changed recently), and in other cases it will only be proximity in time.</li> <li><strong>Apply known-safe feature flags.</strong> Establish an allow list of known-safe feature flags that the system is allowed to activate itself. For example, if there are expensive features that are safe to disable, it could be allowed to disable them, e.g.
restricting pagination through deeper search results when under load might be a reasonable tradeoff between stability and user experience.</li> <li><strong>Defer to humans.</strong> At this point, rely on humans to drive incident or ticket remediation to completion.</li> <li><strong>Draft initial incident report.</strong> If an incident was opened, the agent should draft an initial incident report including the timeline, related changes, and the human activities taken over the course of the incident. This report should then be finalized by the human involved in the incident.</li> <li><strong>Run incident review.</strong> Your existing incident review process should take the incident review and determine how to modify your systems, including the triaging agent, to increase reliability over time.</li> <li><strong>Safeguard to reenable feature flags.</strong> Since we now have an agent disabling feature flags, we also need to add a periodic check (agent-driven or otherwise) to reenable the &ldquo;known safe&rdquo; feature flags if there isn&rsquo;t an ongoing incident, to avoid accidentally disabling them for long periods of time.</li> </ol>
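<p>As promised in the workflow above, here&rsquo;s a hedged sketch of that retry-and-fallback behavior. The provider clients and incident helper below are hypothetical stubs, not real SDK calls:</p> <pre><code># Hedged sketch of the redundant-provider step above: retry the primary
# model, fall back to a second provider, then open an incident. The
# provider clients and incident helper are hypothetical stubs.
import time

def call_anthropic(context):
    raise RuntimeError('provider down')   # stub standing in for a real client

def call_openai(context):
    return 'triage result'                # stub standing in for a real client

def open_incident(title):
    print('INCIDENT:', title)             # stub standing in for a real pager

def triage_completion(context):
    for attempt in range(3):              # retry three times over ~ten seconds
        try:
            return call_anthropic(context)
        except Exception:
            time.sleep(3)
    try:
        return call_openai(context)       # fall back to the second provider
    except Exception:
        open_incident('Triage agent unavailable: all LLM providers failing')
        raise
</code></pre>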
<p>This is another AI agent that will absolutely work as long as you treat it as a software product. In this case, engineering is likely the product owner, but it will still require thoughtful iteration to improve its behavior over time. Some of the ongoing validation to make this flow work includes:</p> <ol> <li> <p>The role of humans in incident response and review will remain significant, merely aided by this agent. This is especially true in the review process, where an agent cannot solve the review process because it&rsquo;s about actively learning what to change based on the incident.</p> <p>You <em>can</em> make a reasonable argument that an agent could decide what to change and then hand that specification off to another agent to implement it. Even today, you can easily imagine low-risk changes (e.g. a copy change) being automatically added to a ticket for human approval.</p> <p>Doing this for more complex or riskier changes is possible, but requires an extraordinary degree of care and nuance: it is the polar opposite of the idea of &ldquo;just add agents and things get easy.&rdquo; Instead, enabling that sort of automation will require immense care in constraining changes to systems that cannot expose unsafe behavior. For example, one startup I know has represented their domain logic in a domain-specific language (DSL) that can be safely generated by an LLM, and is able to represent many customer-specific features solely through that DSL.</p> </li> <li> <p>Expanding the list of known-safe feature flags to make incidents remediable. To do this widely will require enforcing very specific requirements for how software is developed. Even doing this narrowly will require changes to ensure the known-safe feature flags <em>remain</em> safe as software is developed.</p> </li> <li> <p>Periodically reviewing incident statistics over time to ensure mean-time-to-resolution (MTTR) is decreasing. If the agent is truly working, this should decrease. If the agent isn&rsquo;t driving a reduction in MTTR, then something is rotten in the details of the implementation.</p> </li> </ol> <p>Even a very effective agent doesn&rsquo;t relieve you of the responsibility of careful system design. Rather, <strong>agents are a multiplier on the quality of your system design</strong>: done well, agents can make you significantly more effective. Done poorly, they&rsquo;ll only amplify your problems even more widely.</p> <h2 id="do-ai-agents-represent-entirety-of-this-generation-of-ai">Do AI Agents Represent the Entirety of this Generation of AI?</h2> <p>If you accept my definition that AI agents are any combination of LLMs and software, then I think it&rsquo;s true that there&rsquo;s not much this generation of AI can express that doesn&rsquo;t fit this definition. I&rsquo;d readily accept the argument that LLM is too narrow a term, and that perhaps foundational model would be a better term. My sense is that this is a place where frontier definitions and colloquial usage have deviated a bit.</p> <h2 id="closing-thoughts">Closing thoughts</h2> <p>LLMs and agents are powerful mechanisms. I think they will truly change how products are designed and how products work. An entire generation of software makers, and company executives, are in the midst of learning how these tools work.</p> <p>For everything that AI agents can do, there are equally important things they cannot. They cannot make restoring a database faster than the network bandwidth supports. Access to text-based judgment does not create missing tools, solve access controls, immediately make absent documents exist, or otherwise solve the many real systems problems that exist in your business today. It is only the combination of agents, great system design, and great software design that will make agents truly shine.</p> <p>As it&rsquo;s always been, software isn&rsquo;t magic. Software is very logical. However, what software can accomplish is magical, if we use it effectively.</p> <h2><a href="https://lethain.com/competitive-advantage-author-llms/">What is the competitive advantage of authors in the age of LLMs?</a></h2> <p><em>Sat, 14 Jun 2025</em></p> <p>Over the past 19 months, I&rsquo;ve written <em><a href="https://craftingengstrategy.com/">Crafting Engineering Strategy</a></em>, a book on creating engineering strategy. I&rsquo;ve also been working increasingly with large language models at work. Unsurprisingly, the intersection of those two ideas is a topic that I&rsquo;ve been thinking about a lot. What, I&rsquo;ve wondered, is the role of the author, particularly the long-form author, in a world where an increasingly large percentage of writing is intermediated by large language models?</p> <p>One framing I&rsquo;ve heard somewhat frequently is the view that LLMs are first and foremost a great pillaging of authors&rsquo; work. It&rsquo;s true. They are that. At some point there was a script to let you check which books had been loaded into Meta&rsquo;s LLaMa, and every book I&rsquo;d written at that point was included, none of them with my consent. However, I long ago made my peace with <a href="https://lethain.com/plagarism-idea-theft-writing-online/">plagiarism online</a>, and this strikes me as not particularly different, albeit conducted by larger players. The folks using this writing are going to keep using it beyond the constraints I&rsquo;d prefer it to be used in, and I&rsquo;m disinterested in investing my scarce mental energy chasing through digital or legal mazes.</p> <p>Instead, I&rsquo;ve been thinking about how this transition might go <em>right</em> for authors. My favorite idea that I&rsquo;ve come up with is the idea of written content as &ldquo;datapacks&rdquo; for thinking.
Buy someone&rsquo;s book / &ldquo;datapack&rdquo;, then upload it into your LLM, and you can immediately operate <em>almost</em> as if you knew the book&rsquo;s content.</p> <p>Let&rsquo;s start with an example. Imagine you want help onboarding as an executive, and you&rsquo;ve bought a copy of <em><a href="https://www.amazon.com/Engineering-Executives-Primer-Impactful-Leadership/dp/1098149483/">The Engineering Executive&rsquo;s Primer</a></em>. You could create a <a href="https://claude.ai/projects">project in Anthropic&rsquo;s Claude</a> and upload the LLM-optimized book into your project.</p> <p>Here is what your Claude project might look like.</p> <p><img src="https://lethain.com/static/blog/2025/exec-onboarding-project.png" alt="Setup of &ldquo;executive onboarding&rdquo; project in Claude"></p> <p>Once you have it set up, you can ask it to help you create your onboarding plan.</p> <p><img src="https://lethain.com/static/blog/2025/exec-onboarding-plan.png" alt="Answers in Claude to question about planning first 90 days for an executive in Claude project"></p> <p>This guidance makes sense, largely pulled from <a href="https://lethain.com/first-ninety-days-cto-vpe/">Your first 90 days as CTO</a>. As always, you can iterate on your initial prompt&ndash;including more details you want to include in the plan&ndash;along with follow-ups to improve the formatting and so on.</p> <p>One interesting thing here is that I don&rsquo;t currently have a datapack for <em>The Engineering Executive&rsquo;s Primer</em>! To solve that, I built one from all my blog posts marked with the &ldquo;executive&rdquo; tag. I did that using <a href="https://gist.github.com/lethain/992f43ce1125b6891e2e22a9a1422d5d">this script</a> that packages Hugo blog posts, which I generated using <a href="https://gist.githubusercontent.com/lethain/20aff40e7f2fb9dcb8066f731f382db6/raw/a2aef51fbe43a6d533f3ef979902897bcc9281c6/prompt.md">this prompt with Claude 3.7 Sonnet</a>.</p> <p>The output of that script gets passed into <a href="https://repomix.com/">repomix</a> via:</p> <pre><code>repomix --include &quot;`./scripts/tags.py content executive | paste -d, -s -`&quot; </code></pre> <p>The mess with <code>paste</code> is to turn the multiline output from <code>tags.py</code> into a comma-separated list that repomix knows how to use.</p>
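<p>The real script is linked above; as a rough illustration, a minimal Python sketch of the same idea&ndash;assuming simple Hugo front matter with a <code>tags:</code> list&ndash;might look like this:</p> <pre><code># Minimal sketch of a tags.py-style script: print the paths of Hugo
# posts whose front matter includes the requested tag. The real script
# is linked above; this sketch assumes simple 'tags: [a, b]' front matter.
import sys
from pathlib import Path

def has_tag(path, tag):
    for line in path.read_text(errors='ignore').splitlines():
        if line.strip().startswith('tags:') and tag in line:
            return True
    return False

if __name__ == '__main__':
    content_dir, tag = sys.argv[1], sys.argv[2]
    for post in sorted(Path(content_dir).rglob('*.md')):
        if has_tag(post, tag):
            print(post)
</code></pre> <p>Its output is one path per line, which is why the <code>paste</code> invocation is needed: repomix wants the equivalent of <code>','.join(paths)</code>.</p>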
<p>This is a really neat pattern, and starts to get at where I see the long-term advantage of writers in the current environment: if you&rsquo;re a writer and have access to your raw content, you can create a problem-specific datapack to discuss the problem. You can also give that datapack to someone else, or use it to answer their questions.</p> <p>For example, someone asked me a very detailed followup question about a recent blog post. It was a very long question, and I was on a weekend trip. I already had a Claude project set up with the contents of <em>Crafting Engineering Strategy</em>, so I just passed the question verbatim into that project, and sent the answer back to the person who asked it. (I did have to ask Claude to revise the answer once to focus more on what I thought the most important part of the answer was.)</p> <p><img src="https://lethain.com/static/blog/2025/x-strategy-answer.png" alt="The image shows a message exchange where a person named Jason asks for advice on determining problems for strategy exploration and whether to write one or multiple strategies. The response notes that Jason&rsquo;s query was used to generate an answer through a language model, with suggestions for managing multiple strategies."></p> <p>This, for what it&rsquo;s worth, wasn&rsquo;t a perfect answer, but it&rsquo;s pretty good. If the question asker had the right datapack, they could have gotten it themselves, without needing me to decide to answer it.</p> <p>However, this post is less worried about <em>the reader</em> than it is about <em>the author</em>. What is our competitive advantage as authors in a future where people are not <em>reading</em> our work? Well, maybe they&rsquo;re still <em>buying</em> our work in the form of datapacks and such, but it certainly seems likely that book sales, like blog traffic, will be impacted negatively.</p> <p>In trade, it&rsquo;s now possible for machines to understand our thinking that we&rsquo;ve recorded down into words over time. There&rsquo;s a running joke in my executive learning circle that I&rsquo;ve written a blog post on every topic that comes up, and that&rsquo;s <em>kind of</em> true. That means that I am on the cusp of the opportunity to uniquely scale myself by connecting &ldquo;intelligence on demand for a few cents&rdquo; with the written details of my thinking built over the past two decades of being a <a href="https://lethain.com/writers-who-operate/">writer who operates</a>.</p> <p>The tools that exist today are not quite there yet, although a combination of selling datapacks like the one for <em>Crafting Engineering Strategy</em> and tools like Claude&rsquo;s projects are a good start. There are many ways the exact details might come together, but I&rsquo;m optimistic that writing will become <em>more</em> powerful rather than less in this new world, even if the particular formats change.</p> <p>(For what it&rsquo;s worth, I don&rsquo;t think human readers are going away either.)</p> <hr> <p><em>If you&rsquo;re interested in the fully fleshed out version of this idea, <a href="https://craftingengstrategy.com/aic/preface/">starting here</a> you can read the full AI Companion to Crafting Engineering Strategy. The datapack will be available via O&rsquo;Reilly in the next few months.</em></p> <p><em>If you&rsquo;re an existing O&rsquo;Reilly author who&rsquo;s skeptical of this idea, don&rsquo;t worry: I worked with them to sign a custom contract; this usage&ndash;as best I understood it, although I am not a lawyer and am not providing legal advice&ndash;is outside the scope of the default contract I signed for my prior book, and presumably most others&rsquo; contracts as well.</em></p> <h2><a href="https://lethain.com/desk-setup-2025/">My desk setup in 2025.</a></h2> <p><em>Sat, 07 Jun 2025</em></p> <p>Since 2020, I&rsquo;ve been working on my desk setup, and I think I finally have it mostly pulled together at this point.
I don&rsquo;t <em>really</em> think my desk setup is very novel, and I&rsquo;m sure there are better ways to pull it together, but I will say that it finally works the way I want since I added the <a href="https://www.caldigit.com/thunderbolt-5-dock-ts5-plus/">CalDigit TS5 Plus</a>, which has been a long time coming.</p> <p>My requirements for my desk are:</p> <ol> <li>Has support for 2-3 Mac laptops</li> <li>Has support for a Windows gaming desktop with a dedicated GPU</li> <li>Has a dedicated microphone</li> <li>Has good enough lighting</li> <li>Is not too messy</li> <li>I can switch between any laptop and desktop with a single Thunderbolt cable</li> </ol> <p>Historically the issue here has been the final requirement, where switching required moving two cables&ndash;a Thunderbolt and a cable for the dedicated graphics card&ndash;but with my new dock this finally works with just one cable.</p> <p><img src="https://lethain.com/static/blog/2025/desktop-far.jpg" alt=""></p> <p>The equipment shown here, and my brief review of each piece, is:</p> <ol> <li> <p><a href="https://www.upliftdesk.com/uplift-v2-standing-desk-v2-or-v2-commercial/?4275=4857&amp;4276=4657&amp;4279=3821&amp;27944=4849&amp;4280=11854&amp;10958=8704&amp;14905=11667">UPLIFT v2 Standing Desk</a> &ndash; is the standing desk I use. I both have a lot of stuff on my desk, and also want my desk to feel minimal, so I opted for the 72&quot; x 30&quot; version. At the time I ordered it in 2020, the only option shipping quickly was the bamboo finish, so that&rsquo;s what I got.</p> </li> <li> <p><a href="https://www.caldigit.com/thunderbolt-5-dock-ts5-plus/">CalDigit TS5 Plus Dock</a> &ndash; this was the missing component that has three Thunderbolt ports <em>and</em> a DisplayPort. I have the external graphics card directly connected to the DisplayPort, and then move the Thunderbolt port from computer to computer to change which one is active.</p> <p>It also has enough USB-A ports to connect the adapters for my wireless keyboard and mouse, to avoid needing to pair them across computers, which would create friction when switching computers.</p> </li> <li> <p><a href="https://www.apple.com/studio-display/">Apple Studio Display</a> &ndash; I experimented with dedicated speakers and a video camera, but for me having them built into the monitor was helpful to reduce the number of things on my desk. The Studio Display&rsquo;s monitor, speakers and video camera are all solidly good enough for my purposes: I&rsquo;m sure I could get better on each dimension, but in practice I never think about this and don&rsquo;t find any issues with them.</p> <p>On the other hand, while I was initially hopeful that I could also get rid of my microphone, the microphone quality just wasn&rsquo;t that good for me, as I spend a lot of time on video conferences and recording podcasts, etc.</p> </li> <li> <p><a href="https://www.bee-link.com/products/beelink-gti-ex-bundle">Beelink GTi Ultra &amp; EX Pro Docking Station</a> &ndash; are my Windows mini desktop and dock which allows mounting an external GPU to the mini desktop.
Beelink itself is slightly aggravating, because as best I can tell they&rsquo;ve done something quite odd in terms of custom patching Windows 11, but ultimately it&rsquo;s worked well for me as a dedicated gaming machine, and the build quality and size profile are both just fantastic.</p> </li> <li> <p><a href="https://www.amazon.com/dp/B0CSG8NYT3">MSI Gaming RTX 4070 Ti Super 16G Graphics Card</a> &ndash; I bought this earlier this year, looking for something that was in stock, and was good enough that it would last me a generation or two of graphics card upgrades without shelling out a truly massive amount for a 50XX edition (some of which don&rsquo;t seem to be upgrades on the 40XX predecessors anyway).</p> </li> <li> <p><a href="https://www.hexcal.com/products/hexcal-studio">Hexcal Studio</a> &ndash; this is the workstation / monitor stand / cable management system, with lighting and so on. I ultimately <em>do</em> like this, but it&rsquo;s not perfect, e.g. my Qi charger technically works but provides such bad charging speeds that it effectively doesn&rsquo;t work. It&rsquo;s definitely too expensive for something that doesn&rsquo;t entirely work, so I can&rsquo;t really recommend it, although now that I&rsquo;ve paid for it, I wouldn&rsquo;t bother replacing it either.</p> </li> <li> <p><a href="https://www.amazon.com/dp/B001AS6OYC">Audio-Technica AT2020USB Cardioid Condenser USB Microphone</a> &ndash; this is the microphone I&rsquo;ve been using for six years, and it&rsquo;s really quite good and cost something like $120 at the time. It&rsquo;s discontinued now, but presumably there&rsquo;s a more modern version somewhere. I have it mounted on this <a href="https://www.amazon.com/dp/B001D7UYBO">boom arm</a>.</p> </li> <li> <p><a href="https://www.amazon.com/dp/B0CNS5XJKD">LUME CUBE Edge 2.0 LED Desk Lamp</a> &ndash; I have two of these for lighting during recordings. I don&rsquo;t actually like using them very much, I just hate looking into lights, but I do use them periodically when I want to make sure lighting is actually correct.</p> </li> <li> <p><a href="https://www.amazon.com/dp/B082TSD2W5">Logitech MX Keys Advanced Wireless Illuminated Keyboard for Mac</a> &ndash; this keyboard works well for me, and has a USB-C port, so I can use a single powered USB-C cable from the Hexcal to charge my keyboard, my mouse, my phone, and my headphones.</p> </li> <li> <p><a href="https://www.amazon.com/dp/B09HM94VDS">Logitech MX Master 3S Wireless Mouse</a> &ndash; I&rsquo;ve been using variations of this mouse for a long time; I specifically bought this version a year or two ago to standardize all charging ports on USB-C.</p> </li> <li> <p>Laptop stand &ndash; I&rsquo;m not actually sure where I got this laptop stand from; it might have been Etsy. I found it relatively hard to find stands that support three laptops rather than just two. Before finding this one, I used <a href="https://www.amazon.com/dp/B078W3QSZY">this two-laptop stand</a>, which is fine.</p> </li> <li> <p>Laptops &ndash; these are my personal and work Macbooks.</p> </li> </ol> <p>Here&rsquo;s a slightly closer look at the left side of the desk.</p> <p><img src="https://lethain.com/static/blog/2025/desktop-near.jpg" alt=""></p> <p>At this point, I really have nothing left that I&rsquo;m upset about with my setup, and I can&rsquo;t imagine changing this again in the next few years. As a bonus, my office has a handful of pieces of &ldquo;professional art&rdquo; that represent things I am proud of.
From left to right, it&rsquo;s the cover of <em>An Elegant Puzzle</em>, a map of San Francisco drawn exclusively from Uber trip data on the night of Halloween 2014, and then the cover of <em>The Engineering Executive&rsquo;s Primer</em>. It&rsquo;s probably a bit vain, but I like to remember some of the accomplishments.</p> <h2><a href="https://lethain.com/stuff-learned-at-carta/">Stuff I learned at Carta.</a></h2> <p><em>Fri, 23 May 2025</em></p> <p>Today&rsquo;s my last day at Carta, where I got the chance to serve as their CTO for the past two years. I&rsquo;ve learned <em>so much</em> working there, and I wanted to end my chapter there by collecting my thoughts on what I learned. (I am heading somewhere, and will share news in a week or two after firming up the communication plan with my new team there.)</p> <p>The most important things I learned at Carta were:</p> <ul> <li> <p><strong>Working in the details</strong> &ndash; if you took a critical lens towards my historical leadership style, I think the biggest issue you&rsquo;d point at is my being too comfortable operating at a high level of abstraction. Utilizing the expertise of others to fill in your gaps is a valuable skill, but&ndash;like any single approach&ndash;it&rsquo;s limiting when utilized too frequently.</p> <p>One of the strengths of Carta&rsquo;s &ldquo;house leadership style&rdquo; is expecting leaders to go deep into the details to get informed and push pace. What I practiced there turned into the pieces on <a href="https://lethain.com/testing-strategy-iterative-refinement/">strategy testing</a> and <a href="https://lethain.com/domain-expertise/">developing domain expertise</a>.</p> </li> <li> <p><strong>Refining my approach to engineering strategy</strong> &ndash; over the past 18 months, I&rsquo;ve written a book on engineering strategy (posts are all in <a href="https://lethain.com/tags/eng-strategy-book/">#eng-strategy-book</a>), with initial chapters becoming available for early release with O&rsquo;Reilly next month. Fingers crossed, the book will be released in approximately October.</p> <p>Coming into Carta, I already had <em>much</em> of my core thesis about how to do engineering strategy, but Carta gave me a number of complex projects to practice on, and excellent people to practice with: thank you to Dan, Shawna and Vogl in particular! More on this project in the next few weeks.</p> </li> <li> <p><strong><a href="https://lethain.com/extract-the-kernel/">Extract the kernel</a></strong> &ndash; everywhere I&rsquo;ve ever worked, teams have struggled to understand executives. In every case, the executives <em>could be clearer</em>, but it&rsquo;s not particularly interesting to frame these problems as something the executives need to fix. Sure, it&rsquo;s true they could communicate better, but that framing makes you powerless, when you have a great deal of power to understand confusing communication. After all, even good communicators communicate poorly sometimes.</p> </li> <li> <p><strong>Meaningfully adopting LLMs</strong> &ndash; a year ago I wrote up <a href="https://lethain.com/mental-model-for-how-to-use-llms-in-products/">notes on adopting LLMs in your products</a>, based on what we&rsquo;d learned so far. Since then, we&rsquo;ve learned a lot more, and LLMs themselves have significantly improved. Carta has been using LLMs in real, business-impacting workflows for over a year.
That&rsquo;s continuing to expand into solving more complex internal workflows, and even more interestingly into creating net-new product capabilities that ought to roll out more widely in the next few months (currently released to small beta groups).</p> <p>This is the first major technology transition that I&rsquo;ve experienced in a senior leadership role (since I was earlier in my career when mobile internet transitioned from novelty to commodity). The immense pressure to adopt faster, combined with the immense uncertainty about whether it&rsquo;s a meaningful change or a brief blip, was a lot of fun, and <a href="https://lethain.com/llm-adoption-strategy/">was the inspiration for this strategy document around LLM adoption</a>.</p> </li> <li> <p><strong>Multi-dimensional tradeoffs</strong> &ndash; a phrase that Henry Ward uses frequently is that &ldquo;everyone&rsquo;s right, just at a different altitude.&rdquo; That idea resonates with me, and meshes well with the ideas of <a href="https://lethain.com/multi-dimensional-tradeoffs/">multi-dimensional tradeoffs</a> and <a href="https://lethain.com/layers-of-context/">layers of context</a> that I find improve decision making for folks in roles that require making numerous, complex decisions. Working at Carta, these ideas formalized from something I intuited into something I could explain clearly.</p> </li> <li> <p><strong><a href="https://lethain.com/navigators/">Navigators</a></strong> &ndash; I think our most successful engineering strategy at Carta was rolling out the Navigator program, which ensured senior-most engineers had context and direct representation, rather than relying exclusively on indirect representation via engineering management. Carta&rsquo;s engineering managers are excellent, but there&rsquo;s always something lost as discussions extend across layers. The Navigator program probably isn&rsquo;t a perfect fit for particularly small companies, but I think any company with more than 100-150 engineers would benefit from something along these lines.</p> </li> <li> <p><strong><a href="https://lethain.com/quality/">How to create software quality</a></strong> &ndash; I&rsquo;ve evolved my thinking about software quality quite a bit over time, but Carta was particularly helpful in distinguishing why some pieces of software are so hard to build despite having little-to-no scale from a data or concurrency perspective. These systems, which I label as &ldquo;high essential complexity&rdquo;, deserve more credit for their complexity, even if they have little in the way of complexity from infrastructure scaling.</p> </li> <li> <p><strong>Shaping eng org costs</strong> &ndash; a few years ago, I wrote about <a href="https://infraeng.dev/efficiency/">my mental model for managing infrastructure costs</a>. At Carta, I got to refine my thinking about engineering salary costs, with most of those ideas getting incorporated in the <a href="https://lethain.com/private-equity-strategy/">Navigating Private Equity ownership</a> strategy, and the <a href="https://lethain.com/engineering-cost-model/">eng org seniority mix model</a>.</p> <p>The three biggest levers are (1) &ldquo;N-1 backfills&rdquo;, (2) requiring a business rationale for promotions into senior-most levels, and (3) shifting hiring into cost-efficient hiring regions.
None of these are the sort of inspiring topics that excite folks, but they are all essential to the long-term stability of your organization.</p> </li> <li> <p><strong>Explaining engineering costs to boards/execs</strong> &ndash; similarly, I finally have a clear perspective on how to represent R&amp;D investment to boards in the same language that they speak in, <a href="https://lethain.com/public-company-comparables/">which I wrote up here</a>, and know how to do it quickly without relying on any manually curated internal datasets.</p> </li> <li> <p>Lots of smaller stuff, like the <a href="https://lethain.com/no-wrong-doors/">no wrong doors policy</a> for routing colleagues to appropriate channels, <a href="https://lethain.com/how-to-get-more-headcount/">how to request headcount</a> in a way that is convincing to executives, <a href="https://lethain.com/load-bearing-career-minded-act-two-rationales/">Act Two rationales</a> for how people&rsquo;s motivations evolve over the course of long careers (and my own personal career mission to <a href="https://lethain.com/advancing-the-industry/">advance the industry</a>), and why <a href="https://lethain.com/friction-vs-velocity/">friction isn&rsquo;t velocity</a> even though many folks act like it is.</p> </li> </ul> <p>I&rsquo;ve also learned quite a bit about venture capital, fund administration, cap tables, non-social network products, operating a multi-business line company, and various operating models. Figuring out how to sanitize those learnings to share the interesting tidbits without leaking internal details is a bit too painful, so I&rsquo;m omitting them for now. Maybe some will be shareable in four or five years after my context goes sufficiently stale.</p> <p>As a closing thought, I just want to say how much I&rsquo;ve appreciated the folks I&rsquo;ve gotten to work with at Carta. From the executive team (Ali, April, Charly, Davis, Henry, Jeff, Nicole, Vrushali) to my directs (Adi, Ciera, Dan, Dave, Jasmine, Javier, Jayesh, Karen, Madhuri, Sam, Shawna) to the navigators (there&rsquo;s a bunch of y&rsquo;all). The people truly are always the best part, and that was certainly true at Carta.</p> <h2><a href="https://lethain.com/systems-mcp/">systems-mcp: generate systems models via LLM</a></h2> <p><em>Sun, 11 May 2025</em></p> <p>Back in 2018, I wrote <a href="https://github.com/lethain/systems"><code>lethain/systems</code></a> as a domain-specific language for writing runnable systems models, and introduced it with <a href="https://lethain.com/modeling-hiring-funnel-systems/">this blog post modeling a hiring funnel</a>. While it&rsquo;s far from a perfect system, I&rsquo;ve gotten a lot of value out of it over the last seven years, because it allows me to maintain systems models in version control.</p> <p>As I&rsquo;ve been playing with writing <a href="https://modelcontextprotocol.io/introduction">Model Context Protocol</a> (MCP) servers, one I&rsquo;ve been thinking about frequently is a server to help write <code>systems</code> syntax, and I finally put that together in the <a href="https://github.com/lethain/systems-mcp/"><code>lethain/systems-mcp</code></a> repository.</p> <p>More detailed installation and usage instructions are in the GitHub repository, so I&rsquo;ll just share a couple of screenshots and comments here.
Let&rsquo;s start with the <code>load_systems_documentation</code> tool, which loads a copy of <code>lethain/systems/README.md</code> and <a href="https://raw.githubusercontent.com/lethain/systems-mcp/refs/heads/main/docs/examples.md">a file with example systems</a> into the context window.</p> <p><img src="https://lethain.com/static/blog/strategy/sys-mcp-load-prompt.png" alt="The image contains text and code snippets related to creating a systems model for user acquisition in a social network. It includes a brief description of systems modeling tools and a specification for modeling user flow from initial discovery to active contribution within the network."></p> <p>The biggest challenge of properly writing DSLs with an LLM is providing enough in-context learning (ICL) examples, and I think tools specifically designed to provide that context are a very interesting idea. Eventually I imagine there will be generalized tools for this, e.g. a search index of the best ICL examples for a wide variety of DSLs. Until then, my guess is that this sort of tool is particularly valuable.</p> <p>The second tool is <code>run_systems_model</code>, which passes the DSL (and an optional parameter for number of rounds) to the tool and then returns the result.</p> <p><img src="https://lethain.com/static/blog/strategy/sys-mcp-load-artifact.png" alt="The image shows an interactive chart and data selection interface for a social network user acquisition model, allowing the user to select and display various metrics such as user types and funnel stages. The chart visualizes data over simulation rounds, indicating trends among different user segments like AwareUsers, CasualUsers, PowerUsers, and EngagedUsers."></p> <p>I experimented with interface design here, initially trying to return a rendered chart of the results, but ultimately even multi-modal models are just much better at working with text than with images. This meant that I had the best results returning the output as JSON and then having the LLM build a tool for interacting with it.</p> <p>Altogether, a fun little experiment, and another confirmation in my mind that the most interesting part of designing MCPs today is deciding where to introduce and eliminate complexity from the LLM. Introduce too little and the tool lacks power; eliminate too little and the combination rarely works.</p>
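<p>To make the shape of the server concrete, here&rsquo;s a hedged sketch of those two tools using the official Python MCP SDK&rsquo;s <code>FastMCP</code> interface. The file paths and the stubbed model runner are assumptions; the real implementation lives in <code>lethain/systems-mcp</code>:</p> <pre><code># Hedged sketch of systems-mcp's two tools using the official Python MCP
# SDK's FastMCP interface. Paths and the stubbed runner are assumptions;
# see lethain/systems-mcp for the real implementation.
import json
from pathlib import Path
from mcp.server.fastmcp import FastMCP

mcp = FastMCP('systems-mcp')

@mcp.tool()
def load_systems_documentation():
    '''Load the systems README and example models into the context window.'''
    docs = Path(__file__).parent / 'docs'   # assumed layout
    return (docs / 'README.md').read_text() + '\n' + (docs / 'examples.md').read_text()

@mcp.tool()
def run_systems_model(spec, rounds=100):
    '''Run a systems DSL spec for N rounds and return results as JSON.'''
    # Stub: the real tool invokes the lethain/systems runner here and
    # returns its per-round results as JSON text for the LLM to analyze.
    return json.dumps({'rounds': rounds, 'spec_length': len(spec)})

if __name__ == '__main__':
    mcp.run()
</code></pre>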
<h2><a href="https://lethain.com/providing-feedback-on-writing/">How to provide feedback on documents.</a></h2> <p><em>Sun, 11 May 2025</em></p> <p>At Carta, we recently ran a reading group for <a href="https://www.amazon.com/Facilitating-Software-Architecture-Empowering-Architectural-ebook/dp/B0DMHGWCPN/"><em>Facilitating Software Architecture</em></a> by Andrew Harmel-Law. We already <em>loosely</em> followed the ideas of an architectural advice process (from this <a href="https://martinfowler.com/articles/scaling-architecture-conversationally.html">2021 article</a> by the same Andrew Harmel-Law), but in practice we found that internal tech spec and architecture decision record (ADR) authors tended to exclusively share their documents locally within their team rather than more widely.</p> <p>As we asked authors why they preferred sharing locally, the most common answer was that they got enough feedback from their team that they didn&rsquo;t want to pay the time overhead of sharing widely. The wider feedback wasn&rsquo;t necessarily bad or combative. It just wasn&rsquo;t good enough to compensate for the additional time it cost to process. This made sense from the authors&rsquo; perspectives, but didn&rsquo;t work well for me from the executive perspective, as I was seeing teams make misaligned decisions due to lack of cross-team communication.</p> <p>As one step in reducing the overhead of sharing documents widely, I wrote up and shared this recommended process for providing feedback on documents:</p> <ol> <li> <p>Before starting, remember that the goal of providing feedback on a document is to help the author. Optimizing for anything else, even if it&rsquo;s a worthy cause, discourages authors from sharing their future writing.</p> </li> <li> <p>Start by skimming the document to understand its structure and where various kinds of topics are addressed. Why? This helps avoid giving feedback on ways the document&rsquo;s actual structure diverges from how you imagined it would be structured. It also reduces questions about topics that are answered later in the document.</p> <p>Both of these sorts of feedback are a distraction during a discussion on a tech spec. In general, it&rsquo;s better to avoid them. If you notice an author making the same significant structural mistake over several ADRs, it&rsquo;s worth delivering that feedback separately.</p> </li> <li> <p>After skimming, reread the document, leaving comments with concerns. Each comment should include these details:</p> <ol> <li>What your suggested change or concern is</li> <li>Why you believe this is meaningful to address</li> <li>How important this seems (from ignorable nitpick to critical)</li> </ol> </li> <li> <p>If you find yourself leaving more than three or four issues, then you should either raise your threshold for commenting or you should schedule time with the individual to talk over the feedback. If the document is unreasonably weak, then it&rsquo;s appropriate to nudge their leadership to dig into what&rsquo;s happening on that team.</p> </li> </ol> <p>The most important idea behind these steps is that your goal as a feedback giver is to <em>help the document&rsquo;s author</em>. It is not to protect <em>your</em> team&rsquo;s strategy or platform. It is not to optimize for <em>your</em> goals. It&rsquo;s to help the author. This might feel wrong, but ultimately optimizing for anything else will lead to an environment where sharing widely is an irrational behavior.</p> <p>As a final aside, I think the user experience around commenting on documents is fundamentally wrong in most document editors. For example, Google Docs treats individual comments as first-order objects, similarly to how old version control systems like <a href="https://en.wikipedia.org/wiki/Concurrent_Versions_System">CVS</a> tracked changes to individual files without tracking an overall state of the project. Ultimately, you want to collect all your comments into a bundle, then review that bundle for consistency and duplicates, and then submit that bundle as commentary, but editors don&rsquo;t support that flow particularly well.</p> <h2><a href="https://lethain.com/public-company-comparables/">Public company comparables.</a></h2> <p><em>Sat, 03 May 2025</em></p> <p>A few years ago I wrote about <a href="https://lethain.com/profit-and-loss-statement/">reading a Profit &amp; Loss statement</a>, which is a foundational executive skill.
I also subsequently wrote about <a href="https://lethain.com/measuring-engineering-organizations/">ways to measure your engineering organization</a>. Despite having written those, I still spend a lot of time wondering about effective ways to represent an engineering organization to your board of directors.</p> <p>Over the past few years, one of the most useful charts I&rsquo;ve found for explaining an R&amp;D organization is a scatterplot of R&amp;D spend as a % of margin versus YoY growth of last twelve months (LTM) revenue. Unlike so many other measures, this is an explicit measure of your R&amp;D organization&rsquo;s value as an investment relative to peer organizations.</p> <p>Until recently, I assumed building this dataset required reading financial filings, but my strategic finance partner at Carta, Tyler Braslow, pointed out that you can get all of this data for the tech sector from <a href="https://meritechanalytics.com/">Meritech Analytics</a>, for free.</p> <p><img src="https://lethain.com/static/blog/2025/maritech-cover.png" alt="The image shows a webpage for Meritech Analytics, which offers a public software benchmarking solution with features for analyzing SaaS company metrics. It includes options to sign up or explore features, and provides tools like regression analysis for users."></p> <p>When you log in to Meritech, you&rsquo;re dropped into a table of <a href="https://app.meritechanalytics.com/comps-table">public company comparables for tech companies</a>. This is the exact dataset I&rsquo;d been looking for to build this chart.</p> <p><img src="https://lethain.com/static/blog/2025/maritech-sheet.png" alt="The image displays a table from the Meritech Software Index showing various metrics such as market cap, enterprise value, and percentage price changes for different companies like Adobe and AppLovin. It also includes averages (mean and median) for these metrics over specified periods."></p> <p>After logging in, you can then copy the contents of that table into a Google Sheets spreadsheet or Excel, or whatever you&rsquo;re most comfortable with.</p> <p><img src="https://lethain.com/static/blog/2025/maritech-in-gsheets.png" alt="The image shows a spreadsheet from Google Sheets titled &ldquo;PublicComps,&rdquo; displaying financial data for various companies, including columns for price changes, market capitalization, enterprise value, and several valuation metrics. The data is presented with calculations for mean and median values across the companies listed."></p> <p>Within that sheet, the columns you care about are:</p> <ul> <li>% YoY Growth LTM Rev (column <code>Q</code> for me) &ndash; how much &ldquo;last twelve months revenue&rdquo; has grown year over year, as a percentage</li> <li>% LTM Margins for R&amp;D (column <code>U</code> for me) &ndash; how much R&amp;D spend is as a percentage of last twelve months margin</li> <li>LTM Revenue (column <code>O</code> for me) &ndash; although I don&rsquo;t show this in the scatterplot, I find this one useful for debugging outlier values</li> </ul> <p>Hiding the other columns gives you a much simpler table.</p> <p><img src="https://lethain.com/static/blog/2025/maritech-hidden-sheet.png" alt="The image is a table listing companies along with their Last Twelve Months (LTM) Revenue, Year-over-Year (YoY) Growth percentage, and LTM R&amp;D margins. Notably, Adobe has the highest LTM Revenue at $21,505 with an 11% YoY growth and 14% R&amp;D margin."></p> <p>From that table, you&rsquo;re then able to build the scatterplot.</p>
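<p>Here&rsquo;s a hedged sketch of building that scatterplot with pandas and matplotlib. The column names are assumptions based on the sheet above, and it assumes the export keeps percentages as text like <code>11%</code>; adjust both to match your actual export:</p> <pre><code># Hedged sketch: plot R&amp;D spend as % of LTM margin against YoY LTM
# revenue growth from the exported Meritech table. Column names and the
# percentages-as-text format ('11%') are assumptions; adjust to your export.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('public_comps.csv')
x = df['% YoY Growth LTM Rev'].str.rstrip('%').astype(float)
y = df['% LTM Margins for R&amp;D'].str.rstrip('%').astype(float)

fig, ax = plt.subplots()
ax.scatter(x, y)
for name, xi, yi in zip(df['Company'], x, y):
    ax.annotate(name, (xi, yi), fontsize=7)
ax.set_xlabel('YoY growth of LTM revenue (%)')
ax.set_ylabel('R&amp;D spend as % of LTM margin')   # lower is better
plt.savefig('rd_scatter.png', dpi=150)
</code></pre>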
<p>Note that being &ldquo;higher&rdquo; on the chart means your R&amp;D spend as a percentage of LTM margin is higher, which is a bad thing. The best companies are to the bottom and to the right; the worst companies are to the top and to the left.</p> <p><img src="https://lethain.com/static/blog/2025/maritech-rd-spend-rev.png" alt="The scatter plot displays the relationship between R&amp;D Spend as a percentage of margin and Year-over-Year Revenue Growth, with data points clustered mostly in the 10%-30% range for both axes. Each point is labeled with a percentage representing the R&amp;D Spend as a percentage of margin."></p> <p>With this chart as a starting point, you can then plot your own company and show where you stand. You could also show how your company&rsquo;s position in the chart has evolved over time: hopefully improving. Finally, you might want to cull some of these data points to better determine your public company comparables. The Meritech dataset has 106 entries, but you might prefer a more representative thirty entries.</p>How to filter out old email from inboxhttps://lethain.com/filter-old-gmail-messages/Sat, 03 May 2025 06:00:00 -0700https://lethain.com/filter-old-gmail-messages/<p>Every few years I take a pass at reducing the chaos in my personal inboxes. There are simply too many emails to deal with, and that generally leads to me increasingly failing to follow up on important email.</p> <p>Up to this point, my strategy has largely been filtering out emails that I <em>never</em> want to read. But there&rsquo;s another category of email: messages I often want to read while they&rsquo;re fresh, but never after. For example, calendar reminders, <em>some</em> mailing lists, <em>some</em> newsletters, and so on.</p> <p>I decided to figure out how I could set up a system where I could mark a number of things as &ldquo;filter three days after receipt&rdquo;. This is a nice compromise, because I <em>do</em> want to see those things, but I <em>don&rsquo;t</em> want to have to remember to archive them after the fact.</p> <p>You can write a search query for this in Gmail:</p> <pre><code>from:(calendar-notification@google.com) older_than:3d </code></pre> <p>However, if you try to create a Gmail filter using that query, it converts the <code>older_than:3d</code> into a fixed point in time rather than doing what you want.</p> <p><img src="https://lethain.com/static/blog/2025/gmail-filter.png" alt="The image shows a Gmail search filter setup, looking for emails from &ldquo;calendar-notification@google.com&rdquo; that are older than three days and have a size greater than a specified amount, dated within one day of May 3, 2025. Options to filter by attachments and exclude chats are available, with &ldquo;Create filter&rdquo; and &ldquo;Search&rdquo; buttons at the bottom."></p> <p>It seems that this is unsolvable within Gmail itself. However, some quick searching suggested it was possible to solve this with a Google Apps Script, so I asked <a href="https://claude.ai/share/c87e3409-46f1-4389-9b60-dd3f1386fed3">Claude to write the script for me</a>.</p> <p>Following those instructions, I went to <a href="https://script.google.com/">script.google.com</a>, which I had not visited in many years. I edited the generated script from Claude to use the tag &ldquo;TempMsg&rdquo;, to archive messages (originally it had those calls commented out), and to limit itself to the first fifty items matching that tag.
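The result is shaped roughly like the following sketch; this is my reconstruction from that description, not the gist&rsquo;s exact code.</p> <pre><code>function filterOldGmailMessages() {
  // Sketch: archive threads labeled &quot;TempMsg&quot; whose most recent
  // message is more than three days old, fifty threads per run.
  var label = GmailApp.getUserLabelByName('TempMsg');
  if (!label) {
    throw new Error('Label &quot;TempMsg&quot; not found; create it in Gmail first.');
  }
  var cutoff = new Date(Date.now() - 3 * 24 * 60 * 60 * 1000);
  var threads = label.getThreads(0, 50);
  for (var i = 0; i &lt; threads.length; i++) {
    if (threads[i].getLastMessageDate() &lt; cutoff) {
      threads[i].moveToArchive();
    }
  }
}
</code></pre>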
<p>You can find the full code <a href="https://gist.github.com/lethain/b3766a24c77925f54b38345b9dd7f544">in this gist</a>.</p> <p><img src="https://lethain.com/static/blog/2025/gmail-script-setup.png" alt="This image provides instructions on setting up a Google Apps Script to filter Gmail messages with a tag older than three days. It includes steps for opening Gmail settings and creating a new script project."></p> <p>I attempted to run this as-is, and got an error message that I needed to grant permissions. That requires three clicks within the Apps Script UI.</p> <p><img src="https://lethain.com/static/blog/2025/gmail-permissions.png" alt="The image shows a Google Apps Script interface where a user is adding the Gmail API service to a project. Red arrows highlight the action of selecting the Gmail API from a list of services and the &ldquo;Add&rdquo; button."></p> <p>This also requires approving the somewhat scary message that I trust myself.</p> <p><img src="https://lethain.com/static/blog/2025/gmail-auth.png" alt="The image displays a warning that Google hasn&rsquo;t verified a specific app, advising users not to use it until the developer verifies it with Google due to its request for sensitive account information. Users have the option to return to safety or proceed with caution if they trust the developer."></p> <p>From there I tried to run this script, and it failed because the <code>TempMsg</code> tag doesn&rsquo;t exist in my inbox.</p> <p><img src="https://lethain.com/static/blog/2025/gmail-run-failed.png" alt="The image shows a script written in JavaScript for Google Apps Script, designed to filter Gmail messages older than three days with a specific label. The execution log indicates that the label &ldquo;TempMsg&rdquo; was not found during the script&rsquo;s execution."></p> <p>So I went ahead and created that tag, and set up some filters to assign that tag to certain email senders.</p> <p><img src="https://lethain.com/static/blog/2025/gmail-add-filter.png" alt="This image shows the Gmail filter creation settings page, where emails from &ldquo;calendar-notification@google.com&rdquo; are set to be labeled as &ldquo;TempMsg.&rdquo; Options for other actions like skipping the inbox, marking as read, or forwarding are available but not selected."></p> <p>After that, I was able to run the script and it worked properly. Note that for a bit I convinced myself it was failing, because it doesn&rsquo;t remove messages from the past three days. That is exactly how it&rsquo;s supposed to work, but I would run it, see messages with the tag still there, and think it was failing. Whoops.</p> <p>After convincing myself it was working, I added a periodic trigger to run this.</p> <p><img src="https://lethain.com/static/blog/2025/gmail-add-trigger.png" alt="This image shows a Google Cloud Platform interface for adding a time-driven trigger to run the function &ldquo;filterOldGmailMessages&rdquo; daily between midnight and 1am, with immediate failure notifications."></p> <p>I now have this running on a daily basis, and it&rsquo;s given me a nice new tool for managing my email a bit better. After verifying it, I also used the tag manager to &ldquo;hide&rdquo; this tag in the inbox, so I don&rsquo;t have to see the <code>TempMsg</code> tag everywhere. If I ever need to debug things, I can always make it visible again.</p>
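<p>As a final aside, the daily trigger can also be installed from code instead of through the Triggers UI. This is, again, a sketch of the generic <code>ScriptApp</code> approach rather than the exact setup I used:</p> <pre><code>function createDailyTrigger() {
  // Run once by hand: installs a time-based trigger that calls
  // filterOldGmailMessages() daily, in the midnight-to-1am window.
  ScriptApp.newTrigger('filterOldGmailMessages')
      .timeBased()
      .everyDays(1)
      .atHour(0)
      .create();
}
</code></pre>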