What can agents actually do?
There’s a lot of excitement about what AI (specifically the latest wave of LLM-anchored AI) can do, and how AI-first companies are different from the prior generations of companies. There are a lot of important and real opportunities at hand, but I find that many of these conversations occur at such an abstract altitude that they border on meaningless. Sort of like saying that your company could be much better if you merely adopted more software. That’s certainly true, but it’s not a particularly helpful claim.
This post is an attempt to concisely summarize how AI agents work, apply that summary to a handful of real-world use cases for AI, and to generally make the case that agents are a multiplier on the quality of your software and system design. If your software or systems are poorly designed, agents will only cause harm. If there’s any meaningful definition of an AI-first company, it must be a company whose software and systems are designed with immaculate attention to detail.
By the end of this writeup, my hope is that you’ll be well-armed to have a concrete discussion about how LLMs and agents could change the shape of your company, and to avoid getting caught up in the needlessly abstract discussions that are often taking place today.
How do agents work?
At its core, using an LLM is an API call that includes a prompt.
For example, you might call Anthropic’s /v1/message
with a prompt: How should I adopt LLMs in my company?
That prompt is used to fill the LLM’s context window, which conditions the model to
generate certain kinds of responses.
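Concretely, such a call is just a few lines of code. Here’s a minimal sketch using Anthropic’s Python SDK; the model identifier is an assumption, so substitute whichever model you actually use.

```python
# Minimal sketch of "using an LLM is an API call that includes a prompt",
# using Anthropic's Python SDK. The model identifier below is an assumption;
# substitute whichever model you actually use.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model identifier
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "How should I adopt LLMs in my company?"}
    ],
)

# The generated text, conditioned on the context window we supplied.
print(response.content[0].text)
```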
This is the first important thing that agents can do: use an LLM to evaluate a context window and get a result.
Prompt engineering, or context engineering as it’s being called now, is deciding what to put into the context window to best generate the responses you’re looking for. For example, In-Context Learning (ICL) is one form of context engineering, where you supply a bunch of similar examples before asking a question. If I want to determine if a transaction is fraudulent, then I might supply a bunch of prior transactions and whether they were, or were not, fraudulent as ICL examples. Those examples make generating the correct answer more likely.
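As a concrete sketch of ICL, here’s one way to assemble that fraud-classification context window; the transactions, labels, and prompt format are invented purely for illustration.

```python
# Sketch of in-context learning (ICL): supply labeled examples in the context
# window so the model is more likely to classify the new transaction correctly.
# The transactions and labels below are invented for illustration.
labeled_examples = [
    ("$12.50 coffee shop, card present, home city", "not fraudulent"),
    ("$4,800 wire transfer, new payee, 3am local time", "fraudulent"),
    ("$95.00 grocery store, card present, home city", "not fraudulent"),
    ("$1.00 online charge followed by $999.00 charge, foreign merchant", "fraudulent"),
]

new_transaction = "$850.00 electronics purchase, card not present, foreign IP"

prompt_lines = ["Classify each transaction as fraudulent or not fraudulent.", ""]
for transaction, label in labeled_examples:
    prompt_lines.append(f"Transaction: {transaction}")
    prompt_lines.append(f"Label: {label}")
    prompt_lines.append("")
prompt_lines.append(f"Transaction: {new_transaction}")
prompt_lines.append("Label:")

prompt = "\n".join(prompt_lines)
# `prompt` is then sent as the context window, exactly as in the earlier API call.
```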
However, composing the perfect context window is very time intensive, and benefits from techniques like metaprompting to improve your context. Indeed, the human (or automation) creating the initial context might not know enough to provide the relevant context. For example, if you prompt, “Who is going to become the next mayor of New York City?”, then you are in no position to include the answer to that question in your prompt. To do that, you would need to already know the answer, which is why you’re asking the question to begin with!
This is where we see model chat experiences from OpenAI and Anthropic use web search to pull in context that you likely don’t have. If you ask a question about the new mayor of New York, they use a tool to retrieve web search results, then add the content of those searches to your context window.
This is the second important thing that agents can do: use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool’s response.
However, it’s important to clarify how “tool usage” actually works. An LLM does not actually call a tool. (You can skim OpenAI’s function calling documentation if you want to see a specific real-world example of this.) Instead there is a five-step process to calling tools that can be a bit counter-intuitive:
- The program designer that calls the LLM API must also define a set of tools that the LLM is allowed to suggest using.
- Every API call to the LLM includes that defined set of tools as options that the LLM is allowed to recommend.
- The response from the API call with the defined functions is either:
  - generated text, as any other call to an LLM might provide, or
  - a recommendation to call a specific tool with a specific set of parameters. For example, an LLM that knows about a `get_weather` tool, when prompted about the weather in Paris, might return this response: `[{ "type": "function_call", "name": "get_weather", "arguments": "{\"location\":\"Paris, France\"}" }]`
- The program that calls the LLM API then decides whether and how to honor that requested tool use. The program might decide to reject the requested tool because it’s been used too frequently recently (e.g. rate limiting), it might check if the associated user has permission to use the tool (e.g. maybe it’s a premium only tool), it might check if the parameters match the user’s role-based permissions as well (e.g. the user can check weather, but only admin users are allowed to check weather in France).
- If the program does decide to call the tool, it invokes the tool, then calls the LLM API with the output of the tool appended to the prior call’s context window.
The important thing about this loop is that the LLM itself can still only do one interesting thing: taking a context window and returning generated text. It is the broader program, which we can start to call an agent at this point, that calls tools and sends the tools’ output to the LLM to generate more context.
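To make that loop concrete, here’s a hedged sketch of the five-step process using OpenAI’s function calling API and the get_weather example from above. The model identifier and the weather lookup are assumptions for illustration; the shape of the loop is the point.

```python
# Sketch of the tool-calling loop: the LLM only ever returns text or a tool
# recommendation; the surrounding program decides whether to call the tool and
# feeds the result back into the context window. Model name is an assumption.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location.",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

def get_weather(location: str) -> str:
    # Hypothetical tool implementation; in practice this would call a weather API.
    return f"18°C and overcast in {location}"

messages = [{"role": "user", "content": "What's the weather in Paris, France?"}]

while True:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model identifier
        messages=messages,
        tools=tools,
    )
    message = response.choices[0].message

    if not message.tool_calls:
        # The model returned generated text rather than a tool recommendation.
        print(message.content)
        break

    # The model recommended one or more tool calls.
    messages.append(message)
    for tool_call in message.tool_calls:
        # The program, not the LLM, decides whether to honor the request
        # (rate limits, permissions, etc. would be checked here).
        arguments = json.loads(tool_call.function.arguments)
        result = get_weather(**arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": result,
        })
    # Loop: call the LLM again with the tool output appended to the context.
```

Note that the program appends both the model’s tool recommendation and the tool’s output back onto the message list, which is exactly the “enrich the context window” step described above.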
What’s magical is that LLMs plus tools start to really improve how you can generate context windows. Instead of needing a very well-defined initial context window, you can use tools to inject relevant context and improve on the initial context.
This brings us to the third important thing that agents can do: they manage flow control for tool usage. Let’s think through a couple of different scenarios:
- Flow control via rules applies concrete, predefined rules about how tools can be used (see the sketch after this list). Some examples:
  - it might only allow a given tool to be used once in a given workflow (or enforce a per-user usage limit on a tool, etc)
  - it might require that a human-in-the-loop approve parameters over a certain value (e.g. refunds of more than $100 require human approval)
  - it might run a generated Python program and return the output to analyze a dataset (or provide error messages if it fails)
  - it might apply a permission system to tool use, restricting who can use which tools and which parameters a given user is able to use (e.g. you can only retrieve your own personal data)
  - it might only allow a tool that escalates to a human representative to be called after five back-and-forths with the LLM agent
- Flow control via statistics can use statistics to identify and act on abnormal behavior:
  - if the size of a refund is higher than 99% of other refunds for the order size, you might want to escalate to a human
  - if a user has used a tool more than 99% of other users have, then you might want to reject usage for the rest of the day
  - if tool parameters closely resemble prior parameters that required escalation to a human agent, it might escalate to a human representative preemptively
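As a sketch of what rule-based flow control can look like inside the agent, here’s a guard the program might run before honoring any tool call the LLM recommends. The function names, limits, and permission checks are illustrative assumptions, not a prescribed design.

```python
# Sketch of rule-based flow control: the agent checks concrete rules before
# honoring a tool call that the LLM recommended. The names, limits, and
# permission model below are illustrative assumptions.
from dataclasses import dataclass, field

REFUND_APPROVAL_THRESHOLD = 100.00   # refunds above this require a human
MAX_TOOL_USES_PER_WORKFLOW = 1
ESCALATION_MIN_TURNS = 5             # escalate-to-human only after five back-and-forths

@dataclass
class WorkflowState:
    user_id: str
    turns: int = 0
    tool_use_counts: dict = field(default_factory=dict)

def allow_tool_call(tool_name: str, arguments: dict, state: WorkflowState):
    """Return (allowed, reason); the agent only invokes the tool if allowed."""
    # Per-workflow usage limit.
    if state.tool_use_counts.get(tool_name, 0) >= MAX_TOOL_USES_PER_WORKFLOW:
        return False, "tool already used in this workflow"

    # Permission system: users may only retrieve their own personal data.
    if tool_name == "get_personal_data" and arguments.get("user_id") != state.user_id:
        return False, "users may only retrieve their own data"

    # Human-in-the-loop approval for high-value refunds.
    if tool_name == "refund_purchase" and arguments.get("amount", 0) > REFUND_APPROVAL_THRESHOLD:
        return False, "refund exceeds threshold; route to human approval"

    # Escalation tool is only available after enough back-and-forth.
    if tool_name == "escalate_to_human" and state.turns < ESCALATION_MIN_TURNS:
        return False, "too early to escalate to a human"

    return True, "ok"
```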
LLMs themselves absolutely cannot be trusted. Anytime you rely on an LLM to enforce something important, you will fail. Using agents to manage flow control is the mechanism that makes it possible to build safe, reliable systems with LLMs. Whenever you find yourself dealing with an unreliable LLM-based system, you can always find a way to shift the complexity to a tool to avoid that issue. As an example, if you want to do algebra with an LLM, the solution is not asking the LLM to directly perform algebra, but instead providing a tool capable of algebra to the LLM, and then relying on the LLM to call that tool with the proper parameters.
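For that algebra example, the tool can be very small. Here’s a sketch that delegates the actual math to sympy, leaving the LLM responsible only for recommending the tool and its parameters; the tool name and its interface are assumptions.

```python
# Sketch of shifting complexity into a tool: the LLM never does algebra itself;
# it only recommends calling solve_equation with an equation string.
# The tool name and its interface are illustrative assumptions.
import sympy

def solve_equation(equation: str, variable: str = "x") -> str:
    """Solve an equation like '2*x + 6 = 0' for the given variable."""
    left, right = equation.split("=")
    symbol = sympy.Symbol(variable)
    solutions = sympy.solve(sympy.sympify(left) - sympy.sympify(right), symbol)
    return ", ".join(str(s) for s in solutions)

# Example: the agent honors a recommended call like
#   {"name": "solve_equation", "arguments": {"equation": "2*x + 6 = 0"}}
print(solve_equation("2*x + 6 = 0"))  # -> -3
```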
At this point, there is one final important thing that agents do: they are software programs. This means they can do anything software can do to build better context windows to pass on to LLMs for generation. This is an infinite category of tasks, but generally these include:
- Building general context to add to the context window, sometimes thought of as maintaining memory
- Initiating a workflow based on an incoming ticket in a ticket tracker, customer support system, etc
- Periodically initiating workflows at a certain time, such as hourly review of incoming tickets
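For example, the periodic case can be as mundane as a timer loop (or a cron job) that kicks off the agent workflow on a schedule. In this sketch, fetch_open_tickets and run_triage_agent are hypothetical stand-ins for your ticket system and your agent.

```python
# Sketch of an agent periodically initiating a workflow: ordinary software
# (a timer loop here; cron or a job scheduler works just as well) decides when
# the LLM gets involved. fetch_open_tickets and run_triage_agent are
# hypothetical stand-ins for your ticket tracker and your agent workflow.
import time

REVIEW_INTERVAL_SECONDS = 60 * 60  # hourly review of incoming tickets

def fetch_open_tickets() -> list[str]:
    # Hypothetical: pull unreviewed tickets from your tracker's API.
    return []

def run_triage_agent(ticket: str) -> None:
    # Hypothetical: the agent workflow described elsewhere in this post.
    print(f"Triaging ticket: {ticket}")

if __name__ == "__main__":
    while True:
        for ticket in fetch_open_tickets():
            run_triage_agent(ticket)
        time.sleep(REVIEW_INTERVAL_SECONDS)
```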
Alright, we’ve now summarized what AI agents can do down to four general capabilities. Recapping a bit, those capabilities are:
- Use an LLM to evaluate a context window and get a result
- Use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool’s response
- Manage flow control for tool usage via rules or statistical analysis
- Agents are software programs, and can do anything other software programs do
Armed with these four capabilities, we’ll be able to think about the ways we can, and cannot, apply AI agents to a number of opportunities.
Use Case 1: Customer Support Agent
One of the first scenarios that people often talk about for deploying AI agents is customer support, so let’s start there. A typical customer support process will have multiple tiers of agents who handle increasingly complex customer problems. Let’s set a goal of taking over the easiest tier first, then moving up tiers over time as we show impact.
Our approach might be:
- Allow tickets (or support chats) to flow into an AI agent
- Provide the agent with a variety of tools, such as:
  - Retrieving information about the user: recent customer support tickets, account history, account state, and so on
  - Escalating to the next tier of customer support
  - Refunding a purchase (almost certainly implemented as “refund this purchase,” referencing a specific purchase made by the user, rather than “refund this amount,” to prevent scenarios where the agent can be fooled into refunding too much; see the sketch after this list)
  - Closing the user account on request
- Include customer support guidelines in the context window that describe common customer problems and map those problems to the specific tools that should be used to solve them
- Flow control rules that ensure all conversations escalate to a human if they aren’t resolved within a certain time period or number of back-and-forth exchanges, if the agent runs into an error, and so on. This flow control should be both rules-based and statistics-based, ensuring that gaps in your rules are neither exploitable nor create a terrible customer experience
- Review agent-customer interactions for quality control, making improvements to the support guidelines provided to AI agents. Initially you would want to review every interaction, then move to interactions that lead to unusual outcomes (e.g. escalations to human) and some degree of random sampling
- Review hourly, then daily, and then weekly metrics of agent performance
- Based on your learnings from the metric reviews, you should set baselines for alerts that require a more immediate response. For example, if a new topic comes up frequently, it probably means a serious regression in your product or process, and it requires immediate review rather than periodic review.
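As one concrete example of the tooling above, here’s a sketch of a refund tool keyed by purchase_id rather than by amount, so the refund value is looked up server-side and the agent can’t be talked into refunding more than the purchase was worth. The schema, data, and helpers are illustrative assumptions.

```python
# Sketch of "refund purchase" exposed as a tool keyed by purchase_id, not amount.
# The agent can only reference a purchase the user actually made; the refund
# amount is looked up server-side, so the agent can't be fooled into refunding
# too much. The schema, data, and helpers below are illustrative assumptions.
refund_purchase_tool = {
    "type": "function",
    "function": {
        "name": "refund_purchase",
        "description": "Refund a specific purchase made by the current user.",
        "parameters": {
            "type": "object",
            "properties": {
                "purchase_id": {"type": "string"},
                "reason": {"type": "string"},
            },
            "required": ["purchase_id", "reason"],
        },
    },
}

PURCHASES = {  # hypothetical stand-in for your billing system
    ("user_123", "purch_789"): {"amount": "$42.00"},
}

def refund_purchase(user_id: str, purchase_id: str, reason: str) -> str:
    purchase = PURCHASES.get((user_id, purchase_id))
    if purchase is None:
        return "No such purchase for this user."
    # Hypothetical: call your payment processor with the recorded amount here.
    return f"Refunded {purchase['amount']} for purchase {purchase_id} ({reason})."
```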
Note that even when you’ve “moved customer support to AI agents,” you still have:
- a tier of human agents dealing with the most complex calls
- humans reviewing the periodic performance statistics
- humans performing quality control on AI agent-customer interactions
You absolutely can replace each of those downstream steps (reviewing performance statistics, etc) with its own AI agent, but doing that requires going through the development of an AI product for each of those flows. There is a recursive process here, where over time you can eliminate many human components of your business, in exchange for increased fragility as you have more tiers of complexity. The most interesting part of complex systems isn’t how they work, it’s how they fail, and agent-driven systems will fail occasionally, as all systems do, very much including human-driven ones.
Applied with care, the above series of actions will work successfully. However, it’s important to recognize that this is building an entire software pipeline, and then learning to operate that software pipeline in production. These are both very doable things, but they are meaningful work, turning customer support leadership into product managers and requiring an engineering team building and operating the customer support agent.
Use Case 2: Triaging incoming bug reports
When an incident is raised within your company, or when you receive a bug report, the first problem of the day is determining how severe the issue might be. If it’s potentially quite severe, then you want on-call engineers immediately investigating; if it’s certainly not severe, then you want to triage it in a less urgent process of some sort. It’s interesting to think about how an AI agent might support this triaging workflow.
The process might work as follows:
- Pipe all created incidents and all created tickets to this agent for review.
- Expose these tools to the agent:
- Open an incident
- Retrieve current incidents
- Retrieve recently created tickets
- Retrieve production metrics
- Retrieve deployment logs
- Retrieve feature flag change logs
- Toggle known-safe feature flags
- Propose merging an incident with another for human approval
- Propose merging a ticket with another ticket for human approval
- Use redundant LLM providers for critical workflows. If the LLM provider’s API is unavailable, retry three times over ten seconds, then resort to a second model provider (e.g. Anthropic first; if unavailable, try OpenAI), and then finally create an incident that the triaging mechanism is unavailable (see the sketch after this list). For critical workflows, we can’t simply assume the APIs will be available, because in practice all major providers seem to have monthly availability issues.
- Merge duplicates. When a ticket comes in, first check ongoing incidents and recently created tickets for potential duplicates. If there is a probable duplicate, suggest merging the ticket or incident with the existing issue and exit the workflow.
- Assess impact. If production statistics are severely impacted, or if there is a new kind of error in production, then this is likely an issue that merits quick human review. If it’s high priority, open an incident. If it’s low priority, create a ticket.
- Propose cause. Now that the incident has been sized, switch to analyzing the potential causes of the incident. Look at the code commits in recent deploys and suggest potential issues that might have caused the current error. In some cases this will be obvious (e.g. spiking errors with a traceback of a line of code that changed recently), and in other cases it will only be proximity in time.
- Apply known-safe feature flags. Establish an allow list of known safe feature flags that the system is allowed to activate itself. For example, if there are expensive features that are safe to disable, it could be allowed to disable them, e.g. restricting paginating through deeper search results when under load might be a reasonable tradeoff between stability and user experience.
- Defer to humans. At this point, rely on humans to drive incident or ticket remediation to completion.
- Draft initial incident report. If an incident was opened, the agent should draft an initial incident report including the timeline, related changes, and the human activities taken over the course of the incident. This report should then be finalized by the human involved in the incident.
- Run incident review. Your existing incident review process should take the incident review and determine how to modify your systems, including the triaging agent, to increase reliability over time.
- Safeguard to reenable feature flags. Since we now have an agent disabling feature flags, we also need a periodic check (agent-driven or otherwise) that reenables the “known safe” feature flags once there’s no ongoing incident, so they aren’t accidentally left disabled for long periods of time.
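As a sketch of the redundant-provider step, here’s one way the retry-then-fallback logic might look, assuming Anthropic as the primary provider and OpenAI as the secondary. The model identifiers and the open_incident helper are placeholders.

```python
# Sketch of redundant LLM providers for a critical workflow: retry the primary
# provider three times over roughly ten seconds, fall back to a secondary
# provider, and finally open an incident if both are unavailable. Model
# identifiers and open_incident are placeholders/assumptions.
import time

import anthropic
import openai

def open_incident(title: str) -> None:
    # Hypothetical stand-in for your incident tooling.
    print(f"INCIDENT: {title}")

def complete_with_fallback(prompt: str) -> str | None:
    # Primary: Anthropic, three attempts spread over ~10 seconds.
    primary = anthropic.Anthropic()
    for attempt in range(3):
        try:
            response = primary.messages.create(
                model="claude-sonnet-4-20250514",  # assumed model identifier
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.content[0].text
        except anthropic.APIError:
            if attempt < 2:
                time.sleep(5)

    # Secondary: OpenAI.
    try:
        fallback = openai.OpenAI()
        response = fallback.chat.completions.create(
            model="gpt-4o",  # assumed model identifier
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
    except openai.OpenAIError:
        open_incident("Triaging agent: all LLM providers unavailable")
        return None
```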
This is another AI agent that will absolutely work as long as you treat it as a software product. In this case, engineering is likely the product owner, but it will still require thoughtful iteration to improve its behavior over time. Some of the ongoing validation to make this flow work includes:
- The role of humans in incident response and review will remain significant, merely aided by this agent. This is especially true in the review process, where an agent can assist but cannot replace the review itself, because the review is about actively learning what to change based on the incident.
  You can make a reasonable argument that an agent could decide what to change and then hand that specification off to another agent to implement it. Even today, you can easily imagine low-risk changes (e.g. a copy change) being automatically added to a ticket for human approval.
  Doing this for more complex or riskier changes is possible, but requires an extraordinary degree of care and nuance: it is the polar opposite of the idea of “just add agents and things get easy.” Instead, enabling that sort of automation will require immense care in constraining changes to systems that cannot expose unsafe behavior. For example, one startup I know has represented its domain logic in a domain-specific language (DSL) that can be safely generated by an LLM, and is able to represent many customer-specific features solely through that DSL.
- Expanding the list of known-safe feature flags to make incidents remediable. To do this widely will require enforcing very specific requirements for how software is developed. Even doing this narrowly will require changes to ensure the known-safe feature flags remain safe as software is developed.
- Periodically reviewing incident statistics over time to ensure mean-time-to-resolution (MTTR) is decreasing. If the agent is truly working, this should decrease. If the agent isn’t driving a reduction in MTTR, then something is rotten in the details of the implementation.
Even a very effective agent doesn’t relieve the responsibility of careful system design. Rather, agents are a multiplier on the quality of your system design: done well, agents can make you significantly more effective. Done poorly, they’ll only amplify your problems even more widely.
Do AI Agents Represent the Entirety of This Generation of AI?
If you accept my definition that AI agents are any combination of LLMs and software, then I think it’s true that there’s not much this generation of AI can express that doesn’t fit this definition. I’d readily accept the argument that LLM is too narrow a term, and that perhaps foundational model would be a better term. My sense is that this is a place where frontier definitions and colloquial usage have deviated a bit.
Closing thoughts
LLMs and agents are powerful mechanisms. I think they will truly change how products are designed and how products work. An entire generation of software makers, and company executives, are in the midst of learning how these tools work.
For everything that AI agents can do, there are equally important things they cannot. They cannot make restoring a database faster than the network bandwidth supports. Access to text-based judgment does not create missing tools, nor does it solve access controls, immediately make absent documentation exist, or otherwise solve the many real systems problems that exist in your business today. It is only the combination of agents, great system design, and great software design that will make agents truly shine.
As it’s always been, software isn’t magic. Software is very logical. However, what software can accomplish is magical, if we use it effectively.