Other ideas for using this book with an LLM
By the time you reach this final chapter of the AI Companion to Crafting Engineering Strategy, you’ve co-written and revised a strategy with an LLM. We’ve also used both in-context learning examples and Model Context Protocol servers to prime an LLM to work on complex, domain-specific problems such as creating a systems model or Wardley map.
The two biggest remaining questions to engage with at this point are:
- How should you actually use the LLM-optimized edition of Crafting Engineering Strategy going forward?
- Is the concept of an LLM-optimized book actually a valuable one?
These are the key questions for evaluating this project, and their answers will determine whether this format represents a meaningful advance in how books are released, or merely a hacky concept to be forgotten.
This is a draft chapter from The AI Companion to Crafting Engineering Strategy, which discusses how to use Crafting Engineering Strategy with an LLM to draft, refine, and improve engineering strategies.
Using the LLM-optimized book in practice
One of the gifts of writing down what I’ve learned over the past two decades is that I can load that writing into the context window of any LLM and have that context improve the model’s generation. At this point, I do all my LLM work in a context-rich project to improve the generated responses, and for strategy-related topics, that means including this book.
That approach, ambiently including this book in your context window when you do strategy work, is my best suggestion for getting long-term value out of it. That’s in addition, certainly, to utilizing the techniques and examples from this companion text when you are actively developing your next engineering strategy.
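As a minimal sketch of that ambient approach, assuming you have the LLM-optimized edition saved locally as Markdown and are using Anthropic’s Python SDK (the file path, model name, and prompts below are illustrative placeholders, not part of the book), it might look something like this:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in your environment

# Illustrative path: wherever your copy of the LLM-optimized edition lives.
with open("crafting-engineering-strategy-llm.md") as f:
    book_text = f.read()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whichever model you prefer
    max_tokens=2000,
    # The book rides along as system context, so answers are grounded in its frameworks.
    system=(
        "You are an engineering strategy advisor. Ground your answers in the "
        "following book:\n\n" + book_text
    ),
    messages=[
        {
            "role": "user",
            "content": (
                "Draft a diagnosis section for a strategy about reducing the load "
                "of migration requests on our platform team."
            ),
        }
    ],
)
print(response.content[0].text)
```

The same pattern works in any tool that supports persistent project context: attach the book once, then ask strategy questions against it as they come up.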
A few other experiments that are worth trying in your organization are:
- Use it as a strategy coach or mentor to guide your strategy practice, along the lines described in How to get better at strategy?
- Rewrite your existing strategies into consistent, readable formats
- Identify recurring topics and themes in Architecture Decision Records that could be moved into a durable strategy rather than frequently rehashed (sketched in code below)
- Summarize your organization’s current strategy altitudes, to facilitate a structured discussion about which kinds of decisions are being made where
Depending on your current writing culture, some of these will work better than others. That said, as you test them out, I’m confident you’ll find more opportunities as well.
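As an example of the Architecture Decision Record experiment, here is a rough sketch under the assumption that your ADRs are Markdown files in a docs/adr directory and that you’re again using Anthropic’s Python SDK; the glob path, model name, and prompt are placeholders to adapt to your organization:

```python
import glob
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in your environment

# Illustrative location for ADRs; adjust the glob to your repository layout.
adr_sections = []
for path in sorted(glob.glob("docs/adr/*.md")):
    with open(path) as f:
        adr_sections.append(f"## {path}\n\n{f.read()}")

prompt = (
    "Below are our Architecture Decision Records. Identify the topics and debates "
    "that recur across them, and recommend which should be promoted into a durable "
    "engineering strategy rather than being rehashed decision by decision.\n\n"
    + "\n\n".join(adr_sections)
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whichever model you prefer
    max_tokens=1500,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```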
Are LLM-optimized books a gimmick?
Many authors are concerned about LLMs stealing their work and their livelihood. This is not an abstract concern but a very real one: we already see search traffic decreasing for many websites as folks get answers directly from LLMs that have been trained on those sites’ content but don’t compensate the authors for its use.
As web search became pervasive in the 1990s, there were similar debates about fair use. Eventually the courts decided the exact parameters of what fair use means in the context of web crawlers, and the debate faded as websites learned to take advantage of the reach created by search engines. The old status quo was not preserved, and the newspaper industry was irrevocably changed, but a new status quo was eventually established. Today, LLMs are in a similar moment, where authors need to discover the opportunity that LLMs represent, while also grappling with the reality that they may significantly change how books are read and sold.
Working with O’Reilly to release the LLM-optimized edition of Crafting Engineering Strategy, along with this AI Companion, is a joint experiment in finding what might work. The market for LLM-optimized books is essentially non-existent today, but working with books inside an LLM is a problem that some of the most thoughtful people I know have mentioned struggling with, and one that I think this approach solves well.
The most extreme version of this future is one where people’s entire collection of books is stored in a personal library that is accessible to LLM agents running on their behalf. Another vision is that reading behavior doesn’t change much, with books still mostly read directly by humans. Although I’m not sure which of those we’ll be living in next year or next decade, my best guess is a bit of both.
While this approach has its imperfections, I’m personally finding it useful today, and I hope it makes it easier to write effective engineering strategies for your organization.