In a surprising twist, the rise of AI Large Language Models (LLMs) is driving solution architecture toward greater simplicity, not complexity. This counterintuitive trend, coupled with sensible choices about which actions are off-loaded to LLMs, could reshape how we build software solutions.
The Traditional Approach: Building Complexity to Handle Complexity
For decades, software architects faced with complex requirements would typically respond by building increasingly sophisticated tools and systems. Want to handle more complex calendar scheduling patterns? Build more complex calendar functionality. Need more nuanced document processing? Create a more feature-rich document processing system.
This approach led to bloated systems, complicated integration, and tools that required extensive documentation and training to use effectively. It also meant that adding new capabilities often required significant development effort and introduced new potential points of failure.
The LLM Revolution: Simplicity Through Intelligence
The emergence of LLMs (such as OpenAI’s ChatGPT and Anthropic’s Claude) is fundamentally changing this paradigm. Instead of building complexity into our tools, we can now shift to simpler, atomic tools and let LLMs handle the complexity of orchestration and composition. Conceptually, the architecture shifts from a single complex tool to an LLM orchestrating several simple ones.
We all use calendars…in a simple or complex way
Consider a simple calendar tool that can only handle two basic actions:
It can read the calendar, identifying existing events
It can add a calendar event
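A minimal sketch of such a two-action tool, assuming an in-memory data model (the class and function names here are illustrative, not a real calendar API):

```python
# Hypothetical sketch of the two-action calendar tool described above.
# The data model and names are illustrative, not a real calendar API.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Event:
    title: str
    start: datetime
    duration: timedelta

@dataclass
class Calendar:
    events: list[Event] = field(default_factory=list)

    def read_calendar(self) -> list[Event]:
        """Action 1: read the calendar, returning the existing events."""
        return list(self.events)

    def add_event(self, title: str, start: datetime, duration: timedelta) -> Event:
        """Action 2: add a single calendar event."""
        event = Event(title, start, duration)
        self.events.append(event)
        return event
```

That is the entire tool surface: one read, one write. Everything more sophisticated is left to the orchestrating LLM.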
Traditional thinking would suggest that handling complex calendar requirements - with varying intervals and attributes - would require an equally complex tool, with capabilities in line with the complexity of the underlying requirements.
However, with an LLM in the architecture, the calendar application can remain simple. Why? Because the LLM can interpret a complex natural language request, extract all the required parameters, decompose the request into a number of simpler actions, and orchestrate those actions to achieve the required outcome.
Let's see how this approach can handle adding events to a calendar with different levels of requirements.
Meeting a simple requirement
This example shows one of the atomic operations: adding a single calendar event, "Organise dinner", with simple parameters (time, duration). It demonstrates the baseline capability of one of the tool's atomic functions.
Result: the "Organise dinner" event appears in the calendar at the requested time.
Meeting a complex requirement
This example demonstrates how the LLM handles a much more complex requirement by breaking it down and orchestrating multiple calls to the same simple tool.
LLM response: the model lays out a six-day plan and issues six separate "add task" calls, one per day.
Result: six events appear, Monday through Saturday, each in a unique colour.
This example demonstrates that the LLM:
Understands the complex requirement: tasks across six days, in the same time slots, with the constraint that each task must have a unique colour
Plans out the sequence (Monday through Saturday)
Handles the colour logic, including interpreting "herb" as green
Makes six separate calls to the same simple "add task" function
Maintains consistency across all the calls
Provides a clear summary of what it did
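The orchestration above can be sketched as follows. The hard-coded plan stands in for the LLM's actual tool-call output, and the task names are invented placeholders; the point is that the tool itself remains a single simple function:

```python
# Hypothetical sketch of the orchestration described above. The "plan" list
# stands in for the LLM's tool-call output; task names are invented placeholders.
calendar: list[dict] = []

def add_task(title: str, day: str, time: str, colour: str) -> dict:
    """The one simple tool: add a single task to the calendar."""
    task = {"title": title, "day": day, "time": time, "colour": colour}
    calendar.append(task)
    return task

# The LLM decomposes "same time slot across six days, unique colours"
# into six separate calls to the same atomic tool.
plan = [
    ("Water the herbs", "Monday", "08:00", "green"),   # "herb" interpreted as green
    ("Tidy the desk", "Tuesday", "08:00", "blue"),
    ("Take out recycling", "Wednesday", "08:00", "yellow"),
    ("Stretch", "Thursday", "08:00", "red"),
    ("Review budget", "Friday", "08:00", "purple"),
    ("Plan the weekend", "Saturday", "08:00", "orange"),
]
for title, day, time, colour in plan:
    add_task(title, day, time, colour)

# The constraint — no duplicate colours — holds across all six calls.
assert len({t["colour"] for t in calendar}) == len(calendar)
```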
What's powerful here is that the calendar tool itself didn't need any special features for:
Batch creation
Colour management
Sequential scheduling
Pattern recognition
Constraint satisfaction (i.e. no duplicate colours)
The LLM handled all that complexity by orchestrating calls to two simple functions. Although this is a rather straightforward example, it demonstrates how keeping tools simple and letting LLMs handle complexity can work in practice.
Implications for Digital Solution Design
This shift could have profound implications for how we approach system design:
Tool Design: Focus on building simple, reliable tools with clear, well-documented interfaces rather than complex, feature-rich systems. Optimise for simplicity over comprehensiveness. The goal is to make tools easily composable by LLMs, not to cover every possible use case directly.
Composition Over Complexity: Design systems with the expectation that complex requirements will be handled through composition of simple operations, not through complex individual operations.
Natural Language First: Design with the understanding that natural language will increasingly become the primary interface for expressing users’ requirements.
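In practice, making a simple tool "composable by LLMs" usually means declaring each atomic action as a function-calling schema. A hedged sketch of what such a declaration might look like, in the JSON-Schema style used by several LLM APIs (field names here are examples, not tied to any specific vendor):

```python
# Illustrative tool declaration in the JSON-Schema style used by several
# LLM function-calling APIs. Names and fields are examples, not a real API.
add_event_tool = {
    "name": "add_event",
    "description": "Add a single event to the user's calendar.",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string", "description": "Event title"},
            "start": {"type": "string", "description": "ISO 8601 start time"},
            "duration_minutes": {"type": "integer", "description": "Length in minutes"},
            "colour": {"type": "string", "description": "Display colour"},
        },
        "required": ["title", "start", "duration_minutes"],
    },
}
```

A clear name, a one-line description, and a handful of well-typed parameters is often all the documentation the orchestrating model needs.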
The Benefits of This Approach
This architectural shift offers several significant advantages:
Reduced Development Complexity: Simpler tools are easier to build, test, and maintain.
Improved Reliability: Fewer complex features mean fewer potential points of failure.
Greater Flexibility: New use cases can often be handled through novel combinations of existing tools rather than requiring new development.
Better User Experience: Users can express their requirements in natural language rather than learning complex functions, which also shortens user manuals and training.
Striking a Balance: Atomic Tools and Strategic Complexity
While the concept of leveraging LLMs to orchestrate simple, atomic tools offers significant potential for architectural simplification, a balanced approach is crucial.
Overly simplistic tool design risks shifting excessive complexity to the LLM layer, potentially straining its context window and increasing error rates. A more pragmatic strategy involves identifying tasks that genuinely benefit from decomposition into atomic operations, while retaining some level of targeted complexity within tools for frequently recurring or computationally intensive processes.
This hybrid approach allows LLMs to focus on high-level orchestration and complex logic, while specialised tools efficiently handle well-defined sub-tasks. Furthermore, acknowledging the limitations of natural language interfaces and incorporating robust error handling and fallback mechanisms are essential for creating resilient and user-friendly systems.
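One way to picture this hybrid: keep the atomic action, but add a single targeted tool for a frequently recurring pattern, so the LLM issues one call instead of dozens. A sketch under those assumptions (all names illustrative):

```python
# Sketch of "strategic complexity": keep the atomic add_event, and add one
# targeted composite tool for a recurring pattern. Names are illustrative.
from datetime import date, timedelta

calendar: list[dict] = []

def add_event(title: str, day: date, time: str) -> dict:
    """Atomic tool: add one event."""
    event = {"title": title, "day": day, "time": time}
    calendar.append(event)
    return event

def add_recurring(title: str, start: date, time: str,
                  every_days: int, count: int) -> list[dict]:
    """Targeted composite tool: a well-defined, repetitive sub-task handled
    inside the tool rather than via N separate LLM tool calls."""
    return [add_event(title, start + timedelta(days=i * every_days), time)
            for i in range(count)]

# One call creates ten daily stand-up entries, saving nine round trips
# through the model's context window.
add_recurring("Stand-up", date(2024, 5, 6), "09:00", every_days=1, count=10)
```

The composite tool does nothing the LLM could not orchestrate itself; it simply moves a high-volume, well-defined loop out of the model's context window and into cheap, deterministic code.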
By carefully balancing simplicity and strategic complexity, architects can harness the power of LLMs without sacrificing performance, reliability, or maintainability.
Looking Ahead
As LLMs continue to evolve, the trend toward intelligent orchestration will undoubtedly accelerate. However, the future of solution architecture likely lies not solely in building simpler systems, but in strategically combining atomic tools with targeted complexity.
Organisations that embrace a balanced approach—building composable tools optimised for both direct use and LLM orchestration, while acknowledging the limitations and costs associated with LLM usage—will be best positioned to leverage AI capabilities effectively.
This requires a shift away from a purely "complexity-driven" or "simplicity-at-all-costs" mindset and toward a more nuanced understanding of how to best integrate LLMs into complex digital ecosystems. This new paradigm will require careful consideration of prompt engineering, error handling, cost management, and the strategic allocation of complexity between tools and LLM orchestration.