Designing for the Agentic Layer
For much of the history of digital products, design has been shaped by a simple assumption: a person interacts directly with a system through an interface.
Users navigate screens, compare options, and complete tasks step by step. Interfaces structure these journeys, and design disciplines — from UX to service design — have largely focused on making those paths clearer, faster, and easier to understand.
As AI systems become more capable, that model is beginning to shift.
Increasingly, people do not interact with software solely through interfaces. Instead, they describe a goal, and a system attempts to determine how best to achieve it. In this emerging model, AI systems can gather information, compare alternatives, and take actions on behalf of users.
This development introduces a new layer of experience design, one that extends beyond screens and interactions into how autonomous systems interpret and act on human intentions.
From interfaces to intentions
Traditional digital products rely on explicit navigation structures. Users move through defined flows: selecting options, reviewing information, confirming decisions. Agentic systems operate differently. Rather than guiding a user through a predetermined sequence, they attempt to infer intent and coordinate the steps required to reach an outcome.
The difference may appear subtle, but it changes where the experience is shaped. In traditional software, the experience is largely defined by the interface. In agentic systems, a significant part of the experience happens within the system itself — how it interprets requests, prioritises information, and determines appropriate actions.
A different role for design
This shift raises questions about the role of design within organisations building AI-enabled products.
Historically, many product decisions were expressed through interface elements: navigation structures, information hierarchies, and interaction patterns. When systems become more autonomous, some of those decisions move into the underlying logic of the product.
Questions that designers increasingly encounter include:
- Under what conditions should an AI system act independently?
- When should it ask for clarification?
- How should uncertainty be communicated?
- How visible should the system’s reasoning be?
These are not purely technical considerations. They sit at the intersection of technology, user expectations, and organisational responsibility.
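To make this concrete, consider how one of these questions might be answered in product logic rather than left implicit in a model prompt. The sketch below, in TypeScript, is a minimal illustration; the names, fields, and thresholds are all assumptions, not an established pattern.

```ts
// A hedged sketch: expressing "when may the system act alone?" as
// explicit, reviewable logic. All names and thresholds are hypothetical.

type Decision = "act" | "ask_for_clarification" | "require_confirmation";

interface ProposedAction {
  description: string;                 // shown to the user when asking or confirming
  confidence: number;                  // 0..1: how sure the system is about intent
  reversible: boolean;                 // can the action be undone afterwards?
  impact: "low" | "medium" | "high";   // rough estimate of consequences
}

function decide(action: ProposedAction): Decision {
  // Unclear intent: ask rather than guess.
  if (action.confidence < 0.6) return "ask_for_clarification";

  // Irreversible or high-impact steps need explicit approval,
  // however confident the system is about intent.
  if (!action.reversible || action.impact === "high") {
    return "require_confirmation";
  }

  // Otherwise the system may proceed, and should report what it did.
  return "act";
}
```

The specific thresholds matter less than the fact that they become explicit, reviewable design decisions.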
As AI systems take on more initiative, design becomes less about arranging interface elements and more about defining the relationship between human intent and machine behaviour.
Designing for collaboration
One way to understand agentic systems is to view them not simply as tools, but as collaborators.
Traditional software executes explicit instructions. Agentic systems often interpret instructions, combine them with contextual information, and determine a course of action. In some cases, they may coordinate multiple tools or services to achieve a result.
This introduces a more complex interaction model.
If the system behaves too passively, the benefits of autonomy are limited. If it behaves too independently, users may lose confidence in what the system is doing on their behalf.
Designing for agentic systems therefore involves managing this balance — creating systems that can take initiative while still allowing users to understand and influence outcomes.
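A minimal sketch of that balance, again with entirely hypothetical names, might bound how much initiative the system can take and keep a trace of every step. The planner here stands in for whatever model or heuristic chooses the next action.

```ts
// Hypothetical sketch: an agent pursuing a goal across several tools,
// with a bounded loop and a legible trace of what it did.

interface Tool {
  name: string;
  run(input: string): Promise<string>;
}

// Chooses the next step, or returns null once the goal is judged complete.
type Planner = (
  goal: string,
  history: string[],
  tools: Tool[]
) => Promise<{ tool: Tool; input: string } | null>;

async function pursueGoal(
  goal: string,
  tools: Tool[],
  plan: Planner
): Promise<string[]> {
  const history: string[] = [];
  for (let step = 0; step < 10; step++) {   // cap initiative: no unbounded autonomy
    const next = await plan(goal, history, tools);
    if (next === null) break;               // planner judges the goal complete
    const result = await next.tool.run(next.input);
    history.push(`${next.tool.name}: ${result}`);
  }
  return history;                           // the trace a user can review
}
```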
Trust and transparency
Trust becomes a particularly important consideration in this context.
When people interact with traditional interfaces, actions are typically visible and sequential. Users click a button, submit a form, or confirm a transaction. The cause-and-effect relationship is clear.
Agentic systems can operate through less visible processes. They may collect information from multiple sources, evaluate options, and execute tasks across different services.
For users, this can create uncertainty about how decisions are being made.
Design patterns are beginning to emerge that attempt to address this challenge — through mechanisms such as progress indicators, explanation layers, and opportunities for user oversight. These approaches aim to make the system’s behaviour more legible without undermining the efficiency that autonomy provides.
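One hypothetical way to support such patterns is to have the agent emit a structured event at every significant step, so that the interface renders progress and explanations from an event stream rather than from the agent's internals. A rough sketch, with assumed names throughout:

```ts
// Hypothetical event shape for making agent behaviour legible to users.
interface AgentEvent {
  kind: "started" | "tool_call" | "decision" | "finished";
  summary: string;        // one line, in user-readable language
  rationale?: string;     // an optional "why", feeding an explanation layer
  timestamp: number;
}

type Listener = (event: AgentEvent) => void;

class AgentEventLog {
  private listeners: Listener[] = [];
  private events: AgentEvent[] = [];

  subscribe(listener: Listener): void {
    this.listeners.push(listener);
  }

  // The agent calls emit() at each step; the UI can show progress live.
  emit(event: AgentEvent): void {
    this.events.push(event);
    for (const l of this.listeners) l(event);
  }

  // A replayable trace supports after-the-fact oversight.
  trace(): readonly AgentEvent[] {
    return this.events;
  }
}
```

Decoupling the trace from the agent's internals means legibility need not come at the cost of autonomy: the system still acts, but every action leaves a record the interface can surface.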
Implications for design practice
Agentic systems also change how design work itself is approached.
Traditional UX practices often involve mapping journeys, defining flows, and specifying the sequence of interactions a user follows. With agentic systems, these flows are less predictable. A single user request may be resolved in different ways depending on context, available data, or system interpretation.
In this environment, designers increasingly focus on defining the boundaries within which systems operate: what they are allowed to do, when they should ask for input, and how they communicate their actions.
Some teams describe this as designing systems of behaviour rather than individual interfaces.
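What such a system of behaviour looks like in practice varies widely, but one hypothetical form is a declarative policy that a team can author, review, and revise, much as they once reviewed screen flows. The domain and every field name below are assumptions for illustration.

```ts
// Illustrative only: a declarative boundary spec for a hypothetical
// travel-booking agent. Field names are assumptions, not a known schema.
const bookingAgentPolicy = {
  allowedActions: ["search_flights", "compare_fares", "hold_reservation"],
  forbiddenActions: ["purchase", "cancel_existing_booking"],
  askUserWhen: {
    intentConfidenceBelow: 0.7,   // unclear requests get a clarifying question
    estimatedCostAbove: 500,      // domain-specific threshold, in account currency
  },
  reporting: {
    narrateEachStep: true,        // feeds a progress and explanation layer
    summariseOnCompletion: true,
  },
} as const;
```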
An evolving discipline
It is still early in the development of agentic systems, and many of the practices around them are experimental.
Interfaces will remain important, particularly in situations that require clarity, accountability, or deliberate decision-making. At the same time, a growing share of digital experiences may take place through systems that act partially on behalf of users.
For design as a discipline, this represents less a replacement of existing practices than an expansion of scope.
The focus is gradually moving from shaping interactions to shaping behaviours: defining how intelligent systems interpret goals, make decisions, and collaborate with the people who rely on them.
How this balance evolves will likely depend not only on technological progress, but also on how organisations choose to structure responsibility, transparency, and control within these systems.