AI & Strategy

The shortlist effect: AI and the changing nature of search

Most brands are still optimising for a model of search that is gradually changing. Increasingly, searches end without a click to an external website. AI-driven interfaces such as ChatGPT, Google AI Overviews and Perplexity often generate direct answers rather than presenting long lists of links.

In these answers, only a small number of sources tend to be referenced. In practice this means that visibility is no longer distributed across many search results, but concentrated among a few cited sources.

We refer to this dynamic as The Shortlist Effect. Instead of competing for a position somewhere in a ranked list, brands increasingly compete to be included among a small number of referenced sources.

A changing search model

For the past two decades, the mechanics of online discovery have been relatively stable. Companies optimised websites for search engines, worked to rank for relevant queries, and converted visitors once they arrived.

This model has not disappeared. Traditional search remains important, and many companies still rely on it for traffic and customer acquisition. However, AI-generated responses are beginning to change how information is surfaced and consumed.

ChatGPT reportedly processes on the order of two billion queries per day. Google’s AI Overviews reach well over a billion users globally, and tools such as Perplexity are growing quickly. These systems are increasingly used not just to retrieve information, but to summarise and interpret it.

When someone asks an AI system for a recommendation or explanation, the response often cites only a handful of sources. In this context, visibility depends less on ranking among many results and more on being selected as a reference.

Why this matters beyond SEO

This shift has implications beyond traditional search engine optimisation.

Discovery, evaluation and trust are increasingly mediated by AI systems that summarise information on behalf of users. For many people, these systems act less like search engines and more like advisors: tools that interpret information and present conclusions.

Recent surveys suggest that consumers are already relying on AI in this way. A majority report that AI tools help them discover products they would not have otherwise encountered, and many say they feel more confident in decisions when AI-assisted recommendations are involved.

Whether these perceptions are fully justified remains open to debate. AI systems are still imperfect and occasionally opaque in how they choose sources. But they are nevertheless becoming a meaningful layer between companies and potential customers.

Generative Engine Optimization

In response, a new set of practices is emerging, often referred to as Generative Engine Optimization (GEO). The idea is not simply to rank well in traditional search results, but to structure content in ways that AI systems are more likely to interpret and reference.

Early research from Princeton and Georgia Tech, analysing thousands of queries, suggests a few patterns.

First, topic coverage matters. Language models tend to favour sources that cover a subject in depth rather than superficially. A site with multiple well-developed resources on a topic is more likely to be referenced than one with a single page.

Second, information density matters. AI systems tend to prioritise content that contains concrete facts, references and structured explanations. Generic marketing language is less useful to models attempting to summarise information.

Third, structure helps machines interpret content. Clear headings, schema markup, FAQs and structured data make it easier for AI systems to understand what a page contains and how its information should be extracted.
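Structured data of this kind is typically expressed as JSON-LD embedded in the page. A minimal sketch of FAQ markup using the schema.org FAQPage vocabulary (the question and answer text here are placeholders, not a recommendation for any specific wording):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is Generative Engine Optimization?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "A set of practices for structuring content so that AI systems can more easily interpret and cite it."
    }
  }]
}
</script>
```

Markup like this does not guarantee citation, but it gives machine readers an unambiguous statement of what a page contains.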

Interestingly, some early studies suggest that GEO can benefit smaller or less established sites, particularly when they produce detailed, well-structured information. In some cases, sources that rank relatively low in traditional search appear more frequently in AI-generated responses.

The emerging agentic layer

A deeper shift may lie ahead. AI systems are gradually moving from answering questions to acting as intermediaries in decision-making.

In what is sometimes described as an agentic model, software agents may evaluate options, compare vendors and make recommendations on behalf of users. Gartner has projected that a significant share of B2B purchasing processes could involve AI agents within the next few years.

Such projections should be treated cautiously, but the direction of travel is clear. Increasingly, companies may need to communicate not only with human readers but also with systems that evaluate information algorithmically.

In that context, visibility becomes less about attracting attention and more about being interpretable and credible to machines.

A period of experimentation

For many organisations, the practical implications are still uncertain.

While a growing number of marketing and product teams recognise that AI-mediated discovery is becoming more important, relatively few have established clear strategies or metrics around it. The tools themselves are evolving quickly, and the mechanics of citation within AI systems remain somewhat opaque.

For now, the most practical step may simply be observation. Asking AI systems the kinds of questions customers might ask — and seeing which sources are referenced — can provide a first sense of how a company appears within this emerging layer of discovery.
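The "which sources are referenced" part of that observation can be made slightly more systematic. Below is a minimal sketch in Python that tallies cited domains across a set of saved answer texts; the regex-based URL extraction and the sample answers are illustrative assumptions, since real AI interfaces expose citations in varying formats.

```python
import re
from collections import Counter
from urllib.parse import urlparse

def cited_domains(responses):
    """Count how often each domain appears as a cited URL
    across a set of AI-generated answer texts."""
    counts = Counter()
    url_pattern = re.compile(r"https?://[^\s)\]>\"']+")
    for text in responses:
        # A set ensures each domain is counted once per answer,
        # not once per repeated link within the same answer.
        domains = {
            urlparse(url).netloc.lower().removeprefix("www.")
            for url in url_pattern.findall(text)
        }
        counts.update(domains)
    return counts

# Two hypothetical AI answers citing sources.
answers = [
    "Top options include Acme (https://acme.example/pricing) "
    "and Beta (https://www.beta.example/faq).",
    "See https://acme.example/docs for details.",
]
print(cited_domains(answers).most_common())
# [('acme.example', 2), ('beta.example', 1)]
```

Run over a recurring list of customer-style questions, a tally like this gives a rough baseline for whether a company's presence in AI answers is growing or shrinking over time.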

The early days of search engine optimisation offered a similar window of experimentation. Companies that explored the space early often gained advantages that persisted for years.

Whether the same pattern will hold for AI-driven discovery remains to be seen. But it is clear that the way information is surfaced online is changing, and that organisations will need to adapt as the role of AI in discovery continues to evolve.