The idea that the internet is built for people is crumbling. That has huge implications for your business

America post Staff



For years, companies have assumed the internet was built for people. 

Websites were designed to attract human attention, explain, persuade, reassure, and eventually convert. Search engine optimization, user experience, digital merchandising, and checkout design all rested on the same basic premise: the user was a person sitting in front of a screen. 

That premise is beginning to crack. 

Not because people are disappearing, but because they are starting to delegate. More and more often, the first system reading your site, comparing your offer, interpreting your policies, or even initiating a purchase will not be a human being. It will be a software agent acting on someone’s behalf. That is the direction implied by Anthropic’s Model Context Protocol, by Google’s Agent2Agent protocol, its guide to agent protocols, and its Universal Commerce Protocol, by OpenAI’s Operator and Agents SDK, and by the growing work from companies such as Visa, Mastercard, and Cloudflare to make agentic commerce trustworthy and operational at scale. 

This is not just a story about better chatbots or prettier interfaces. It is a story about the web acquiring a second interface: one for humans, and another for machines. 

From pages to actions 

The old web revolved around pages. You published information, people found it, and then clicked through a sequence you controlled. The emerging web revolves more and more around actions. Agents do not care very much about your homepage, your visual hierarchy, or the emotional arc of your funnel. They care about whether they can understand your catalog, verify your policies, access reliable data, and complete a task without unnecessary friction. 

That is why the most consequential developments in AI are increasingly not just models, but protocols. Anthropic describes MCP as “a universal, open standard for connecting AI systems with data sources,” meant to replace fragmented integrations with a single protocol. Google’s A2A describes a world in which agents advertise capabilities through an “Agent Card,” discover one another, and collaborate on tasks. Google’s own commerce work goes one step further: UCP is explicitly designed to integrate checkout logic directly with Google AI Mode and Gemini, with “native checkout” framed as the default path for unlocking “full agentic potential.” In other words, the stack is moving from content to execution. 
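To make "content to execution" concrete: under MCP, a merchant does not just publish a page describing stock; it can expose an action that an agent calls directly. Here is a minimal sketch, assuming the official MCP Python SDK (the `mcp` package); the store name, tool, and inventory data are invented for illustration.

```python
# Minimal MCP server sketch: instead of a page about stock,
# the merchant exposes an action an agent can invoke directly.
# Store name, tool, and data are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-store")  # hypothetical server name

@mcp.tool()
def check_availability(sku: str, size: str) -> dict:
    """Return current availability and a delivery estimate for a product."""
    # In production this would query live inventory; hardcoded here.
    inventory = {("BLZ-001", "M"): {"in_stock": True, "delivery_days": 2}}
    return inventory.get((sku, size), {"in_stock": False, "delivery_days": None})

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```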

The next SEO is not SEO 

For two decades, companies learned that visibility depended on being legible to search engines. What is now emerging is more demanding. It is no longer enough to be indexable. You have to become usable. 

That is why ideas such as llms.txt matter. As I argued in a recent piece, websites were built for humans, while language models are better served by a concise, “fat-free” entry point that reduces ambiguity and strips away the noise of menus, scripts, repeated elements, and layout. The llms.txt proposal is simple: place a markdown file at /llms.txt that acts as a curated map for language models, exposing what matters, what is canonical, and where the useful resources live. The official proposal frames it as a way to “provide information to help LLMs use a website at inference time,” precisely because context windows are limited and converting complex HTML into useful plain text is often difficult and imprecise. 
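The proposal's format is deliberately plain markdown: an H1 with the site name, a short blockquote summary, then sections of annotated links. A hypothetical retailer's /llms.txt might look like this (the retailer and all URLs are illustrative):

```markdown
# Example Store

> Fashion retailer. Canonical product data, sizing guides, and policies
> for this site live at the links below.

## Products

- [Product feed](https://example.com/feed.json): canonical catalog, updated hourly
- [Size and fit guide](https://example.com/fit.md): measurements by garment type

## Policies

- [Returns](https://example.com/returns.md): window, conditions, refund timing
- [Shipping](https://example.com/shipping.md): regions, costs, delivery estimates
```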

That does not make llms.txt some magical ranking hack. It is not. It is closer to digital housekeeping for a world in which more and more discovery, summarization, and recommendation are mediated by AI systems. The point is not to game a ranking algorithm. The point is to reduce machine confusion. That distinction matters. 

The same logic applies to newer, more experimental ideas such as identity.txt. The site describes it as “a portable identity file that tells AI tools who you are, how you think, and on what terms,” adding that “llms.txt tells AI about websites. identity.txt tells AI about people.” Whether identity.txt itself becomes broadly adopted is almost secondary. What matters is the direction of travel: the web is beginning to produce machine-readable self-descriptions on purpose, rather than leaving models and agents to infer everything from noisy HTML, metadata fragments, and guesswork. 
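Exactly how identity.txt ends up being specified may evolve, but the concept is easy to picture: a small plain-text file at a predictable path. The sketch below is purely an illustration of the idea, not the project's official format:

```
# /identity.txt (illustrative sketch only, not the official format)
Name: Jane Doe
Role: Independent product reviewer
Perspective: Hands-on testing; no sponsored reviews
Terms: Quote with attribution; do not impersonate me in generated content
```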

And this is unlikely to stop with those two examples. Google’s agent protocol guide explains that each A2A agent can publish an Agent Card describing its name, capabilities, and endpoint. The point is obvious: systems are starting to announce themselves to other systems in standardized ways. Once that logic takes hold, it is easy to imagine a broader ecosystem of machine-readable files for policies, permissions, provenance, fulfillment, pricing logic, returns, and authenticated identity. 
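For a sense of what that announcement looks like in practice: an A2A Agent Card is a JSON document, conventionally served at /.well-known/agent.json. The sketch below is abridged and simplified from the public A2A specification, with a hypothetical merchant and skill:

```json
{
  "name": "Example Store Agent",
  "description": "Answers catalog, stock, and order questions for example.com",
  "url": "https://example.com/a2a",
  "version": "1.0.0",
  "capabilities": { "streaming": true },
  "defaultInputModes": ["text"],
  "defaultOutputModes": ["text"],
  "skills": [
    {
      "id": "check-stock",
      "name": "Check stock",
      "description": "Report availability and a delivery estimate for a SKU",
      "tags": ["inventory", "commerce"]
    }
  ]
}
```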

Brands will still matter. But brands will no longer be enough 

Many companies still treat AI as something layered on top of the web: a chatbot in customer service, some generated copy in marketing, an assistant in the app. That view is too shallow. 

What is actually happening is that a machine-facing layer is being added underneath the visible web and, in some contexts, in front of it. When a user asks an agent to find the best black blazer under a certain price, with quick delivery, decent return conditions, and a fit similar to previous purchases, the interaction does not begin with a homepage visit. It begins with machine interpretation. 

That changes the basis of competition. Strong brands will still matter because trust still matters. But trust will increasingly need to be expressed in forms machines can process: structured attributes, current inventory, transparent return rules, delivery promises, verified merchant identity, and payment systems that can distinguish a legitimate agent from a malicious bot. Visa says its aim is to “ensure only approved AI agents transact,” while Mastercard argues that protocols are essential to scaling agentic commerce because they support clear user intent, secure credentials, and verifiable agent identity. Cloudflare, working with the payment ecosystem, has made the same point more bluntly: merchants will need ways to grant access to legitimate AI agents while stopping fraudulent traffic at the front door. 
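Some of this machinery already exists. Structured data in the schema.org vocabulary, for example, lets a merchant state price, availability, and return terms in a form any parser can consume without scraping the page around it (the product and values below are illustrative):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Wool Blazer",
  "sku": "BLZ-001",
  "offers": {
    "@type": "Offer",
    "price": "89.95",
    "priceCurrency": "EUR",
    "availability": "https://schema.org/InStock",
    "hasMerchantReturnPolicy": {
      "@type": "MerchantReturnPolicy",
      "merchantReturnDays": 30,
      "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow"
    }
  }
}
```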

What this means for companies: the case of Inditex 

A global leader such as Inditex makes this shift easier to understand because it sits right at the intersection of brand, logistics, e-commerce, and scale. 

Inditex started relatively late in e-commerce compared with digital natives, but it eventually built one of the most effective integrated retail systems in the market. In its FY2025 results, the company reported sales of €39.9 billion, online sales of €10.7 billion, and explicitly highlighted that the integration of store and online operations enables a “seamless global omnichannel experience.” 

That gives Inditex a major advantage in an agent-mediated environment. Zara and the rest of the group already possess many of the things agents are likely to value: strong brand recognition, rapid inventory rotation, integrated logistics, broad geographic coverage, and operational coordination between physical and digital channels. 

But there is also a risk. Fashion has historically depended on presentation, aspiration, curation, and friction that was often commercially useful. Agents compress all of that. They reduce merchandising to a decision layer in which price, availability, size confidence, delivery date, returns, and trusted identity can become more visible than the atmosphere of the site itself. In that world, the question is no longer “Is your site compelling?” It becomes: “Can an agent use you efficiently?” For Inditex, the strategic response is not cosmetic. It is structural.

So what should Inditex do? 

  • First, it should start treating its websites not only as destinations for humans, but as structured surfaces for agents. That means richer machine-readable catalogs, more explicit size and fit signals, clearer inventory and delivery metadata, cleaner policy exposure, and more robust authentication layers (a sketch of what that policy exposure could look like follows this list). 
  • Second, it should seriously experiment with machine-oriented descriptive files. A well-designed llms.txt at group and brand level would make sense, especially for clarifying what is canonical, how content is organized, how fast product information changes, and which resources are official. It would not be an SEO trick. It would be an agent usability layer. 
  • Third, it should prepare for protocol-driven commerce rather than assuming that all transactions will continue to begin inside its own interface. If Google is building UCP to support commerce inside AI-native environments, and if payment networks and infrastructure companies are building trust layers for agentic commerce, then large retailers should assume that agent-facing checkout, verification, and discovery will become strategically important. 
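There is no standard yet for the policy exposure mentioned in the first point, so the shape below is purely hypothetical, including the path /policies.json. It simply shows how little it takes to turn prose policies into something an agent can check programmatically:

```json
{
  "returns": { "window_days": 30, "free": true, "methods": ["store", "pickup", "mail"] },
  "shipping": { "standard_days": [2, 4], "express_days": [1, 2], "regions": ["EU", "US"] },
  "catalog": { "feed_url": "https://example.com/feed.json", "updated": "hourly" },
  "contact": { "agent_support": "agents@example.com" }
}
```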

Inditex could be unusually well positioned for that transition. But the companies that win in the next phase of commerce will not necessarily be the ones with the prettiest interfaces. They will be the ones that make themselves easiest for agents to understand, trust, and use. 

The web is starting to expose its machine layer 

There is an understandable temptation to dismiss things like llms.txt, identity.txt, Agent Cards, or machine-readable policy layers as marginal technical curiosities. That would be a mistake. 

They are early signposts. 

No, llms.txt is not yet some universally adopted standard. And no, adding it will not magically transform a company overnight. But that misses the point. Small files and lightweight conventions matter because they reveal where infrastructure is going. The web spent decades perfecting interfaces for human eyes. Now it is beginning, awkwardly but unmistakably, to expose interfaces for software agents. 

That is the deeper shift. 

The original web connected documents. The platform web connected users and services. The next one will increasingly connect agents, tools, merchants, payment systems, and authenticated identities. And when that happens, the strategic question changes. 

It is no longer just, “How do I get people to visit my website?” 

It becomes, “How do I make my company understandable, trustworthy, and actionable to the systems that increasingly stand between me and my customers?” 

That is not a design tweak. It is a new layer of digital strategy. 


