Building Agents That Read the Web Right
The Other Side of the Coin
In a recent article, I made my website AI-agent friendly [1], adding llms.txt, Markdown output, and content negotiation to a Hugo site on AWS. That article was about the producer side: how to serve content that agents can consume efficiently.
But I left a question unanswered: what does it look like from the agent’s perspective?
In this article, I’m building two agents. Same task, same website, same model. The only difference: one reads the web the old way, the other uses the infrastructure I just built. The code is written with the Strands Agents SDK [2], an open-source framework from AWS for building AI agents in Python.
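Before getting into the Strands code, the shape of the difference between the two agents can be sketched in plain Python. This is a minimal illustration (no network call, placeholder URL and user-agent), assuming the site honors an `Accept: text/markdown` header via the content negotiation described in [1]:

```python
from urllib.request import Request

def build_fetch(url: str, agent_friendly: bool) -> Request:
    """Build the request each agent would send (no network call here).

    The "old way" agent fetches HTML like a browser; the agent-friendly
    one asks for Markdown via content negotiation.
    """
    headers = {"User-Agent": "demo-agent/0.1"}  # placeholder UA
    if agent_friendly:
        # Prefer Markdown, fall back to HTML if the server can't serve it.
        headers["Accept"] = "text/markdown, text/html;q=0.5"
    else:
        headers["Accept"] = "text/html"
    return Request(url, headers=headers)

old = build_fetch("https://example.com/post/", agent_friendly=False)
new = build_fetch("https://example.com/post/", agent_friendly=True)
print(old.get_header("Accept"))  # text/html
print(new.get_header("Accept"))  # text/markdown, text/html;q=0.5
```

Everything else — tool definitions, prompts, the model — stays identical between the two agents; only this request differs.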
Making My Website AI-Agent Friendly — Here's What Changed
The Test That Failed
Last weekend, I pointed an AI agent at my own blog and asked it a simple question about an article I’d just published — my hands-on experiment with self-reflection on Amazon Bedrock [12]: “What scored 3/15 and why?”
The agent received 29,099 bytes of HTML. After stripping navigation, CSS, scripts, headers, and footers, only about 4,600 characters of actual content remained — roughly 84% of the response was noise. The agent consumed 6,083 input tokens, then gave a confused answer about “personal growth.” It couldn’t find the article content buried in the markup.
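To make the noise ratio concrete, here is an illustrative measurement of that kind (not the exact script behind the numbers above): it extracts visible text with the standard library’s `html.parser` and compares it to the raw payload size.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script/style blocks."""
    def __init__(self):
        super().__init__()
        self.skip = 0
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1
    def handle_data(self, data):
        if not self.skip and data.strip():
            self.chunks.append(data.strip())

def noise_ratio(html: str) -> float:
    """Fraction of the payload that is not visible text."""
    p = TextExtractor()
    p.feed(html)
    text = " ".join(p.chunks)
    return 1 - len(text) / len(html)

# Tiny stand-in page; the real measurement ran against the full 29 KB response.
page = ("<html><head><style>body{}</style></head>"
        "<body><nav>Home</nav><p>Actual article text.</p></body></html>")
print(f"{noise_ratio(page):.0%} of the payload is markup")
```

On a real page the ratio is even worse, because navigation and footer text count as “visible” here yet are still noise to the agent.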
"It's Faster If I Just Do It Myself" — The Most Expensive Sentence in AI
The Moment I Almost Gave Up
A few weeks ago, I spent 45 minutes teaching my AI agent how to prepare customer meetings. Pulling context from Slack, checking the CRM, looking up LinkedIn profiles, assembling a briefing document. I could have done it myself in 20 minutes.
The next morning, the agent prepared three meetings in 12 minutes. By the end of the week, it had prepared every meeting for five days — while I was still drinking my coffee.
Technology Evolution Doesn’t Move in a Straight Line—It Spirals
The Proud Ops Colleague
Years ago, an Ops colleague proudly showed me something new. ClusterSSH—cssh [1]. A tool that opens multiple terminals to multiple machines—at the same time. You type once, it executes everywhere.
Back then, machines still had names. Ops folks knew their history, their specs, their quirks. They could tell you which server had been acting up last Thursday and what firmware it was running. And cssh? It let them follow the runbook consistently across every node. No more SSH-ing into machines one by one, hoping you didn’t forget a step on node 7.
From Call Center to AI Agent Hub: The Future of Customer Support Is Here
Just returning from an internal Amazon Connect deep dive [1]. I haven’t touched this particular product in maybe five years! Dirk Fröhner — I’m sure you remember our joint large-scale workshop with one of your customers.
What stuck with me from that time was how fast you can actually set up a contact center in the cloud — less than 30 minutes from zero to the first call received — and how much AI was already improving both the customer and agent experience back then. That hasn’t changed.
From my perspective, balancing AI agents’ agency with control is one of the most important themes for 2026. We need to get this right, both as builders and as users of agentic AI systems.
Anthropic’s study “Measuring AI agent autonomy in practice” [1] fits nicely into this, as they started to study the autonomy of AI agents based on Claude Code usage and tool invocations. Even this first iteration provides some useful insights. There is obviously a strong focus on coding use cases, but it also shows a wide variety of other use cases, which resonates with my experience from the customers I’m working with.
🎯 “How do we pick the RIGHT AI agent use case?”
This is the question I hear most from customers exploring agentic AI.
Here’s the mechanism I run through together with the customer:
The 4-Quadrant Evaluation
When a customer brings me 5-10 agent ideas, we structure each one across four dimensions:
📊 Business Value & Strategic Fit
→ What pain does it solve? For whom? How often?
→ Can we quantify the impact? (Revenue, cost, time, quality)
→ Which KPI moves if this works for 6 months?
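As an illustration of how the quadrant scores can be combined into a ranking: the weights, idea names, and the generic second quadrant below are hypothetical — only “Business Value & Strategic Fit” is named above.

```python
from dataclasses import dataclass

@dataclass
class AgentIdea:
    name: str
    scores: dict  # quadrant name -> score on a 1..5 scale

    def total(self, weights: dict) -> float:
        # Weighted sum across quadrants; unweighted quadrants count as 1.0.
        return sum(weights.get(q, 1.0) * s for q, s in self.scores.items())

# Hypothetical weighting: business value counts double.
weights = {"Business Value & Strategic Fit": 2.0}

ideas = [
    AgentIdea("meeting-prep agent",
              {"Business Value & Strategic Fit": 4, "quadrant-2": 3}),
    AgentIdea("log-triage agent",
              {"Business Value & Strategic Fit": 2, "quadrant-2": 5}),
]
ranked = sorted(ideas, key=lambda i: i.total(weights), reverse=True)
print([i.name for i in ranked])  # ['meeting-prep agent', 'log-triage agent']
```

The point of the mechanism is less the arithmetic than forcing each of the 5-10 ideas through the same four questions before anyone commits engineering time.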
Passing on control to your AI coding agent team entirely?
Anthropic researcher Nicholas Carlini conducted a stress test of their Claude Opus 4.6 model by deploying 16 parallel AI agents to build a complete C compiler in Rust from scratch (https://lnkd.in/eGMp4b2K). Over approximately two weeks and nearly 2,000 Claude Code sessions, the agents autonomously produced a 100,000-line compiler capable of compiling the Linux 6.9 kernel across multiple architectures (x86, ARM, and RISC-V). The experiment cost around $20,000 in API fees and demonstrated that coordinated AI agent teams can tackle complex systems programming challenges traditionally requiring significant human expertise and architectural oversight.
AI coding has quickly developed from an interesting research project into an important tool in the belt of every software developer. Tools like #kiro allow you to define subagents, which take on specific responsibilities within the software project, speeding up development and improving quality. A nice way to navigate overcrowded context windows.
But where to start? How to identify subagents which can improve the team and subsequently - how to come up with a first version of those agents?
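One framework-agnostic way to sketch a first version is to write down the responsibilities and how work gets routed to them. The subagent names and keyword routing below are purely illustrative — this is not Kiro’s actual configuration format:

```python
# Illustrative division of responsibilities: each subagent owns one
# concern and only sees the context slice it needs, which is what
# keeps individual context windows from overcrowding.
SUBAGENTS = {
    "reviewer": "You review diffs for correctness and style.",
    "tester": "You write and run unit tests for new code.",
    "documenter": "You keep README and docstrings up to date.",
}

def route(task: str) -> str:
    """Pick the responsible subagent by keyword (hypothetical routing)."""
    keywords = {"review": "reviewer", "test": "tester", "doc": "documenter"}
    for kw, agent in keywords.items():
        if kw in task.lower():
            return agent
    return "reviewer"  # default owner for unmatched tasks

print(route("Please review this PR"))  # reviewer
print(route("add unit tests"))         # tester
```

Starting from a table like this makes the follow-up questions concrete: which responsibilities actually recur in your team, and which ones the first subagent should own.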