Building Agents That Read the Web Right
The Other Side of the Coin
In a recent article, I made my website AI-agent friendly [1], adding llms.txt, Markdown output, and content negotiation to a Hugo site on AWS. That article was about the producer side: how to serve content that agents can consume efficiently.
But I left a question unanswered: what does it look like from the agent’s perspective?
In this article, I’m building two agents. Same task, same website, same model. The only difference: one reads the web the old way; the other uses the infrastructure I just built. The code is written with the Strands Agents SDK [2], an open-source framework from AWS for building AI agents in Python.
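The agent-side counterpart to that setup is straightforward: ask for Markdown before HTML. A minimal sketch using only the standard library — the `Accept` values and the helper name are mine for illustration, not code from the article:

```python
import urllib.request

# Prefer Markdown, accept HTML as a fallback. This assumes the server
# honors content negotiation as set up in the earlier article.
MARKDOWN_FIRST = "text/markdown, text/html;q=0.5"

def fetch_for_agent(url: str, timeout: float = 10.0) -> tuple[str, str]:
    """Fetch a page for an agent, returning (content_type, body)."""
    req = urllib.request.Request(
        url,
        headers={"Accept": MARKDOWN_FIRST, "User-Agent": "agent-demo/0.1"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        ctype = resp.headers.get_content_type()
        charset = resp.headers.get_content_charset() or "utf-8"
        return ctype, resp.read().decode(charset)
```

If the server ignores the `Accept` header, the agent still gets HTML and can fall back to stripping it — it just pays the token cost described below.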
The Test That Failed
Last weekend, I pointed an AI agent at my own blog and asked it a simple question about an article I’d just published (my hands-on experiment with self-reflection on Amazon Bedrock [12]): “What scored 3/15 and why?”
The agent received 29,099 bytes of HTML. After stripping navigation, CSS, scripts, headers, and footers, only about 4,600 characters of actual content remained; 69% of the response was noise. The agent consumed 6,083 input tokens, then gave a confused answer about “personal growth.” It couldn’t find the article content buried in the markup.
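You can reproduce that measurement yourself: strip the markup and compare the surviving text to the raw payload size. A rough sketch with Python’s built-in `html.parser` — the class and function names are illustrative, and real extraction needs more care than this:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script/style and chrome elements."""
    SKIP = {"script", "style", "nav", "header", "footer"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0  # > 0 while inside a skipped element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

def noise_ratio(html: str) -> float:
    """Fraction of the payload that is not visible text."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.chunks)
    return 1 - len(text) / max(len(html), 1)
```

Run this over a typical blog page and the ratio lands well above one half — which is exactly the overhead the agent paid for in input tokens.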
The Saturday Morning Experiment
Last Saturday, I installed a Python library, pointed it at Amazon Bedrock, and asked a model the same questions three times, with zero, one, and three rounds of self-reflection.
The results surprised me.
| Q | Refl | Time (s) | Acc | Comp | Nuan | Total |
|---|------|----------|-----|------|------|-------|
| 1 | 0 | 3.0 | 4 | 3 | 3 | 10 |
| 1 | 1 | 5.5 | 4 | 2 | 3 | 9 |
| 1 | 3 | 8.8 | 4 | 3 | 4 | 11 |
| 2 | 0 | 2.6 | 4 | 2 | 2 | 8 |
| 2 | 1 | 5.7 | 4 | 2 | 2 | 8 |
| 2 | 3 | 8.5 | 4 | 2 | 2 | 8 |
| 3 | 0 | 3.1 | 1 | 1 | 1 | 3 |
| 3 | 1 | 5.2 | 1 | 1 | 1 | 3 |
| 3 | 3 | 8.6 | 1 | 1 | 1 | 3 |
Q is the question number and Refl the number of self-reflection rounds (0 = straight answer, 1 = one revision, 3 = three revisions). Acc, Comp, and Nuan are the judge’s scores for Accuracy, Completeness, and Nuance, each on a 1-5 scale, for a maximum total of 15.
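The reflection rounds in the table boil down to a critique-and-revise loop. A sketch, assuming `ask` is any prompt-to-text callable (a Bedrock or Strands model call, say) — the prompts and names here are illustrative, not the experiment’s actual code:

```python
def self_reflect(ask, question: str, rounds: int) -> str:
    """Answer, then run `rounds` critique-and-revise passes."""
    answer = ask(question)
    for _ in range(rounds):
        # Ask the model to critique its own draft...
        critique = ask(
            f"Question: {question}\nDraft answer: {answer}\n"
            "Critique this answer: list factual errors and omissions."
        )
        # ...then revise the draft using that critique.
        answer = ask(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Critique: {critique}\nWrite an improved answer."
        )
    return answer
```

Note the cost structure this implies: each round adds two extra model calls, which is why the times in the table roughly triple between zero and three rounds while the scores barely move.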