The AI Track at AWS Summit Hamburg 2026: From Demo to Deployment

Last year, I wrote about the dedicated Gen AI track at AWS Summit Hamburg 2025. The response was overwhelming — the track was packed, conversations spilled into the hallways, and the Fischbrötchen at the Landungsbrücken afterwards sealed the deal. Hamburg won me over.
This year, the AI track is back — bigger, sharper, and with a clear theme: from demo to deployment. If 2025 was about showing what generative AI can do, 2026 is about making it work in production. And the track reflects that shift.
MCP Sampling & Elicitation: When Servers Talk Back
From Request-Response to Collaboration

MCP evolves: servers don’t just respond anymore. They ask questions back.
When I wrote about the CLI vs MCP debate [1], I focused on the infrastructure patterns underneath. But MCP itself has been evolving, and the latest additions change what’s architecturally possible.
The Model Context Protocol started as a clean way for AI agents to call tools: agent sends request, server returns response. Simple, stateless, effective. But real-world agent workflows need more than request-response. They need the server to ask questions back.
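The shift is easier to see in code. Below is a minimal, self-contained sketch of the elicitation pattern in plain Python. It is not the actual MCP SDK API: `book_flight`, `run_agent`, and the message shapes are all illustrative stand-ins for the real protocol messages.

```python
def book_flight(args, answers):
    # A "server-side" tool that is missing a required detail. Instead of
    # failing, it returns a question for the client: an elicitation.
    if "seat" not in args and "seat" not in answers:
        return {"type": "elicitation",
                "message": "Window or aisle?",
                "field": "seat"}
    seat = args.get("seat") or answers.get("seat")
    return {"type": "result", "value": f"Booked {args['flight']}, {seat} seat"}

def run_agent(tool, args):
    # The "client" loop: route questions to the user (or the model) and
    # resubmit until the tool can finish its job.
    answers = {}
    while True:
        reply = tool(args, answers)
        if reply["type"] == "result":
            return reply["value"]
        answers[reply["field"]] = "window"  # simulated user answer

print(run_agent(book_flight, {"flight": "LH123"}))  # Booked LH123, window seat
```

The key architectural change is in `run_agent`: the client is no longer a one-shot caller but a loop that can be interrupted with questions, which means state has to live somewhere between turns.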
Self-Improving Models: What MiniMax M2.7 Actually Does
The Headline vs The Reality

Self-evolution: the model improves the process that improves the model.
“Model trains itself over 100+ autonomous cycles.” That was the headline when MiniMax released M2.7 on March 18, 2026 [1]. It sounds like science fiction: a model bootstrapping its own intelligence in a recursive loop.
The reality is more nuanced, more interesting, and more relevant to how we’ll build AI systems in the near future.
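To make the distinction concrete: the headline implies an outer loop that mutates the training process itself, not just a model's outputs. Here is a toy hill-climbing sketch of that pattern. It is purely illustrative: `evaluate`, `mutate`, and the recipe fields are stand-ins of my own, not MiniMax's actual pipeline.

```python
import random

random.seed(0)

def evaluate(recipe):
    # Stand-in for "train a model with this recipe and score the result".
    # The toy score peaks at lr=0.1, steps=100.
    return -abs(recipe["lr"] - 0.1) - abs(recipe["steps"] - 100) / 1000

def mutate(recipe):
    # Propose a change to the *process*, not to any single model output.
    return {
        "lr": recipe["lr"] * random.uniform(0.5, 1.5),
        "steps": recipe["steps"] + random.randint(-20, 20),
    }

def self_evolve(recipe, cycles=100):
    # Outer loop: keep a mutation only if the model it produces scores better.
    best, best_score = recipe, evaluate(recipe)
    for _ in range(cycles):
        candidate = mutate(best)
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

start = {"lr": 0.5, "steps": 200}
final = self_evolve(start)
```

Over 100 cycles the recipe drifts toward the high-scoring region, which is all "self-evolution" means at this level of abstraction: an accept-if-better loop over the training setup rather than over answers.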
Fischbrötchen and Failure Rates — I'm Speaking at AWS Summit Hamburg
Fischbrötchen and Failure Rates

Hamburg won me over.
Last year, the AWS Summit left Berlin for Hamburg. After years of presenting at the Berlin Summit, I wasn’t sure how I’d feel about the move. Then I opened the Generative AI track to a packed room — people standing in the back — and spent the rest of the day in conversations that reminded me why these events matter. The Fischbrötchen at the Landungsbrücken afterwards sealed the deal. Hamburg won me over [1].
From Cloud-Native to AI-Native: What Actually Changes
The Fifteen-Year Echo

Fifteen years apart. Same stage. Different world.
In 2010, Adrian Cockcroft stood on the QCon stage and told the audience that Netflix was running its entire business on a public cloud. Most people in the room thought he was crazy.
Fifteen years later, Cockcroft was back at QCon, this time explaining how he manages swarms of autonomous AI agents that produce several days’ worth of code in fifteen minutes [1]. The audience reaction was different. Nobody called him crazy. They were taking notes.
The Protocol We Should Have Built for Humans
Namaste from 6,165 Meters
I just summited Imja Tse (Island Peak, 6,165 meters) in Nepal. No Slack, no email, no MCP servers crashing in the background. Just ice, thin air, and the kind of clarity that only comes when every step costs you something.
At that altitude, you don’t tolerate inefficiency. Every piece of gear earns its place or stays behind. Every movement is deliberate. You can’t afford to fumble with equipment that doesn’t work the first time.
Security Is Job Zero — Even (Especially) in the Age of Coding Agents
$20 and Two Hours
On February 28, 2026, security startup CodeWall gave an autonomous AI agent a single input: a domain name. Two hours and approximately $20 in API tokens later, the agent had full read/write access to the production database of McKinsey’s internal AI platform, Lilli [1] [2].
The attack vector? SQL injection — a vulnerability class from the 1990s. But in a novel context: the injection was in JSON keys, not values, which standard security scanners missed [3].
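A minimal sketch shows why key-based injection slips through. This is illustrative Python, not CodeWall's actual finding: `build_update` is a hypothetical naive query builder of the kind scanners are tuned to check only partially.

```python
import json

def build_update(table, payload):
    # Values are parameterized (safe), but keys are interpolated raw.
    set_clause = ", ".join(f"{key} = ?" for key in payload)
    return f"UPDATE {table} SET {set_clause} WHERE id = ?", list(payload.values())

# A hostile *key* carries the payload; the value is perfectly clean,
# so a scanner that only inspects values sees nothing wrong:
evil = json.loads('{"role = \'admin\' WHERE 1=1 --": "harmless"}')
sql, params = build_update("users", evil)
print(sql)
# UPDATE users SET role = 'admin' WHERE 1=1 -- = ? WHERE id = ?
```

The fix is as old as the vulnerability class: validate keys against an allowlist of known column names before they ever touch the query string.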
AI Coding Productivity: 10%, Not 10x
The Number Nobody Wants to Hear
A few weeks ago, I wrote about running my entire workday through an AI agent [1] — meetings, research, CRM, content creation. Eight hours of productive work, not a single line of code. The response was overwhelmingly positive. But one comment stuck with me: “If AI agents are this good, why isn’t my team shipping 10x more?”
The answer is now backed by data from multiple independent studies — and it’s not what the vendor pitches suggest.
CLI vs MCP: The Wrong Debate
The Zombie Processes and the 50GB Cache
A few weeks ago, I noticed my MacBook was sluggish. I found orphaned MCP server processes that had failed to shut down cleanly — a problem Didier Durand describes vividly in his analysis [2], where users report finding over 100 zombie Node.js processes after a single session. I killed mine, freed some RAM, and went back to work.
Then last week, Brooke Jamieson — a fellow AWS Developer Advocate — published a post about running uv cache prune and freeing 75GB of disk space [9]. The culprit? Every uvx invocation from MCP servers (Kiro, Cursor, Claude Code all use them under the hood) silently caches packages, and the cache never cleans itself up. I ran the same command and got back 50GB. Fifty gigabytes of invisible MCP debt, sitting on my drive.
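If you want to check your own machine, the commands are short. This assumes uv is installed; the default cache path is `~/.cache/uv` on Linux and `~/Library/Caches/uv` on macOS, and `UV_CACHE_DIR` overrides both.

```shell
# Where the uvx invocations have been hoarding packages
CACHE_DIR="${UV_CACHE_DIR:-$HOME/.cache/uv}"

# Measure the invisible debt before touching anything
du -sh "$CACHE_DIR" 2>/dev/null || echo "no cache at $CACHE_DIR"

# 'prune' drops unused entries; 'uv cache clean' would wipe everything
uv cache prune 2>/dev/null || echo "uv not installed"
```

Nothing here is destructive to your projects: pruned packages are simply re-downloaded the next time a server needs them.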
The Coding Agent That Doesn't Code
The Friday That Wrote Itself
Last Friday, I used a coding agent for eight hours straight. I didn’t write a single line of code.
I prepared a customer meeting by pulling context from Slack threads, calendar events, and our CRM. I researched a technical paper on geometric memory architectures and wrote a structured analysis. I collected travel expense receipts from my email — train tickets, hotel invoices, an Uber receipt forwarded from my personal phone — downloaded the PDFs, and assembled them into an expense report. I curated a reading list from articles I’d bookmarked throughout the week. I drafted the seeds of the research note you’re reading right now.