🎯 “How do we pick the RIGHT AI agent use case?”
This is the question I hear most from customers exploring agentic AI.
Here’s the process I run through together with the customer:
The 4-Quadrant Evaluation
When a customer brings me 5-10 agent ideas, we structure each one across four dimensions:
📊 Business Value & Strategic Fit
→ What pain does it solve? For whom? How often?
→ Can we quantify the impact? (Revenue, cost, time, quality)
→ Which KPI moves if this works for 6 months?
Passing on control to your AI coding agent team entirely?
Anthropic researcher Nicholas Carlini conducted a stress test of their Claude Opus 4.6 model by deploying 16 parallel AI agents to build a complete C compiler in Rust from scratch (https://lnkd.in/eGMp4b2K). Over approximately two weeks and nearly 2,000 Claude Code sessions, the agents autonomously produced a 100,000-line compiler capable of compiling the Linux 6.9 kernel across multiple architectures (x86, ARM, and RISC-V). The experiment cost around $20,000 in API fees and demonstrated that coordinated AI agent teams can tackle complex systems programming challenges traditionally requiring significant human expertise and architectural oversight.
AI coding has quickly developed from an interesting research project into an important tool in the belt of every software developer. Tools like #kiro allow you to define subagents that take on specific responsibilities within the software project, speeding up development and improving quality. A nice way to navigate overcrowded context windows.
But where do you start? How do you identify subagents that can improve the team, and subsequently, how do you come up with a first version of those agents?
Kiro Subagents: Scaling Development with Specialized AI Agents
When you’re building complex software, context management becomes your bottleneck. Your AI agent is juggling frontend components, backend APIs, database schemas, testing frameworks, and documentation, all competing for limited context window space. The result? Diluted focus and suboptimal outputs.
Kiro Subagents solve this architectural challenge by enabling parallel task execution through specialized, autonomous agents that maintain independent context windows.
🏗️ The Architecture: Parallel Contexts, Focused Execution
Subagents operate as independent processes with their own context management. This architectural pattern delivers several technical advantages:
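The pattern is easy to sketch in plain Python. This is illustrative only, not the actual Kiro API: it shows the architectural idea that each subagent owns its own message history, so parallel work never crowds a sibling’s context window.

```python
# Illustrative sketch of the subagent pattern -- NOT the real Kiro API.
# Each subagent keeps a private context (message history) and runs in
# parallel with the others, so contexts never compete for space.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field

@dataclass
class Subagent:
    name: str
    responsibility: str
    context: list = field(default_factory=list)  # independent context window

    def run(self, task: str) -> str:
        # A real subagent would call an LLM here; we just record the
        # exchange in this agent's private context and return a summary.
        self.context.append({"role": "user", "content": task})
        result = f"[{self.name}] handled: {task}"
        self.context.append({"role": "assistant", "content": result})
        return result

# Specialized agents, each owning one slice of the project.
agents = [
    Subagent("frontend", "React components"),
    Subagent("backend", "REST API"),
    Subagent("tests", "integration tests"),
]

# Parallel execution: each agent works only inside its own context.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda a: a.run(f"work on {a.responsibility}"), agents))
```

After the run, each agent’s context holds only its own two messages, regardless of what the other agents did. That isolation is the whole point of the pattern.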
🎯 From Chaos to Control: Building Predictable AI Agents That Get Smarter Over Time
⚖️ We need to balance Agency versus Control. We want AI systems to be super easy to use, read our minds, and just provide the answer we need. But we also need to make sure that nothing goes wrong. The more we control, the less agency we get. This is a balancing act.
Let’s focus on the control part. There are many different mechanisms to increase and guarantee control. Things like policies and guardrails come to mind. Those are obvious and powerful. I will cover them in a dedicated post.
🎯 From Chaos to Control: Building Predictable AI Agents That Get Smarter Over Time
Agentic systems are incredibly flexible, but ad-hoc code generation means unpredictable results and wasted resources. How do we fix this without losing the magic? The answer lies in tools: prebuilt, tested, reusable components that make your AI agents more capable, reliable, and cost-efficient with every interaction.
With the right approach, your agents become smarter and more efficient over time. Dive deeper in the article below.
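A minimal, framework-agnostic sketch of the idea (all names here are mine, not from any specific agent SDK): register a prebuilt, tested function once as a named tool, and let the agent invoke it by name instead of generating fresh code on every interaction.

```python
# Sketch of a tool registry for an agent runtime -- names are
# illustrative, not from a real SDK. The point: tools are prebuilt and
# tested once, then reused, instead of the agent writing ad-hoc code.

TOOLS = {}

def tool(name, description):
    """Register a reusable, pre-tested function as an agent tool."""
    def decorator(fn):
        TOOLS[name] = {"fn": fn, "description": description}
        return fn
    return decorator

@tool("currency_convert", "Convert an amount between currencies at a given rate.")
def currency_convert(amount: float, rate: float) -> float:
    return round(amount * rate, 2)

def dispatch(tool_name, **kwargs):
    """What the agent runtime would do: look up and invoke the tool."""
    entry = TOOLS.get(tool_name)
    if entry is None:
        raise KeyError(f"unknown tool: {tool_name}")
    return entry["fn"](**kwargs)

print(dispatch("currency_convert", amount=100.0, rate=1.08))  # prints 108.0
```

Because the tool is deterministic and tested, every call costs the same and behaves the same, which is exactly the predictability that ad-hoc generated code cannot offer.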
😱 “Argh - My AI Agent deleted all my files!!!!” Worried about AI agents running amok with your data? Before panicking, consider this: we’ve been solving permission and access control problems for decades with human coworkers. Let’s apply those same principles to our new AI teammates and find the right balance between agency and control. #AIAgents #FutureOfWork
😱 “Argh - My AI Agent deleted all my files!!!!”
When was the last time one of your co-workers deleted an important file from your desktop?
I would hope it has been a long time ago and quite possibly never.
🖥️ Even back when we used shared computers, home directories were kept separate.
There was a notion of shared folders, network folders, or whatever nomenclature your system used. To give coworkers access to a file, you first had to make the deliberate decision to upload it to a shared folder.
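The same principle translates directly to agents. Here is a minimal sketch (the paths and function names are hypothetical): the agent’s file tool refuses any path that resolves outside an explicitly shared directory, just as a coworker can only see what you put on the network share.

```python
# Sketch of the "shared folder" principle applied to an AI agent.
# Paths and helper names are illustrative. The agent's file access goes
# through this function, which allowlists a single shared directory.
from pathlib import Path

SHARED_ROOT = Path("/tmp/shared").resolve()

def agent_read(path: str) -> str:
    """Read a file only if it lives inside the shared folder."""
    target = Path(path).resolve()  # resolve() defeats ../ traversal tricks
    if not target.is_relative_to(SHARED_ROOT):  # Python 3.9+
        raise PermissionError(f"outside shared folder: {target}")
    return target.read_text()
```

Everything outside `SHARED_ROOT` is simply invisible to the agent, so “deleted all my files” is off the table by construction rather than by hoping the model behaves.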
When a ‘Model’ Isn’t Just a Model: Redefining AI Systems for the Builder’s Era
🎬 Great keynote by Jensen Huang at CES 2026 [1]! Great content, and I also love the ease of his presentation style. Miguel: We are not the only ones presenting in front of a black screen once in a while ;)
I agree with Jensen, it’s super exciting to see more and more open-ish frontier models being published by different providers. Sounds like NVIDIA is taking a big stake in this. Really key for me is that providers not “just” release open-weight models but also the data they trained on and the process used to train them. Jensen mentions the obvious responsible AI argument, which is super important. This is the only way third parties can verify the models and understand things like bias introduced by the training data, copyright infringements, and the like. From my perspective, equally important: open is only truly open to me if I can build it, modify it to make my own variant, and I’m allowed to do so.
✨ “It has never been a better time to be excited about the future.”
I missed this interview back in October last year, when Jeff Bezos compared today’s AI boom to the internet bubble of the 2000s at Italian Tech Week 2025 [1]. He warned of hype but insisted AI is “real” and will transform every industry. In the interview, he explained why industrial bubbles can benefit society and predicted that AI will raise both productivity and quality worldwide.