From my perspective, balancing AI agents' agency with control is one of the most important themes for 2026. We need to get this right, both as builders and as users of agentic AI systems.
Anthropic’s study “Measuring AI agent autonomy in practice”[1] fits nicely into this theme: they have started to study the autonomy of AI agents based on Claude Code usage and tool invocations. Even this first iteration provides some valuable insights. There is obviously a strong focus on coding use cases, but the data also indicates a wide variety of other use cases, which resonates with my experience from the customers I work with.
The study also offers further insights into how Anthropic observed autonomy changing over time, across model versions, and by season.
It is also a reminder for everyone building and deploying AI agents to 1/ build in observability, so you can understand what your agents are doing and how they are used, and 2/ actually use that data to derive insights.
What is your take on this?
Sources: [1] https://lnkd.in/de8K58RS
Cross-posted to LinkedIn