"It's Faster If I Just Do It Myself" — The Most Expensive Sentence in AI
The Moment I Almost Gave Up
A few weeks ago, I spent 45 minutes teaching my AI agent how to prepare customer meetings. Pulling context from Slack, checking the CRM, looking up LinkedIn profiles, assembling a briefing document. I could have done it myself in 20 minutes.
The next morning, the agent prepared three meetings in 12 minutes. By the end of the week, it had prepared every meeting for five days — while I was still drinking my coffee.
That 45-minute investment has now saved me hours. And it keeps compounding. But here’s the thing: most people never get past the 45 minutes. They try an AI agent, find it slower than doing the task themselves, and conclude that agents don’t work. They’re not wrong about the observation. They’re wrong about the conclusion.
A Pattern as Old as Civilization
This isn’t a new problem. It’s not even a technology problem. It’s a human problem — and we’ve been studying it for centuries.
The same pattern keeps repeating: teaching is slower than doing, but it’s the only thing that scales.
Teach a Man to Fish (1885)

*Teaching capability is slow but creates independence.*
“Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime.”
Most people attribute this to ancient Chinese philosophy. The earliest traceable version is actually from Anne Isabella Thackeray Ritchie’s 1885 novel Mrs. Dymond [1]. Maimonides expressed a similar idea in the 12th century — that the highest form of charity is helping someone become self-sufficient [2].
The proverb captures a fundamental trade-off: providing immediate relief is fast but creates dependency. Building capability is slow but creates independence.
Every time you do a task yourself instead of teaching your AI agent how to do it, you’re giving it a fish.
The Delegation Dip (Management Theory)
Managers have a name for what happens when you hand off a task: the Delegation Dip — a 3-6 month slowdown in productivity that follows the handoff [3]. The person you delegate to will be slower, make mistakes, and need your input. It feels like a step backward.
The Manager’s Handbook frames it perfectly: “The paradox of delegation is that it requires a degree of inefficiency and failure in the short term to work in the long term.” [4]
One article I came across while researching this piece describes a manager working 70-hour weeks while her team of eight sat underutilized. Her reason? “It’s faster if I just do it myself” [5]. Sound familiar?
CCSalesPro puts it even more bluntly: “Delegation Always Makes You Less Productive… Initially.” After 3-4 weeks of mess and lower productivity, you have processes in place and the person knows how you think [6]. The same is true for agents — except the “3-4 weeks” compresses to hours, because agents don’t forget what you taught them yesterday.
The J-Curve Effect from organizational change theory visualizes this pattern: performance dips below baseline before rising above it [7]. Every new technology, every new team member, every new process follows this curve. AI agents are no exception.
Vygotsky’s Zone of Proximal Development (1930s)

*Scaffolding: temporary support structures that help learners operate in the gap between what they can and can’t do.*
Lev Vygotsky, a Soviet psychologist who died at 37, left behind one of the most influential ideas in education: the Zone of Proximal Development (ZPD). It’s the gap between what a learner can do independently and what they can achieve with guidance from a “More Knowledgeable Other” [8].
The teacher’s job isn’t to do the work — it’s to provide scaffolding: temporary support structures that help the learner operate in that gap. As competence grows, the scaffolding is gradually removed [9].
This maps directly to working with AI agents. When I write a skill file — a structured set of instructions that tells the agent how to perform a workflow — I’m building scaffolding. I provide context, examples, constraints, and quality checks. The agent operates within that scaffold. Over time, as I refine the instructions based on what works, the scaffold becomes more efficient and the agent becomes more autonomous.
Prompt engineering is scaffolding. The skill file is the scaffold. And the patience required is the same patience every teacher needs: the discipline to guide rather than do.
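As a concrete sketch, here is what such a skill file might look like for the meeting-prep workflow from the opening anecdote. The format, section names, and tool references are illustrative assumptions on my part, not any specific product’s schema:

```markdown
# Skill: prepare-customer-meeting

## Context to gather
- Last 7 days of messages in the customer's Slack channel.
- Account record and open opportunities from the CRM.
- LinkedIn profiles of the attendees.

## Output
A one-page briefing: attendees, relationship history,
open issues, three suggested talking points.

## Constraints
- Do not include pricing details from internal-only channels.
- Flag anything you could not verify instead of guessing.

## Quality check
Match the structure of the last approved briefing.
```

Each section is scaffolding in Vygotsky’s sense: context, constraints, and quality checks that hold the agent inside the zone where precise following produces good output.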
Shu Ha Ri — The Stages of Mastery

*The master watches calmly as the student practices — patience is the teaching.*
From Japanese martial arts comes Shu Ha Ri — three stages of learning that Alistair Cockburn and Martin Fowler popularized in software development [10]:
| Stage | Meaning | Behavior |
|---|---|---|
| Shu (守) | Follow | Follow the master’s teachings precisely. Focus on how, not why. |
| Ha (破) | Break | Branch out. Learn underlying principles. Integrate other perspectives. |
| Ri (離) | Transcend | Create your own approaches. Adapt freely to circumstances. |
Clark Terry, the legendary jazz musician, expressed the same idea: Imitate, Assimilate, Innovate.
Here’s the insight for AI agents: they are perpetually in the Shu stage. They follow instructions precisely. They don’t break rules creatively. They don’t transcend into their own style. The patience required isn’t in waiting for them to reach Ha or Ri — it’s in writing Shu-stage instructions good enough that precise following produces excellent results.
The human stays in Ri. You’re the master who knows when to break the rules. The agent is the diligent student who follows them perfectly — if you write them well.
The Solow Productivity Paradox (1987)
In 1987, Nobel laureate Robert Solow observed: “You can see the computer age everywhere but in the productivity statistics.” [11] Despite massive investment in information technology throughout the 1970s and 80s, productivity growth had actually slowed.
The paradox resolved in the 1990s when productivity surged. The explanation? Learning and adjustment time. Organizations needed to restructure their workflows around the new technology — not just drop computers into existing processes [12].
The pattern repeats for every general-purpose technology. Steam power, electricity, computers — each showed an initial productivity dip before the gains materialized. A 2019 meta-study identified four explanations: adjustment delays, measurement issues, exaggerated expectations, and mismanagement [13].
We’re living through the Solow Paradox for AI agents right now. I wrote about this recently in The AI Investment Paradox — $37 billion invested in generative AI in 2025, yet 95% of businesses see no measurable ROI. Rogers’ Diffusion of Innovations explains the macro picture. The Solow Paradox explains the micro: individual teams adopt agents, find them slower than doing things manually, and abandon them — right at the bottom of the J-Curve, just before the gains would have kicked in.
The Eisenhower Matrix
Eisenhower reportedly said: “What is important is seldom urgent, and what is urgent is seldom important.” [14] The matrix named after him sorts tasks along those two axes into four quadrants: Quadrant 1 is urgent and important; Quadrant 2 is important but not urgent.
Teaching an AI agent is a Quadrant 2 activity — important but not urgent. There’s always a more pressing task you could do yourself in less time. The discipline is in choosing the slower path that compounds.
Every time you choose “I’ll just do it myself” over “let me teach the agent,” you’re choosing Quadrant 1 over Quadrant 2. You’re optimizing for today at the expense of every future day.
The Apprenticeship Math
All of these frameworks are nice in theory. But do the numbers actually work?
For humans, the data is clear:
- 68% of employers achieved a positive net return on apprenticeships over five years. 92% of apprentices maintained employment after completion [15].
- 80% of interns with a positive mentorship experience accept return offers [16].
- Structured mentorship reduces onboarding friction and accelerates time-to-productivity [17].
The math for AI agents is even better — because agents have three properties that humans don’t:
- They don’t quit. No retention risk. No competing offers. The investment stays.
- They don’t forget. What you teach them today works tomorrow, next month, next year. No knowledge decay.
- They can be cloned. Teach one agent a workflow, and every instance of that agent inherits it. The ROI multiplies with every copy.
The Delegation Dip for a human employee might last 3-6 months. For an AI agent, it lasts hours — maybe days for complex workflows. And the upside curve is steeper because the agent executes without fatigue, without context-switching, without Monday mornings.
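The break-even point is easy to sketch with the numbers from the opening anecdote: 45 minutes to teach the skill, 20 minutes to do a briefing manually, and roughly 4 minutes of agent time per briefing afterward (my estimate from “three meetings in 12 minutes”; the agent’s minutes barely cost human attention, so counting them at full value is conservative):

```python
# Break-even sketch: teaching an agent vs. doing the task yourself.
# Numbers come from the article's anecdote; agent_min is an estimate
# (three meetings in 12 minutes ≈ 4 minutes each).
teach_min = 45    # one-time cost of writing the skill
manual_min = 20   # doing one briefing by hand
agent_min = 4     # agent time per briefing after teaching

def cumulative_cost(n_briefings: int, taught: bool) -> int:
    """Total minutes spent after n briefings, with or without teaching."""
    if taught:
        return teach_min + n_briefings * agent_min
    return n_briefings * manual_min

# First briefing at which teaching pulls ahead of doing it yourself.
break_even = next(
    n for n in range(1, 100)
    if cumulative_cost(n, taught=True) < cumulative_cost(n, taught=False)
)
print(break_even)  # teaching pays off from the third briefing onward
```

By the third briefing the taught path is already cheaper (57 minutes vs. 60), and every briefing after that widens the gap by 16 minutes — the J-Curve in miniature.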
What This Means in Practice
I’ve been building this muscle for months now. My setup is essentially a system for teaching agents — skills that encode workflows, configuration files that set constraints, a knowledge base that serves as long-term memory, and retrospectives that improve the whole harness after every session.
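A hypothetical layout for such a harness — the directory names are mine, chosen only to mirror the four pieces above:

```
agent-harness/
├── skills/            # one file per taught workflow (the scaffolds)
├── config/            # constraints: allowed tools, tone, guardrails
├── knowledge/         # long-term memory: facts, decisions, glossary
└── retrospectives/    # session post-mortems, folded back into skills/
```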
The pattern I described in The Coding Agent That Doesn’t Code — using a coding agent for eight hours without writing code — only works because I invested the patience upfront. The travel expenses skill didn’t exist on Friday morning. By Friday evening, it was a reusable workflow. That’s the J-Curve in a single day.
In On the Loop, Not In It, I explored the idea that the human’s role shifts from doing to designing constraints. That’s the Ri stage — you’re the master who writes the rules, not the student who follows them. But writing good rules requires patience. It requires resisting the urge to just do it yourself.
And in From Chaos to Control, I described the flywheel: every interaction is both productive work AND system improvement. The agent gets better because you invested the patience to teach it. And then it teaches you what to teach it next, through its failures.
The Real Skill
Harvard Business School published research in March 2026 showing that generative AI boosts productivity but can’t turn novices into experts [18]. The human still needs to know what good looks like. You can’t delegate what you don’t understand.
This connects back to Vygotsky: the “More Knowledgeable Other” must actually be more knowledgeable. If you don’t understand the workflow yourself, you can’t scaffold it for the agent. The patience isn’t just in the teaching — it’s in the learning that precedes it.
Tim Scarfe, on the Machine Learning Street Talk podcast, coined the terms “delegation of competence” and “understanding debt” [19] — when you delegate to AI without understanding what you’re delegating, you accumulate a debt that eventually comes due. The patience to teach properly is also the patience to understand deeply.
The Takeaway
The frameworks converge on a single insight:
Teaching is slower than doing. But it’s the only thing that scales.
This is true for children in school. For interns in companies. For employees learning new roles. For teams adopting new technology. And now, for humans working with AI agents.
The patience required isn’t passive waiting. It’s active investment — writing clear instructions, providing examples, defining constraints, reviewing outputs, refining the process. It’s Quadrant 2 work. It’s scaffolding. It’s the Shu stage done well.
The people who will thrive with AI agents aren’t the ones with the best prompts or the fastest models. They’re the ones with the most patience — the ones willing to be slower today so they can be faster every day after.
Next time you reach for the keyboard instead of the skill file, ask yourself: am I giving the agent a fish, or teaching it to fish?
Sources:

- [1] Anne Isabella Thackeray Ritchie — Mrs. Dymond (1885), earliest traceable source of the “teach a man to fish” proverb: English Grammar Lessons
- [2] Maimonides — Eight Levels of Charity, 12th century: Kylian.ai
- [3] The Delegation Dip — 3-6 month productivity slowdown concept: Schools of Excellence
- [4] The Manager’s Handbook — Delegation paradox: themanagershandbook.com
- [5] Delegation Explained — the 70-hour-week manager: When Notes Fly
- [6] “Delegation Always Makes You Less Productive… Initially”: CCSalesPro
- [7] The J-Curve Effect in organizational change: Agile Lab
- [8] Vygotsky’s Zone of Proximal Development — teacher’s guide: Structural Learning
- [9] Scaffolding in education: Teach Educator
- [10] Martin Fowler — Shu Ha Ri: martinfowler.com
- [11] Robert Solow — Productivity Paradox (1987): Wikiwand
- [12] AI and the Productivity Paradox: All Things Supply Chain
- [13] Meta-study on IT productivity paradox explanations: Springer
- [14] Eisenhower’s Urgent/Important Principle: MindTools
- [15] Apprenticeship ROI — 68% positive return over 5 years: e4STEM
- [16] Intern mentorship and retention: Rewriting the Code
- [17] Onboarding mentorship and time-to-productivity: Upscend
- [18] HBS — “Gen AI Boosts Productivity, But Can’t Turn Novices Into Experts”: HBS Working Knowledge
- [19] Tim Scarfe / MLST — “delegation of competence” and “understanding debt”: Machine Learning Street Talk — Jeremy Howard episode